ignite service session issue

2016-04-22 Thread Zhengqingzheng
Hi there,
I am running the Ignite service via an SSH terminal.
I use a command like this: ./ignite.sh -J-Xms8g -J-Xmx8g &
However, when I close the terminal, the Ignite server is also shut down.
Is there any way to keep the service running without depending on the terminal
session?

Cheers,
Kevin

Kevin Zheng (Zheng Qingzheng) | Research Engineer
Huawei Software Technologies Co., Ltd. | Technology Planning Dept, CS
(Phone) 025-56620168 | (Mobile) 17072565656 | (Fax) 025-56623561
HUAWEI Area N4-3F-A190S, 101 Software Ave., Yuhuatai District, Nanjing 210012, P.R. China



Re: ignite logging not captured in log file (log4j)

2016-04-22 Thread vkulichenko
If adding the ignite-log4j module slows you down, then you're definitely
producing too many logs. Can you double-check your settings?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ignite-logging-not-captured-in-log-file-log4j-tp4334p4474.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: At client side unable to execute transaction?

2016-04-22 Thread Ravi kumar Puri
How to materialize it?
On 23-Apr-2016 02:53, "Alexey Goncharuk"  wrote:

> Val,
>
> StringBuilder.append() is what javac generates for `"Read value: " + val`
> in the code.
>
> Ravi,
>
> Hibernate returned you a collection proxy for your map, however by the
> time control flow leaves the CacheStore your hibernate session gets closed
> and this collection proxy cannot be used anymore. You need to make sure
> that all objects (and fields in those objects) returned from the cache
> store get materialized before the session is closed.
>


Re: Performance Issue - Threads blocking

2016-04-22 Thread Matt Hoffman
(Inline)

On Fri, Apr 22, 2016, 4:26 PM vkulichenko 
wrote:
>
> Hi Matt,
>
> I'm confused. The locking does happen on per-entry level, otherwise it's
> impossible to guarantee data consistency. Two concurrent updates or reads
> for the same key will wait for each other on this lock. But this should
> not
> cause performance degradation, unless you have very few keys and very high
> contention on them.
>

Based on his claim of a lot of threads waiting on the same locks, I assumed
that's what was happening -- high contention for a few cache keys. I don't
know his use case, but I can imagine cases with a fairly small number of
very "hot" entries.
It wouldn't necessarily require very few keys, right? Just high contention
on a few of them.

> The only thing I see here is that the value is deserialized on read. This
> is
> done because JCache requires store-by-value semantics and thus we create a
> copy each time you get the value (by deserializing its binary
> representation). You can override this behavior by setting
> CacheConfiguration.setCopyOnRead(false) property, this should give you
> performance improvement. Only note that it's not safe to modify the
> instance
> that you got from cache this way.
>

Do you think that would be a candidate for the "Performance tips" page in
the docs? I know I've referred to that page a few times recently myself.

> -Val
>
>
>
> --
> View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Performance-Issue-Threads-blocking-tp4433p4465.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Detecting a node leaving the cluster

2016-04-22 Thread Ralph Goers
I have an application that is using Ignite for a clustered cache. Each member 
of the cache will have connections open with a third-party application. When a 
cluster member stops, its connections must be re-established on other cluster 
members. 

I can do this manually if I have a way of detecting that a node has left the 
cluster, but I am hoping that there is some recommended way of handling 
this.

Any suggestions?

Ralph
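One recommended approach (a sketch, not from this thread; it assumes a running Ignite instance and that these event types are enabled via IgniteConfiguration.setIncludeEventTypes if required by your version) is to subscribe to the discovery events for nodes leaving or failing:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.events.DiscoveryEvent;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class NodeLeftListener {
    public static void register(Ignite ignite) {
        // Fires locally whenever a node leaves or fails.
        IgnitePredicate<Event> lsnr = evt -> {
            DiscoveryEvent discoEvt = (DiscoveryEvent) evt;
            System.out.println("Node left cluster: " + discoEvt.eventNode().id());
            // reopenConnections(discoEvt.eventNode()); // hypothetical application hook
            return true; // keep listening
        };
        ignite.events().localListen(lsnr,
            EventType.EVT_NODE_LEFT, EventType.EVT_NODE_FAILED);
    }
}
```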


Re: At client side unable to execute transaction?

2016-04-22 Thread Alexey Goncharuk
Val,

StringBuilder.append() is what javac generates for `"Read value: " + val`
in the code.

Ravi,

Hibernate returned you a collection proxy for your map; however, by the time
control flow leaves the CacheStore, your Hibernate session is closed and the
collection proxy can no longer be used. You need to make sure that all
objects (and the fields in those objects) returned from the cache store are
materialized before the session is closed.
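The advice above can be sketched like this (class and field names are illustrative, not from the thread): copy lazily-loaded collections into plain ones while the Hibernate session is still open, i.e., inside the CacheStore.

```java
import java.util.HashMap;
import java.util.Map;

public class MaterializeExample {
    // Stand-in for an entity whose map field Hibernate backs with a lazy proxy.
    static class Entity {
        private final Map<String, String> attributes;
        Entity(Map<String, String> attributes) { this.attributes = attributes; }
        Map<String, String> getAttributes() { return attributes; }
    }

    // Replaces the (possibly proxy-backed) map with a detached HashMap copy,
    // so it stays usable after the session closes.
    static Entity materialize(Entity fromSession) {
        return new Entity(new HashMap<>(fromSession.getAttributes()));
    }

    public static void main(String[] args) {
        Entity loaded = new Entity(new HashMap<>(Map.of("k", "v")));
        Entity safe = materialize(loaded);
        System.out.println(safe.getAttributes().get("k")); // prints "v"
    }
}
```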


Re: Client fails to connect - joinTimeout vs networkTimeout

2016-04-22 Thread vkulichenko
Binti,

It sounds like the clients were not able to connect due to instability on the
server, and increasing networkTimeout gave them a better chance to join, but
most likely they could not work properly anyway. As I said, this was most
likely caused by memory issues.

Is there a particular reason you're starting several nodes on each box? I would
recommend taking a look at off-heap memory [1]. It will allow you to have
only 4 nodes with small heap sizes (e.g., 4 GB per node) and store all the
data off-heap. In many cases this makes memory issues easier to solve.

[1] https://apacheignite.readme.io/docs/off-heap-memory
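A minimal per-cache sketch of the off-heap setup from [1] in Spring XML (the cache name and the 4 GB cap are illustrative assumptions, not from the thread):

```xml
<!-- Sketch: keep cache data off-heap, capped at 4 GB per node. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="memoryMode" value="OFFHEAP_TIERED"/>
    <!-- 4 GB in bytes -->
    <property name="offHeapMaxMemory" value="#{4L * 1024 * 1024 * 1024}"/>
</bean>
```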

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Client-fails-to-connect-joinTimeout-vs-networkTimeout-tp4419p4469.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: At client side unable to execute transaction?

2016-04-22 Thread vkulichenko
Hi Ravi,

I'm confused by the trace. It shows that the executeTransaction() method
called StringBuilder.append(), but I don't see that in the code you provided.
Is it the full trace? If so, are you sure you properly rebuilt the project?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/At-client-side-unable-to-use-excute-transaction-tp4447p4468.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: ignite logging not captured in log file (log4j)

2016-04-22 Thread bintisepaha
I had set it to ERROR level. 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ignite-logging-not-captured-in-log-file-log4j-tp4334p4467.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: ignite logging not captured in log file (log4j)

2016-04-22 Thread vkulichenko
Hi Binti,

Did you set DEBUG level for org.apache.ignite? If so, this will produce too
much output and will definitely slow you down. You should use DEBUG only when
it's needed, and preferably only for the particular categories you need DEBUG
output for.
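In log4j 1.x XML configuration, that looks like the following sketch (the category name and the FILE appender ref are illustrative assumptions):

```xml
<!-- Keep the root logger quiet; enable DEBUG only for one specific category. -->
<logger name="org.apache.ignite.internal.processors.cache">
    <level value="DEBUG"/>
</logger>
<root>
    <priority value="ERROR"/>
    <appender-ref ref="FILE"/>
</root>
```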

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ignite-logging-not-captured-in-log-file-log4j-tp4334p4466.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Performance Issue - Threads blocking

2016-04-22 Thread vkulichenko
Hi Matt,

I'm confused. The locking does happen at the per-entry level; otherwise it's
impossible to guarantee data consistency. Two concurrent updates or reads
for the same key will wait for each other on this lock. But this should not
cause performance degradation unless you have very few keys and very high
contention on them.

The only thing I see here is that the value is deserialized on read. This is
done because JCache requires store-by-value semantics, so we create a copy
each time you get the value (by deserializing its binary representation). You
can override this behavior by setting the CacheConfiguration.setCopyOnRead(false)
property, which should give you a performance improvement. Just note that it
is not safe to modify an instance obtained from the cache this way.
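In Spring XML, the same setting looks like this (a sketch; the cache name is illustrative):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- Return the cached instance itself instead of a deserialized copy. -->
    <property name="copyOnRead" value="false"/>
</bean>
```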

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Performance-Issue-Threads-blocking-tp4433p4465.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Compute Grid API in C++/Scala

2016-04-22 Thread vkulichenko
Hi Arthi,

In Scala you can use the Java API directly; the two are fully compatible. C++
currently supports only the Data Grid, not the Compute Grid.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Compute-Grid-API-in-C-Scala-tp4456p4464.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Compute Grid API in C++/Scala

2016-04-22 Thread Igor Sapego
Hello Arthi,

There is no Compute API in the C++ client yet, but we are planning to
add it soon.

Best Regards,
Igor

On Fri, Apr 22, 2016 at 4:50 PM, arthi 
wrote:

> Hi Team,
>
> Is there a C++/Scala API for the compute grid?
>
> Thanks
> Arthi
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Compute-Grid-API-in-C-Scala-tp4456.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: What will happen in case of running out of memory?

2016-04-22 Thread vkulichenko
Hi,

Data will be persisted unless the cache update fails on the server (in this
case you will get an exception on the client side). To avoid out-of-memory
issues, you can use evictions [1], which remove less frequently used
entries from memory. If you need an evicted entry again, it will be loaded
from the persistence store.

[1] https://apacheignite.readme.io/docs/evictions
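A minimal eviction setup from [1] might look like this sketch (LRU policy; the cache name and maxSize value are illustrative):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- Evict least-recently-used entries beyond 100,000; with read-through
         enabled they are reloaded from the store on the next access. -->
    <property name="evictionPolicy">
        <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicy">
            <property name="maxSize" value="100000"/>
        </bean>
    </property>
</bean>
```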

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/What-will-happend-in-case-of-run-out-of-memory-tp4446p4462.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Continuous Queries

2016-04-22 Thread vkulichenko
Hi,

Generally speaking, continuous queries are designed to listen for data
updates, not for heavy computational tasks. Can you provide more information
about your use case? What are you trying to achieve?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Continuous-Queries-tp4440p4461.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: ignite logging not captured in log file (log4j)

2016-04-22 Thread bintisepaha
Val, adding the dependency worked. The clients also started logging at root
level; we will have to modify the client log4j files to log only at ERROR
level. Thanks, but we saw another issue: when we tried to bring up the grid
(server nodes at ERROR level) while old clients were already connected,
bringing up the grid was very slow. I had to undo the change in our UAT
environment and remove the logging dependency.

Have you seen this issue before?

Is my understanding correct that if the ignite-log4j dependency is found on
the classpath, any node will start logging Ignite logs?

Thanks,
Binti



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ignite-logging-not-captured-in-log-file-log4j-tp4334p4460.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Client fails to connect - joinTimeout vs networkTimeout

2016-04-22 Thread bintisepaha
2. I see your point, but setting joinTimeout looks like a good solution. Does
it work for you? 
joinTimeout was working earlier with 5 seconds; for some clients we had to
raise it, but eventually some clients could not connect at all with any
joinTimeout. We had to remove joinTimeout and add networkTimeout, and that
worked. Reading the API docs, we could not understand why one works and not
the other. 

We have not yet tried making the networkTimeout change in production. 

Is there any way we can ping the cluster without Ignition.start() and see if
it is up? Then we won't need the joinTimeout.

We will look into memory usage and get back to you. Are there any GC
recommendations for large clusters? 16 nodes, 12 GB each, on 4 Linux hosts.

Thanks,
Binti



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Client-fails-to-connect-joinTimeout-vs-networkTimeout-tp4419p4459.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Error running nodes in .net and c++

2016-04-22 Thread Murthy Kakarlamudi
Hi,
   So I followed the suggestion to run the server node in Java instead of
.NET. The Java server node itself started fine and loaded the data from SQL
Server into the cache on startup. Next I tried to fire up the C++ node as a
client to access the cache and ran into the error below.

[10:30:20] Topology snapshot [ver=6, servers=1, clients=1, CPUs=4,
heap=2.3GB]

>>> Cache node started.

[10:30:20,175][SEVERE][exchange-worker-#38%null%][GridDhtPartitionsExchangeFuture]
Failed to reinitialize local partitions (preloading will be stopped):
GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=6,
minorTopVer=1], nodeId=dd8ffa22, evt=DISCOVERY_CUSTOM_EVT]
class org.apache.ignite.IgniteException: Spring application context
resource is not injected.
at
org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:156)
at
org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:96)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1243)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1638)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCachesStart(GridCacheProcessor.java:1563)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.startCaches(GridDhtPartitionsExchangeFuture.java:956)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:523)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1297)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
[10:30:20,187][SEVERE][exchange-worker-#38%null%][GridCachePartitionExchangeManager]
Failed to wait for completion of partition map exchange (preloading will
not start): GridDhtPartitionsExchangeFuture [dummy=false,
forcePreload=false, reassign=false, discoEvt=DiscoveryCustomEvent
[customMsg=DynamicCacheChangeBatch [reqs=[DynamicCacheChangeRequest
[deploymentId=8ea535e3451-d29afc27-9b4b-4125-bbf2-232c08daa0cb,
startCfg=CacheConfiguration [name=buCache,
storeConcurrentLoadAllThreshold=5, rebalancePoolSize=2,
rebalanceTimeout=1, evictPlc=null, evictSync=false,
evictKeyBufSize=1024, evictSyncConcurrencyLvl=4, evictSyncTimeout=1,
evictFilter=null, evictMaxOverflowRatio=10.0, eagerTtl=true,
dfltLockTimeout=0, startSize=150, nearCfg=null, writeSync=PRIMARY_SYNC,
storeFactory=CacheJdbcPojoStoreFactory [batchSizw=512,
dataSrcBean=myDataSource, dialect=null, maxPoolSize=4, maxWriteAttempts=2,
parallelLoadCacheMinThreshold=512,
hasher=o.a.i.cache.store.jdbc.JdbcTypeDefaultHasher@f7a23d, dataSrc=null],
storeKeepBinary=false, loadPrevVal=false,
aff=o.a.i.cache.affinity.rendezvous.RendezvousAffinityFunction@583c4bbf,
cacheMode=PARTITIONED, atomicityMode=ATOMIC, atomicWriteOrderMode=PRIMARY,
backups=1, invalidate=false, tmLookupClsName=null, rebalanceMode=ASYNC,
rebalanceOrder=0, rebalanceBatchSize=524288,
rebalanceBatchesPrefetchCount=2, offHeapMaxMem=-1, swapEnabled=false,
maxConcurrentAsyncOps=500, writeBehindEnabled=false,
writeBehindFlushSize=10240, writeBehindFlushFreq=5000,
writeBehindFlushThreadCnt=1, writeBehindBatchSize=512,
memMode=ONHEAP_TIERED,
affMapper=o.a.i.i.processors.cache.CacheDefaultBinaryAffinityKeyMapper@5f161423,
rebalanceDelay=0, rebalanceThrottle=0, interceptor=null,
longQryWarnTimeout=3000, readFromBackup=true,
nodeFilter=o.a.i.configuration.CacheConfiguration$IgniteAllNodesPredicate@77eb4727,
sqlSchema=null, sqlEscapeAll=false, sqlOnheapRowCacheSize=10240,
snapshotableIdx=false, cpOnRead=true, topValidator=null], cacheType=USER,
initiatingNodeId=dd8ffa22-a967-4996-8f38-d8eca80c52c3, nearCacheCfg=null,
clientStartOnly=true, stop=false, close=false, failIfExists=false,
template=false, exchangeNeeded=true, cacheFutTopVer=null,
cacheName=buCache]], clientNodes=null,
id=b08e06e3451-2b403421-9302-4d26-bcd2-68ba1c3007e0,
clientReconnect=false], affTopVer=AffinityTopologyVersion [topVer=6,
minorTopVer=1], super=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=dd8ffa22-a967-4996-8f38-d8eca80c52c3, addrs=[0:0:0:0:0:0:0:1,
127.0.0.1, 192.168.0.5, 2001:0:5ef5:79fd:c81:2c03:3f57:fffa,
2600:8806:0:8d00:0:0:0:1, 2600:8806:0:8d00:3ccf:1e94:1ab4:83a9,
2600:8806:0:8d00:8426:fb5a:96d:bfb4], sockAddrs=[LAPTOP-QIT4AVOG/
192.168.0.5:0, /0:0:0:0:0:0:0:1:0, LAPTOP-QIT4AVOG/192.168.0.5:0, /
127.0.0.1:0, LAPTOP-QIT4AVOG/192.168.0.5:0, /192.168.0.5:0, LAPTOP-QIT4AVOG/
192.168.0.5:0, /2001:0:5ef5:79fd:c81:2c03:3f57:fffa:0, LAPTOP-QIT4AVOG/
192.168.0.5:0, /2600:8806:0:8d00:0:0:0:1:0,
/2600:8806:0:8d00:3ccf:1e94:1ab4:83a9:0,
/2600:8806:0:8d00:8426:fb5a:96d:bf

Ignite Installation with Spark under CDH

2016-04-22 Thread mdolgonos
I'm trying to install and integrate Ignite with Spark under CDH by following
the recommendation at
https://apacheignite-fs.readme.io/docs/installation-deployment for
Standalone Deployment. I modified the existing $SPARK_HOME/conf/spark-env.sh
as recommended, but the following command still fails to recognize the ignite
package:
import org.apache.ignite.configuration._
Can somebody advise if I'm missing anything?
Thank you,



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Installation-with-Spark-under-CDH-tp4457.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Compute Grid API in C++/Scala

2016-04-22 Thread arthi
Hi Team,

Is there a C++/Scala API for the compute grid? 

Thanks
Arthi



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Compute-Grid-API-in-C-Scala-tp4456.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Performance Issue - Threads blocking

2016-04-22 Thread Matt Hoffman
I'm assuming you're seeing a lot of threads that are BLOCKED waiting on
that locked GridLocalCacheEntry (<70d32489> in the example you pasted
above). Looking at the code, it does block on individual cache entries (so
two reads of the same key within the same JVM will block). In your
particular example, the thread in question is publishing an
EVT_CACHE_OBJECT_EXPIRED event. If you don't need that event, turning it
off (along with EVT_CACHE_OBJECT_READ) will reduce the time the cache entry
spends blocking other reads (and speed things up generally).
It locks to make sure the entry is deserialized from swap once and expired
once (if necessary; it looks like it was in this particular case).
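Since Ignite fires only the event types listed in IgniteConfiguration.setIncludeEventTypes(), leaving the cache read/expire events out of that list disables them. A sketch (assumes the Spring util namespace is declared; the events kept in the list are illustrative):

```xml
<!-- Only the listed events are recorded; EVT_CACHE_OBJECT_READ and
     EVT_CACHE_OBJECT_EXPIRED are simply left out of the list. -->
<property name="includeEventTypes">
    <list>
        <util:constant static-field="org.apache.ignite.events.EventType.EVT_NODE_LEFT"/>
        <util:constant static-field="org.apache.ignite.events.EventType.EVT_NODE_FAILED"/>
    </list>
</property>
```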

matt

On Fri, Apr 22, 2016 at 8:07 AM, Vladimir Ozerov 
wrote:

> Hi,
>
> Could you please explain why do you think that the thread is blocked? I
> see it is in a RUNNABLE state.
>
> Vladimir.
>
> On Fri, Apr 22, 2016 at 2:41 AM, ccanning  wrote:
>
>> We seem to be having some serious performance issues after adding Apache
>> Ignite Local cache to our APIs'. Looking at a heap dump, we seem to have a
>> bunch of threads blocked by this lock:
>>
>> "ajp-0.0.0.0-8009-70" - Thread t@641
>>java.lang.Thread.State: RUNNABLE
>> at
>>
>> org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:166)
>> at
>>
>> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1486)
>> at
>>
>> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:1830)
>> at
>>
>> org.apache.ignite.internal.binary.BinaryUtils.doReadMap(BinaryUtils.java:1813)
>> at
>>
>> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1597)
>> at
>>
>> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1646)
>> at
>>
>> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read(BinaryFieldAccessor.java:643)
>> at
>>
>> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:714)
>> at
>>
>> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1450)
>> at
>>
>> org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:537)
>> at
>>
>> org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:117)
>> at
>>
>> org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinary(CacheObjectContext.java:280)
>> at
>>
>> org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:145)
>> at
>>
>> org.apache.ignite.internal.processors.cache.GridCacheEventManager.addEvent(GridCacheEventManager.java:276)
>> at
>>
>> org.apache.ignite.internal.processors.cache.GridCacheEventManager.addEvent(GridCacheEventManager.java:159)
>> at
>>
>> org.apache.ignite.internal.processors.cache.GridCacheEventManager.addEvent(GridCacheEventManager.java:92)
>> at
>>
>> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet0(GridCacheMapEntry.java:862)
>> - locked <70d32489> (a
>> org.apache.ignite.internal.processors.cache.local.GridLocalCacheEntry)
>> at
>>
>> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet(GridCacheMapEntry.java:669)
>> at
>>
>> org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.getAllInternal(GridLocalAtomicCache.java:587)
>> at
>>
>> org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.get(GridLocalAtomicCache.java:483)
>> at
>>
>> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1378)
>> at
>>
>> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:864)
>> at
>> org.apache.ignite.cache.spring.SpringCache.get(SpringCache.java:52)
>>
>>  - locked <70d32489> (a
>> org.apache.ignite.internal.processors.cache.local.GridLocalCacheEntry)
>>
>> Should this be causing blocking in a high-throughput API? Do you have any
>> pointers in how we could solve this issue?
>>
>> Thanks.
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-ignite-users.70518.x6.nabble.com/Performance-Issue-Threads-blocking-tp4433.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: Ignite cache data size problem.

2016-04-22 Thread kevin.zheng
BTW, I created 4 + 3 nodes on two servers.
For each node, I ran a command like this: ./ignite.sh -J-Xmx8g -J-Xms8g

kind regards,
Kevin



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-cache-data-size-problem-tp4449p4454.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


re: Ignite cache data size problem.

2016-04-22 Thread Zhengqingzheng
Hi Vladimir,

My table is very simple; it contains the following fields:
OrgId (varchar), oId (varchar), fnum (int), gId (number), msg (varchar),
num (number), date (date).
gId is the primary index.
--

And the java class is defined as:

package com.huawei.soa.ignite.test;

import java.io.Serializable;
import java.math.BigDecimal;
import java.util.Date;

import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class UniqueField implements Serializable
{
public String getOrgId()
{
return orgId;
}

public void setOrgId(String orgId)
{
this.orgId = orgId;
}

public String getOId()
{
return oId;
}

public void setOId(String oId)
{
this.oId = oId;
}

public String getGid()
{
return gId;
}

public void setGid(String gId)
{
this.gId = gId;
}

public int getFNum()
{
return fNum;
}

public void setFNum(int fNum)
{
this.fNum = fNum;
}

public String getMsg()
{
return msg;
}

public void setMsg(String msg)
{
this.msg = msg;
}

public BigDecimal getNum()
{
return num;
}

public void setNum(BigDecimal num)
{
this.num = num;
}

public Date getDate()
{
return date;
}

public void setDate(Date date)
{
this.date = date;
}

@QuerySqlField
private String orgId;

@QuerySqlField(index=true)
private String oId;

@QuerySqlField(index=true)
private String gId;

@QuerySqlField
private int fNum;

@QuerySqlField
private String msg;

@QuerySqlField
private BigDecimal num;

@QuerySqlField
private Date date;

public UniqueField() {}

public UniqueField(
String orgId,
String oId,
String gId,
int fNum,
String msg,
BigDecimal num,
Date date
) {
this.orgId=orgId;
this.oId=oId;
this.gId = gId;
this.fNum = fNum;
this.msg = msg;
this.num = num;
this.date = date;
}
}

--
My configuration file on the server side is listed as follows:





[The Spring XML configuration did not survive the archive's tag stripping;
only the discovery address list is recoverable: four entries of the form
xxx.xxx.xxx.xxx:47500..47509.]

Kind regards,
Kevin

From: Vladimir Ozerov [mailto:voze...@gridgain.com]
Sent: April 22, 2016 20:13
To: user@ignite.apache.org
Subject: Re: Ignite cache data size problem.

Hi,

It looks like you have relatively small entries (somewhere around 60-70 bytes 
per key-value pair). Ignite also has some intrinsic overhead, which could be 
more than the actual data in this case. However, I certainly would not expect 
it not to fit into 80 GB.

Could you please share your key and value model classes and your XML 
configuration to investigate it further?

Vladimir.

On Fri, Apr 22, 2016 at 2:02 PM, Zhengqingzheng 
mailto:zhengqingzh...@huawei.com>> wrote:
Hi there,
I am trying to load a table with 47Million records, in which the

Re: Ignite cache data size problem.

2016-04-22 Thread Vladimir Ozerov
Hi,

It looks like you have relatively small entries (somewhere around 60-70
bytes per key-value pair). Ignite also has some intrinsic overhead, which
could be more than the actual data in this case. However, I certainly would
not expect it not to fit into 80 GB.

Could you please share your key and value model classes and your XML
configuration to investigate it further?

Vladimir.

On Fri, Apr 22, 2016 at 2:02 PM, Zhengqingzheng 
wrote:

> Hi there,
>
> I am trying to load a table with 47Million records, in which the data size
> is less than 3gb. However, When I load into the memory ( two vm with 48+32
> = 80gb), it is still crushed due to not enough memory space? This problem
> occurred when I instantiated 6 + 4 nodes.
>
> Why the cache model need so much space ( 3g vs 80g)?
>
> Any idea to explain this issue?
>
>
>
> Kind regards,
>
> Kevin
>
>
>


Re: Performance Issue - Threads blocking

2016-04-22 Thread Vladimir Ozerov
Hi,

Could you please explain why do you think that the thread is blocked? I see
it is in a RUNNABLE state.

Vladimir.

On Fri, Apr 22, 2016 at 2:41 AM, ccanning  wrote:

> We seem to be having some serious performance issues after adding Apache
> Ignite Local cache to our APIs'. Looking at a heap dump, we seem to have a
> bunch of threads blocked by this lock:
>
> "ajp-0.0.0.0-8009-70" - Thread t@641
>java.lang.Thread.State: RUNNABLE
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:166)
> at
>
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1486)
> at
>
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:1830)
> at
>
> org.apache.ignite.internal.binary.BinaryUtils.doReadMap(BinaryUtils.java:1813)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1597)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1646)
> at
>
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read(BinaryFieldAccessor.java:643)
> at
>
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:714)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1450)
> at
>
> org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:537)
> at
>
> org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:117)
> at
>
> org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinary(CacheObjectContext.java:280)
> at
>
> org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:145)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheEventManager.addEvent(GridCacheEventManager.java:276)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheEventManager.addEvent(GridCacheEventManager.java:159)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheEventManager.addEvent(GridCacheEventManager.java:92)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet0(GridCacheMapEntry.java:862)
> - locked <70d32489> (a
> org.apache.ignite.internal.processors.cache.local.GridLocalCacheEntry)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet(GridCacheMapEntry.java:669)
> at
>
> org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.getAllInternal(GridLocalAtomicCache.java:587)
> at
>
> org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.get(GridLocalAtomicCache.java:483)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1378)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:864)
> at
> org.apache.ignite.cache.spring.SpringCache.get(SpringCache.java:52)
>
>  - locked <70d32489> (a
> org.apache.ignite.internal.processors.cache.local.GridLocalCacheEntry)
>
> Should this be causing blocking in a high-throughput API? Do you have any
> pointers in how we could solve this issue?
>
> Thanks.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Performance-Issue-Threads-blocking-tp4433.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: What will happen in case of running out of memory?

2016-04-22 Thread tomk
Wait,
I don't understand.
I am writing data with write-through.

What happens when memory runs out?
Is the data not persisted in Postgres?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/What-will-happend-in-case-of-run-out-of-memory-tp4446p4450.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Ignite cache data size problem.

2016-04-22 Thread Zhengqingzheng
Hi there,
I am trying to load a table with 47 million records whose total data size is 
less than 3 GB. However, when I load it into memory (two VMs with 48 + 32 = 
80 GB), it still crashes due to insufficient memory. This problem 
occurred when I instantiated 6 + 4 nodes.
Why does the cache model need so much space (3 GB vs. 80 GB)?
Any idea to explain this issue?

Kind regards,
Kevin



Re: What will happen in case of running out of memory?

2016-04-22 Thread Alexei Scherbakov
Hi,

If OOM happens while persisting data, for example in the JDBC driver, the
operation will fail and the update will be lost.

2016-04-22 12:15 GMT+03:00 tomk :

> Hello,
> I consider what will happen in case of out of memory ?
> I mean write-through mode. My data will lost ? I assume that it always save
> it into underlying database.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/What-will-happend-in-case-of-run-out-of-memory-tp4446.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


At client side unable to execute transaction?

2016-04-22 Thread Ravi Puri
server side
--
On the server side I loaded my data (Person objects) and also configured my
database:

cache.loadCache(null,100_000);
cache.size();// It returns length of cache.

client side
--
public static void main (String args[])
{
Ignition.setClientMode(true);
Ignite ignite=Ignition.start("ignite-example.xml");


IgniteCache cache =
ignite.getOrCreateCache("CacheName");
System.out.println(cache.size());// it also succesfully returns 
same
length 

executeTransaction(cache);// but her it shows error
}
private static void executeTransaction(IgniteCache 
cache) {
try (Transaction tx = Ignition.ignite().transactions().txStart()) {
Person val = cache.get(2);// here it show error.

 System.out.println("Read value: " + val);


 tx.commit();
}  
}

The error is:


Exception in thread "main" org.hibernate.LazyInitializationException: could
not initialize proxy - no Session
at
org.hibernate.collection.internal.AbstractPersistentCollection.withTemporarySessionIfNeeded(AbstractPersistentCollection.java:186)
at
org.hibernate.collection.internal.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:545)
at
org.hibernate.collection.internal.AbstractPersistentCollection.read(AbstractPersistentCollection.java:124)
at
org.hibernate.collection.internal.PersistentMap.toString(PersistentMap.java:270)
at java.lang.String.valueOf(Unknown Source)
at java.lang.StringBuilder.append(Unknown Source)
at com.tcs.utility.vo.DeviceVO.toString(DeviceVO.java:139)
at java.lang.String.valueOf(Unknown Source)
at java.lang.StringBuilder.append(Unknown Source)
at com.tcs.vo.SessionManagementVO.toString(SessionManagementVO.java:171)
at java.lang.String.valueOf(Unknown Source)
at java.lang.StringBuilder.append(Unknown Source)
at com.demo.ClientSide.executeTransaction(ClientSide.java:29)
at com.demo.ClientSide.main(ClientSide.java:23)




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/At-client-side-unable-to-use-excute-transaction-tp4447.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


What will happen in case of running out of memory?

2016-04-22 Thread tomk
Hello,
I am wondering what will happen in case of out of memory.
I mean write-through mode. Will my data be lost? I assume it is always saved
into the underlying database.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/What-will-happend-in-case-of-run-out-of-memory-tp4446.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Write-behind and foreign keys in DB

2016-04-22 Thread Alexei Scherbakov
Hi,

You should not use write-behind mode in such a scenario.
Concerning the performance issues: are you using a connection pooling library?
If not, I recommend using a library like c3p0 for connection pooling
and prepared statement caching.
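As a sketch, a pooled c3p0 DataSource with prepared-statement caching can be declared in Spring XML like this (the JDBC URL, credentials, and pool sizes are illustrative placeholders):

```xml
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
    <property name="driverClass" value="org.postgresql.Driver"/>
    <property name="jdbcUrl" value="jdbc:postgresql://localhost:5432/mydb"/>
    <property name="user" value="ignite"/>
    <property name="password" value="secret"/>
    <!-- Pool sizing: illustrative values only. -->
    <property name="minPoolSize" value="5"/>
    <property name="maxPoolSize" value="20"/>
    <!-- A non-zero maxStatements enables c3p0's PreparedStatement cache. -->
    <property name="maxStatements" value="180"/>
</bean>
```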



2016-04-21 18:02 GMT+03:00 vetko :

> Hi All,
>
> I am experiencing an issue while trying to use write-behind on caches
> connected to tables which have foreign key constraints between them.
> Seemingly the write-behind mechanism is not executing the updates/inserts
> in
> a deterministic order, but rather is trying to push all the collected
> changes per each cache consecutively in some unknown order. But as we have
> foreign keys in the tables, the order of the operation matters, so parent
> objects should be inserted/updated first, and children only after that
> (otherwise foreign key violations are thrown from the DB).
>
> It seems that the current implementation is trying to workaround this
> problem on a trial-and-error basis
> (org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore:888), which
> means
> that it will periodically retry to flush the changes again and again for
> the
> caches in case of which a constraint violation occurred. So the "child"
> cache
> will periodically retry to flush, until the "parent" cache gets flushed
> first. This ultimately will result in getting the data into the DB, but it
> also means a lot of unsuccessful tries in case of complex hierarchical
> tables, until the correct order is "found". This results in poor
> performance
> and unnecessary shelling of the DB.
>
> Do you have any suggestions how could I circumvent this issue?
>
> (Initially I was trying with write-through, but it resulted in VERY poor
> performance, because the CacheAbstractJdbcStore is seemingly opening a new
> prepared statement for each insert/update operation.)
>
> (Using Ignite 1.4)
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Write-behind-and-foreign-keys-in-DB-tp4415.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov