using dataStreamer inside Ignite service grid execute

2017-10-19 Thread shuangjiang.li
Hi, I am trying to implement a StreamVisitor inside a service grid execute()
function. I start the Ignite node and deploy the service in the main
function; however, the execute logic prints no output. I tested that the
StreamVisitor logic is triggered when the code sits next to the streamer,
but there is no output from inside the service grid function. Any idea how I
can implement a StreamVisitor inside a service grid? Thank you in advance.

*In the main function:*

TcpDiscoverySpi spi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47509"));
spi.setIpFinder(ipFinder);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(spi);

// Config service.
ServiceConfiguration svcCfg = new ServiceConfiguration();
svcCfg.setCacheName("realtime_data_cache");
svcCfg.setName("realtime_data_cache");
svcCfg.setMaxPerNodeCount(1);
svcCfg.setTotalCount(1);
svcCfg.setService(new RemoteDataServiceImpl());
System.out.println("step 0");
cfg.setServiceConfiguration(svcCfg);

Ignite ignite = Ignition.start(cfg);
ignite.getOrCreateCache("realtime_data_cache");

IgniteDataStreamer dataStreamer = ignite.dataStreamer("realtime_data_cache");
dataStreamer.autoFlushFrequency(10 * 1000);
dataStreamer.allowOverwrite(true);
// Streamer data ingestion logic here.

*In the service impl.:*

@Override
public void execute(ServiceContext ctx) {
    stmr = ignite.dataStreamer("realtime_data_cache");
    while (!ctx.isCancelled()) {
        stmr.receiver(new StreamVisitor() {
            @Override
            public void apply(IgniteCache cache, Map.Entry e) {
                RealtimeData vo = JSON.parseObject("" + e.getValue(), RealtimeData.class);
                if (vo.getDelay() == 0) {
                    System.out.println(vo.getDelay() + e.getValue());
                }
            }
        });
    }
}
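
A note on the pattern above: a StreamVisitor is a StreamReceiver, so it runs
on the nodes that own the streamed entries, and it must be installed on the
streamer before addData() is called; installing it in a loop inside a service
that never streams any data will not fire it, and any println output appears
on the data nodes rather than on the caller. A minimal sketch of the usual
arrangement (the cache name is taken from the post; the value type and
payload are placeholders):

import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.stream.StreamVisitor;

public class VisitorSketch {
    public static void run(Ignite ignite) {
        try (IgniteDataStreamer<Long, String> stmr =
                 ignite.dataStreamer("realtime_data_cache")) {
            stmr.allowOverwrite(true);

            // Install the receiver BEFORE streaming; it executes on the
            // node that owns each entry, not on the calling node.
            stmr.receiver(StreamVisitor.from(
                (IgniteCache<Long, String> cache, Map.Entry<Long, String> e) ->
                    System.out.println("visited: " + e.getKey() + " -> " + e.getValue())));

            for (long i = 0; i < 100; i++)
                stmr.addData(i, "payload-" + i);
        } // close() flushes any buffered entries.
    }
}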



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: odbc and apache ignite/ c++ clients

2017-10-19 Thread Anirudha Jadhav
thanks!

On Wed, Oct 18, 2017 at 5:50 AM, Igor Sapego  wrote:

> Hi,
>
> ODBC does not have JNI dependency. It only depends on C and C++ standard
> libraries.
>
> If you only need KV and SQL lookups, ODBC driver should be enough for you.
>
> Best Regards,
> Igor
>
> On Wed, Oct 18, 2017 at 6:06 AM, Anirudha Jadhav  wrote:
>
>> I went through this,
>>
>> having a JNI dependency is a no-go for our apps (quoted below from the
>> link you gave above).
>>
>> hence I want to know if the ODBC driver also has an Ignite Java dependency.
>>
>> any other C/C++ options are welcome.
>>
>> Currently we launch Ignite clusters as Spring Boot managed services, but
>> need polyglot client access.
>>
>>- Ignite C++ starts the JVM in the same process and communicates with
>>it via JNI
>>
>>
>>
>>
>> On Tue, Oct 17, 2017 at 11:00 PM, Alexey Kuznetsov > > wrote:
>>
>>> Hi,
>>>
>>> You can start from docs: https://apacheignite-cpp.readme.io/docs
>>>
>>> On Wed, Oct 18, 2017 at 9:54 AM, Anirudha Jadhav 
>>> wrote:
>>>
 I need a way to access Ignite IMDG/DB from C/C++.

 The JNI C++ client is not an option for us.

 I am currently exploring performance between Redis / memcached C/C++
 clients.

 Q: does the Ignite ODBC driver also have a JNI dependency?

 Q: what is the best approach to access remote Ignite grids from C++ with
 KV and SQL lookups?


 thanks a lot,

 --
 Ani

>>>
>>>
>>>
>>> --
>>> Alexey Kuznetsov
>>>
>>
>>
>>
>> --
>> Anirudha P. Jadhav
>>
>
>


-- 
Anirudha P. Jadhav


unsubscribe

2017-10-19 Thread Anand Kumar Sankaran




Grid freezing

2017-10-19 Thread smurphy
Attachment: threaddump.tdump

I am using Ignite v2.1. My code uses optimistic/serializable transactions
and is locking up. When it does, there are a lot of `WARNING: Found long
running transaction` and `WARNING: Found long running cache future` messages
in the logs (see below). Eventually the grid freezes, and there are a lot of
"IgniteInterruptedException: Got interrupted while waiting for future to
complete." messages (see below). Also, see the attached thread dump taken
when the grid was finally stopped.

Using visor, I could see the active threads increase until all were
consumed. I boosted the thread pool count to 100 and saw the active threads
rise steadily until all were consumed. At that point, the grid freezes up.

Pinging multiple times with visor, the servers always ping successfully but
the client fails intermittently.

LONG RUNNING TRANSACTION:

2017-10-19 13:03:06,698 WARN  [ ] grid-timeout-worker-#15%null%  
{Log4JLogger.java:480} - Found long running transaction
[startTime=12:30:33.985, curTime=13:03:06.682, tx=GridNearTxLocal
[mappings=IgniteTxMappingsSingleImpl [mapping=GridDistributedTxMapping
[entries=[IgniteTxEntry [key=KeyCacheObjectImpl [part=126,
val=GridCacheInternalKeyImpl [name=SCAN_IDX, grpName=default-ds-group],
hasValBytes=true], cacheId=1481046058, txKey=IgniteTxKey
[key=KeyCacheObjectImpl [part=126, val=GridCacheInternalKeyImpl
[name=SCAN_IDX, grpName=default-ds-group], hasValBytes=true],
cacheId=1481046058], val=[op=TRANSFORM, val=null], prevVal=[op=NOOP,
val=null], oldVal=[op=NOOP, val=null], entryProcessorsCol=[IgniteBiTuple
[val1=GetAndIncrementProcessor [], val2=[Ljava.lang.Object;@49d9dc6c]],
ttl=-1, conflictExpireTime=-1, conflictVer=null, explicitVer=null,
dhtVer=null, filters=[], filtersPassed=false, filtersSet=true,
entry=GridDhtDetachedCacheEntry [super=GridDistributedCacheEntry
[super=GridCacheMapEntry [key=KeyCacheObjectImpl [part=126,
val=GridCacheInternalKeyImpl [name=SCAN_IDX, grpName=default-ds-group],
hasValBytes=true], val=null, startVer=1508434155232, ver=GridCacheVersion
[topVer=119473256, order=1508434155232, nodeOrder=2243], hash=-375255214,
extras=null, flags=0]]], prepared=0, locked=false,
nodeId=36b4d422-d011-4f77-919a-b8ffb089614b, locMapped=false,
expiryPlc=null, transferExpiryPlc=false, flags=0, partUpdateCntr=0,
serReadVer=null, xidVer=GridCacheVersion [topVer=119473256,
order=1508434155230, nodeOrder=2243]]], explicitLock=false, dhtVer=null,
last=false, nearEntries=0, clientFirst=false,
node=36b4d422-d011-4f77-919a-b8ffb089614b]], nearLocallyMapped=false,
colocatedLocallyMapped=false, needCheckBackup=null, hasRemoteLocks=false,
thread=,
mappings=IgniteTxMappingsSingleImpl [mapping=GridDistributedTxMapping
[entries=[IgniteTxEntry [key=KeyCacheObjectImpl [part=126,
val=GridCacheInternalKeyImpl [name=SCAN_IDX, grpName=default-ds-group],
hasValBytes=true], cacheId=1481046058, txKey=IgniteTxKey
[key=KeyCacheObjectImpl [part=126, val=GridCacheInternalKeyImpl
[name=SCAN_IDX, grpName=default-ds-group], hasValBytes=true],
cacheId=1481046058], val=[op=TRANSFORM, val=null], prevVal=[op=NOOP,
val=null], oldVal=[op=NOOP, val=null], entryProcessorsCol=[IgniteBiTuple
[val1=GetAndIncrementProcessor [], val2=[Ljava.lang.Object;@49d9dc6c]],
ttl=-1, conflictExpireTime=-1, conflictVer=null, explicitVer=null,
dhtVer=null, filters=[], filtersPassed=false, filtersSet=true,
entry=GridDhtDetachedCacheEntry [super=GridDistributedCacheEntry
[super=GridCacheMapEntry [key=KeyCacheObjectImpl [part=126,
val=GridCacheInternalKeyImpl [name=SCAN_IDX, grpName=default-ds-group],
hasValBytes=true], val=null, startVer=1508434155232, ver=GridCacheVersion
[topVer=119473256, order=1508434155232, nodeOrder=2243], hash=-375255214,
extras=null, flags=0]]], prepared=0, locked=false,
nodeId=36b4d422-d011-4f77-919a-b8ffb089614b, locMapped=false,
expiryPlc=null, transferExpiryPlc=false, flags=0, partUpdateCntr=0,
serReadVer=null, xidVer=GridCacheVersion [topVer=119473256,
order=1508434155230, nodeOrder=2243]]], explicitLock=false, dhtVer=null,
last=false, nearEntries=0, clientFirst=false,
node=36b4d422-d011-4f77-919a-b8ffb089614b]], super=GridDhtTxLocalAdapter
[nearOnOriginatingNode=false, nearNodes=[], dhtNodes=[], explicitLock=false,
super=IgniteTxLocalAdapter [completedBase=null, sndTransformedVals=false,
depEnabled=false, txState=IgniteTxImplicitSingleStateImpl [init=true,
recovery=false], super=IgniteTxAdapter [xidVer=GridCacheVersion
[topVer=119473256, order=1508434155230, nodeOrder=2243], writeVer=null,
implicit=true, loc=true, threadId=109, startTime=1508434233985,
nodeId=cb0332bd-1c18-4695-a75f-0d0ed4637b55, startVer=GridCacheVersion
[topVer=119473256, order=1508434155230, nodeOrder=2243], endVer=null,
isolation=READ_COMMITTED, concurrency=OPTIMISTIC, timeout=0,
sysInvalidate=false, sys=true, plc=2, commitVer=GridCacheVersion
[topVer=119473256, order=1508434155230, nodeOrder=2243], finalizing=NONE,
invalidParts=null, state=PREPARING, 
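
For reference, since the report mentions optimistic/serializable
transactions: Ignite deliberately aborts one side of a serialization
conflict with TransactionOptimisticException, so such transactions are
normally wrapped in a retry loop with a timeout; a minimal sketch (the cache
name, key, and timeout are placeholders, not taken from the report):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionOptimisticException;

import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

public final class TxRetrySketch {
    public static void update(Ignite ignite, String key) {
        IgniteCache<String, Integer> cache = ignite.cache("myCache");

        while (true) {
            // txStart(concurrency, isolation, timeout ms, txSize); the
            // timeout guards against transactions hanging indefinitely.
            try (Transaction tx = ignite.transactions().txStart(
                     OPTIMISTIC, SERIALIZABLE, 5_000, 0)) {
                Integer v = cache.get(key);
                cache.put(key, v == null ? 1 : v + 1);
                tx.commit();
                return;
            }
            catch (TransactionOptimisticException e) {
                // Another transaction changed the same entry; retry.
            }
        }
    }
}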

RE: Error with ScanQuery

2017-10-19 Thread Raymond Wilson
Was there anything in the logs that looked out of place?



*From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
*Sent:* Tuesday, October 17, 2017 5:25 PM
*To:* user 
*Subject:* Fwd: Error with ScanQuery



Resending to the user@ list.



-- Forwarded message --
From: *Raymond Wilson* 
Date: Mon, Oct 16, 2017 at 8:35 PM
Subject: RE: Error with ScanQuery
To: Dmitriy Setrakyan 
Cc: ptupit...@apache.org

Dmitry, Pavel,



Attached are the logs from running the system from scratch, activating the
cluster and performing the operation that leads to the exception, then
shutting down the grid.



The WindowsFormsApplication1 log is the context which produces the
exception.



Thanks,

Raymond.



*From:* Raymond Wilson [mailto:raymond_wil...@trimble.com]
*Sent:* Tuesday, October 17, 2017 12:22 AM
*To:* 'user@ignite.apache.org' 
*Subject:* RE: Error with ScanQuery



Below is output from ex.ToString()



---

Apache.Ignite.Core.Binary.BinaryObjectException: Requesting mapping from grid failed for [platformId=1, typeId=349095370] ---> Apache.Ignite.Core.Common.JavaException: class org.apache.ignite.binary.BinaryObjectException: Requesting mapping from grid failed for [platformId=1, typeId=349095370]
        at org.apache.ignite.internal.processors.platform.binary.PlatformBinaryProcessor.processInStreamOutStream(PlatformBinaryProcessor.java:126)
        at org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutStream(PlatformTargetProxyImpl.java:155)
Caused by: java.lang.ClassNotFoundException: Requesting mapping from grid failed for [platformId=1, typeId=349095370]
        at org.apache.ignite.internal.MarshallerContextImpl.getClassName(MarshallerContextImpl.java:383)
        at org.apache.ignite.internal.processors.platform.binary.PlatformBinaryProcessor.processInStreamOutStream(PlatformBinaryProcessor.java:120)
        ... 1 more
   --- End of inner exception stack trace ---
   at Apache.Ignite.Core.Impl.Unmanaged.UnmanagedCallbacks.Error(Void* target, Int32 errType, SByte* errClsChars, Int32 errClsCharsLen, SByte* errMsgChars, Int32 errMsgCharsLen, SByte* stackTraceChars, Int32 stackTraceCharsLen, Void* errData, Int32 errDataLen)
   at Apache.Ignite.Core.Impl.Unmanaged.IgniteJniNativeMethods.TargetInStreamOutStream(Void* ctx, Void* target, Int32 opType, Int64 inMemPtr, Int64 outMemPtr)
   at Apache.Ignite.Core.Impl.PlatformTarget.DoOutInOp[TR](Int32 type, Action`1 outAction, Func`2 inAction)
   at Apache.Ignite.Core.Impl.Binary.BinaryProcessor.GetTypeName(Int32 id)
   at Apache.Ignite.Core.Impl.Binary.Marshaller.GetDescriptor(Boolean userType, Int32 typeId, Boolean requiresType, String typeName, Type knownType)
   at Apache.Ignite.Core.Impl.Binary.BinaryReader.ReadFullObject[T](Int32 pos, Type typeOverride)
   at Apache.Ignite.Core.Impl.Binary.BinaryReader.TryDeserialize[T](T& res, Type typeOverride)
   at Apache.Ignite.Core.Impl.Binary.BinaryReader.Deserialize[T](Type typeOverride)
   at Apache.Ignite.Core.Impl.Binary.BinaryReader.ReadBinaryObject[T](Boolean doDetach)
   at Apache.Ignite.Core.Impl.Binary.BinaryReader.TryDeserialize[T](T& res, Type typeOverride)
   at Apache.Ignite.Core.Impl.Binary.BinaryReader.Deserialize[T](Type typeOverride)
   at Apache.Ignite.Core.Impl.Cache.Query.QueryCursor`2.Read(BinaryReader reader)
   at Apache.Ignite.Core.Impl.Cache.Query.AbstractQueryCursor`1.ConvertGetBatch(IBinaryStream stream)
   at Apache.Ignite.Core.Impl.PlatformTarget.DoInOp[T](Int32 type, Func`2 action)
   at Apache.Ignite.Core.Impl.Cache.Query.AbstractQueryCursor`1.RequestBatch()
   at Apache.Ignite.Core.Impl.Cache.Query.AbstractQueryCursor`1.MoveNext()
   at VSS.Raptor.IgnitePOC.TestApp.Form1.button1_Click(Object sender, EventArgs e) in C:\Dev\VSS.Raptor.IgnitePOC\WindowsFormsApplication1\Form1.cs:line 254

---

OK

---



*From:* Raymond Wilson [mailto:raymond_wil...@trimble.com]
*Sent:* Tuesday, October 17, 2017 12:17 AM
*To:* 'user@ignite.apache.org' 
*Subject:* RE: Error with ScanQuery



Hi Dmitry,



I don’t seem to get any Java exceptions reported in the log.



Below is the inner exception detail from the IDE error dialog:



class org.apache.ignite.binary.BinaryObjectException: Requesting mapping
from grid failed for [platformId=1, typeId=349095370]

        at org.apache.ignite.internal.processors.platform.binary.PlatformBinaryProcessor.processInStreamOutStream(PlatformBinaryProcessor.java:126)
        at org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutStream(PlatformTargetProxyImpl.java:155)

Caused by: java.lang.ClassNotFoundException: 
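
For background, the typeId in this error is a hash of a type name that the
grid cannot map back to a name. One mitigation sometimes used is to
pre-register the binary type name so the marshaller knows the typeId-to-name
mapping up front; a hedged Java sketch of the mechanism, where
"MyCompany.MyType" is purely hypothetical — in a .NET client the analogous
knob is the BinaryConfiguration.TypeConfigurations property of the client's
IgniteConfiguration:

import java.util.Collections;
import org.apache.ignite.binary.BinaryTypeConfiguration;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class BinaryRegistrationSketch {
    public static IgniteConfiguration configure() {
        BinaryConfiguration binCfg = new BinaryConfiguration();

        // "MyCompany.MyType" is hypothetical; it must match the type name
        // whose typeId the grid failed to resolve.
        binCfg.setTypeConfigurations(Collections.singletonList(
            new BinaryTypeConfiguration("MyCompany.MyType")));

        return new IgniteConfiguration().setBinaryConfiguration(binCfg);
    }
}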

Re: Continuous update Data Grid Cache

2017-10-19 Thread blackfield
Resurrecting this old thread.

What is the current recommendation for the OP's use case?

Does Ignite support, or plan to support, aliases for cache names?

I think this would be a really great feature to have, to minimize client
interruption/pain when switching to a different cache.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Clients reconnection problem

2017-10-19 Thread daniels
I have 1 server and 3 client nodes; I use caching and messaging.
When I scale the server nodes down to 0 and then back to 1, some clients
cannot reconnect.

I have exception from client nodes

ERROR  1 --- [er-#2%%] o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : Failed to
send message: null
 java.io.IOException: Failed to get acknowledge for message:
TcpDiscoveryClientMetricsUpdateMessage [super=TcpDiscoveryAbstractMessage
[sndNodeId=null, id=12qwrji-d--fsadas, verifierNodeId=null, topVer=0,
pendingIdx=0, failedNodes=null, isClient=true]]
at
org.apache.ignite.spi.discovery.tcp.ClientImpl$SocketWriter.body(ClientImpl.java:1246)
~[ignite-core-2.0.0.jar:2.0.0]
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
[ignite-core-2.0.0.jar:2.0.0]
 2017-10-19 13:35:09.197  WARN [um-role-service,,,] 1 --- [-#228%sys_grid%]
o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : Failed to reconnect to cluster
(will retry): class o.a.i.IgniteCheckedException: Failed to deserialize
object with given class loader: sun.misc.Launcher$AppClassLoader@5c647e05
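
For context on client-side handling: cache operations on a disconnected
client throw a CacheException whose cause is
IgniteClientDisconnectedException, and its reconnect future can be waited on
before retrying; a minimal sketch (the cache name is a placeholder):

import javax.cache.CacheException;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteClientDisconnectedException;

public final class ReconnectSketch {
    public static void put(Ignite ignite, String key, String val) {
        IgniteCache<String, String> cache = ignite.cache("myCache");

        while (true) {
            try {
                cache.put(key, val);
                return;
            }
            catch (CacheException e) {
                if (e.getCause() instanceof IgniteClientDisconnectedException) {
                    // Block until the client reconnects, then retry.
                    ((IgniteClientDisconnectedException)e.getCause())
                        .reconnectFuture().get();
                }
                else
                    throw e;
            }
        }
    }
}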



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Query performance against table with/out backup

2017-10-19 Thread blackfield
Here, I am trying to ascertain that I set backups == 2 properly, since, as I
mentioned above, I do not see a query performance difference between
backups == 1 and backups == 2.

I want to make sure that I configure my cache properly.

When I set backups == 2 (to have three copies), I notice the following via
visor.

The Affinity Backups value is still equal to 1. Is this a different property
than the number of backups? If it is not, how does one see the number of
backups a cache is configured for?

Invoking "cache -a" to see the detailed cache stats, with backups == 2, the
sum of entries on all nodes under the size column is equal to the number of
rows in the table * 2. It appears this is the case for any backups >= 1?

That is, will only one set of backups be stored off-heap regardless of the
number of backups specified?
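
One way to double-check the setting independently of visor is to read the
effective configuration back from the running cache; a small sketch (the
cache name is a placeholder):

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class BackupCheckSketch {
    public static void checkBackups(Ignite ignite) {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");
        ccfg.setCacheMode(CacheMode.PARTITIONED);
        ccfg.setBackups(2); // 1 primary + 2 backup copies of each partition
        ignite.getOrCreateCache(ccfg);

        // Read the effective configuration back from the running cache.
        int backups = ignite.cache("myCache")
            .getConfiguration(CacheConfiguration.class)
            .getBackups();
        System.out.println("backups = " + backups); // expect 2
    }
}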




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Node and cluster maintenance best practice

2017-10-19 Thread blackfield
Hello,

I searched this topic and can't seem to find it anywhere. Apologies in
advance if this is covered somewhere.

In a multi-node cluster with persistence enabled, what is the best practice
for bringing down a node for maintenance (e.g. applying OS patches, hardware
upgrades, etc.)?

If one just stops an Ignite node, what happens if the node is in the middle
of processing a query/update? Will the client get an exception? Is there a
graceful way to do this and avoid service interruption?

Somewhat related to the above, is there a best practice for bringing down a
whole cluster for maintenance (e.g. due to an issue, an Ignite upgrade,
etc.)?

For example, if we configure our cluster with TcpDiscoveryVmIpFinder and
provide the IP addresses of just a few nodes, should those nodes be shut
down last?

Note: I am aware of GridGain's rolling upgrade capability; this question is
Ignite-specific.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteOutOfMemoryException: Not enough memory allocated

2017-10-19 Thread pradeepchanumolu
Hi Sergey, 

Changing the configuration to the one you proposed did the trick. I am no
longer hitting the OutOfMemoryException error. Thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ComputeGrid API in C++

2017-10-19 Thread asingh
Hi Igor

I am running Red Hat Linux 6, with gcc 4.4.7

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ComputeGrid API in C++

2017-10-19 Thread Igor Sapego
Hi,

Are you running Linux or Windows? What is your compiler?

Best Regards,
Igor

On Thu, Oct 19, 2017 at 5:47 PM, asingh  wrote:

> compute-example.xml (attachment: t1405/compute-example.xml)
>
> I have attached the xml file as well. As you can see from the various
> commented out sections, I have been trying different things to make it
> work.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Question Ignite Memory Consumption / Object size (Apache Ignite .NET)

2017-10-19 Thread Pavel Tupitsyn
Hi Mario,

See https://apacheignite.readme.io/docs/capacity-planning

> Each field I have marked with the attribute [QuerySqlField]  and some
fields are indexed
This is most likely the case.

1) Have you tried loading data without enabling Ignite SQL (e.g. do not
configure CacheConfiguration.QueryEntities)?
2) Can you attach the class? How many fields are there?

Thanks,
Pavel
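
To make suggestion 1) concrete, here is a sketch in Java of the two cache
flavors being compared — one with no SQL metadata and one with queryable,
indexed fields; the field and cache names are hypothetical, and the .NET
QueryEntities configuration is analogous:

import java.util.Arrays;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

public class SqlOverheadSketch {
    public static void build() {
        // Plain cache: entries carry no SQL/index overhead.
        CacheConfiguration<String, Object> plain =
            new CacheConfiguration<>("logs-plain");

        // SQL-enabled cache: every queryable field and index adds per-entry cost.
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("timestamp", "java.lang.Long"); // hypothetical log fields
        fields.put("level", "java.lang.String");
        fields.put("message", "java.lang.String");

        QueryEntity entity = new QueryEntity("java.lang.String", "LogEntry")
            .setFields(fields)
            .setIndexes(Arrays.asList(
                new QueryIndex("timestamp"), new QueryIndex("level")));

        CacheConfiguration<String, Object> sql =
            new CacheConfiguration<>("logs-sql");
        sql.setQueryEntities(Arrays.asList(entity));
    }
}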

On Thu, Oct 19, 2017 at 1:57 PM, Elmers, Mario (M)  wrote:

> Hello,
>
>
>
> I am trying to estimate the RAM needed for my application. I have created
> 3 nodes by starting only Apache.Ignite.exe.
>
>
>
> All is done with C# and Apache.Ignite 2.2
>
>
>
> Then I created a data loader application which loads all the data from
> my logfiles.
>
> The total size of my logfiles is 7.5 GB. When I loaded them into the
> Ignite cluster, altogether it needed more than 32 GB of RAM.
>
> My cache is configured as partitioned with 0 backups, so I had thought
> that the cluster would need not much more than 16 GB of RAM.
>
> Because the files are plain ASCII converted to UTF-8, storing them should
> take about twice the raw amount of data.
>
> The class I created has one field for each field of the log entry. Each
> field is marked with the [QuerySqlField] attribute, and some fields are
> indexed.
>
> The key is of type Guid.
>
>
>
> Can someone explain why the amount is 4x greater than the raw data?
>
>
>
> Thanks & regards
>
>
>
> Mario
>


Re: ComputeGrid API in C++

2017-10-19 Thread asingh
compute-example.xml (attachment)

I have attached the xml file as well. As you can see from the various
commented out sections, I have been trying different things to make it work.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ComputeGrid API in C++

2017-10-19 Thread asingh
Thanks for the links!

So now I am trying to run the C++ example.
I have compiled the C++ version of Ignite and am running it on two servers.
The two Ignite instances recognize each other:

[10:20:49] Ignite node started OK (id=638a866d)
[10:20:49] Topology snapshot [ver=1, servers=1, clients=0, CPUs=40,
heap=0.89GB]
[10:20:55] Topology snapshot [ver=2, servers=2, clients=0, CPUs=80,
heap=1.8GB]


But when I try to run the C++ example, I get this message from the example:

An error occurred: org.apache.ignite.IgniteException : C++ compute job is
not registered on the node (did you compile your program without
-rdynamic?). [jobTypeId=160893980], class org.apache.ignite.IgniteException:
C++ compute job is not registered on the node (did you compile your program
without -rdynamic?). [jobTypeId=160893980]

Below is the message I see from one of the Ignite server processes:
[10:21:32,877][SEVERE][pub-#97%null%][GridJobWorker] Failed to execute job
[jobId=7a5b2053f51-9a565515-196e-4fe0-807d-b48c58b2126a,
ses=GridJobSessionImpl [ses=GridTaskSessionImpl
[taskName=o.a.i.i.processors.platform.compute.PlatformBalancingSingleClosureTask,
dep=GridDeployment [ts=1508422855449, depMode=SHARED,
clsLdr=sun.misc.Launcher$AppClassLoader@5889949a,
clsLdrId=2c322053f51-0e956c0d-17ae-4662-a762-8c52c4b74524, userVer=0,
loc=true,
sampleClsName=o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap,
pendingUndeploy=false, undeployed=false, usage=1],
taskClsName=o.a.i.i.processors.platform.compute.PlatformBalancingSingleClosureTask,
sesId=6a5b2053f51-9a565515-196e-4fe0-807d-b48c58b2126a,
startTime=1508422892858, endTime=9223372036854775807,
taskNodeId=9a565515-196e-4fe0-807d-b48c58b2126a,
clsLdr=sun.misc.Launcher$AppClassLoader@5889949a, closed=false, cpSpi=null,
failSpi=null, loadSpi=null, usage=1, fullSup=false, internal=false,
subjId=9a565515-196e-4fe0-807d-b48c58b2126a, mapFut=IgniteFuture
[orig=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null,
hash=117005517]], execName=null],
jobId=7a5b2053f51-9a565515-196e-4fe0-807d-b48c58b2126a]]
class org.apache.ignite.IgniteException: C++ compute job is not registered
on the node (did you compile your program without -rdynamic?).
[jobTypeId=160893980]

I modified the example to call RegisterComputeFunc:

IGNITE_EXPORTED_CALL void IgniteModuleInit(ignite::IgniteBindingContext& context)
{
    IgniteBinding binding = context.GetBinding();

    // The template argument was stripped by the archive; the real call names
    // the compute functor type, e.g. binding.RegisterComputeFunc<MyFunc>().
    binding.RegisterComputeFunc<MyFunc>();
}

Any help would be much appreciated!
Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Inserting data into Ignite got stuck when memory is full with persistent store enabled.

2017-10-19 Thread Ray
Hi Dmitriy,

Thanks for the reply.

I know the eviction is automatic, but does eviction happen only when the
memory is full?
From the log, I didn't see any "Page evictions started, this will affect
storage performance" message.

So my guess is that memory is not fully used up and no eviction happened.






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Inserting data into Ignite got stuck when memory is full with persistent store enabled.

2017-10-19 Thread Dmitry Pavlov
Hi Ray,

I plan to look again to the dumps.

I can note that eviction of records from memory to disk does not need to be
configured additionally; it works automatically.

Therefore, yes, increasing the volume of RAM for the data region will allow
records to be evicted from memory less often.

Sincerely,
Dmitriy Pavlov
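
For reference, a sketch of enabling the persistent store in the Ignite
2.1/2.2 API, under which hot data stays in memory and pages rotate to disk
automatically as the region fills (no eviction policy needs to be set):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

public class PersistenceSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());

        Ignite ignite = Ignition.start(cfg);
        ignite.active(true); // persistence-enabled clusters start inactive
    }
}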

Wed, Oct 18, 2017 at 18:58, Ray :

> Hi Dmitriy,
>
> Thanks for the answers.
>
> The cluster is stable during the data ingestion, no node joining or leaving
> happened.
> I've been monitoring the cluster's topology and cache entries numbers from
> visor the whole time.
> I'm also confused why rebalancing is triggered, from visor I can see that
> every node has nearly the same amount of entries.
>
> Did you find anything from the thread dump and log files?
>
> As eviction is not triggered, the memory in the default region is not
> used up, right?
> Why would increasing available memory help with upload speed?
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: IgniteOutOfMemoryException: Not enough memory allocated

2017-10-19 Thread Sergey Chugunov
Hello Pradeep,

I managed to reproduce the same behavior; its root cause is in
your configuration.

It is not obvious, but when you add this piece:

    [XML snippet lost in archiving]

then the property *defaultMemoryPolicySize* is ignored and another value is
used: in Ignite 2.2 it is 20% of the total physical memory on the machine
(looking at the 27 GB reported in the stack trace, I think this is exactly
your case).

To fix your configuration, please do the following: get rid of the
*defaultMemoryPolicySize* property and set up the MemoryConfiguration like
this:

    [XML configuration lost in archiving]

Thanks,
Sergey.
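
The XML in the reply above did not survive the archive; for reference, a
sketch of the equivalent programmatic configuration with the shape Sergey
describes (the policy name and the 100 GB size are assumptions based on the
thread):

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.configuration.MemoryPolicyConfiguration;

public class MemoryPolicySketch {
    public static IgniteConfiguration configure() {
        MemoryPolicyConfiguration plc = new MemoryPolicyConfiguration();
        plc.setName("default_mem_plc");            // assumed policy name
        plc.setMaxSize(100L * 1024 * 1024 * 1024); // assumed 100 GB per node

        MemoryConfiguration memCfg = new MemoryConfiguration();
        memCfg.setMemoryPolicies(plc);
        memCfg.setDefaultMemoryPolicyName("default_mem_plc");

        // No defaultMemoryPolicySize here; the explicit policy replaces it.
        return new IgniteConfiguration().setMemoryConfiguration(memCfg);
    }
}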

On Wed, Oct 18, 2017 at 9:45 PM, pradeepchanumolu 
wrote:

> I am hitting the following exception in the Ignite client when trying to
> load data into the Ignite cache. The exception says that the default
> policy size is 27GB, but in the server configuration I have set it to a
> much higher value. Here is the snippet of the server configuration.
>
> [XML configuration mangled in archiving; the surviving fragments show a
> bean of class org.apache.ignite.configuration.MemoryPolicyConfiguration]
>
> Can someone explain why I am hitting this exception? I have 10 servers
> running on separate machines with 100GB of Heap memory allocated to each
> server. The data I am loading is just 350GB and I am hitting this
> exception.
>
>
> Caused by: class
> org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException:
> Failed to locally write to cache (all transaction entries will be
> invalidated, however there was a window when entries for this transaction
> were visible to others): GridNearTxLocal
> [mappings=IgniteTxMappingsSingleImpl [mapping=GridDistributedTxMapping
> [entries=[IgniteTxEntry [key=KeyCacheObjectImpl [part=695,
> val=162247185594843443, hasValBytes=true], cacheId=-1015733162,
> txKey=IgniteTxKey [key=KeyCacheObjectImpl [part=695,
> val=162247185594843443,
> hasValBytes=true], cacheId=-1015733162], val=[op=CREATE,
> val=CacheObjectByteArrayImpl [arrLen=288]], prevVal=[op=NOOP, val=null],
> oldVal=[op=NOOP, val=null], entryProcessorsCol=null, ttl=-1,
> conflictExpireTime=-1, conflictVer=null, explicitVer=null, dhtVer=null,
> filters=[], filtersPassed=false, filtersSet=true, entry=GridDhtCacheEntry
> [rdrs=[], part=695, super=GridDistributedCacheEntry
> [super=GridCacheMapEntry
> [key=KeyCacheObjectImpl [part=695, val=162247185594843443,
> hasValBytes=true], val=null, startVer=1508548585654, ver=GridCacheVersion
> [topVer=119776747, order=1508548585654, nodeOrder=313], hash=-1519359033,
> extras=GridCacheMvccEntryExtras [mvcc=GridCacheMvcc
> [locs=[GridCacheMvccCandidate [nodeId=c63d5132-30ba-483a-
> 8f16-bd450c19d70f,
> ver=GridCacheVersion [topVer=119776747, order=1508548585653,
> nodeOrder=313],
> threadId=687, id=39334849, topVer=AffinityTopologyVersion [topVer=449,
> minorTopVer=0], reentry=null,
> otherNodeId=c63d5132-30ba-483a-8f16-bd450c19d70f,
> otherVer=GridCacheVersion
> [topVer=119776747, order=1508548585653, nodeOrder=313],
> mappedDhtNodes=null,
> mappedNearNodes=null, ownerVer=null, serOrder=null, key=KeyCacheObjectImpl
> [part=695, val=162247185594843443, hasValBytes=true],
> masks=local=1|owner=1|ready=1|reentry=0|used=0|tx=1|single_
> implicit=1|dht_local=1|near_local=0|removed=0|read=0,
> prevVer=null, nextVer=null]], rmts=null]], flags=2]]], prepared=1,
> locked=false, nodeId=c63d5132-30ba-483a-8f16-bd450c19d70f,
> locMapped=false,
> expiryPlc=null, transferExpiryPlc=false, flags=0, partUpdateCntr=0,
> serReadVer=null, xidVer=GridCacheVersion [topVer=119776747,
> order=1508548585653, nodeOrder=313]]], explicitLock=false, dhtVer=null,
> last=false, nearEntries=0, clientFirst=false,
> node=c63d5132-30ba-483a-8f16-bd450c19d70f]], nearLocallyMapped=false,
> colocatedLocallyMapped=true, needCheckBackup=null, hasRemoteLocks=false,
> thread=data-streamer-#494%null%, mappings=IgniteTxMappingsSingleImpl
> [mapping=GridDistributedTxMapping [entries=[IgniteTxEntry
> [key=KeyCacheObjectImpl [part=695, val=162247185594843443,
> hasValBytes=true], cacheId=-1015733162, txKey=IgniteTxKey
> [key=KeyCacheObjectImpl [part=695, val=162247185594843443,
> hasValBytes=true], cacheId=-1015733162], val=[op=CREATE,
> val=CacheObjectByteArrayImpl [arrLen=288]], prevVal=[op=NOOP, val=null],
> oldVal=[op=NOOP, val=null], entryProcessorsCol=null, ttl=-1,
> conflictExpireTime=-1, conflictVer=null, explicitVer=null, dhtVer=null,
> filters=[], filtersPassed=false, filtersSet=true, entry=GridDhtCacheEntry
> [rdrs=[], part=695, 

Question Ignite Memory Consumption / Object size (Apache Ignite .NET)

2017-10-19 Thread Elmers, Mario (M)
Hello,

I am trying to estimate the RAM needed for my application. I have created 3
nodes by starting only Apache.Ignite.exe.

All is done with C# and Apache.Ignite 2.2

Then I created a data loader application which loads all the data from my
logfiles.

The total size of my logfiles is 7.5 GB. When I loaded them into the Ignite
cluster, altogether it needed more than 32 GB of RAM.

My cache is configured as partitioned with 0 backups, so I had thought that
the cluster would need not much more than 16 GB of RAM.
Because the files are plain ASCII converted to UTF-8, storing them should
take about twice the raw amount of data.

The class I created has one field for each field of the log entry. Each
field is marked with the [QuerySqlField] attribute, and some fields are
indexed.
The key is of type Guid.

Can someone explain why the amount is 4x greater than the raw data?

Thanks & regards

Mario


Re: Hadoop Accelerator doesn't work when use SnappyCodec compression

2017-10-19 Thread Evgenii Zhuravlev
Could you also try setting the LD_LIBRARY_PATH variable to point at the
folder with the native libraries?
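
One detail worth noting alongside this: -Djava.library.path (quoted further
down in this thread) must list directories, not .so files, so an entry like
/usr/lib64/libsnappy.so.1 is never consulted by the loader. A small sketch
of how the JVM resolves a native library from that path:

public class NativePathCheck {
    public static void main(String[] args) {
        // The JVM searches each DIRECTORY on java.library.path for the
        // platform file name, libsnappy.so on Linux. Run with e.g.:
        //   -Djava.library.path=/usr/lib64:${HADOOP_HOME}/lib/native
        System.out.println(System.getProperty("java.library.path"));

        // Fails unless some listed directory contains libsnappy.so itself;
        // a libsnappy.so.1 file alone will not match.
        System.loadLibrary("snappy");
        System.out.println("snappy loaded OK");
    }
}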

2017-10-17 17:56 GMT+03:00 C Reid :

> I just tried, and got the same:
> "Unable to load native-hadoop library for your platform... using
> builtin-java classes where applicable"
> "java.lang.UnsatisfiedLinkError:
> org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z"
>
> I also tried adding all the related native libraries under one of the
> folders under the JDK where all the *.so files are located, but Ignite
> just couldn't load them; it's strange.
> --
> *From:* Evgenii Zhuravlev 
> *Sent:* 17 October 2017 21:25
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Hadoop Accelerator doesn't work when use SnappyCodec
> compression
>
> Have you tried removing the ${HADOOP_HOME}/lib/native libraries from the
> path and adding only the /usr/lib64/ folder?
>
> 2017-10-17 12:18 GMT+03:00 C Reid :
>
>> Tried, and did not work.
>>
>> --
>> *From:* Evgenii Zhuravlev 
>> *Sent:* 17 October 2017 16:41
>> *To:* C Reid
>> *Subject:* Re: Hadoop Accelerator doesn't work when use SnappyCodec
>> compression
>>
>> I'd recommend adding /usr/lib64/ to JAVA_LIBRARY_PATH
>>
>> Evgenii
>>
>> 2017-10-17 11:29 GMT+03:00 C Reid :
>>
>>> Yes, IgniteNode runs on the DataNode machine.
>>>
>>> [had...@hadoop-offline033.dx.momo.com ignite]$ echo $HADOOP_HOME
>>> /opt/hadoop-2.8.1-all
>>> [had...@hadoop-offline033.dx.momo.com ignite]$ echo $IGNITE_HOME
>>> /opt/apache-ignite-hadoop-2.2.0-bin
>>>
>>> and in ignite.sh:
>>> JVM_OPTS="${JVM_OPTS} -Djava.library.path=${HADOOP_HOME}/lib/native:/usr/lib64/libsnappy.so.1:${HADOOP_HOME}/lib/native/libhadoop.so"
>>>
>>> But exception is thrown as mentioned.
>>> --
>>> *From:* Evgenii Zhuravlev 
>>> *Sent:* 17 October 2017 15:44
>>>
>>> *To:* user@ignite.apache.org
>>> *Subject:* Re: Hadoop Accelerator doesn't work when use SnappyCodec
>>> compression
>>>
>>> Do you run Ignite on the same machine as hadoop?
>>>
>>> I'd recommend you to check these env variables:
>>> IGNITE_HOME, HADOOP_HOME and JAVA_LIBRARY_PATH. JAVA_LIBRARY_PATH
>>> should contain a path to the folder of libsnappy files.
>>>
>>> Evgenii
>>>
>>> 2017-10-17 8:45 GMT+03:00 C Reid :
>>>
 Hi Evgenii,

 Checked, as shown:

 17/10/17 13:43:12 DEBUG util.NativeCodeLoader: Trying to load the
 custom-built native-hadoop library...
 17/10/17 13:43:12 DEBUG util.NativeCodeLoader: Loaded the native-hadoop
 library
 17/10/17 13:43:12 WARN bzip2.Bzip2Factory: Failed to load/initialize
 native-bzip2 library system-native, will use pure-Java version
 17/10/17 13:43:12 INFO zlib.ZlibFactory: Successfully loaded &
 initialized native-zlib library
 Native library checking:
 hadoop:  true /opt/hadoop-2.8.1-all/lib/native/libhadoop.so
 zlib:true /lib64/libz.so.1
 snappy:  true /usr/lib64/libsnappy.so.1
 lz4: true revision:10301
 bzip2:   false
 openssl: true /usr/lib64/libcrypto.so

 --
 *From:* Evgenii Zhuravlev 
 *Sent:* 17 October 2017 13:34
 *To:* user@ignite.apache.org
 *Subject:* Re: Hadoop Accelerator doesn't work when use SnappyCodec
 compression

 Hi,

 Have you checked "hadoop checknative -a" ? What it shows for snappy?

 Evgenii

 2017-10-17 7:12 GMT+03:00 C Reid :

> Hi all igniters,
>
> I have tried many ways to include the native jar and snappy jar, but the
> exceptions below keep being thrown. (I'm sure HDFS and YARN support
> snappy, since jobs run in the YARN framework with SnappyCodec.) I hope to
> get some help and suggestions from the community.
>
> [NativeCodeLoader] Unable to load native-hadoop library for your
> platform... using builtin-java classes where applicable
>
> and
>
> java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
> at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
> at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
> at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136)
> at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
> at org.apache.hadoop.io.compress.CompressionCodec$Util.createOutputStreamWithCodecPool(CompressionCodec.java:131)
> at org.apache.hadoop.io.compress.SnappyCodec.createOutputStream(SnappyCodec.java:101)
> at org.apache.hadoop.mapreduce.li