Re: Input data is no significant change in multi-threading

2017-04-19 Thread woo charles
When I call addData() on the streamer, the data will be sent and buffered on a server
node. Is that correct?
If so, will the data be buffered on a random server node, or only on the one
the streamer is directly connected to?

2017-04-19 18:33 GMT+08:00 Andrey Mashenkov :

> It may have an effect if you prepare data for the streamer (call addData) slowly
> and it is possible to utilize more resources for that. Of course, the remote nodes
> should be able to bear the pressure of the data.
> Performance can increase, but usually only slightly, as the network will be the
> bottleneck.
>
>
> On Wed, Apr 19, 2017 at 12:29 PM, woo charles 
> wrote:
>
>> Does that mean the performance of data input will not be affected if I use 2
>> IgniteDataStreamers (2 client programs) to input data, as they use the same
>> queue on the remote nodes?
>>
>> 2017-04-19 10:02 GMT+08:00 Andrey Mashenkov :
>>
>>> Hi Woo,
>>>
>>> IgniteDataStreamer uses a per-node buffer to make bulk cache updates, which
>>> gives much better throughput than single updates.
>>> Also, IgniteDataStreamer sends jobs to the remote nodes in order to utilize
>>> multiple threads on the remote nodes.
>>>
>>> In a multi-node grid, IgniteDataStreamer usually shows better results than
>>> single updates from multiple threads.
>>>
>>>
>>>
>>> On Wed, Apr 19, 2017 at 4:30 AM, woo charles 
>>> wrote:
>>>
 When I try to input data (80 tables, each 1 records) to a cluster
 with 3 server nodes (each 2 GB), the time changes only slightly when multiple
 threads are used
 (i.e. at most a decrease from 8s to 6.5s when using IgniteCache).

 Is it normal?

 Also, I found that multiple threads do not affect the data input speed with
 IgniteDataStreamer.

 Is it true?






>>>
>>>
>>> --
>>> Best regards,
>>> Andrey V. Mashenkov
>>>
>>
>>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


FailureDetectionTimeOut not working

2017-04-19 Thread smriti.aggarwal
Hi Team,

I want to configure failureDetectionTimeout so that I can customize how long it takes
for the clients to get disconnected in case of a server failure.

Just for testing purposes, I brought up one server and one client in a cluster,
and set the property below.

Here's my config:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
           http://www.springframework.org/schema/util
           http://www.springframework.org/schema/util/spring-util-2.0.xsd">

    <!-- IgniteConfiguration bean stripped by the mailing-list archive; it set
         failureDetectionTimeout and a discovery SPI whose IP finder listed
         127.0.0.1:47500 -->

</beans>
What's happening is that when I bring down my server, the client gets
disconnected before the failureDetectionTimeout has passed.
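For reference, a minimal programmatic sketch of setting this property (the client-mode
flag and the 20000 ms value are illustrative assumptions, not the values from the
configuration above):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientStartupSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true);                // assumed: this node joins as a client
        cfg.setFailureDetectionTimeout(20_000); // example value in milliseconds

        // Typically the same value is configured on both server and client nodes.
        Ignite ignite = Ignition.start(cfg);
        System.out.println("Started node: " + ignite.name());
    }
}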



I brought down the server at 8:42:50, and the client got disconnected within 10
seconds. Here are the logs (from the client):



Apr 20, 2017 8:52:52 AM org.apache.ignite.logger.java.JavaLogger warning

WARNING: Connect timed out (consider increasing 'failureDetectionTimeout' 
configuration property) [addr=/0:0:0:0:0:0:0:1:47100, 
failureDetectionTimeout=2]

Apr 20, 2017 8:52:53 AM org.apache.ignite.logger.java.JavaLogger warning

WARNING: Connect timed out (consider increasing 'failureDetectionTimeout' 
configuration property) [addr=/127.0.0.1:47100, failureDetectionTimeout=2]

Apr 20, 2017 8:52:54 AM org.apache.ignite.logger.java.JavaLogger warning

WARNING: Connect timed out (consider increasing 'failureDetectionTimeout' 
configuration property) 
[addr=NYKDWMVDI012486.INTRANET.BARCAPINT.com/10.136.138.135:47100, 
failureDetectionTimeout=2]

Apr 20, 2017 8:52:54 AM org.apache.ignite.logger.java.JavaLogger warning

WARNING: Failed to connect to a remote node (make sure that destination node is 
alive and operating system firewall is disabled on local and remote hosts) 
[addrs=[/0:0:0:0:0:0:0:1:47100, /127.0.0.1:47100, 
NYKDWMVDI012486.INTRANET.BARCAPINT.com/10.136.138.135:47100]]

Apr 20, 2017 8:52:58 AM org.apache.ignite.logger.java.JavaLogger error

SEVERE: Failed to reconnect to cluster (consider increasing 'networkTimeout' 
configuration property) [networkTimeout=5000]

Apr 20, 2017 8:53:03 AM org.apache.ignite.logger.java.JavaLogger info

INFO:



>>> +-+

>>> Ignite ver. 1.7.3#20161110-sha1:10582ae13b52d679a5827b409328a452ead2f1aa 
>>> stopped OK

>>> +-+

>>> Grid uptime: 00:00:21:509





javax.cache.CacheException: class 
org.apache.ignite.IgniteClientDisconnectedException: Failed to ping node, 
client node disconnected.

 at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1507)

 at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.cacheException(IgniteCacheProxy.java:2138)

 at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1338)

 at 
org.gridgain.examples.Smriti.CacheLocalstore.CachePut.addEmpToCache(CachePut.java:68)

 at 
org.gridgain.examples.Smriti.CacheLocalstore.CachePut.main(CachePut.java:34)

 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

 at java.lang.reflect.Method.invoke(Method.java:601)

 at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)

Caused by: class org.apache.ignite.IgniteClientDisconnectedException: Failed to 
ping node, client node disconnected.

 at 
org.apache.ignite.internal.util.IgniteUtils$15.apply(IgniteUtils.java:841)

 at 
org.apache.ignite.internal.util.IgniteUtils$15.apply(IgniteUtils.java:839)

 ... 10 more

Caused by: class 
org.apache.ignite.internal.IgniteClientDisconnectedCheckedException: Failed to 
ping node, client node disconnected.

 at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.pingNode(GridDiscoveryManager.java:1423)

at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:846)

at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(Gri

How to do write-behind caching?

2017-04-19 Thread Ricky Nauvaldy
Hello, can you provide a tutorial for *write-behind* caching? (similar to
the one on this page https://dzone.com/articles/apache-ignite-how-to-
read-data-from-persistent-sto but for *write-behind*). My configuration is
the same as in the example provided on that page (MySQL database),
with the transaction process based on CacheTransactionExample.java
provided in the Ignite examples, but this time I'm trying to learn how to do
*write-behind*. I've already succeeded in doing *write-through*, but when I
add the *write-behind* configuration, it just won't write. What behavior
should I expect when I use write-behind? Will it wait for 5 seconds before
it writes (e.g. using setWriteBehindFlushFrequency(5000) and
setWriteBehindFlushSize(0))?
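A minimal, self-contained sketch of the write-behind settings in question (the cache
name and the ConsoleStore class below are placeholders, not the MySQL store from the
linked article):

import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindSketch {
    /** Stand-in for the MySQL-backed CacheStore used in the linked example. */
    public static class ConsoleStore extends CacheStoreAdapter<Long, String> {
        @Override public String load(Long key) { return null; }
        @Override public void write(Cache.Entry<? extends Long, ? extends String> e) {
            System.out.println("flushed to store: " + e.getKey() + "=" + e.getValue());
        }
        @Override public void delete(Object key) {
            System.out.println("deleted from store: " + key);
        }
    }

    public static void main(String[] args) throws Exception {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("wbCache");
        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(ConsoleStore.class));
        ccfg.setWriteThrough(true);              // write-behind is layered on top of write-through
        ccfg.setWriteBehindEnabled(true);
        ccfg.setWriteBehindFlushFrequency(5000); // time-based flush roughly every 5 s
        ccfg.setWriteBehindFlushSize(0);         // 0 disables size-based flushing, so only
                                                 // the frequency above triggers flushes

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(ccfg);
            cache.put(1L, "a");   // the store write should show up roughly 5 s later
            Thread.sleep(7_000);
        }
    }
}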

Thanks for any help that you can provide.


Re: Index

2017-04-19 Thread javastuff....@gmail.com
I tried the REST API and the H2 debug console. I do not see the index, and I am not
sure what is wrong.

I am creating the cache programmatically and cannot use annotations. Below is a
sample based on CacheQueryExample.java. I removed the annotations from Person.java
to create Person2.java.

 
CacheConfiguration<Long, Person2> personCacheCfg = new
    CacheConfiguration<>(PERSON_CACHE);
personCacheCfg.setCacheMode(CacheMode.PARTITIONED);
Collection<String> indx_fields = new ArrayList<>();
indx_fields.add("salary");
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("id", Long.class.getName());
fields.put("salary", Double.class.getName());
QueryIndex idx = new QueryIndex();
idx.setName("SALARY_IDX");
idx.setFieldNames(indx_fields, true);
Collection<QueryIndex> idxCollection = new ArrayList<>();
idxCollection.add(idx);
QueryEntity ent = new QueryEntity();
ent.setKeyType(Long.class.getName());
ent.setValueType(Person2.class.getName());
Collection<QueryEntity> entCollection = new ArrayList<>();
ent.setIndexes(idxCollection);
ent.setFields(fields);
entCollection.add(ent);
personCacheCfg.setQueryEntities(entCollection);
IgniteCache<Long, Person2> personCache =
    ignite.getOrCreateCache(personCacheCfg);

With this I was expecting to see an index named SALARY_IDX on the SALARY column.
Can you help identify the issue here?

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Index-tp11969p12095.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-04-19 Thread bintisepaha
This is positive news Andrey. Thanks a lot.

Please keep us posted about reproducing this. We are definitely not using
node filters... and we suspect topology changes to be causing the issues, but
irrespective of that, we are not able to reproduce it. We also do not see
deadlock issues reported anywhere. The last time we got a key lock, last
week, we did not see the NPE but only a topology change for a client.

Thanks,
Binti




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p12094.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-04-19 Thread Andrey Gura
Hi,

I've reproduced the problem and have exactly the same stack traces for
NullPointerException and IgniteTxTimeoutCheckedException that you
mentioned earlier.

But my case looks too complex. I started three nodes, with cache1 on
nodes N1, N2 and N3, and cache2 on nodes N1 and N2. After that, a deadlock
was created between nodes N1 and N2 in which both caches are
participants (without a transaction timeout). And finally I tried to
update a key (that participates in the deadlock) for cache2 from N3 in a
transaction with a timeout. As a result, deadlock detection receives a
message from nodes N1 and/or N2 that contains information about cache1,
which isn't started on N3. This leads to the NPE and the timeout.

I think a similar situation can happen in the case of a deadlock between
two caches on server nodes and an attempt to update a key from a client
node. I will hopefully try this idea tomorrow.

This problem should exist on any Ignite version starting from 1.7.

On Mon, Apr 17, 2017 at 11:46 PM, bintisepaha  wrote:
> Looking further, I see this in the failed exception stack trace. The topology
> did change but it is only a client that joined, do you think that has any
> correlation to the key being locked?
>
> [INFO ] 2017-04-13 14:15:44.360 [pub-#44%DataGridServer-Production%]
> OrderHolderSaveRunnable - Updating PositionKey: PositionId [fundAbbrev=BVI,
> clearBrokerId=12718, insIid=679675, strategy=AFI, traderId=6531,
> valueDate=19000101]
> [14:15:46] Topology snapshot [ver=1980, servers=16, clients=82, CPUs=273,
> heap=850.0GB]
> [ERROR] 2017-04-13 14:15:54.348 [pub-#44%DataGridServer-Production%]
> OrderHolderSaveRunnable - Received Exception - printing on Entry
> javax.cache.CacheException: class
> org.apache.ignite.transactions.TransactionTimeoutException: Failed to
> acquire lock within provided timeout for transaction [timeout=1,
> tx=GridNearTxLocal [mappings=IgniteTxMappingsImpl [],
> nearLocallyMapped=false, colocatedLocallyMapped=false, needCheckBackup=null,
> hasRemoteLocks=true, thread=pub-#44%DataGridServer-Production%,
> mappings=IgniteTxMappingsImpl [], super=GridDhtTxLocalAdapter
> [nearOnOriginatingNode=false, nearNodes=[], dhtNodes=[], explicitLock=false,
> super=IgniteTxLocalAdapter [completedBase=null, sndTransformedVals=false,
> depEnabled=false, txState=IgniteTxStateImpl [activeCacheIds=GridLongList
> [idx=2, arr=[2062286236,812449097]], txMap={IgniteTxKey
> [key=KeyCacheObjectImpl [val=OrderKey [traderId=6531, orderId=12382604],
> hasValBytes=true], cacheId=2062286236]=IgniteTxEntry [key=KeyCacheObjectImpl
> [val=OrderKey [traderId=6531, orderId=12382604], hasValBytes=true],
> cacheId=2062286236, partId=-1, txKey=IgniteTxKey [key=KeyCacheObjectImpl
> [val=OrderKey [traderId=6531, orderId=12382604], hasValBytes=true],
> cacheId=2062286236], val=[op=READ, val=CacheObjectImpl [val=TradeOrder
> [orderKey=OrderKey [traderId=6531, orderId=12382604], insIid=679675,
> clearBrokerId=12718, strategy=AFI, time=2017-04-13 13:30:00.0,
> settlement=2017-04-19 00:00:00.0, quantity=-6800.0, insType=STK, version=1,
> userId=3081, created=2017-04-13 13:29:47.831, status=open, allocFund=STD,
> isAlloc=Y, clearAgent=MSCOEPB, execBroker=DBKSE, initiate=L,
> notes=ClOrdId[20170413-Y47D580RHH99], allocRule=H2L, comType=T, comTurn=N,
> comImplied=N, trdCur=USD, trdFreeze=N, kindFlag=, lastRepo=, exCpn=,
> generatedTime=Thu Apr 13 14:15:02 EDT 2017, batchMatchFlag=N,
> commission=0.003, trdRate=1.0, gross=, delivInstruct=null, startflys=3,
> parentId=null, linkId=null, repo=N, repoRate=null, repoCalendar=null,
> repoStartDate=null, repoEndDate=null, xiid=null, quantityCurr=null,
> masterOrderId=null, unfilledQty=800.0, avgFillPrice=18.0021324, psRuleId=6,
> origDate=2017-04-13 00:00:00.0, postingId=2, executingUserId=5647,
> repoCloseDate=1900-01-01 00:00:00.0, repoPrice=0.0, directFxFlag=N, tax=0.0,
> fixStatusId=58, txnTypeId=0, yield=null, valueDate=null,
> interestOnlyRepoFlag=null, orderGroupId=0, fundingDate=2017-04-19
> 00:00:00.0, execBrokerId=12038, branchBrokerId=7511, fillOrigUserId=3081,
> initialMargin=null, cmmsnChgUserId=0, cmmsnChgReasonId=0, fixingSourceId=0,
> orderDesignationId=0, riskRewardId=0, placementTime=2017-04-13 13:29:47.657,
> initialInvestment=0.0, equityFxBrokerTypeId=0, execBranchBrokerId=0,
> createUserId=3081, targetAllocFlag=N, pvDate=null, pvFactor=null, pvId=0,
> executionTypeId=0, borrowScheduleId=0, borrowScheduleTypeId=0,
> marketPrice=null, interestAccrualDate=null, sourceAppId=103,
> initiatingUserId=6531, isDiscretionary=Y, traderBsssc=S, clearingBsssc=S,
> executingBsssc=S, shortsellBanApproverUserId=null, intendedQuantity=-7600.0,
> lastUpdated=2017-04-13 14:15:02.147, traderStrategyId=24686,
> businessDate=2017-04-13 00:00:00.0, firstExecutionTime=2017-04-13
> 13:29:47.657, doNotBulkFlag=null, trimDb=trim_grn, trades=[Trade
> [tradeKey=TradeKey [tradeId=263603637, tradeId64=789971421, traderId=6531],
> orderId=12382604, ftbId=2023850, quantity=-985.0, fundAbbrev=TRCP,
> subfundAbbrev=TRCP_EDAB, date=Thu 

Re: Slow on 1st time query "SQL Join"

2017-04-19 Thread afedotov
Hi,
I believe it will be kept unless the cache is explicitly destroyed.

Kind regards,
Alex.

On Tue, Apr 18, 2017 at 11:52 AM, woo charles [via Apache Ignite Users] <
ml-node+s70518n12021...@n6.nabble.com> wrote:

> Hi,
>
> When calling the function createMissingCaches(), it will create a cache on the
> client node.
> Does this cache exist permanently on the client node, or will it be dropped after some time?
>
> Thanks & Best regards,
> Charles
>
>
> 2017-04-13 21:09 GMT+08:00 afedotov <[hidden email]
> >:
>
>> Created a ticket for the issue IGNITE-4957
>> 
>>
>> Kind regards,
>> Alex.
>>
>> On Thu, Apr 13, 2017 at 3:18 PM, Sergi Vladykin [via Apache Ignite Users]
>> <[hidden email] >
>> wrote:
>>
>>> Alex, I think we have to create an issue in Jira. This behavior looks
>>> suboptimal to me.
>>>
>>> Sergi
>>>
>>> 2017-04-13 15:12 GMT+03:00 afedotov <[hidden email]
>>> >:
>>>
 Charles,

 In your case, you have many caches registered on the server nodes (debugging
 shows me about 80) which are to be created on
 the client. These caches don't participate in common activities;
 instead they are used to prepare the execution plan.
 As of now, a separate request is sent to the remote nodes for each
 unregistered cache.
 You can use client mode, but you need to register all the missing
 caches before running queries.
 After loading the missing caches explicitly, the time was reduced to 50-60 ms in
 client mode.

 I've used a trick to avoid declaring all the caches in the configuration
 file or calling getOrCreateCache for each of them.
 To give it a try, just add the line below before the query execution
 logic in your example.
 IgniteStorage.getInstance().getIgniteCache(Quote.class.getName(), 0)
     .unwrap(IgniteCacheProxy.class).context().kernalContext()
     .cache().createMissingCaches();

 Kind regards,
 Alex.

 On Thu, Apr 13, 2017 at 10:08 AM, woo charles [via Apache Ignite Users]
 <[hidden email] >
 wrote:

> "If you select some data from the joined table it leads to all dynamic
> cache creation requests being sent" <-- does that mean the client node will
> copy the selected table locally?
>
> If I set client mode to false, it will become a server node and
> start participating in caching, compute execution, stream processing, etc.
> Then it will affect the performance of my client program. How can I
> prevent this?
>
> 2017-04-13 14:24 GMT+08:00 afedotov <[hidden email]
> >:
>
>> Hi,
>>
>> Sergi, when a join query is called from a client,
>> it leads to createMissingCaches being called, which makes a remote
>> request for dynamic cache creation for each registered but not enabled
>> cache, and since there is a cache for each entity there are many requests
>> to
>> server nodes.
>>
>> Charles,
>> If you select some data from the joined table it leads to all dynamic
>> cache creation requests being sent therefore allowing to skip these on 
>> the
>> next query runs.
>> To disable client mode in your example just pass false to
>> Ignition.setClientMode(true)
>>
>> Kind regards,
>> Alex
>>
>>
>>
>> 13 апр. 2017 г. 5:22 AM пользователь "woo charles [via Apache Ignite
>> Users]" <[hidden email]
>> > написал:
>>
>> I found that this time can be reduced to a value below 100ms if I
>> already selected some data from join query related table.
>> For example,
>> if I run 2 query "select * from Quote where stock_id = xxx" & "Select
>> * from StockInfo where stock_id = xxx"  first and then run the join 
>> query,
>> the time for 1st join query will become similar to other(around 10
>> -20 ms).
>> Why will it happen?
>>
>> Also, how do I run queries from a server node? I tried
>> "ignite.compute().run()" but it doesn't work.
>>
>>
>> thanks& best regards,
>> Charles
>>
>> 2017-04-13 0:48 GMT+08:00 Sergi Vladykin <[hidden email]
>> >:
>>
>>> Alex,
>>>
>>> Why do we have such a huge difference between client nodes and
>>> server nodes? Looks like we should fix it if possible. Even 7 seconds 
>>> looks
>>> too much for me.
>>>
>>> Sergi
>>>
>>> 2017-04-12 18:11 GMT+03:00 afedotov <[hidden email]
>>> >:
>>>
 Hi Charles,

 You are running the query from a client node what implies
 additional network rou

Re: stdout - Message queue limit is set to 0, potential OOMEs

2017-04-19 Thread javastuff....@gmail.com
Thank you. Last question -
Why is it showing up with 1.9 and not earlier?

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/stdout-Message-queue-limit-is-set-to-0-potential-OOMEs-tp12048p12091.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: WebConsole local install (v1.9) help needed

2017-04-19 Thread Kenan Dalley
It looks like it's working now.  I think it was a proxy issue.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/WebConsole-local-install-v1-9-help-needed-tp12086p12090.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: BinaryObjectImpl.deserializeValue with specific ClassLoader

2017-04-19 Thread npordash
Thanks, Dmitry!

This is using Ignite 1.9. The stack trace is pretty straightforward:



Putting things into caches works just fine (e.g. instances of that Namespace
class), but pulling them out does not, since the cache only takes
Ignite's classloader into account. For the time being I've had to resort to
a hack where read operations use a binary view of the cache, and then I have
to cast entries to BinaryObjectImpl or BinaryObjectOffheapImpl in order to
access (or copy) the backing array and then delegate to
Marshaller.unmarshal(byte[], ClassLoader) so the class can be found.

-Nick



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/BinaryObjectImpl-deserializeValue-with-specific-ClassLoader-tp12055p12087.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


WebConsole local install (v1.9) help needed

2017-04-19 Thread Kenan Dalley
I'm having trouble with the WebConsole on my local Windows environment.  I've
done "npm install --no-optional" for both the "backend" and "frontend" and
"npm start" for the "backend".  However, whenever I try to do "npm run dev"
for the "frontend", it doesn't look like it finishes.  It gets to the
following location and then stops:

...
Child html-webpack-plugin for "index.html":
 AssetSize  Chunks   Chunk Names
index.html  577 kB   0
Child worker:
 Asset Size  Chunks Chunk
Names
d94d222ee707c638a9f6.worker.js  1.46 MB   0  [emitted]  main
d94d222ee707c638a9f6.worker.js.map   1.8 MB   0  [emitted]  main
webpack: Compiled successfully.


I've let it sit there for 5-10 minutes without any progress whatsoever.
Neither "http://localhost:9000" nor "http://127.0.0.1:9000" works because the
webserver doesn't look like it has been started.

Now what?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/WebConsole-local-install-v1-9-help-needed-tp12086.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: what does remaps mean in DataStreamerImpl.java

2017-04-19 Thread rishi007bansod
Hi, below is our architecture:

1. Ignite receives data via the Kafka streamer.
2. The tuple extractor is implemented in the Ignite code.
Everything works fine up to this step.
3. We stop Kafka. No error yet.
4. We kill 2 instances (out of n instances) of Ignite.
5. Kafka consumer remapping also happens without any issue.
6. Cache rebalancing also seems to complete (per the log).
7. After 2 minutes, Kafka is started again.
And then we get the error below:
class org.apache.ignite.IgniteCheckedException: Failed to finish operation
(too many remaps): 32

Even after cache rebalancing and consumer remapping seem to have completed, the error
comes only once Ignite receives new data through Kafka.
So we are not able to understand what remapping Ignite is doing.

One point just for info: we are using the data streamer to add data into one cache
and an Ignite visitor to process the data.
All caches are partitioned and have backups set to 1.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/what-does-remaps-mean-in-DataStreamerImpl-java-tp12033p12085.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Disable WriteBehind

2017-04-19 Thread waterg
Hi, this is a very short test run on a small dataset on a development laptop,
and this is the expected behavior. Is this log helpful, or were you looking for
something else?

On Wed, Apr 19, 2017 at 4:55 AM, Nikolai Tikhonov-2 [via Apache Ignite
Users]  wrote:

> Hi,
> I see again in the logs that node(app) lives only 5 seconds:
>
> *[15:36:04,897][INFO][main][GridDiscoveryManager] Topology snapshot
> [ver=1, servers=1, clients=0, CPUs=4, heap=0.5GB]*
> *[15:36:05,729][INFO][main][GridDeploymentLocalStore] Class locally
> deployed: class
> org.apache.ignite.configuration.CacheConfiguration$IgniteAllNodesPredicate*
> *[15:36:09,213][INFO][exchange-worker-#25%null%][GridCacheProcessor]
> Stopped cache: stagingCache*
>
> Is it expected behaviour?
>
> On Wed, Apr 19, 2017 at 1:43 AM, waterg <[hidden email]
> > wrote:
>
>> Hi Nikolai, looks like that was a wrong log file.
>> I have reran the app and here's a new log file.
>> Appreciate your help.
>>
>> Jessie
>>
>> On Mon, Apr 17, 2017 at 3:01 AM, Nikolai Tikhonov-2 [via Apache Ignite
>> Users] <[hidden email]
>> > wrote:
>>
>>> Hi,
>>>
>>> I see in the logs that the node lives for 10 seconds and you have configured only
>>> the system caches. Did you run your test on this setup?
>>>
>>> On Sat, Apr 15, 2017 at 2:42 AM, waterg <[hidden email]
>>> > wrote:
>>>
 Hi, please see the log attached.

 On Fri, Apr 14, 2017 at 9:07 AM, Nikolai Tikhonov-2 [via Apache Ignite
 Users] <[hidden email]
 > wrote:

> Yes, Ignite.log.
>
> On Fri, Apr 14, 2017 at 6:54 PM, waterg <[hidden email]
> > wrote:
>
>> Hi Nikolai, No topology changes. We have 2 server nodes and a client
>> node.
>>
>> What kind of log you refer to? Ignite log?
>>
>> Jessie
>>
>> On Fri, Apr 14, 2017 at 6:52 AM, Nikolai Tikhonov-2 [via Apache
>> Ignite Users] <[hidden email]
>> > wrote:
>>
>>> Hello!
>>>
>>> Was topology stable? Could you share full logs for this case?
>>>
>>> On Thu, Apr 13, 2017 at 8:36 PM, waterg <[hidden email]
>>> > wrote:
>>>
 Hello Nikolai,

 Thank you for your reply.
 I'm working on a simplified Maven project to reproduce this.
 Btw, with the configuration below, we did observe batch updates in the
 persistent store.

 
 
 
 
 
 
 

 However as soon as we add the cache.remove() in,
 we start to see the behavior changed to a lot of batch operations
 with a few records.
 Are there any reasons for this? Does cache.remove trigger flushing
 out to persistent layer?
 Thank you for your help!

 [1492104394638]---Datebase BATCH upsert:1 entries
 successful 
 [1492104394772]---Datebase BATCH upsert:3 entries
 successful 
 [1492104395042]---Datebase BATCH upsert:1 entries
 successful 
 [1492104395170]---Datebase BATCH DELETE:1 entries
 successful 
 [1492104395452]---Datebase BATCH upsert:1 entries
 successful 
 [1492104395587]---Datebase BATCH upsert:1 entries
 successful 


 On Tue, Apr 11, 2017 at 4:39 AM, Nikolai Tikhonov-2 [via Apache
 Ignite Users] <[hidden email]
 > wrote:

> > If I disable writeThrough, would a put operation on the cache
> still succeed?
> Yes, sure. If writeThrough is enabled, then entries will be
> propagated to the underlying store too.
>
> > If so, the get operation would return the same result as if the
> writeThrough were enabled, correct?
> You're right. But if you configure an expiry or eviction policy, then
> a get operation might miss.
>
> Could you share simple maven project which can reproduce the
> behaviour?
>
> On Mon, Apr 10, 2017 at 9:59 PM, waterg <[hidden email]
> > wrote:
>
>> Thank you for reply Nikolai. I have a more complex nested if-else
>> logic than the condition 1 and condition 2 here. They are based on 
>> the
>> results of SQLQueries from cache only. We don't use any conditions 
>> based on
>> querying persistent store. These two are examples of different put 
>> and
>>>

Re: what does remaps mean in DataStreamerImpl.java

2017-04-19 Thread rishi007bansod
Hi, the backup count in our case is 1.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/what-does-remaps-mean-in-DataStreamerImpl-java-tp12033p12083.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite errors in log

2017-04-19 Thread Rishi Yagnik
Hello Andrew,

Thanks.. will dig further on it and will keep you posted.

Thank you for all your help,
Rishi

On Wed, Apr 19, 2017 at 9:06 AM, Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:

> Hi Rishi,
>
> This means your node found another node with a different
> 'java.net.preferIPv4Stack'
> value.
> Please check whether there is another node running somewhere.
>
> On Wed, Apr 19, 2017 at 4:58 AM, Rishi Yagnik 
> wrote:
>
>> Hello Andrew,
>>
>> I have applied the IPv4 setting on both Ignite instances, and now I am seeing the
>> following warning in the logs -
>>
>> [20:51:17,791][WARN ][disco-event-worker-#52%WebGrid%][GridDiscoveryManager]
>> Local node's value of 'java.net.preferIPv4Stack' system property differs
>> from remote node's (all nodes in topology should have identical value)
>> [locPreferIpV4=true, rmtPreferIpV4=null, locId8=1404bc51, rmtId8=4924c585,
>> rmtAddrs=[Remote_host_name/0:0:0:0:0:0:0:1%lo, /127.0.0.1,
>> /REMOTE_Host_IP]]
>> [20:51:35,986][WARN ][disco-event-worker-#52%WebGrid%][GridDiscoveryManager]
>> Local node's value of 'java.net.preferIPv4Stack' system property differs
>> from remote node's (all nodes in topology should have identical value)
>> [locPreferIpV4=true, rmtPreferIpV4=null, locId8=1404bc51, rmtId8=b8f3fa13,
>> rmtAddrs=[remote_host_name/0:0:0:0:0:0:0:1%lo, /127.0.0.1,
>> /REMOTE_Host_IP]]
>>
>> So now I am wondering what this error log suggests?
>>
>> Thanks,
>> Rishi
>>
>> On Tue, Apr 18, 2017 at 4:36 PM, ignite_user2016 
>> wrote:
>>
>>> Thanks, Andrew.
>>>
>>> I enabled the ipv4 stack option and see if that resolves our problem.
>>>
>>> I will keep you posted, thank you for all your help.
>>>
>>>
>>> On Tue, Apr 18, 2017 at 1:41 PM, Andrew Mashenkov [via Apache Ignite
>>> Users] <[hidden email]
>>> > wrote:
>>>
 Hi Rishi,

 Would you please check if both nodes have the same option set, either
 java.net.preferIPv4Stack=true or java.net.preferIPv6Stack=true?
 "remote_host/remote_host:47102" looks weird. Would you also check if
 all dns names are correctly resolved on both nodes?

 On Tue, Apr 18, 2017 at 9:29 PM, Rishi Yagnik <[hidden email]
 > wrote:

> I checked the port and all ports are open between 2 ignite instances.
>
> There is something more going on, will provide the log soon.
>
> On Tue, Apr 18, 2017 at 12:00 PM, ignite_user2016 <[hidden email]
> > wrote:
>
>> we use Ignite 1.7, yes application is communicating between hosts and
>> client application.
>>
>> I will provide the logs soon.
>>
>> Thanks,
>>
>> On Tue, Apr 18, 2017 at 11:09 AM, Evgenii Zhuravlev [via Apache
>> Ignite Users] <[hidden email]
>> > wrote:
>>
>>> Which version of ignite do you use?
>>>
>>> It looks like nodes discovered each other via 47500+ ports, but they
>>> can't communicate through 47100+ ports. Did you open these ports? Are
>>> you
>>> sure that all nodes were used in your application?
>>>
>>> Also, it would be helpful if you provided full logs, it's nearly
>>> impossible to understand what really happens on the nodes without them.
>>>
>>> 2017-04-18 18:39 GMT+03:00 ignite_user2016 <[hidden email]
>>> >:
>>>
 Yes, I see these logs on both hosts.

 The log entries I have provided, here is my config file -

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/util
           http://www.springframework.org/schema/util/spring-util.xsd">

    <!-- IgniteConfiguration bean stripped by the mailing-list archive; among its
         properties it referenced config/log4j.xml -->

</beans>

Re: what is the right way to run tests against Cassandra module?

2017-04-19 Thread Igor Rudyak
Hi Alexei,

Tests for Cassandra module should be executed just using standard maven
approach: mvn clean test

Igor

On Apr 19, 2017 6:40 AM, "Alexei Kaigorodov" 
wrote:

> the document DEVNOTES.txt says to run maven with argument
> -Dtest=%TEST_PATTERN%
> but cassandra test are placed in package org.apache.ignite.tests, where
> many other tests reside.
> When I run maven test from modules/cassandra directory, only 2 tests
> started.
> When I run tests from Intellij Idea, it says 4 test started and 7 tests
> was not started, and I cannot figure out why.
>
> Please advise.
>
> thanks,
> Alexei
>


Re: BinaryObjectImpl.deserializeValue with specific ClassLoader

2017-04-19 Thread dkarachentsev
Hi Nick,

Please attach stack trace, and what version of Ignite do you use?

Thanks!
-Dmitry.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/BinaryObjectImpl-deserializeValue-with-specific-ClassLoader-tp12055p12080.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: what does remaps mean in DataStreamerImpl.java

2017-04-19 Thread vdpyatkov
Although, I'm not right (it should be remapped).
Could you provide a reproducer?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/what-does-remaps-mean-in-DataStreamerImpl-java-tp12033p12079.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: what does remaps mean in DataStreamerImpl.java

2017-04-19 Thread vdpyatkov
Hi,

How many backups (o.a.i.configuration.CacheConfiguration#setBackups) do you
use?
If your cluster does not contain backups, a batch will not be remapped until
rebalancing has finished.
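For reference, backups are configured per cache; a minimal sketch (the cache name is an
illustrative assumption):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class BackupSketch {
    public static void main(String[] args) {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("events");
        ccfg.setCacheMode(CacheMode.PARTITIONED);
        ccfg.setBackups(1); // keep one backup copy of every partition on another node

        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache(ccfg);
        }
    }
}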



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/what-does-remaps-mean-in-DataStreamerImpl-java-tp12033p12078.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite errors in log

2017-04-19 Thread Andrey Mashenkov
Hi Rishi,

This means your node found another node with a different
'java.net.preferIPv4Stack'
value.
Please check whether there is another node running somewhere.

On Wed, Apr 19, 2017 at 4:58 AM, Rishi Yagnik  wrote:

> Hello Andrew,
>
> I have applied the IPv4 setting on both Ignite instances, and now I am seeing the
> following warning in the logs -
>
> [20:51:17,791][WARN ][disco-event-worker-#52%WebGrid%][GridDiscoveryManager]
> Local node's value of 'java.net.preferIPv4Stack' system property differs
> from remote node's (all nodes in topology should have identical value)
> [locPreferIpV4=true, rmtPreferIpV4=null, locId8=1404bc51, rmtId8=4924c585,
> rmtAddrs=[Remote_host_name/0:0:0:0:0:0:0:1%lo, /127.0.0.1,
> /REMOTE_Host_IP]]
> [20:51:35,986][WARN ][disco-event-worker-#52%WebGrid%][GridDiscoveryManager]
> Local node's value of 'java.net.preferIPv4Stack' system property differs
> from remote node's (all nodes in topology should have identical value)
> [locPreferIpV4=true, rmtPreferIpV4=null, locId8=1404bc51, rmtId8=b8f3fa13,
> rmtAddrs=[remote_host_name/0:0:0:0:0:0:0:1%lo, /127.0.0.1,
> /REMOTE_Host_IP]]
>
> So now I am wondering what this error log suggests?
>
> Thanks,
> Rishi
>
> On Tue, Apr 18, 2017 at 4:36 PM, ignite_user2016 
> wrote:
>
>> Thanks, Andrew.
>>
>> I enabled the ipv4 stack option and see if that resolves our problem.
>>
>> I will keep you posted, thank you for all your help.
>>
>>
>> On Tue, Apr 18, 2017 at 1:41 PM, Andrew Mashenkov [via Apache Ignite
>> Users] <[hidden email]
>> > wrote:
>>
>>> Hi Rishi,
>>>
>>> Would you please check if both nodes have the same option set, either
>>> java.net.preferIPv4Stack=true or java.net.preferIPv6Stack=true?
>>> "remote_host/remote_host:47102" looks weird. Would you also check if
>>> all dns names are correctly resolved on both nodes?
>>>
>>> On Tue, Apr 18, 2017 at 9:29 PM, Rishi Yagnik <[hidden email]
>>> > wrote:
>>>
 I checked the port and all ports are open between 2 ignite instances.

 There is something more going on, will provide the log soon.

 On Tue, Apr 18, 2017 at 12:00 PM, ignite_user2016 <[hidden email]
 > wrote:

> we use Ignite 1.7, yes application is communicating between hosts and
> client application.
>
> I will provide the logs soon.
>
> Thanks,
>
> On Tue, Apr 18, 2017 at 11:09 AM, Evgenii Zhuravlev [via Apache Ignite
> Users] <[hidden email]
> > wrote:
>
>> Which version of ignite do you use?
>>
>> It looks like nodes discovered each other via 47500+ ports, but they
>> can't communicate through 47100+ ports. Did you open these ports? Are
>> you
>> sure that all nodes were used in your application?
>>
>> Also, it would be helpful if you provided full logs, it's nearly
>> impossible to understand what really happens on the nodes without them.
>>
>> 2017-04-18 18:39 GMT+03:00 ignite_user2016 <[hidden email]
>> >:
>>
>>> Yes, I see these logs on both hosts.
>>>
>>> The log entries I have provided, here is my config file -
>>>
>>> <beans xmlns="http://www.springframework.org/schema/beans"
>>>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>        xmlns:util="http://www.springframework.org/schema/util"
>>>        xsi:schemaLocation="
>>>            http://www.springframework.org/schema/beans
>>>            http://www.springframework.org/schema/beans/spring-beans.xsd
>>>            http://www.springframework.org/schema/util
>>>            http://www.springframework.org/schema/util/spring-util.xsd">
>>>
>>>     <!-- IgniteConfiguration bean stripped by the mailing-list archive; among its
>>>          properties it referenced config/log4j.xml -->
>>>
>>> </beans>

what is the right way to run tests against Cassandra module?

2017-04-19 Thread Alexei Kaigorodov
The document DEVNOTES.txt says to run Maven with the argument
-Dtest=%TEST_PATTERN%,
but the Cassandra tests are placed in the package org.apache.ignite.tests, where
many other tests reside.
When I run the Maven tests from the modules/cassandra directory, only 2 tests are
started.
When I run the tests from IntelliJ IDEA, it says 4 tests started and 7 tests were
not started, and I cannot figure out why.

Please advise.

thanks,
Alexei


Re: Disable WriteBehind

2017-04-19 Thread Nikolai Tikhonov
Hi,
I see again in the logs that node(app) lives only 5 seconds:

*[15:36:04,897][INFO][main][GridDiscoveryManager] Topology snapshot [ver=1,
servers=1, clients=0, CPUs=4, heap=0.5GB]*
*[15:36:05,729][INFO][main][GridDeploymentLocalStore] Class locally
deployed: class
org.apache.ignite.configuration.CacheConfiguration$IgniteAllNodesPredicate*
*[15:36:09,213][INFO][exchange-worker-#25%null%][GridCacheProcessor]
Stopped cache: stagingCache*

Is it expected behaviour?

On Wed, Apr 19, 2017 at 1:43 AM, waterg 
wrote:

> Hi Nikolai, looks like that was a wrong log file.
> I have reran the app and here's a new log file.
> Appreciate your help.
>
> Jessie
>
> On Mon, Apr 17, 2017 at 3:01 AM, Nikolai Tikhonov-2 [via Apache Ignite
> Users] <[hidden email]
> > wrote:
>
>> Hi,
>>
>> I see in the logs that the node lives for 10 seconds and you have configured only
>> the system caches. Did you run your test on this setup?
>>
>> On Sat, Apr 15, 2017 at 2:42 AM, waterg <[hidden email]
>> > wrote:
>>
>>> Hi, please see the log attached.
>>>
>>> On Fri, Apr 14, 2017 at 9:07 AM, Nikolai Tikhonov-2 [via Apache Ignite
>>> Users] <[hidden email]
>>> > wrote:
>>>
 Yes, Ignite.log.

 On Fri, Apr 14, 2017 at 6:54 PM, waterg <[hidden email]
 > wrote:

> Hi Nikolai, No topology changes. We have 2 server nodes and a client
> node.
>
> What kind of log you refer to? Ignite log?
>
> Jessie
>
> On Fri, Apr 14, 2017 at 6:52 AM, Nikolai Tikhonov-2 [via Apache Ignite
> Users] <[hidden email]
> > wrote:
>
>> Hello!
>>
>> Was topology stable? Could you share full logs for this case?
>>
>> On Thu, Apr 13, 2017 at 8:36 PM, waterg <[hidden email]
>> > wrote:
>>
>>> Hello Nikolai,
>>>
>>> Thank you for your reply.
>>> I'm working on a simplified Maven project to reproduce this.
>>> Btw, with the configuration below, we did observe batch updates in the
>>> persistent store.
>>>
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>>
>>> However as soon as we add the cache.remove() in,
>>> we start to see the behavior changed to a lot of batch operations
>>> with a few records.
>>> Are there any reasons for this? Does cache.remove trigger flushing
>>> out to persistent layer?
>>> Thank you for your help!
>>>
>>> [1492104394638]---Datebase BATCH upsert:1 entries
>>> successful 
>>> [1492104394772]---Datebase BATCH upsert:3 entries
>>> successful 
>>> [1492104395042]---Datebase BATCH upsert:1 entries
>>> successful 
>>> [1492104395170]---Datebase BATCH DELETE:1 entries
>>> successful 
>>> [1492104395452]---Datebase BATCH upsert:1 entries
>>> successful 
>>> [1492104395587]---Datebase BATCH upsert:1 entries
>>> successful 
>>>
>>>
>>> On Tue, Apr 11, 2017 at 4:39 AM, Nikolai Tikhonov-2 [via Apache
>>> Ignite Users] <[hidden email]
>>> > wrote:
>>>
 > If I disable writeThrough, would a put operation on the cache
 still succeed?
 Yes, sure. If writeThrough is enabled, then entries will be propagated
 to the underlying store too.

 > If so, the get operation would return the same result as if the
 writeThrough were enabled, correct?
 You're right. But if you configure an expiry or eviction policy, then a
 get operation might miss.

 Could you share simple maven project which can reproduce the
 behaviour?

 On Mon, Apr 10, 2017 at 9:59 PM, waterg <[hidden email]
 > wrote:

> Thank you for reply Nikolai. I have a more complex nested if-else
> logic than the condition 1 and condition 2 here. They are based on the
> results of SQLQueries from cache only. We don't use any conditions 
> based on
> querying persistent store. These two are examples of different put and
> other operations may happen based on what conditions are met.
>
> If I disable writeThrough, would a put operation on the cache
> still succeed? If so, the get operation would return the same result 
> as if
> the writeThrough were enabled, correct?
>
>
>
>
> On Mon, Apr 10, 2017 at 9:53 AM, Nikolai Tikhonov-2 [via Apache
> Ignite Users] <[hidden email]
> 

Re: Use of Ignition.allGrids()

2017-04-19 Thread jpmoore40
OK that all makes sense. Thanks Andrew



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Use-of-Ignition-allGrids-tp12071p12074.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Input data is no significant change in multi-threading

2017-04-19 Thread Andrey Mashenkov
It may have an effect if you prepare data for the streamer (call addData) slowly
and it is possible to utilize more resources for that. Of course, the remote nodes
should be able to bear the pressure of the data.
Performance can increase, but usually only slightly, as the network will be the
bottleneck.
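A minimal sketch of feeding one streamer from several local threads (the cache name,
buffer sizes, and record counts below are illustrative assumptions):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerSketch {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("quotes");

            ExecutorService pool = Executors.newFixedThreadPool(4);
            try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("quotes")) {
                streamer.perNodeBufferSize(1024);      // entries buffered per server node
                streamer.perNodeParallelOperations(8); // concurrent batches per server node

                // Several producer threads share one streamer; this only helps when
                // producing the data (not the network) is the slow part.
                for (int t = 0; t < 4; t++) {
                    final int part = t;
                    pool.submit(() -> {
                        for (int i = part * 250_000; i < (part + 1) * 250_000; i++)
                            streamer.addData(i, "value-" + i);
                    });
                }
                pool.shutdown();
                pool.awaitTermination(10, TimeUnit.MINUTES);
                streamer.flush(); // push out whatever is still sitting in the buffers
            }
        }
    }
}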


On Wed, Apr 19, 2017 at 12:29 PM, woo charles 
wrote:

> Does that mean the performance of data input will not be affected if I use 2
> IgniteDataStreamers (2 client programs) to input data, as they use the same
> queue on the remote nodes?
>
> 2017-04-19 10:02 GMT+08:00 Andrey Mashenkov :
>
>> Hi Woo,
>>
>> IgniteDataStreamer uses a per-node buffer to make bulk cache updates, which
>> gives much better throughput than single updates.
>> Also, IgniteDataStreamer sends jobs to the remote nodes in order to utilize
>> multiple threads on the remote nodes.
>>
>> In a multi-node grid, IgniteDataStreamer usually shows better results than
>> single updates from multiple threads.
>>
>>
>>
>> On Wed, Apr 19, 2017 at 4:30 AM, woo charles 
>> wrote:
>>
>>> When I try to input data (80 tables, each 1 records) to a cluster with
>>> 3 server nodes (each 2 GB), the time changes only slightly when multiple
>>> threads are used
>>> (i.e. at most a decrease from 8s to 6.5s when using IgniteCache).
>>>
>>> Is it normal?
>>>
>>> Also, I found that multiple threads do not affect the data input speed with
>>> IgniteDataStreamer.
>>>
>>> Is it true?
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Best regards,
>> Andrey V. Mashenkov
>>
>
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Use of Ignition.allGrids()

2017-04-19 Thread Andrey Mashenkov
Hi,

There are a number of methods with confusing names. Actually, gridName means the
Ignite instance name. This will be fixed in the 2.0 release.

1. Ignition.allGrids() returns the node instances in the local JVM. If you need all
cluster nodes, see the Ignite.cluster() method.
2. Both of your nodes belong to the same cluster, as they found each other via the
DiscoverySPI.
By default, a cache is created in PARTITIONED mode and its data is distributed
among the grid nodes. That is why one node sees the cache created on the other node.

If you need to create separate grids, then the DiscoverySPI should be configured
with an appropriate IpFinder. See [1] for details.

[1] https://apacheignite.readme.io/docs/clustering
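As a minimal sketch of that last point (names, ports, and addresses below are
illustrative assumptions), two nodes in one JVM stay in separate clusters when each
is given its own discovery endpoint:

import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class SeparateGridsSketch {
    public static void main(String[] args) {
        Ignite gridA = Ignition.start(config("grid1", 47500)); // cluster A
        Ignite gridB = Ignition.start(config("grid2", 48500)); // cluster B, never sees A

        // Both instances live in this JVM, so allGrids() returns both of them,
        // but each belongs to its own single-node cluster.
        System.out.println(Ignition.allGrids().size());     // 2
        System.out.println(gridA.cluster().nodes().size()); // 1
        System.out.println(gridB.cluster().nodes().size()); // 1
    }

    private static IgniteConfiguration config(String name, int discoPort) {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("127.0.0.1:" + discoPort));

        TcpDiscoverySpi disco = new TcpDiscoverySpi();
        disco.setLocalPort(discoPort);
        disco.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setGridName(name); // the local instance name, not a cluster id
        cfg.setDiscoverySpi(disco);
        return cfg;
    }
}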


On Wed, Apr 19, 2017 at 12:55 PM, jpmoore40 
wrote:

> Hi,
>
> I'm a bit confused about the purpose of the Ignition.allGrids() method, and
> how also the naming of grids using the IgniteConfiguration works.
>
> I started a  node using Ignition.getOrStart() with an IgniteConfiguration
> with name set to grid1, then create a cache and add some values to it. I
> then start a second node with name grid2. In the process for this second
> node I call Ignition.allGrids() which returns a single Ignite grid instance
> with name grid2. That grid instance contains the cache I created in the
> first process.
>
> So my questions are:
>
> 1. Why does allGrids() not return two instances, one for grid1 and one for
> grid2?
> 2. Why does the grid2 instance contain the cache when it was created in
> grid1.
>
> I assumed that the grid name was to allow you to create separate
> partitioned
> grids and that the caches would only be available in the grid in which they
> are created, but this doesn't seem to be the case. So I'm not sure what the
> purpose of the grid name or the allGrids() method is (since it only ever
> seems to return the current instance).
>
> Thanks
> Jon
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Use-of-Ignition-allGrids-tp12071.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best regards,
Andrey V. Mashenkov


Use of Ignition.allGrids()

2017-04-19 Thread jpmoore40
Hi,

I'm a bit confused about the purpose of the Ignition.allGrids() method, and
how also the naming of grids using the IgniteConfiguration works.

I started a node using Ignition.getOrStart() with an IgniteConfiguration
with name set to grid1, then create a cache and add some values to it. I
then start a second node with name grid2. In the process for this second
node I call Ignition.allGrids() which returns a single Ignite grid instance
with name grid2. That grid instance contains the cache I created in the
first process.

So my questions are:

1. Why does allGrids() not return two instances, one for grid1 and one for
grid2?
2. Why does the grid2 instance contain the cache when it was created in
grid1.

I assumed that the grid name was to allow you to create separate partitioned
grids and that the caches would only be available in the grid in which they
are created, but this doesn't seem to be the case. So I'm not sure what the
purpose of the grid name or the allGrids() method is (since it only ever
seems to return the current instance).

Thanks
Jon



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Use-of-Ignition-allGrids-tp12071.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Input data is no significant change in multi-threading

2017-04-19 Thread woo charles
Does that mean the performance of data input will not be affected if I use 2
IgniteDataStreamers (2 client programs) to input data, as they use the same
queue on the remote nodes?

2017-04-19 10:02 GMT+08:00 Andrey Mashenkov :

> Hi Woo,
>
> IgniteDataStreamer uses a per-node buffer to make bulk cache updates, which
> gives much better throughput than single updates.
> Also, IgniteDataStreamer sends jobs to the remote nodes in order to utilize
> multiple threads on the remote nodes.
>
> In a multi-node grid, IgniteDataStreamer usually shows better results than
> single updates from multiple threads.
>
>
>
> On Wed, Apr 19, 2017 at 4:30 AM, woo charles 
> wrote:
>
>> When I try to input data (80 tables, each 1 records) to a cluster with
>> 3 server nodes (each 2 GB), the time changes only slightly when multiple
>> threads are used
>> (i.e. at most a decrease from 8s to 6.5s when using IgniteCache).
>>
>> Is it normal?
>>
>> Also, I found that multiple threads do not affect the data input speed with
>> IgniteDataStreamer.
>>
>> Is it true?
>>
>>
>>
>>
>>
>>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


Re: Index

2017-04-19 Thread Alexey Kuznetsov
Hi!

You may try to use
1) https://apacheignite.readme.io/docs/rest-api#cache-metadata

2) org.apache.ignite.internal.visor.cache.VisorCacheMetadataTask (or see
how it is implemented).
But, PLEASE NOTE, this is internal API and may be changed in future
versions of Ignite.
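For option 1, a plain HTTP request along these lines should return the cache metadata,
including index descriptors (this assumes the ignite-rest-http module is on the node's
classpath, the default port 8080, and a cache named PersonCache; all of those are
assumptions):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class CacheMetadataViaRest {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://127.0.0.1:8080/ignite?cmd=metadata&cacheName=PersonCache");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
            String line;
            while ((line = in.readLine()) != null)
                System.out.println(line); // JSON with types, fields and indexes
        }
    }
}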


On Tue, Apr 18, 2017 at 12:42 AM, javastuff@gmail.com <
javastuff@gmail.com> wrote:

> Webconsole is the only way? Visor or JMX or logs or sample JAVA API.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Index-tp11969p12004.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Alexey Kuznetsov


Re: what does remaps mean in DataStreamerImpl.java

2017-04-19 Thread rushi_rashi
1. So, considering the example from post 1, does it mean that when an Ignite
instance was killed, the data streamer had some data which it was going to put
into the cache, but that cache's instance was killed and hence the error might
have occurred?
2. If not, then can you throw some light on the remap error from post 1? We
are trying to check fault tolerance, but whenever an Ignite instance
is killed, the remap error occurs. We are unable to identify the root
cause.

Thanks,
Rushi



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/what-does-remaps-mean-in-DataStreamerImpl-java-tp12033p12068.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: stdout - Message queue limit is set to 0, potential OOMEs

2017-04-19 Thread Andrey Mashenkov
Hi Sam,

Ignite uses messages for inter-node communication.
You have to configure TcpCommunicationSpi in the IgniteConfiguration. The method
TcpCommunicationSpi.setMessageQueueLimit() is what you need.
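A minimal sketch of that setting (the 1024 limit is an arbitrary example value, not a
recommendation):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class QueueLimitSketch {
    public static void main(String[] args) {
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
        commSpi.setMessageQueueLimit(1024); // bound the send/receive message queues

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCommunicationSpi(commSpi);

        Ignite ignite = Ignition.start(cfg);
        System.out.println("Node started: " + ignite.name());
    }
}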

On Wed, Apr 19, 2017 at 11:13 AM, javastuff@gmail.com <
javastuff@gmail.com> wrote:

> What do you mean by messages? I am not using Ignite messaging. Are these
> messages of rebalancing during topology change?
> How do I configure it to avoid potential OOME?
>
> Thanks,
> -Sam
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/stdout-Message-queue-limit-is-set-to-
> 0-potential-OOMEs-tp12048p12066.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best regards,
Andrey V. Mashenkov


Re: stdout - Message queue limit is set to 0, potential OOMEs

2017-04-19 Thread javastuff....@gmail.com
What do you mean by messages? I am not using Ignite messaging. Are these
rebalancing messages during a topology change?
How do I configure this to avoid a potential OOME?

Thanks,
-Sam




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/stdout-Message-queue-limit-is-set-to-0-potential-OOMEs-tp12048p12066.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.