Re: copyOnRead to false

2017-09-05 Thread steve.hostett...@gmail.com
Hi, I am using 1.9, and according to your explanation it does not make sense to me either. Thanks anyway for explaining the precise semantics. I will try to change some parameters of the tests to see if I can come up with a meaningful explanation.

Thanks again,
Steve

 Original Message 
Subject: Re: copyOnRead to false
From: Evgenii Zhuravlev
To: user@ignite.apache.org

Hi,

I want to clarify the usage of copyOnRead=false: as far as I know, it should
help in 1.x releases with the default configuration, when entries are stored
on-heap, and in 2.x releases with onHeapCacheEnabled=true (which is not the
default). I think we should mention this in the documentation.

Also, it shouldn't affect behavior in 2.x without the on-heap cache enabled,
so it's not clear to me why you get performance degradation.

Evgenii

2017-09-05 10:36 GMT+03:00 steve.hostettler :

Hello,

thanks for the answer. The benchmark is actually our application stressed
with several volumes. Some quite complex to describe. However, for these
benchmarks we are only using one node.

Basically we are loading a set of caches from the database, do a lot of
querying both ScanQuery (on BinaryObjects) and SQLQueries.

Most of what we are doing is read-only with a lot of computations (at least
we segregated the caches that are r/w).

Based on what you described, I should witness a performance improvement.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: POJO field having wrapper type, mapped to cassandra table are getting initialized to respective default value of primitive type instead of null if column value is null

2017-09-05 Thread Dmitriy Setrakyan
Cross-sending to user@ as well.

On Tue, Sep 5, 2017 at 10:44 PM, kotamrajuyashasvi <
kotamrajuyasha...@gmail.com> wrote:

> Hi
>
> I'm using ignite with cassandra as persistent store. I have a POJO class
> mapped to cassandra table. I have used
> ignite-cassandra-store/KeyValuePersistenceSettings xml bean to map POJO to
> cassandra table. In the POJO one of the fields is Integer (wrapper class)
> mapped to an int column in the cassandra table. When I load any row having
> this int field as null in cassandra, I get the respective field in the POJO
> as 0, which is the default value of the primitive type int. The same is the
> case when using other wrapper classes. How can I get that field as null when
> the actual column value is null in cassandra, since a wrapper object can be
> null?
>
> I found a workaround by using a custom class extending CacheStoreAdapter and
> using this class in the cache configuration's cacheStoreFactory
> property, instead of using ignite-cassandra-store. This class overrides the
> load, write, and delete methods. In the load method I connect to the
> cassandra database using the Datastax driver, load the respective row
> depending on the key passed as a parameter to load, and then create a new
> POJO whose fields are set to the fields of the row returned from cassandra,
> and return the POJO. During this process I check whether the int field that
> I mentioned above is null in cassandra by using the Row.isNull method of the
> Datastax driver, and only if it is not null do I set the POJO field to the
> value returned from cassandra; otherwise it remains null.
>
> Is it a bug in ignite-cassandra-store that I cannot retain the null value of
> a cassandra table field for primitive types mapped to wrapper classes in the
> POJO in ignite? The reason I used wrapper class objects is to identify
> whether the value is null in cassandra or not, but there seems to be no way
> to differentiate between a primitive type's default value and null when
> using ignite-cassandra-store.
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>
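A minimal sketch of the workaround described above might look like the following. The POJO, keyspace, table, and column names are hypothetical, and this assumes the Datastax Java driver 3.x API; it is an illustration of the null-check pattern, not the actual code from the thread:

```java
import javax.cache.Cache;

import org.apache.ignite.cache.store.CacheStoreAdapter;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

// Hypothetical custom store that preserves NULLs by checking Row.isNull()
// before reading a primitive column.
public class NullSafeCassandraStore extends CacheStoreAdapter<Long, Person> {
    private final Session session = Cluster.builder()
        .addContactPoint("127.0.0.1").build().connect("my_keyspace");

    @Override public Person load(Long key) {
        Row row = session.execute(
            "SELECT name, age FROM person WHERE id = ?", key).one();

        if (row == null)
            return null;

        Person p = new Person();
        p.setName(row.getString("name"));

        // Only set the wrapper field when the column is not NULL, so the
        // POJO field stays null instead of defaulting to 0.
        if (!row.isNull("age"))
            p.setAge(row.getInt("age"));

        return p;
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends Person> e) {
        // Write through to Cassandra here.
    }

    @Override public void delete(Object key) {
        // Delete from Cassandra here.
    }
}
```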


Re: IgniteDataStreamer.addData - behavior for a FULL_SYNC Cache

2017-09-05 Thread Dmitriy Setrakyan
On Tue, Sep 5, 2017 at 10:51 AM, mcherkasov  wrote:

> I think javadoc is the best source for this:
>
>  /**
>  * Flag indicating that Ignite should wait for write or commit replies
> from all nodes.
>  * This behavior guarantees that whenever any of the atomic or
> transactional writes
>  * complete, all other participating nodes which cache the written data
> have been updated.
>  */
>
> so with FULL_SYNC the client node will wait until the data is saved on the
> primary node and the backup nodes.
> If you have a REPLICATED cache, that means you have 1 primary node and all
> other nodes in the cluster store backups, so in your case, you lost one
> backup and that's it. The data was saved.
>

I don't think this is exactly true. The client node calling the addData(...)
method will not wait for anything; IgniteDataStreamer is completely
asynchronous.

However, the primary server node will wait for the backup server nodes
to be updated before responding to the client.

In case of REPLICATED cache, the primary node will wait until all other
nodes are updated, so essentially, all nodes are guaranteed to have the
latest state. If one of the nodes crashes, then other nodes will still have
the state.
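As a sketch of these semantics (the cache name and values are hypothetical): the cache is FULL_SYNC, so primaries wait for backups on each update, but the streamer batches asynchronously and the client only observes completion at flush()/close():

```java
CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
cacheCfg.setCacheMode(CacheMode.REPLICATED);
cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

Ignite ignite = Ignition.start();
ignite.getOrCreateCache(cacheCfg);

try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("myCache")) {
    for (int i = 0; i < 1000; i++)
        streamer.addData(i, Integer.toString(i)); // returns immediately, batched

    streamer.flush(); // blocks until the buffered batches are actually loaded
} // close() also flushes
```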


>
> You are right that your cluster now consists of only 1 node, but you can
> start a new node or even a hundred nodes,
> and the data will be replicated to all new nodes.


>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Message Queue Size

2017-09-05 Thread ignite_user2016
hello igniters,

When I start up the console, I get this warning -

Message queue limit is set to 0 which may lead to potential OOMEs when
running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message
queues growth on sender and receiver sides.

I am wondering how I can set the message queue limit? Any example would be
helpful here.

Thanks..

Rishi
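For reference, the limit the warning refers to can be set on the communication SPI; a sketch, where 1024 is an arbitrary example value, not a recommendation:

```java
IgniteConfiguration cfg = new IgniteConfiguration();

TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
commSpi.setMessageQueueLimit(1024); // the default 0 (unlimited) triggers the warning

cfg.setCommunicationSpi(commSpi);
Ignite ignite = Ignition.start(cfg);
```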



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


With onHeapCacheEnabled = false, BinaryOnHeapOutputStream is still used, why?

2017-09-05 Thread John Wilson
Hi,

I'm running the CacheApiExample below with no on-heap caching, locally
using IntelliJ.

The stack frame shows that the entry I put is written on heap (using
BinaryOnheapOutputStream) and not off-heap (using BinaryOffheapOutputStream).
What's going on?

try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
    System.out.println();
    System.out.println(">>> Cache API example started.");

    CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>();
    cfg.setOnheapCacheEnabled(false);
    cfg.setCacheMode(CacheMode.PARTITIONED);
    cfg.setName(CACHE_NAME);

    // Auto-close cache at the end of the example.
    try (IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg)) {
        // Demonstrate atomic map operations.
        cache.put(999, "777");
    }
    finally {
        // Distributed cache can be removed from the cluster only by a
        // #destroyCache() call.
        ignite.destroyCache(CACHE_NAME);
    }
}

Stack Frame:

 at org.apache.ignite.internal.util.GridUnsafe.putByte(GridUnsafe.java:394)
 *at
org.apache.ignite.internal.binary.streams.BinaryHeapOutputStream.unsafeWriteByte(BinaryHeapOutputStream.java:142)*
 at
org.apache.ignite.internal.binary.BinaryWriterExImpl.writeIntFieldPrimitive(BinaryWriterExImpl.java:999)
 at
org.apache.ignite.internal.binary.BinaryClassDescriptor.write(BinaryClassDescriptor.java:554)
 at
org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:206)
 at
org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:147)
 at
org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:134)
 at
org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:251)
 at
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshal(CacheObjectBinaryProcessorImpl.java:732)
 at
org.apache.ignite.internal.processors.cache.KeyCacheObjectImpl.valueBytes(KeyCacheObjectImpl.java:78)
 at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1682)
 - locked <0xfc5> (a
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry)
 at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2462)
 at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:1944)
 at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1797)
 at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1689)
 at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
 at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:480)
 at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:440)
 at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
 at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1170)
 at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:659)
 at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2334)
 at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2311)
 at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1005)
 at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:872)
 at
org.apache.ignite.examples.datagrid.CacheApiExample.main(CacheApiExample.java:56)


Re: Issue with starting Ignite node on AWS

2017-09-05 Thread vkulichenko
Most likely ignite-aws module is not enabled. For standalone node started
using ignite.sh, move 'ignite-aws' folder from 'libs/optional' to 'libs'
prior to node start. For embedded, add 'ignite-aws' Maven dependency along
with 'ignite-core', 'ignite-spring' and any other that you might use.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Re: Issue with starting Ignite node on AWS

2017-09-05 Thread Dave Harvey
I'm also a newbie, but I'm running 2.1.0 and I seem to be hitting the same
problem, which sounds like it was fixed a long time ago. Is there
something else going on?
I've uploaded the two lines I pass when creating the EC2 instance from the
AMI, as well as the config file I'm using and the full output from docker
logs: errs.log, xxx.txt, config.xml

Caused by: org.springframework.beans.factory.CannotLoadBeanClassException:
Cannot find class
[org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder] for
bean with name
'org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder#71623278'
defined in URL [https://s3.amazonaws.com/jc-ignite-trial/example-cache.xml];
nested exception is java.lang.ClassNotFoundException:
org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder
at
org.springframework.beans.factory.support.AbstractBeanFactory.resolveBeanClass(AbstractBeanFactory.java:1385)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:299)
... 28 more
Caused by: java.lang.ClassNotFoundException:
org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data Page Locking

2017-09-05 Thread John Wilson
Thanks Mikhail! If I may ask two additional questions:


   1. Is there any difference between data page eviction and checkpointing
   (dirty pages being written to disk) when the persistent store is enabled? My
   understanding is that yes, there is a difference: checkpointing is a periodic
   process of writing dirty pages, while data page eviction is evicting pages
   to make more room in memory. Am I right?
   2. Data page eviction works by going through each entry and evicting
   entries which are not locked in active transactions. However, when we write
   out pages to disk in a checkpoint process, do we write out the entire page
   as a chunk, or do we write out all entries one by one?

Thanks

On Tue, Sep 5, 2017 at 11:03 AM, mcherkasov  wrote:

> Hi John,
>
> it's for internal use only; for example, a page can be locked for a
> checkpoint, to avoid writes to the page while we are writing it to disk.
>
> These bytes are used by the OffheapReadWriteLock class; you can look at its
> usage if you want to learn more about this.
>
> Thanks,
> Mikhail.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: client thread dumps

2017-09-05 Thread ignite_user2016
Thank you for all your help ...



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous Query event buffering OOME

2017-09-05 Thread mcherkasov
Hi Michal,

Those buffers are required to make sure that all messages are delivered to
all subscribers, and delivered in the right order.
However, I agree, 1M is a relatively large number for this.

I will check this question with the Continuous Query experts and update you
tomorrow.

Thanks,
Mikhail.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data Page Locking

2017-09-05 Thread mcherkasov
Hi John,

it's for internal use only; for example, a page can be locked for a
checkpoint, to avoid writes to the page while we are writing it to disk.

These bytes are used by the OffheapReadWriteLock class; you can look at its
usage if you want to learn more about this.

Thanks,
Mikhail.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite compute job continuation documentation

2017-09-05 Thread mcherkasov
Hi Anton,

We have only the docs that you mentioned.

Do you have questions about continuation jobs?

Thanks,
Mikhail.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteDataStreamer.addData - behavior for a FULL_SYNC Cache

2017-09-05 Thread mcherkasov
I think javadoc is the best source for this:

 /**
 * Flag indicating that Ignite should wait for write or commit replies
from all nodes.
 * This behavior guarantees that whenever any of the atomic or
transactional writes
 * complete, all other participating nodes which cache the written data
have been updated.
 */

so with FULL_SYNC the client node will wait until the data is saved on the
primary node and the backup nodes.
If you have a REPLICATED cache, that means you have 1 primary node and all
other nodes in the cluster store backups, so in your case, you lost one
backup and that's it. The data was saved.

You are right that your cluster now consists of only 1 node, but you can
start a new node or even a hundred nodes,
and the data will be replicated to all new nodes.






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Affinity and String key (was: SQLQuery with simple Join return no results)

2017-09-05 Thread Roger Fischer (CW)
Hi Denis,

I don’t quite understand your comment “If you use Strings as the keys you won’t 
get affinity collocation set up properly”.

I have an object with a plain String key, named switchId, and another object 
with a composite key, of which one field is switchId (of type String). I am 
using switchId as the affinity key, and it seems to work fine without 
distributed queries. Was this a coincidence, and should I re-test?

My understanding is that caches with no explicit affinity key use the key to 
distribute objects. For the plain-key cache this would be switchId.

So, if I have two caches, both with switchId as the key, I should get automatic 
collocation (no setup required). Correct?

And for the third cache, with the switchId field in the composite key as the 
affinity key, I should also get collocation (with the other two caches). 
Correct?

What am I missing?

BTW, I think there was a typo in the response (first sentence), and you meant 
to say “set up properly and _non_-distributed joins will return an incomplete 
result.”.

Roger

PS: My configuration (all XML):

…

…

PortKey.java:

public class PortKey implements Serializable {

    private UUID id;        // port-id; PK
    private UUID switchId;  // affinity key; not really part of PK

…



From: Denis Magda [mailto:dma...@apache.org]
Sent: Friday, September 01, 2017 3:07 PM
To: user@ignite.apache.org
Subject: Re: SQLQuery with simple Join return no results

If you use Strings as the keys you won’t get affinity collocation set up 
properly, and distributed joins will return an incomplete result. One of the 
keys has to comprise a “parent” class key that will be an affinity key. Look 
at the example here:
https://apacheignite.readme.io/docs/affinity-collocation#section-collocate-data-with-data

As for the NON collocated joins suggested by Roger (qry.setDistributedJoins( 
true)), I would use them only if it’s impossible to set up the collocation 
between 2 entities. That’s not your case from what I see. NON collocated joins 
are slower than collocated ones.
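The collocation pattern described above is usually expressed with the @AffinityKeyMapped annotation on the composite key; a sketch with hypothetical class and field names:

```java
import java.io.Serializable;
import java.util.UUID;

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

// Hypothetical child key: its own id is the primary key, while the parent's
// key is marked as the affinity key, so the child entry is stored on the
// same node as the parent entry with that key.
public class ChildKey implements Serializable {
    private UUID id;        // the child object's own id (PK)

    @AffinityKeyMapped
    private UUID parentId;  // parent key, used for collocation
}
```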

—
Denis


On Sep 1, 2017, at 2:53 PM, Roger Fischer (CW) <rfis...@brocade.com> wrote:

Hi Matt,

are the objects to join collocated, ie. do they have the same affinity key? If 
yes, it should work (it worked for me).

If no, you need to enable distributed joins for the query. See the middle line.

   SqlFieldsQuery qry = new SqlFieldsQuery(stmt);
   qry.setDistributedJoins(true);
   queryCursor = aCache.query(qry);

Roger

-Original Message-
From: matt [mailto:goodie...@gmail.com]
Sent: Friday, September 01, 2017 1:52 PM
To: user@ignite.apache.org
Subject: SQLQuery with simple Join return no results

I have 2 caches defined, both with String keys, and classes that make use of 
the Ignite annotations for indexes and affinity. I've got 3 different nodes 
running, and the code I'm using to populate the cache w/test data works, and I 
can see each node is updated with its share of the data. My index types are set 
on the caches as well.

If I do a ScanQuery, I can see that all of the fields and IDs are correct, 
Ignite returns them all. But when doing a SqlQuery, I get nothing back.
Ignite is not complaining about the query, it's just returning an empty cursor.

If I remove the Join, results are returned.

So I'm wondering if this is related to the way I've set up my affinity mapping. 
It's basically setup like the code below... and the query looks like this:

"from B, A WHERE B.id = A.bID"

Any ideas on what I'm d

ignite compute job continuation documentation

2017-09-05 Thread anton solovev
Hi folks,

Is there any documentation of continuation support for the grid job context,
except ComputeFibonacciContinuationExample and the javadocs?


Re: client thread dumps

2017-09-05 Thread Evgenii Zhuravlev
Hi,

I think it's a collection of objects from previous exchanges. By
default, GridCachePartitionExchangeManager stores 1000 objects, but since
version 2.1 it's possible to reduce this size.

I would recommend that you update to version 2.1 and try reducing this
property:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteSystemProperties.html#IGNITE_EXCHANGE_HISTORY_SIZE
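A sketch of both ways to set it; 100 is an arbitrary example value, not a recommendation:

```java
// Either pass -DIGNITE_EXCHANGE_HISTORY_SIZE=100 on the JVM command line,
// or set it programmatically before the node starts:
System.setProperty(IgniteSystemProperties.IGNITE_EXCHANGE_HISTORY_SIZE, "100");

Ignite ignite = Ignition.start();
```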

Evgenii

2017-09-05 17:57 GMT+03:00 ignite_user2016 :

> Hello Evgenii,
>
> We only use Ignite for the 2nd-level cache; mostly we use the get operation
> on the cache. However, we monitor with the Ignite Visor console; not sure if
> that would make a difference.
>
> Do you know what could be the reason for the leak which shows in the heap
> dump?
>
> One instance of
> "org.apache.ignite.internal.processors.cache.
> GridCachePartitionExchangeManager"
> loaded by "sun.misc.Launcher$AppClassLoader @ 0x88003d40" occupies
> 375,647,072 (83.93%) bytes. The memory is accumulated in one instance of
> "java.util.LinkedList" loaded by "".
>
> Keywords
> java.util.LinkedList
> sun.misc.Launcher$AppClassLoader @ 0x88003d40
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: IgniteDataStreamer.addData - behavior for a FULL_SYNC Cache

2017-09-05 Thread userx
Hi Mikhail,

I am just trying to understand the behavior of the addData method in
conjunction with FULL_SYNC.

If we have just one server node left, then we are not really replicating, are
we? So let's say we have to persist 2 entries, and after 1 write and
replication, one of the servers goes down; then the second write is just
written to one server and replication does not come into the picture.

Again, I am just trying to understand what the responsibility of addData is,
keeping in mind that it declares a thrown exception.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: copyOnRead to false

2017-09-05 Thread Evgenii Zhuravlev
Hi,

I want to clarify the usage of copyOnRead=false:
As far as I know, it should help in 1.x releases with the default
configuration, when entries are stored on-heap, and in 2.x releases with
onHeapCacheEnabled=true (which is not the default). I think we should mention
this in the documentation.

Also, it shouldn't affect behavior in 2.x without the on-heap cache enabled,
so it's not clear to me why you get performance degradation.
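In configuration terms, the 2.x combination being discussed would look roughly like this sketch (the cache name is hypothetical):

```java
CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("myCache");

cfg.setOnheapCacheEnabled(true); // not the default in 2.x
cfg.setCopyOnRead(false);        // skip the defensive copy on each on-heap read
```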

Evgenii

2017-09-05 10:36 GMT+03:00 steve.hostettler :

> Hello,
>
> thanks for the answer. The benchmark is actually our application stressed
> with several volumes. Some quite complex to describe. However, for these
> benchmarks we are only using one node.
>
> Basically we are loading a set of caches from the database, do a lot of
> querying both ScanQuery (on BinaryObjects) and SQLQueries.
>
> Most of what we are doing is read-only with a lot of computations (at least
> we segregated the caches that are r/w).
>
> Based on what you described, I should witness a performance improvement.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: client thread dumps

2017-09-05 Thread ignite_user2016
Hello Evgenii,

We only use Ignite for the 2nd-level cache; mostly we use the get operation on
the cache. However, we monitor with the Ignite Visor console; not sure if that
would make a difference.

Do you know what could be the reason for the leak which shows in the heap dump?

One instance of
"org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager"
loaded by "sun.misc.Launcher$AppClassLoader @ 0x88003d40" occupies
375,647,072 (83.93%) bytes. The memory is accumulated in one instance of
"java.util.LinkedList" loaded by "".

Keywords
java.util.LinkedList
sun.misc.Launcher$AppClassLoader @ 0x88003d40
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteDataStreamer.addData - behavior for a FULL_SYNC Cache

2017-09-05 Thread Mikhail Cherkasov
Hi,

it works as expected: with a REPLICATED cache you can't lose your data while
you have at least 1 server node alive.

Why do you think it should throw an exception?

Thanks,
Mikhail.

On Tue, Sep 5, 2017 at 5:05 PM, userx  wrote:

> Hi all,
>
> I am using IgniteDataStreamer to write to a cache. As part of my testing,
> I started 2 servers on a local node and 1 client locally. I put everything
> in debug mode in Eclipse and put a debug point where I am calling
> IgniteDataStreamer.addData(). After that I let 2-3 entries be written to
> the cache; I then stop at the same debug point before I let it write the
> next entry. Just at that time, I kill one of the two servers (I see some
> java.net.SocketException in the client log) and then let the client
> continue to write the rest of the entries, since one of the servers is
> still running. In spite of the fact that my cache is in 'REPLICATED' mode
> and the cache write synchronization mode is FULL_SYNC, the addData method
> did not throw an exception and completed successfully.
>
> Why should this be possible with the given cache modes? FULL_SYNC shouldn't
> let it complete if one of the servers goes down, should it?
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Thanks,
Mikhail.


Re: Task management - MapReduce & ForkJoin performance penalty

2017-09-05 Thread Evgenii Zhuravlev
But of course, it could change; if the wiki doesn't have information about it,
the community hasn't decided yet.

2017-09-05 17:46 GMT+03:00 Evgenii Zhuravlev :

> I think it was planned at the end of October.
>
> Evgenii
>
> 2017-09-05 17:41 GMT+03:00 ihorps :
>
>> hi, @ezhuravlev
>>
>> This is what I'm looking for, many thanks!
>>
>> Any hints on when v2.3 is planned to be released (I can't find it on the
>> wiki)?
>>
>> I'd rather wait for this API in Ignite than implement it myself and throw
>> it away later, since I'm in the evaluation/prototype phase now.
>>
>> Best regards,
>> ihorps
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>


Re: Task management - MapReduce & ForkJoin performance penalty

2017-09-05 Thread Evgenii Zhuravlev
I think it was planned at the end of October.

Evgenii

2017-09-05 17:41 GMT+03:00 ihorps :

> hi, @ezhuravlev
>
> This is what I'm looking for, many thanks!
>
> Any hints on when v2.3 is planned to be released (I can't find it on the
> wiki)?
>
> I'd rather wait for this API in Ignite than implement it myself and throw
> it away later, since I'm in the evaluation/prototype phase now.
>
> Best regards,
> ihorps
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ignite.active(true) blocking forever

2017-09-05 Thread slava.koptilin
Hi,

I am sorry for the delay.

I was able to reproduce this issue, and it looks like a bug.
I created a jira ticket in order to track this
https://issues.apache.org/jira/browse/IGNITE-6274

Thanks,
Slava.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Task management - MapReduce & ForkJoin performance penalty

2017-09-05 Thread ihorps
hi, @ezhuravlev

This is what I'm looking for, many thanks!

Any hints on when v2.3 is planned to be released (I can't find it on the
wiki)?

I'd rather wait for this API in Ignite than implement it myself and throw it
away later, since I'm in the evaluation/prototype phase now.

Best regards,
ihorps



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Task management - MapReduce & ForkJoin performance penalty

2017-09-05 Thread Evgenii Zhuravlev
Hi,

Here is a ticket for exactly what you want, it's in progress right now:
https://issues.apache.org/jira/browse/IGNITE-5037

If you don't want to wait until it is implemented, you can use
affinityCall(...) or affinityRun(...) and reduce the result yourself after it
is returned.
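A sketch of the affinityCall(...) route with a manual reduce on the caller; the cache name, keys, and closure are hypothetical:

```java
Ignite ignite = Ignition.ignite();
IgniteCompute compute = ignite.compute();

int total = 0;

for (String key : Arrays.asList("k1", "k2", "k3")) {
    // The closure executes on the node that owns 'key' in cache "myCache".
    Integer v = compute.affinityCall("myCache", key, () -> {
        IgniteCache<String, Integer> cache = Ignition.localIgnite().cache("myCache");
        Integer local = cache.localPeek(key);
        return local == null ? 0 : local;
    });

    total += v; // manual "reduce" step on the calling node
}
```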

Evgenii

2017-09-04 21:52 GMT+03:00 ihorps :

> hi @ezhuravlev
>
> Thank you for your reply, very appreciated!
>
> I can confirm that by adding real business logic to the jobs it actually
> scales horizontally quite well, and by adding more nodes the whole task
> finishes faster.
>
> One more thing I'm looking at now is running tasks with the help of the
> MapReduce API in a collocated fashion. As far as I understood from the
> documentation ( Collocate Computing and Data ), this is
> possible only by calling affinityCall(...) or affinityRun(...), which take
> IgniteCallable or IgniteRunnable.
> I'd like to create a ComputeTask (ComputeTaskAdapter or
> ComputeTaskSplitAdapter) which would spawn ComputeJobs with an affinity key
> (let's say in the constructor) and execute them on the node with the
> co-located data.
>
> Is it possible to do this somehow? I couldn't find an elegant way to do
> it...
>
> Thank you in advance.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


IgniteDataStreamer.addData - behavior for a FULL_SYNC Cache

2017-09-05 Thread userx
Hi all,

I am using IgniteDataStreamer to write to a cache. As part of my testing,
I started 2 servers on a local node and 1 client locally. I put everything
in debug mode in Eclipse and put a debug point where I am calling
IgniteDataStreamer.addData(). After that I let 2-3 entries be written to
the cache; I then stop at the same debug point before I let it write the
next entry. Just at that time, I kill one of the two servers (I see some
java.net.SocketException in the client log) and then let the client continue
to write the rest of the entries, since one of the servers is still running.
In spite of the fact that my cache is in 'REPLICATED' mode and the cache
write synchronization mode is FULL_SYNC, the addData method did not throw an
exception and completed successfully.

Why should this be possible with the given cache modes? FULL_SYNC shouldn't
let it complete if one of the servers goes down, should it?





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re: Is there any way to listener to a specific remote cache's update events ?

2017-09-05 Thread aa...@tophold.com
Great, thanks this seem what we need, we will have a try,  thanks you Nikolay!



Regards
Aaron


aa...@tophold.com
 
From: Nikolay Izhikov
Date: 2017-09-05 19:38
To: user
Subject: Re: Is there any way to listener to a specific remote cache's update 
events ?
Hello, Aaron.
 
I think continuous query is what you need:
 
https://apacheignite.readme.io/docs/continuous-queries#section-local-listener
 
You can also use Ignite as jcache implementation and register jcache 
listener on IgniteCache:
 
https://static.javadoc.io/javax.cache/cache-api/1.0.0/javax/cache/Cache.html#registerCacheEntryListener(javax.cache.configuration.CacheEntryListenerConfiguration)
 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html
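A sketch of the continuous-query approach; the cache name and value types are hypothetical:

```java
IgniteCache<String, Double> cache = Ignition.ignite().cache("marketData");

ContinuousQuery<String, Double> qry = new ContinuousQuery<>();

// Runs on the node that registered the query, for every cache update.
qry.setLocalListener(events -> {
    for (CacheEntryEvent<? extends String, ? extends Double> e : events) {
        // Persist the update to the historical database here.
        System.out.println("update: " + e.getKey() + " -> " + e.getValue());
    }
});

// Keep the cursor open for as long as updates should be received.
QueryCursor<Cache.Entry<String, Double>> cur = cache.query(qry);
```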
 
05.09.2017 14:26, aa...@tophold.com wrote:
> hi All,
> 
> I looked around the event section trying to find a way to catch a specific
> cache's update events and persist those events to a historical database.
> 
> We have an instance updating a market-data Ignite cache; on another side we
> need to persist all those historical events.
> 
> The listener side is hosted on a standalone machine; it does not care about
> the data in the cache, only the updates.
> 
> We tried several ways, local and remote, but it seems only the cache data
> node got the events, while the remote one did not.
> 
> Also, this cache data node may have multiple caches, while we only want to
> monitor one of them.
> 
> Another way we tried was to define a specific topic and manually trigger a
> publish event, which seems to work.
> 
> Thanks for your time!
> 
> Regards
> Aaron
> 
> aa...@tophold.com


Continuous Query event buffering OOME

2017-09-05 Thread Frajt, Michal
Hi,

We are using a simple replicated Ignite cache with a few continuous queries. 
Recently we ran into an OOME after several days of running without a restart. 
The histogram shows that most of the heap is utilized by the buffered/cached 
continuous query entries. Code analysis shows that each continuous query 
requires a buffer with a batch holding a cache of 1000 continuous query 
entries. That would be understandable, but it all gets multiplied by the 
number of partitions, which defaults to 1024 (or 512 for replicated cache 
mode). A single continuous query can then, over time, cache up to 
1000 x 1024 => ~1M entries, including the entry payload (key, newVal, oldVal). 
We did not try to understand all the details of the 
CacheContinuousQueryEventBuffer class, but we feel there might be a chance to 
clear some cached entries earlier. Currently the buffer is cleared only when 
the batch is full. This allows continuous queries which don't produce many 
events to grow to the full size over a long time. We feel it would be much 
better to have a dynamic buffer size covering a certain time window instead of 
the fixed length. Additionally, it would partially solve the problem with the 
per-partition buffering, as each partition would dynamically buffer only the 
amount its time window requires (obviously, with a uniform distribution they 
would all be buffering more or less the same amount).

Please note that we might be wrong but we feel there is no reason to buffer 1M 
entries representing many days of events for each and every continuous query. 
We tried to help it by using a less number of partitions but we are still 
looking for a detailed explanation of the high heap memory requirement coming 
from the continuous query entries buffering/caching.

Best regards,
Michal

Hidden configuration of the entries size
  private static final int BUF_SIZE =
      IgniteSystemProperties.getInteger("IGNITE_CONTINUOUS_QUERY_SERVER_BUFFER_SIZE", 1000);

Clear all when full
 if (pos == entries.length - 1) {
   Arrays.fill(entries, null);

Heap
   1:    729468  2955637048  [B
   2:  14963642   957673088  org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryEntry
   3:     40700   163451200  [Lorg.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryEntry;
   4:    951580    33617168  [C
   5:    951355    22832520  java.lang.String
   6:    362093    14483720  org.apache.ignite.internal.binary.BinaryObjectImpl
   7:    300481    12019240  com.cbksec.flow.history.shared.document.DataId
   8:    363737     8729688  org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion
   9:    300483     7211592  com.cbksec.flow.history.shared.document.DataVersion
  10:    300481     7211544  com.cbksec.flow.history.shared.document.DocumentId
  11:    362093     5793488  org.apache.ignite.internal.processors.cache.CacheObjectByteArrayImpl

Reduce number of partitions to 32
   new RendezvousAffinityFunction(false, 32);
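Since the buffer length quoted above is read from a system property, one possible mitigation (a sketch only, based on the code shown in this thread and not verified against a specific Ignite version; the value 100 is purely illustrative) is to shrink it before any node starts in the JVM:

```java
public class CqBufferTuning {
    public static void main(String[] args) {
        // Must run before Ignition.start(), because the buffer size is read
        // once via IgniteSystemProperties.getInteger(...) with default 1000.
        System.setProperty("IGNITE_CONTINUOUS_QUERY_SERVER_BUFFER_SIZE", "100");
        System.out.println(System.getProperty("IGNITE_CONTINUOUS_QUERY_SERVER_BUFFER_SIZE"));
    }
}
```

Note this only reduces the per-partition cap; entries are still held until a batch fills, so the time-window behavior discussed above would not change.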



Re: Is there any way to listener to a specific remote cache's update events ?

2017-09-05 Thread Nikolay Izhikov

Hello, Aaron.

I think continuous query is what you need:

https://apacheignite.readme.io/docs/continuous-queries#section-local-listener

You can also use Ignite as a JCache implementation and register a JCache 
listener on IgniteCache:


https://static.javadoc.io/javax.cache/cache-api/1.0.0/javax/cache/Cache.html#registerCacheEntryListener(javax.cache.configuration.CacheEntryListenerConfiguration)

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html

05.09.2017 14:26, aa...@tophold.com wrote:

hi All,

I look around the event section try to find a way to catch a specific 
cache's update events, and persist those event to a historical database.


We have a instance update a market data Ignite cache; on another side we 
need persist all those historical events.


The listener side host in a standalone machine, it does not care data in 
cache only the updates.


we try several ways, local or remote but seem only the cache data node 
got the events, while the remote not;


also this cache data node may have multiple caches. while we only want 
monitor one of them,


Another way we try to define a specific topic and manually trigger a 
publish event, seem can work.


Thanks for your time!

Regards
Aaron

aa...@tophold.com
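To make the continuous-query suggestion concrete, here is a rough sketch of subscribing to updates of one specific cache from a standalone node. It assumes Ignite's public ContinuousQuery API; the cache name "marketData", the MarketTick value type, and the persistToHistoryDb helper are all made up for illustration:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;

// "marketData" and MarketTick are hypothetical names.
IgniteCache<String, MarketTick> cache = ignite.cache("marketData");

ContinuousQuery<String, MarketTick> qry = new ContinuousQuery<>();

// The local listener runs on the node that executes the query, so a
// standalone client node can receive updates produced on the data nodes.
qry.setLocalListener(events -> {
    for (var evt : events)
        persistToHistoryDb(evt.getKey(), evt.getValue()); // hypothetical helper
});

// Keep the returned cursor open for as long as you want notifications;
// closing it cancels the subscription.
cache.query(qry);
```

Because the query is created per cache instance, only updates of that one cache are delivered, which addresses the "multiple caches on one data node" concern in the original question.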


Is there any way to listener to a specific remote cache's update events ?

2017-09-05 Thread aa...@tophold.com
Hi all, 

I looked around the events section trying to find a way to catch a specific 
cache's update events and persist those events to a historical database. 

We have an instance updating a market data Ignite cache; on another side we 
need to persist all those historical events. 

The listener side is hosted on a standalone machine; it does not care about 
the data in the cache, only the updates. 

We tried several ways, local and remote, but it seems only the cache data 
node got the events, while the remote node did not. 

Also, this cache data node may host multiple caches, while we only want to 
monitor one of them. 

Another way we tried was to define a specific topic and manually trigger a 
publish event, which seems to work.

Thanks for your time!

Regards
Aaron


aa...@tophold.com


Re: UPDATE SQL for nested BinaryObject throws exception.

2017-09-05 Thread afedotov
Hi,

FYI. Created tickets related to the subject:
1) https://issues.apache.org/jira/browse/IGNITE-6265
2) https://issues.apache.org/jira/browse/IGNITE-6266
3) https://issues.apache.org/jira/browse/IGNITE-6268



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: About Apache Ignite Partitioned Cache

2017-09-05 Thread Sabyasachi Biswas
Thanks Evgenii and Denis .

On Tue, Sep 5, 2017 at 9:57 AM, Denis Mekhanikov 
wrote:

> > The backups which is mentioned in the documentation, how do I define
> which node is the primary node and which node is the backup node.
>
> You can either define your own affinity function or use
> RendezvousAffinityFunction and set a backup filter using the
> setAffinityBackupFilter method. Here you can find a use-case for it:
> https://www.youtube.com/watch?time_continue=801&v=u8BFLDfOdy8.
>
> Mon, Sep 4, 2017 at 20:48, ezhuravlev :
>
>> >The backups which is mentioned in the documentation, how do I define
>> which
>> node is the primary node and >which node is the backup node.
>>
>> It is defined by the affinity function; you can read about it here:
>> https://apacheignite.readme.io/docs/affinity-collocation#section-affinity-function
>>
>> >I will have a four node cluster in two data centers , is it normal to
>> think
>> each data center would have one >primary node and one backup node. What
>> would happen in the scenario that the primary node on one dc >cannot reach
>> the primary node on the other dc?
>>
>> I think you may not fully understand how partitioned caches work. A node
>> can be primary for part of the partitions, while other nodes are primary
>> for other parts. Here is basic information about the partitioned cache
>> mode that should be enough to start with:
>> https://apacheignite.readme.io/docs/cache-modes#section-partitioned-mode
>>
>> >I assume that org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi is to
>> be
>> used for the dsicoverySpi property >even for Partitioned Caches. Are there
>> special properties that should be filled in for the partitioned cache
>> >except for addresses and ports?
>>
>> discoverySpi doesn't affect caches at all, so, you don't need to change
>> anything at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
>> configuration
>>
>> >Most of the caches I am using is using uuid as keys , in that case how
>> would affinity collocation of the keys >work? Unfortunately I cannot
>> change
>> the key structure in a short notice.
>>
>> Do you want to collocate Data with Data or Compute with Data? In both
>> cases
>> you can find information on this page:
>> https://apacheignite.readme.io/docs/affinity-collocation
>>
>>
>> Evgenii
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
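The setAffinityBackupFilter approach mentioned above can be sketched roughly as follows. This assumes the RendezvousAffinityFunction API; the "DC" user attribute (set via IgniteConfiguration.setUserAttributes) and the cache name are hypothetical, and the filter only compares against the primary, which is sufficient for a single backup:

```java
import java.util.Objects;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

RendezvousAffinityFunction aff = new RendezvousAffinityFunction(false, 1024);

// Reject a backup candidate that sits in the same data center as the
// primary (first element of the already-selected node list).
aff.setAffinityBackupFilter((candidate, selected) ->
    !Objects.equals(selected.get(0).attribute("DC"), candidate.attribute("DC")));

CacheConfiguration<?, ?> ccfg = new CacheConfiguration<>("myCache") // name is illustrative
    .setBackups(1)
    .setAffinityFunction(aff);
```

With this in place, for every partition the primary copy lands in one data center and the backup in the other, regardless of which node is primary for which partition.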


Re: Serialization exception in Ignite compute job

2017-09-05 Thread begineer
HI,
Thanks. Its clear now. And it solved the issue. Thanks again



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Specifying location of persistent storage location

2017-09-05 Thread Yakov Zhdanov
Raymond, I think most Ignite users run on IPv4, so issues with IPv6 in
this case are hardly possible =)

--Yakov


RE: Specifying location of persistent storage location

2017-09-05 Thread Raymond Wilson
Hi Pavel,



Thanks for the pointer on how to have both the XML and .Net client
configuration supplied to Ignite.



Raymond.



*From:* Pavel Tupitsyn [mailto:ptupit...@apache.org]
*Sent:* Tuesday, September 5, 2017 8:56 PM
*To:* d...@ignite.apache.org
*Cc:* user@ignite.apache.org; Dmitriy Setrakyan 
*Subject:* Re: Specifying location of persistent storage location



Ignite.NET does not have IgniteConfiguration.ConsistentId, here is the
ticket:

https://issues.apache.org/jira/browse/IGNITE-6249



Workaround is to use Spring XML for that particular property (keep
everything else in .NET, configs will be merged):

https://apacheignite-net.readme.io/docs/configuration#section-spring-xml



On Tue, Sep 5, 2017 at 11:51 AM, Yakov Zhdanov  wrote:

+ dev
Pavel Tupitsin, can you please check that
org.apache.ignite.configuration.IgniteConfiguration#setConsistentId has it
platform counterpart? I could not find it.

Raymond, you can explicitly set a bind address for Ignite with public
string Localhost { get; set; }. This will make consistent ID to use only 1
address. Also I would suggest you disable ipv6 if you don't use it.

Igniters, I think Ignite needs to do these checks and reports:

1. Output the store path and tell its (1) size or state that it is empty
and (2) last data file modification date.
2. Output warning if there are other non-empty storage folders under work
directory with their sizes and dates.

--Yakov

2017-09-05 4:07 GMT+03:00 Raymond Wilson :

> Dmitriy,
>
>
>
> I set up an XML file based on the default one and added the two elements
> you noted.
>
>
>
> However, this has brought up an issue in that the XML file and an
> IgniteConfiguration instance can’t both be provided to the
Ignition.Start()
> call. So I changed it to use the DiscoverSPI aspect of IgniteConfiguration
> and set LocalAddress to “127.0.0.1” and LocalPort to 47500.
>
>
>
> This did change the name of the persistence folder to be “127_0_0_1_47500”
> as you suggested.
>
>
>
> While this resolves my current issue with the folder name changing, it
> still seems fragile as network configuration aspects of the server Ignite
> is running on have a direct impact on an internal aspect of its
> configuration (ie: the location where to store the persisted data). A DHCP
> IP lease renewal or an internal DNS domain change or an internal IT
> department change to using IPv6 addressing (among other things) could
cause
> problems when a node restarts and decides the location of its data is
> different.
>
>
>
> Do you know how GridGain manage this in their enterprise deployments using
> persistence?
>
>
>
> Thanks,
> Raymond.
>
>
>
> *From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
> *Sent:* Tuesday, September 5, 2017 11:41 AM
>
> *To:* user 
> *Cc:* Raymond Wilson 
> *Subject:* Re: Specifying location of persistent storage location

>
>
>
>
>
> On Mon, Sep 4, 2017 at 4:28 PM, Raymond Wilson 
> wrote:
>
> Hi,
>
>
>
> It’s possible this could cause change in the folder name, though I do not
> think this is an issue in my case. Below are three different folder names
I
> have seen. All use the same port number, but differ in terms of the IPV6
> address (I have also seen variations where the IPv6 address is absent in
> the folder name).
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> 121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_9cc8_92bc_
> 50c9_6794_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
>
>
> ,
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> 121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_a58c_2f32_
> 8005_b03d_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> 121_1_192_168_178_27_192_168_3_1_2406_e007_38b4_1_858c_
> f0ab_bc60_54ab_2406_e007_38b4_1_c5d8_af4b_55b2_582a_47500
>
>
>
> I start the nodes in my local setup in a well defined order so I would
> expect the port to be the same. I did once start a second instance by
> mistake and did see the port number incremented in the folder name.
>
>
>
> Are you suggesting the two changes you note below will result in the same
> folder name being chosen every time, unlike above?
>
>
>
>
>
> Yes, exactly. My suggestions will ensure that you explicitly bind to the
> same address every time.
>
>
>
>
>
>
>
>
>
>
>
> Thanks,
>
> Raymond.
>
>
>

> *From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
> *Sent:* Tuesday, September 5, 2017 11:17 AM
> *To:* user 
> *Cc:* Raymond Wilson 
> *Subject:* Re: Specifying location of persistent storage location
>
>
>
>
>
>
>
> On Mon, Sep 4, 2017 at 3:37 PM, Raymond Wilson 
> wrote:
>
> Hi,
>
>
>
> I definitely have not had more than one server node running at the same
> time (though there have been more than one client node running on the same
> machine).
>
>
>
> I suspect what is happening is that one or more of the network interfaces
> on the machine can have their address change dynamically. What I

RE: Specifying location of persistent storage location

2017-09-05 Thread Raymond Wilson
Hi Yakov,



Yes, Dmitriy walked me through how setting the LocalHost results in the
folder name for the persistent data to be fixed. I also fixed the port
number as this is also an aspect of the folder name.



Is there a known issue with IPv6 interfaces on a server hosting Ignite?



Thanks,
Raymond



*From:* Yakov Zhdanov [mailto:yzhda...@apache.org]
*Sent:* Tuesday, September 5, 2017 8:51 PM
*To:* user@ignite.apache.org; d...@ignite.apache.org
*Cc:* Dmitriy Setrakyan 
*Subject:* Re: Specifying location of persistent storage location



+ dev
Pavel Tupitsyn, can you please check that
org.apache.ignite.configuration.IgniteConfiguration#setConsistentId has its
platform counterpart? I could not find it.

Raymond, you can explicitly set a bind address for Ignite with public
string Localhost { get; set; }. This will make the consistent ID use only one
address. Also, I would suggest you disable IPv6 if you don't use it.



Igniters, I think Ignite needs to do these checks and reports:



1. Output the store path and tell its (1) size or state that it is empty
and (2) last data file modification date.

2. Output warning if there are other non-empty storage folders under work
directory with their sizes and dates.


--Yakov



2017-09-05 4:07 GMT+03:00 Raymond Wilson :

Dmitriy,



I set up an XML file based on the default one and added the two elements
you noted.



However, this has brought up an issue in that the XML file and an
IgniteConfiguration instance can’t both be provided to the Ignition.Start()
call. So I changed it to use the DiscoverSPI aspect of IgniteConfiguration
and set LocalAddress to “127.0.0.1” and LocalPort to 47500.



This did change the name of the persistence folder to be “127_0_0_1_47500”
as you suggested.



While this resolves my current issue with the folder name changing, it
still seems fragile as network configuration aspects of the server Ignite
is running on have a direct impact on an internal aspect of its
configuration (ie: the location where to store the persisted data). A DHCP
IP lease renewal or an internal DNS domain change or an internal IT
department change to using IPv6 addressing (among other things) could cause
problems when a node restarts and decides the location of its data is
different.



Do you know how GridGain manage this in their enterprise deployments using
persistence?



Thanks,
Raymond.



*From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
*Sent:* Tuesday, September 5, 2017 11:41 AM


*To:* user 
*Cc:* Raymond Wilson 
*Subject:* Re: Specifying location of persistent storage location





On Mon, Sep 4, 2017 at 4:28 PM, Raymond Wilson 
wrote:

Hi,



It’s possible this could cause change in the folder name, though I do not
think this is an issue in my case. Below are three different folder names I
have seen. All use the same port number, but differ in terms of the IPV6
address (I have also seen variations where the IPv6 address is absent in
the folder name).

0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_9cc8_92bc_50c9_6794_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
,

0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_a58c_2f32_8005_b03d_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500

0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_38b4_1_858c_f0ab_bc60_54ab_2406_e007_38b4_1_c5d8_af4b_55b2_582a_47500



I start the nodes in my local setup in a well defined order so I would
expect the port to be the same. I did once start a second instance by
mistake and did see the port number incremented in the folder name.



Are you suggesting the two changes you note below will result in the same
folder name being chosen every time, unlike above?





Yes, exactly. My suggestions will ensure that you explicitly bind to the
same address every time.











Thanks,

Raymond.



*From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
*Sent:* Tuesday, September 5, 2017 11:17 AM
*To:* user 
*Cc:* Raymond Wilson 
*Subject:* Re: Specifying location of persistent storage location







On Mon, Sep 4, 2017 at 3:37 PM, Raymond Wilson 
wrote:

Hi,



I definitely have not had more than one server node running at the same
time (though there have been more than one client node running on the same
machine).



I suspect what is happening is that one or more of the network interfaces
on the machine can have their address change dynamically. What I thought of
as a GUID is actually (I think) an IPv6 address attached to one of the
interfaces. This aspect of the folder name tends to come and go.



You can see from the folder names below that there are quite a number of
addresses involved. This seems to be fragile (and I certainly see the name
of this folder changing frequently), so I think being able to set it to
something concrete would be a good idea.





I think I understand what is happening. Ignite starts off with a default
port, and then starts incrementing it with every new node started on the
same host.

Re: New API docs look for Ignite.NET

2017-09-05 Thread Oleg Ostanin
Great news, thanks a lot!

On Tue, Sep 5, 2017 at 11:47 AM, Pavel Tupitsyn 
wrote:

> DocFX takes around 30 seconds on my machine.
>
> > if you already tried that
> Yes, everything is done on my side, see JIRA ticket [4] and preview [5]
> above.
>
> On Tue, Sep 5, 2017 at 11:45 AM, Ilya Suntsov 
> wrote:
>
> > Pavel, thanks!
> > It is the great news!
> > Looks like DocFX will save 30-40 min.
> >
> > 2017-09-05 11:16 GMT+03:00 Pavel Tupitsyn :
> >
> > > Igniters and users,
> > >
> > > Historically we've been using Doxygen [1] to generate .NET API
> > > documentation [2].
> > >
> > > Recently it became very slow on our code base (more than 30 minutes to
> > > generate), and I could not find any solution or tweak to fix that.
> Other
> > > issues include outdated looks and limited customization possibilities.
> > >
> > > I propose to replace it with DocFX [3] [4]:
> > > - Popular .NET Foundation project
> > > - Good looks and usability out of the box
> > > - Easy to set up
> > >
> > > Our docs will look like this: [5]
> > > Let me know if you have any objections or suggestions.
> > >
> > > Pavel
> > >
> > >
> > > [1] http://www.stack.nl/~dimitri/doxygen/
> > > [2] https://ignite.apache.org/releases/latest/dotnetdoc/index.html
> > > [3] https://dotnet.github.io/docfx/
> > > [4] https://issues.apache.org/jira/browse/IGNITE-6253
> > > [5] https://ptupitsyn.github.io/docfx-test/api/index.html
> > >
> >
> >
> >
> > --
> > Ilya Suntsov
> >
>


Re: Specifying location of persistent storage location

2017-09-05 Thread Pavel Tupitsyn
Ignite.NET does not have IgniteConfiguration.ConsistentId, here is the
ticket:
https://issues.apache.org/jira/browse/IGNITE-6249

Workaround is to use Spring XML for that particular property (keep
everything else in .NET, configs will be merged):
https://apacheignite-net.readme.io/docs/configuration#section-spring-xml
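A minimal sketch of that workaround (not verified against a specific Ignite.NET version; the file name, consistent ID value, and bean layout are illustrative) keeps only consistentId in Spring XML and everything else in the .NET configuration:

```xml
<!-- consistent-id.xml: set only consistentId here; the rest of the
     configuration stays in .NET and the two are merged at start-up. -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="consistentId" value="node-1"/>
    </bean>
</beans>
```

The file would then be referenced from the .NET side via the IgniteConfiguration.SpringConfigUrl property before calling Ignition.Start, which also fixes the persistence folder name independently of network interfaces.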

On Tue, Sep 5, 2017 at 11:51 AM, Yakov Zhdanov  wrote:

> + dev
> Pavel Tupitsin, can you please check that
> org.apache.ignite.configuration.IgniteConfiguration#setConsistentId has it
> platform counterpart? I could not find it.
>
> Raymond, you can explicitly set a bind address for Ignite with public
> string Localhost { get; set; }. This will make consistent ID to use only 1
> address. Also I would suggest you disable ipv6 if you don't use it.
>
> Igniters, I think Ignite needs to do these checks and reports:
>
> 1. Output the store path and tell its (1) size or state that it is empty
> and (2) last data file modification date.
> 2. Output warning if there are other non-empty storage folders under work
> directory with their sizes and dates.
>
> --Yakov
>
> 2017-09-05 4:07 GMT+03:00 Raymond Wilson :
>
> > Dmitriy,
> >
> >
> >
> > I set up an XML file based on the default one and added the two elements
> > you noted.
> >
> >
> >
> > However, this has brought up an issue in that the XML file and an
> > IgniteConfiguration instance can’t both be provided to the
> Ignition.Start()
> > call. So I changed it to use the DiscoverSPI aspect of
> IgniteConfiguration
> > and set LocalAddress to “127.0.0.1” and LocalPort to 47500.
> >
> >
> >
> > This did change the name of the persistence folder to be
> “127_0_0_1_47500”
> > as you suggested.
> >
> >
> >
> > While this resolves my current issue with the folder name changing, it
> > still seems fragile as network configuration aspects of the server Ignite
> > is running on have a direct impact on an internal aspect of its
> > configuration (ie: the location where to store the persisted data). A
> DHCP
> > IP lease renewal or an internal DNS domain change or an internal IT
> > department change to using IPv6 addressing (among other things) could
> cause
> > problems when a node restarts and decides the location of its data is
> > different.
> >
> >
> >
> > Do you know how GridGain manage this in their enterprise deployments
> using
> > persistence?
> >
> >
> >
> > Thanks,
> > Raymond.
> >
> >
> >
> > *From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
> > *Sent:* Tuesday, September 5, 2017 11:41 AM
> >
> > *To:* user 
> > *Cc:* Raymond Wilson 
> > *Subject:* Re: Specifying location of persistent storage location
> >
> >
> >
> >
> >
> > On Mon, Sep 4, 2017 at 4:28 PM, Raymond Wilson <
> raymond_wil...@trimble.com>
> > wrote:
> >
> > Hi,
> >
> >
> >
> > It’s possible this could cause change in the folder name, though I do not
> > think this is an issue in my case. Below are three different folder
> names I
> > have seen. All use the same port number, but differ in terms of the IPV6
> > address (I have also seen variations where the IPv6 address is absent in
> > the folder name).
> >
> > 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> > 121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_9cc8_92bc_
> > 50c9_6794_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
> >
> >
> >
> > ,
> >
> > 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> > 121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_a58c_2f32_
> > 8005_b03d_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
> >
> > 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> > 121_1_192_168_178_27_192_168_3_1_2406_e007_38b4_1_858c_
> > f0ab_bc60_54ab_2406_e007_38b4_1_c5d8_af4b_55b2_582a_47500
> >
> >
> >
> > I start the nodes in my local setup in a well defined order so I would
> > expect the port to be the same. I did once start a second instance by
> > mistake and did see the port number incremented in the folder name.
> >
> >
> >
> > Are you suggesting the two changes you note below will result in the same
> > folder name being chosen every time, unlike above?
> >
> >
> >
> >
> >
> > Yes, exactly. My suggestions will ensure that you explicitly bind to the
> > same address every time.
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > Thanks,
> >
> > Raymond.
> >
> >
> >
> > *From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
> > *Sent:* Tuesday, September 5, 2017 11:17 AM
> > *To:* user 
> > *Cc:* Raymond Wilson 
> > *Subject:* Re: Specifying location of persistent storage location
> >
> >
> >
> >
> >
> >
> >
> > On Mon, Sep 4, 2017 at 3:37 PM, Raymond Wilson <
> raymond_wil...@trimble.com>
> > wrote:
> >
> > Hi,
> >
> >
> >
> > I definitely have not had more than one server node running at the same
> > time (though there have been more than one client node running on the
> same
> > machine).
> >
> >
> >
> > I suspect what is happening is that one or more of the network interfaces
> > on the machine can have their address change dynamically.

Re: Specifying location of persistent storage location

2017-09-05 Thread Yakov Zhdanov
+ dev
Pavel Tupitsyn, can you please check that
org.apache.ignite.configuration.IgniteConfiguration#setConsistentId has its
platform counterpart? I could not find it.

Raymond, you can explicitly set a bind address for Ignite with public
string Localhost { get; set; }. This will make the consistent ID use only one
address. Also, I would suggest you disable IPv6 if you don't use it.

Igniters, I think Ignite needs to do these checks and reports:

1. Output the store path and tell its (1) size or state that it is empty
and (2) last data file modification date.
2. Output warning if there are other non-empty storage folders under work
directory with their sizes and dates.

--Yakov

2017-09-05 4:07 GMT+03:00 Raymond Wilson :

> Dmitriy,
>
>
>
> I set up an XML file based on the default one and added the two elements
> you noted.
>
>
>
> However, this has brought up an issue in that the XML file and an
> IgniteConfiguration instance can’t both be provided to the Ignition.Start()
> call. So I changed it to use the DiscoverSPI aspect of IgniteConfiguration
> and set LocalAddress to “127.0.0.1” and LocalPort to 47500.
>
>
>
> This did change the name of the persistence folder to be “127_0_0_1_47500”
> as you suggested.
>
>
>
> While this resolves my current issue with the folder name changing, it
> still seems fragile as network configuration aspects of the server Ignite
> is running on have a direct impact on an internal aspect of its
> configuration (ie: the location where to store the persisted data). A DHCP
> IP lease renewal or an internal DNS domain change or an internal IT
> department change to using IPv6 addressing (among other things) could cause
> problems when a node restarts and decides the location of its data is
> different.
>
>
>
> Do you know how GridGain manage this in their enterprise deployments using
> persistence?
>
>
>
> Thanks,
> Raymond.
>
>
>
> *From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
> *Sent:* Tuesday, September 5, 2017 11:41 AM
>
> *To:* user 
> *Cc:* Raymond Wilson 
> *Subject:* Re: Specifying location of persistent storage location
>
>
>
>
>
> On Mon, Sep 4, 2017 at 4:28 PM, Raymond Wilson 
> wrote:
>
> Hi,
>
>
>
> It’s possible this could cause change in the folder name, though I do not
> think this is an issue in my case. Below are three different folder names I
> have seen. All use the same port number, but differ in terms of the IPV6
> address (I have also seen variations where the IPv6 address is absent in
> the folder name).
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> 121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_9cc8_92bc_
> 50c9_6794_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
>
>
> ,
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> 121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_a58c_2f32_
> 8005_b03d_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> 121_1_192_168_178_27_192_168_3_1_2406_e007_38b4_1_858c_
> f0ab_bc60_54ab_2406_e007_38b4_1_c5d8_af4b_55b2_582a_47500
>
>
>
> I start the nodes in my local setup in a well defined order so I would
> expect the port to be the same. I did once start a second instance by
> mistake and did see the port number incremented in the folder name.
>
>
>
> Are you suggesting the two changes you note below will result in the same
> folder name being chosen every time, unlike above?
>
>
>
>
>
> Yes, exactly. My suggestions will ensure that you explicitly bind to the
> same address every time.
>
>
>
>
>
>
>
>
>
>
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
> *Sent:* Tuesday, September 5, 2017 11:17 AM
> *To:* user 
> *Cc:* Raymond Wilson 
> *Subject:* Re: Specifying location of persistent storage location
>
>
>
>
>
>
>
> On Mon, Sep 4, 2017 at 3:37 PM, Raymond Wilson 
> wrote:
>
> Hi,
>
>
>
> I definitely have not had more than one server node running at the same
> time (though there have been more than one client node running on the same
> machine).
>
>
>
> I suspect what is happening is that one or more of the network interfaces
> on the machine can have their address change dynamically. What I thought of
> as a GUID is actually (I think) an IPv6 address attached to one of the
> interfaces. This aspect of the folder name tends to come and go.
>
>
>
> You can see from the folder names below that there are quite a number of
> addresses involved. This seems to be fragile (and I certainly see the name
> of this folder changing frequently), so I think being able to set it to
> something concrete would be a good idea.
>
>
>
>
>
> I think I understand what is happening. Ignite starts off with a default
> port, and then starts incrementing it with every new node started on the
> same host. Perhaps you start server and client nodes in different order
> sometimes which causes server to bind to a different port.
>
>
>
> To make sure that your serv

Re: New API docs look for Ignite.NET

2017-09-05 Thread Pavel Tupitsyn
DocFX takes around 30 seconds on my machine.

> if you already tried that
Yes, everything is done on my side, see JIRA ticket [4] and preview [5]
above.

On Tue, Sep 5, 2017 at 11:45 AM, Ilya Suntsov  wrote:

> Pavel, thanks!
> That's great news!
> Looks like DocFX will save 30-40 min.
>
> 2017-09-05 11:16 GMT+03:00 Pavel Tupitsyn :
>
> > Igniters and users,
> >
> > Historically we've been using Doxygen [1] to generate .NET API
> > documentation [2].
> >
> > Recently it became very slow on our code base (more than 30 minutes to
> > generate), and I could not find any solution or tweak to fix that. Other
> > issues include outdated looks and limited customization possibilities.
> >
> > I propose to replace it with DocFX [3] [4]:
> > - Popular .NET Foundation project
> > - Good looks and usability out of the box
> > - Easy to set up
> >
> > Our docs will look like this: [5]
> > Let me know if you have any objections or suggestions.
> >
> > Pavel
> >
> >
> > [1] http://www.stack.nl/~dimitri/doxygen/
> > [2] https://ignite.apache.org/releases/latest/dotnetdoc/index.html
> > [3] https://dotnet.github.io/docfx/
> > [4] https://issues.apache.org/jira/browse/IGNITE-6253
> > [5] https://ptupitsyn.github.io/docfx-test/api/index.html
> >
>
>
>
> --
> Ilya Suntsov
>


Re: Failed to accept TCP connection. java.net.SocketException: Too many open files

2017-09-05 Thread Вячеслав Коптилин
Hi,

The Ignite persistent store requires creating a number of files, such as the
WAL (write-ahead log), the page store (implemented as a file per partition),
etc.
So, you need to increase the open-files limit. To do that, you can edit
limits.conf (nofile = max number of open files) or use the 'ulimit' command.

Please see the details
https://apacheignite.readme.io/v2.1/docs/jvm-and-system-tuning#file-descriptors

Thanks!
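As a rough sketch of the two options (the limit 65536 and the "ignite" user name are illustrative; size the limit to your actual WAL and partition file counts):

```shell
# Show the current soft limit on open file descriptors for this shell.
ulimit -n

# Try to raise it before launching the Ignite node. This may fail if the
# hard limit is lower; in that case raise the hard limit first.
ulimit -n 65536 2>/dev/null || echo "raise failed: increase the hard limit first"

# To make the limit permanent, add lines like these (user and value are
# examples) to /etc/security/limits.conf and re-login:
#   ignite  soft  nofile  65536
#   ignite  hard  nofile  65536
```

Note that ulimit changes apply only to the current shell and its children, so they must be issued in the same session (or init script) that starts the node.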

2017-09-05 9:18 GMT+03:00 userx :

> Hi all,
>
> I am not sure what steps led to the following error, but can someone help
> explain what the logs say.
>
> 2017-09-05 04:45:52,731 WARN [tcp-disco-msg-worker-#3%e89cfda3-beb6-4eca-ada2-fb4c3b2eebbc%] {} org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl - Failed to save metadata for typeId: -1689269445; exception was thrown: /temp/Ignite/Work/binary_meta/10_63_142_35_10_63_155_178_10_63_170_29_127_0_0_1_47500/-1689269445.bin (Too many open files)
> 2017-09-05 04:45:52,864 WARN [tcp-disco-msg-worker-#3%e89cfda3-beb6-4eca-ada2-fb4c3b2eebbc%] {} org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl - Failed to save metadata for typeId: -1689269445; exception was thrown: /temp/Ignite/Work/binary_meta/10_63_142_35_10_63_155_178_10_63_170_29_127_0_0_1_47500/-1689269445.bin (Too many open files)
> 2017-09-05 04:45:52,943 WARN [tcp-disco-msg-worker-#3%e89cfda3-beb6-4eca-ada2-fb4c3b2eebbc%] {} org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl - Failed to save metadata for typeId: 1537855308; exception was thrown: /temp/Ignite/Work/binary_meta/10_63_142_35_10_63_155_178_10_63_170_29_127_0_0_1_47500/1537855308.bin (Too many open files)
> 2017-09-05 04:45:52,972 WARN [tcp-disco-msg-worker-#3%e89cfda3-beb6-4eca-ada2-fb4c3b2eebbc%] {} org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl - Failed to save metadata for typeId: 1537855308; exception was thrown: /temp/Ignite/Work/binary_meta/10_63_142_35_10_63_155_178_10_63_170_29_127_0_0_1_47500/1537855308.bin (Too many open files)
> 2017-09-05 04:45:52,979 WARN [tcp-disco-msg-worker-#3%e89cfda3-beb6-4eca-ada2-fb4c3b2eebbc%] {} org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl - Failed to save metadata for typeId: -741311308; exception was thrown: /temp/Ignite/Work/binary_meta/10_63_142_35_10_63_155_178_10_63_170_29_127_0_0_1_47500/-741311308.bin (Too many open files)
> 2017-09-05 04:45:52,985 WARN [tcp-disco-msg-worker-#3%e89cfda3-beb6-4eca-ada2-fb4c3b2eebbc%] {} org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl - Failed to save metadata for typeId: -741311308; exception was thrown: /temp/Ignite/Work/binary_meta/10_63_142_35_10_63_155_178_10_63_170_29_127_0_0_1_47500/-741311308.bin (Too many open files)
> 2017-09-05 04:45:52,991 WARN [tcp-disco-msg-worker-#3%e89cfda3-beb6-4eca-ada2-fb4c3b2eebbc%] {} org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl - Failed to save metadata for typeId: 1055444204; exception was thrown: /temp/Ignite/Work/binary_meta/10_63_142_35_10_63_155_178_10_63_170_29_127_0_0_1_47500/1055444204.bin (Too many open files)
> 2017-09-05 04:45:52,996 WARN [tcp-disco-msg-worker-#3%e89cfda3-beb6-4eca-ada2-fb4c3b2eebbc%] {} org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl - Failed to save metadata for typeId: 1055444204; exception was thrown: /temp/Ignite/Work/binary_meta/10_63_142_35_10_63_155_178_10_63_170_29_127_0_0_1_47500/1055444204.bin (Too many open files)
> 2017-09-05 04:45:53,011 WARN [tcp-disco-msg-worker-#3%e89cfda3-beb6-4eca-ada2-fb4c3b2eebbc%] {} org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl - Failed to save metadata for typeId: 674065617; exception was thrown: /temp/Ignite/Work/binary_meta/10_63_142_35_10_63_155_178_10_63_170_29_127_0_0_1_47500/674065617.bin (Too many open files)
> 2017-09-05 04:45:53,015 WARN [tcp-disco-msg-worker-#3%e89cfda3-beb6-4eca-ada2-fb4c3b2eebbc%] {} org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl - Failed to save metadata for typeId: 674065617; exception was thrown: /temp/Ignite/Work/binary_meta/10_63_142_35_10_63_155_178_10_63_170_29_127_0_0_1_47500/674065617.bin (Too many open files)
> 2017-09-05 04:45:53,861 ERROR [tcp-disco-srvr-#2%e89cfda3-beb6-4eca-ada2-fb4c3b2eebbc%] {} org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Failed to accept TCP connection. java.net.SocketException: Too many open files (Accept failed)
> at java.net.PlainSocketImpl.socketAccept(Native Method)
> at
> java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
> at java.net.ServerSocket.implAccept(Server

Re: New API docs look for Ignite.NET

2017-09-05 Thread Sergey Kozlov
Pavel

I like the idea of replacing Doxygen. It has really become inconvenient to
regenerate the docs so often.
Do you know how much faster DocFX is than Doxygen (if you have already tried it)?



On Tue, Sep 5, 2017 at 11:16 AM, Pavel Tupitsyn 
wrote:

> Igniters and users,
>
> Historically we've been using Doxygen [1] to generate .NET API
> documentation [2].
>
> Recently it became very slow on our code base (more than 30 minutes to
> generate), and I could not find any solution or tweak to fix that. Other
> issues include outdated looks and limited customization possibilities.
>
> I propose to replace it with DocFX [3] [4]:
> - Popular .NET Foundation project
> - Good looks and usability out of the box
> - Easy to set up
>
> Our docs will look like this: [5]
> Let me know if you have any objections or suggestions.
>
> Pavel
>
>
> [1] http://www.stack.nl/~dimitri/doxygen/
> [2] https://ignite.apache.org/releases/latest/dotnetdoc/index.html
> [3] https://dotnet.github.io/docfx/
> [4] https://issues.apache.org/jira/browse/IGNITE-6253
> [5] https://ptupitsyn.github.io/docfx-test/api/index.html
>



-- 
Sergey Kozlov
GridGain Systems
www.gridgain.com


Re: Failed to accept TCP connection. java.net.SocketException: Too many open files

2017-09-05 Thread userx
Hi all,

Looking at https://apacheignite.readme.io/v2.1/docs/jvm-and-system-tuning

it looks like the recommended setting for ulimit is 32768. Pardon my limited
knowledge, but can someone please tell me why such a large number is
required? The current setting is 1024, and before I increase it I would
like to understand the reasoning behind the recommendation.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Specifying location of persistent storage location

2017-09-05 Thread Raymond Wilson
I wasn’t thinking of a snapshot so much as a situation where a server
(particularly a virtual server) might need to be bounced or replaced
during a maintenance window. If this were on AWS the process is pretty
simple: create a new server (for example with an updated, patched AMI) and
reattach the EBS volume containing the persisted data. This server may well
not have the same IP address (though that could be mitigated with a DNS
name).



In any event, I’m just trying to convey a sense of unease over how IP
configuration controls the naming of the location of persistent data on an
Ignite server node and how (especially with a default configuration) this
can result in unpredictable changes of the name of the folder Ignite
expects to find its persisted data.



*From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
*Sent:* Tuesday, September 5, 2017 6:12 PM
*To:* user 
*Subject:* Re: Specifying location of persistent storage location







On Mon, Sep 4, 2017 at 8:40 PM, Raymond Wilson 
wrote:

Thanks.



I get the utility of specifying the network address to bind to; I’m not
convinced that using it to derive the name of the internal data store is a
good idea! :)

 For instance, what if you have to move a persistent data store to a
different server? Or are you saying everybody sets localhost or 127.0.0.1
to ensure the folder name is always essentially the local host?



I think what you are asking about is a database backup or a snapshot.
Ignite does not support it out of the box, but you may wish to look at the
3rd party solutions, e.g. the one provided by GridGain -
https://docs.gridgain.com/docs/data-snapshots







*From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
*Sent:* Tuesday, September 5, 2017 3:09 PM
*To:* user 


*Subject:* Re: Specifying location of persistent storage location







On Mon, Sep 4, 2017 at 6:07 PM, Raymond Wilson 
wrote:

Dmitriy,



I set up an XML file based on the default one and added the two elements
you noted.



However, this has brought up an issue in that the XML file and an
IgniteConfiguration instance can’t both be provided to the Ignition.Start()
call. So I changed it to use the DiscoverySpi aspect of IgniteConfiguration
and set LocalAddress to “127.0.0.1” and LocalPort to 47500.



This did change the name of the persistence folder to be “127_0_0_1_47500”
as you suggested.
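For anyone following along in XML rather than code, the same explicit binding can be expressed in the Spring configuration; a minimal sketch (discovery section only, all other properties omitted):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <!-- Bind discovery to a fixed address/port so the
                 persistence folder name stays stable across restarts. -->
            <property name="localAddress" value="127.0.0.1"/>
            <property name="localPort" value="47500"/>
        </bean>
    </property>
</bean>
```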



While this resolves my current issue with the folder name changing, it
still seems fragile, as network configuration aspects of the server Ignite
is running on have a direct impact on an internal aspect of its
configuration (i.e., the location where persisted data is stored). A DHCP
IP lease renewal, an internal DNS domain change, or an internal IT
department change to IPv6 addressing (among other things) could cause
problems when a node restarts and decides the location of its data is
different.



Do you know how GridGain manage this in their enterprise deployments using
persistence?



I am glad the issue is resolved. By default, Ignite will bind to all the
local network interfaces, and if they are enumerated in a different order,
it may create the situation you witnessed.



All enterprise users explicitly specify which network address to bind to,
just like you did. This helps avoid any kind of magic in production.









Thanks,
Raymond.



*From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
*Sent:* Tuesday, September 5, 2017 11:41 AM


*To:* user 
*Cc:* Raymond Wilson 
*Subject:* Re: Specifying location of persistent storage location





On Mon, Sep 4, 2017 at 4:28 PM, Raymond Wilson 
wrote:

Hi,



It’s possible this could cause a change in the folder name, though I do not
think this is an issue in my case. Below are three different folder names I
have seen. All use the same port number, but differ in terms of the IPv6
address (I have also seen variations where the IPv6 address is absent from
the folder name).

0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_9cc8_92bc_50c9_6794_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
,

0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_a58c_2f32_8005_b03d_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500

0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_38b4_1_858c_f0ab_bc60_54ab_2406_e007_38b4_1_c5d8_af4b_55b2_582a_47500



I start the nodes in my local setup in a well defined order so I would
expect the port to be the same. I did once start a second instance by
mistake and did see the port number incremented in the folder name.



Are you suggesting the two changes you note below will result in the same
folder name being chosen every time, unlike above?





Yes, exactly. My suggestions will ensure that you explicitly bind to the
same address every time.











Thanks,

Raymond.



*From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
*Sent:* Tuesday, September 5, 2017 11:17 AM
*

New API docs look for Ignite.NET

2017-09-05 Thread Pavel Tupitsyn
Igniters and users,

Historically we've been using Doxygen [1] to generate .NET API
documentation [2].

Recently it became very slow on our code base (more than 30 minutes to
generate), and I could not find any solution or tweak to fix that. Other
issues include outdated looks and limited customization possibilities.

I propose to replace it with DocFX [3] [4]:
- Popular .NET Foundation project
- Good looks and usability out of the box
- Easy to set up

Our docs will look like this: [5]
Let me know if you have any objections or suggestions.

Pavel


[1] http://www.stack.nl/~dimitri/doxygen/
[2] https://ignite.apache.org/releases/latest/dotnetdoc/index.html
[3] https://dotnet.github.io/docfx/
[4] https://issues.apache.org/jira/browse/IGNITE-6253
[5] https://ptupitsyn.github.io/docfx-test/api/index.html


Re: About Apache Ignite Partitioned Cache

2017-09-05 Thread Denis Mekhanikov
> Regarding the backups mentioned in the documentation, how do I define
> which node is the primary node and which node is the backup node?

You can either define your own affinity function, or use
RendezvousAffinityFunction and set a backup filter via the
setAffinityBackupFilter method.
Here you can find a use case for it:
https://www.youtube.com/watch?time_continue=801&v=u8BFLDfOdy8.
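To make the suggestion concrete, here is a self-contained sketch (plain Java, no Ignite dependency) of the predicate logic such a backup filter typically implements: a candidate backup node is accepted only if its data center differs from every node already chosen for the partition. In real code a predicate of this shape would be passed to RendezvousAffinityFunction.setAffinityBackupFilter; the "dc" attribute name and the Node stand-in class here are invented for illustration.

```java
import java.util.List;
import java.util.Map;
import java.util.function.BiPredicate;

class BackupFilterSketch {
    // Minimal stand-in for Ignite's ClusterNode: just a user-attribute map.
    record Node(Map<String, String> attributes) {
        String attribute(String name) { return attributes.get(name); }
    }

    // A candidate may back up a partition only if its data center differs
    // from every node already selected for that partition (the primary
    // plus any earlier backups).
    static BiPredicate<Node, List<Node>> dcBackupFilter(String attrName) {
        return (candidate, selected) -> selected.stream()
                .noneMatch(n -> n.attribute(attrName) != null
                        && n.attribute(attrName).equals(candidate.attribute(attrName)));
    }

    public static void main(String[] args) {
        Node primary = new Node(Map.of("dc", "DC1"));
        Node sameDc  = new Node(Map.of("dc", "DC1"));
        Node otherDc = new Node(Map.of("dc", "DC2"));

        BiPredicate<Node, List<Node>> filter = dcBackupFilter("dc");
        // Primary was placed in DC1: a DC1 backup is rejected,
        // a DC2 backup is accepted.
        System.out.println(filter.test(sameDc,  List.of(primary))); // false
        System.out.println(filter.test(otherDc, List.of(primary))); // true
    }
}
```

Note the nodes must actually carry the attribute (e.g. via IGNITE_NODE_ATTRIBUTES or programmatic user attributes) for such a filter to have any effect.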

Mon, Sep 4, 2017 at 20:48, ezhuravlev :

> > Regarding the backups mentioned in the documentation, how do I define
> > which node is the primary node and which node is the backup node?
>
> It is defined by the affinity function; you can read about it here:
>
> https://apacheignite.readme.io/docs/affinity-collocation#section-affinity-function
>
> > I will have a four-node cluster in two data centers; is it normal to
> > think each data center would have one primary node and one backup node?
> > What would happen in the scenario where the primary node in one DC
> > cannot reach the primary node in the other DC?
>
> I think you do not fully understand how partitioned caches work. A node
> can be primary for part of the partitions, while other nodes will have
> other parts as primary. Here is basic information about partitioned
> caches that should be enough to start with:
> https://apacheignite.readme.io/docs/cache-modes#section-partitioned-mode
>
> > I assume that org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi is to
> > be used for the discoverySpi property even for partitioned caches. Are
> > there special properties that should be filled in for the partitioned
> > cache, apart from addresses and ports?
>
> The discoverySpi doesn't affect caches at all, so you don't need to change
> anything in the org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
> configuration.
>
> > Most of the caches I am using have UUIDs as keys; in that case, how
> > would affinity collocation of the keys work? Unfortunately I cannot
> > change the key structure on short notice.
>
> Do you want to collocate Data with Data or Compute with Data? In both cases
> you can find information on this page:
> https://apacheignite.readme.io/docs/affinity-collocation
>
>
> Evgenii
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: copyOnRead to false

2017-09-05 Thread steve.hostettler
Hello,

Thanks for the answer. The benchmark is actually our application stressed
with several data volumes, some of which are quite complex to describe.
However, for these benchmarks we are using only one node.

Basically we load a set of caches from the database and then do a lot of
querying, both ScanQuery (on BinaryObjects) and SQL queries.

Most of what we do is read-only with lots of computation (at least we have
segregated the caches that are read/write).

Based on what you described, I should see a performance improvement.
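For context on what the flag trades off: with on-heap caching, copyOnRead=true makes every read pay for a defensive copy of the stored value, while copyOnRead=false hands back the stored on-heap instance itself, which is faster but means callers must treat the value as read-only. A toy sketch of the mechanism (plain Java; the ToyCache class is invented for illustration and is not Ignite API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Toy on-heap cache illustrating the copyOnRead trade-off.
class ToyCache<K, V> {
    private final Map<K, V> store = new HashMap<>();
    private final boolean copyOnRead;      // copy on every get()?
    private final UnaryOperator<V> copier; // how to copy a value

    ToyCache(boolean copyOnRead, UnaryOperator<V> copier) {
        this.copyOnRead = copyOnRead;
        this.copier = copier;
    }

    void put(K key, V val) { store.put(key, val); }

    // copyOnRead=false returns the stored instance (cheap, but shared);
    // copyOnRead=true returns a fresh copy on every read.
    V get(K key) {
        V v = store.get(key);
        return (v == null || !copyOnRead) ? v : copier.apply(v);
    }

    public static void main(String[] args) {
        ToyCache<String, int[]> c = new ToyCache<>(false, int[]::clone);
        int[] val = {1, 2, 3};
        c.put("k", val);
        // With copyOnRead=false, the stored array itself comes back:
        System.out.println(c.get("k") == val); // true
    }
}
```

This is also why the setting only matters when entries are actually kept on-heap: if every read deserializes from off-heap memory anyway, a fresh object is produced regardless of the flag.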



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/