RE: Questions on setting up firewall for multicast cluster discovery.

2018-07-02 Thread Jon Tricker
The nodes are on VirtualBox VMs and connected via VirtualBox's 'bridged' 
network. Because they run CentOS, each machine's own firewall must be set up 
to open the required ports (as on most Unixes).

As a workaround I can use static IP discovery, and that works fine. However, every 
time I create a new machine it is assigned a random address on the bridged network 
(that's just how VirtualBox works), so I then have to add it to all the other 
machines' address lists.

At least, once assigned, the addresses remain constant, so the lists do not 
require updating after every reboot.




-Original Message-
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: 29 June 2018 15:05
To: user@ignite.apache.org
Subject: Re: Questions on setting up firewall for multicast cluster discovery.

Hi Jon,

First of all, you don't have to use multicast for discovery. Using static IP 
configuration or another shared IP finder might simplify the setup:
https://apacheignite.readme.io/docs/tcpip-discovery
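
For reference, a minimal sketch of a static IP finder setup (the addresses are 
placeholders, not from this thread):

import java.util.Arrays;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

// Static discovery: every node gets the same fixed list of addresses to probe.
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList(
    "192.168.56.101:47500..47509",
    "192.168.56.102:47500..47509"));

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));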

Second of all, I'm not sure I fully understand what you're trying to achieve. 
Are both nodes in the same network? If yes, why is there a firewall between 
them and why do you need to restrict internal traffic?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Is it possible to configure Apache Ignite QueryCursor to be autocloseable in the xml configuration file?

2018-07-02 Thread Igor Sapego
What Ignite version are you on?
Best Regards,
Igor


On Sat, Jun 30, 2018 at 9:28 PM tizh  wrote:

> I am writing a program in Golang that connects to local Ignite clusters
> through an ODBC driver package written in Go.
>
> During development I began getting this error repeatedly:
>
>
>
> I have looked into the source code of the golang ODBC driver I used, which
> calls `SQLCloseCursor` promptly when my code calls the function `Close()`.
> I
> have looked at the ODBC conformance specifications of Ignite and noted that
> `SQLCloseCursor` is supported, yet somehow cursors are left open after
> queries.
>
> I have since attempted to configure the setting for QueryCursor to value
> `autocloseable` in the xml file I use for Ignite Configuration when
> initiating a cluster. I have used this doc as reference, but I am limited
> by
> my lack of knowledge of Java -
> https://ignite.apache.org/releases/latest/javadoc/index.html. After some
> attempts I am not sure whether my syntax is wrong, or this is simply not a
> configurable property through the xml configuration file, and that the only
> way to tell Ignite to close the cursor is in the code, after queries. Any
> insight is appreciated!
>
> Lastly, here is a repository with the chunk of code causing the issue,
> taken out and redacted to provide further perspective for anyone who may
> be able to help.
>
> Cheers and thank you
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Node stopped automatically

2018-07-02 Thread kvenkatramtreddy
Could you please provide some pointers, so that I can look deeper into the
issue?

1) Does PageMemory cause any issues?

As per the metrics, one node shows only 70 pages while the other node shows
PageMemory [pages=390178].

2) Why is the whole cluster going down when one node is down?

3) Can I restart Ignite programmatically if this error occurs?

Thanks & Regards,
Venkat



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: High cpu on ignite server nodes

2018-07-02 Thread Stanislav Lukyanov
I don't really have much to suggest other than reducing/balancing the on-heap
cache size.
If you have a lot of objects on heap, you obviously need to have a large heap.
If you need to constantly add/remove on-heap objects, you'll have a lot of
work for the GC.

Perhaps you can review your architecture to avoid on-heap caching altogether
and minimize the impact of Java GC.

Stan



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: OutOfMemoryError while streaming

2018-07-02 Thread Denis Mekhanikov
How many entries did you put into the cache?
How do you check that nodes have entries?
Did you configure a node filter for the cache?

The rendezvous affinity function may give an uneven distribution on small
datasets, but it gets better as the number of entries grows.
So, if you've only put in 5-10 entries, you may get uneven data
distribution.

Denis

Sun, Jul 1, 2018 at 18:33, breischl :

> The keys are hashed so in theory they should be distributed relatively
> evenly. I traced the logic to do the hashing once and it seemed ok, but
> it's
> pretty complicated so I won't claim to know it that well. We use UUIDs and
> it does seem to distribute pretty much evenly.
>
> Are you sure all the nodes have properly joined the cluster? ie, that you
> have a single cluster with 8 nodes, instead of a bunch of 1-2 node
> clusters.
> Depending on how your discovery is set up, that can be tricky and
> potentially have timing bugs. Although that may be more due to us having a
> custom DiscoverySPI that is a bit finicky.
>
> For #2, I would like to know the answer too. :) We are using
> OptimizedMarshaller, but that's due to some problems with serializing some
> of our complex objects which would not apply to your case. You may want to
> just test it.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Uneven partitioning and data not on off-heap memory.

2018-07-02 Thread Denis Mekhanikov
Mikael,

You can't configure a node not to use off-heap memory.
It's always used. But you can enable on-heap caching along with it:
https://apacheignite.readme.io/docs/memory-configuration#section-on-heap-caching
So, off-heap memory cannot be disabled.
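
For example, a minimal sketch of enabling the on-heap layer for one cache (the
cache name is illustrative):

import org.apache.ignite.configuration.CacheConfiguration;

// Entries still live off-heap; this only adds an on-heap layer on top of the off-heap storage.
CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setOnheapCacheEnabled(true);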

@smovva, "Off-Heap" shows how many entries are stored in off-heap memory,
and "Off-Heap Memory" – how much off-heap memory is used for it.
Calculating how much off-heap memory is used per cache is currently not
implemented, so it always shows 0.
And I think it won't be implemented, since cache memory metrics are being
replaced with data region metrics:
https://apacheignite.readme.io/docs/memory-metrics
So, Visor will probably be changed not to show these metrics.

Denis

Mon, Jul 2, 2018 at 2:37, smovva :

> > 2. By default all entries are saved off heap (not disk, heap outside the
> Java heap to avoid GC problems), you can configure it to use heap or off
> heap memory as you want.
>
> This is the output from visor. I'm not completely sure what the difference
> between off-heap and off-heap memory is here. So, I was assuming that
> "off-heap" is disk and "off-heap memory" is what's in off-heap RAM.
> 
>  Total: 419490
>Heap: 0
>Off-Heap: 419490
>Off-Heap Memory: 0
> 
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


S3AFileSystem as IGFS secondary file system

2018-07-02 Thread otorreno
Hi,

I am struggling to get the S3AFileSystem configured as an IGFS secondary
file system.

I am using IGFS as my default file system, and do not want to have an HDFS
cluster up and running besides the IGFS one.

I have been able to reproduce the steps contained at
https://apacheignite-fs.readme.io/docs/secondary-file-system BUT that's not
the behaviour I am looking for.

What I want to do is to have an instance of S3AFileSystem, which is an
implementation of the Hadoop FileSystem, and configure IGFS to use it as the
secondary file system.

Is it possible?

Best,
Oscar



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Information regarding Ignite Web Console

2018-07-02 Thread Sriveena Mattaparthi
Denis,

Have some open questions in ignite

1.   Do all the tables created in Ignite get converted to binary objects 
internally?

2.   Do all cache entities, like Person, get converted to binary objects 
internally?

3.   Is using binary objects better than using entity cache objects?

4.   Is there a way to deserialize Avro-format messages from Kafka into an Ignite 
sink? Examples are available for String and JSON converters.

Please help.

Thanks & Regards,
Sriveena

From: Denis Mekhanikov [mailto:dmekhani...@gmail.com]
Sent: Friday, June 29, 2018 8:09 PM
To: user@ignite.apache.org
Subject: Re: Information regarding Ignite Web Console

Sriveena,

You should configure corresponding query entities to be able to query data in 
cache.
Annotation driven configuration is also available.

See more: 
https://apacheignite.readme.io/docs/cache-queries#section-query-configuration-by-annotations

Denis

Fri, Jun 29, 2018 at 12:43, Sriveena Mattaparthi <sriveena.mattapar...@ekaplus.com>:
Hi Denis,

I am trying to use the below code to query the binary object

IgniteCache<Integer, BinaryObject> cache = start.getOrCreateCache(cfg).withKeepBinary();
BinaryObjectBuilder builder = start.binary().builder("BinaryTest");
builder.setField("name", "Test");
cache.put(1, builder.build());

QueryCursor<List<?>> query = cache.query(new SqlFieldsQuery("select name from BinaryTest"));

But it is failing in the latest 2.5 version, saying the BinaryTest table does not exist.

How do we query the binary objects in the above example?

Please help.

Thanks & Regards,
Sriveena

From: Denis Mekhanikov 
[mailto:dmekhani...@gmail.com]
Sent: Wednesday, June 27, 2018 6:37 PM

To: user@ignite.apache.org
Subject: Re: Information regarding Ignite Web Console

Sriveena,

You can have objects of different types in one cache, but querying it will be 
tricky.
You will have to configure QueryEntities for your data; they describe which 
fields are available for querying.
Annotation-based configuration is also available.
Querying nested objects is also possible, if you configure the query entities 
properly: 
https://apacheignite-sql.readme.io/docs/schema-and-indexes#section-indexing-nested-objects

So, if you want to run SQL queries over your data, it should have some concrete 
schema.

Denis

Wed, Jun 27, 2018 at 14:08, Sriveena Mattaparthi <sriveena.mattapar...@ekaplus.com>:
Thank you so much for the quick responses, unlike on any other forum... I really 
appreciate that.

One last question, Denis: we plan to load all the MongoDB collections into the 
Ignite cache and perform complex aggregations and joins in memory.

But unlike with RDBMS data stores, we cannot have fixed model objects for each 
collection, as each document in a collection may have its own columns and 
data types.

Could you please suggest whether Ignite is the right choice for this kind of 
scenario, where the same Mongo collection holds different types of data.

Please note that we have tried using BinaryObject, but we are stuck because Ignite 
doesn't support querying on an inner BinaryObject (a BinaryObject inside a 
BinaryObject – sub-documents and arrays inside a Mongo document).

Thanks & Regards,
Sriveena

From: Denis Mekhanikov 
[mailto:dmekhani...@gmail.com]
Sent: Wednesday, June 27, 2018 4:02 PM

To: user@ignite.apache.org
Subject: Re: Information regarding Ignite Web Console

Srive


Re: Deadlock during cache loading

2018-07-02 Thread David Harvey
Transactions are easy to use: see the examples in
org.apache.ignite.examples.datagrid.store.auto.
We use them in the stream receiver. You simply bracket the get/put in
the transaction, but use a timeout, then bracket that with an "until done"
while loop, perhaps adding a sleep to back off.
We ended up with better performance with PESSIMISTIC transactions, though
we expected OPTIMISTIC to win.
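
A rough sketch of that pattern (assuming a TRANSACTIONAL cache; the timeout and
backoff values are illustrative, not our actual code):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

static void putWithRetry(Ignite ignite, IgniteCache<Integer, Integer> cache, int key)
        throws InterruptedException {
    boolean done = false;
    while (!done) {                                     // "until done" loop
        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ,
                2_000 /* timeout, ms */, 0)) {
            Integer cur = cache.get(key);               // get ...
            cache.put(key, cur == null ? 1 : cur + 1);  // ... and put inside the transaction
            tx.commit();
            done = true;
        }
        catch (Exception e) {                           // timeouts/deadlocks surface as exceptions
            Thread.sleep(100);                          // back off before retrying
        }
    }
}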

My guess would be the DataStreamer is not a fundamental contributor to the
deadlock you are seeing, and you may have discovered an ignite bug.



On Sun, Jul 1, 2018 at 11:44 AM, breischl  wrote:

> @DaveHarvey, I'll look at that tomorrow. Seems potentially complicated, but
> if that's what has to happen we'll figure it out.
>
> Interestingly, cutting the cluster to half as many nodes (by reducing the
> number of backups) seems to have resolved the issue. Is there a guideline
> for how large a cluster should be?
>
> We were running a single 44-node cluster, with 3 data backups (4 total
> copies) and hitting the issue consistently. I switched to running two
> separate clusters, each with 22 nodes using 1 data backup (2 total copies).
> The smaller clusters seem to work perfectly every time, though I haven't
> tried them as much.
>
>
> @smovva - We're still actively experimenting with instance and cluster
> sizing. We were running on c4.4xl instances. However we were barely using
> the CPUs, but consistently have memory issues (using a 20GB heap, plus a
> bit
> of off-heap). We just switched to r4.2xl instances which is working better
> for us so far, and is a bit cheaper. However I would imagine that the
> optimal size depends on your use case - it's basically a tradeoff between
> the memory, CPU, networking and operational cost requirements of your use
> case.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>



Re: S3AFileSystem as IGFS secondary file system

2018-07-02 Thread otorreno
I have been able to do it using the following lines:
BasicHadoopFileSystemFactory f = new BasicHadoopFileSystemFactory();
f.setConfigPaths("cfg.xml");

IgniteHadoopIgfsSecondaryFileSystem sec = new IgniteHadoopIgfsSecondaryFileSystem();
sec.setFileSystemFactory(f);

fileSystemCfg.setSecondaryFileSystem(sec);
fileSystemCfg.setDefaultMode(IgfsMode.DUAL_ASYNC);

The "cfg.xml" file contains the S3 access and secret keys, and the bucket
URI. However, I would like to set the configuration in the code not in a
configuration file. Taking a look at the BasicHadoopFileSystemFactory class
you can only specify a file path. Is there any reason to not allow passing a
Hadoop Configuration instance?

Best,
Oscar



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite File System (igfs) spillover to disk

2018-07-02 Thread Denis Mekhanikov
Matt,

You can configure FileSystemConfiguration#dataCacheConfiguration and specify a
persistent data region for this cache.
XML configuration may look as follows:
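
(The XML snippet did not survive in the archive. A roughly equivalent programmatic
sketch, with illustrative region and cache names, would be:)

import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.FileSystemConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Persistent data region that will back the IGFS data cache.
DataRegionConfiguration igfsRegion = new DataRegionConfiguration()
    .setName("igfsDataRegion")
    .setPersistenceEnabled(true);

DataStorageConfiguration storageCfg = new DataStorageConfiguration()
    .setDataRegionConfigurations(igfsRegion);

// IGFS data cache placed into the persistent region.
CacheConfiguration igfsDataCache = new CacheConfiguration("igfs-data")
    .setDataRegionName("igfsDataRegion");

FileSystemConfiguration fsCfg = new FileSystemConfiguration();
fsCfg.setName("igfs");
fsCfg.setDataCacheConfiguration(igfsDataCache);

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDataStorageConfiguration(storageCfg)
    .setFileSystemConfigurations(fsCfg);
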
Denis

Sat, Jun 30, 2018 at 7:33, matt :

> Is it possible to have the IGFS component write to disk once heap/off-heap
> consumption hits a certain threshold? I have a custom cache store for one
> component of the app, but a different component requires temporary storage
> of raw data; we're using igfs for this, but what happens if the file size
> is
> much larger than the available RAM? Can igfs be configured to use the
> (relatively new) native "DataStorage" feature to spillover from RAM to
> disk,
> but then also have a specific cache use a custom store?
>
> Thanks
> - Matt
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


org.apache.ignite.IgniteCheckedException: Unknown page IO type: 0

2018-07-02 Thread NO
When I restart the node, I get the following error.
The problem persists after restarting the machine.



==

[2018-07-02T21:25:52,932][INFO 
][exchange-worker-#190][GridCacheDatabaseSharedManager] Read checkpoint status 
[startMarker=/data3/apache-ignite-persistence/node00-8c6172fa-0543-4b8d-937e-75ac27ba21ff/cp/1530535766680-f62c2aa7-4a26-45ad-b311-5b5e9ddc3f0e-START.bin,
 
endMarker=/data3/apache-ignite-persistence/node00-8c6172fa-0543-4b8d-937e-75ac27ba21ff/cp/1530535612596-2ccb2f7a-9578-44a7-ad29-ff5d6e990ae4-END.bin]
[2018-07-02T21:25:52,933][INFO 
][exchange-worker-#190][GridCacheDatabaseSharedManager] Checking memory state 
[lastValidPos=FileWALPointer [idx=845169, fileOff=32892207, len=7995], 
lastMarked=FileWALPointer [idx=845199, fileOff=43729777, len=7995], 
lastCheckpointId=f62c2aa7-4a26-45ad-b311-5b5e9ddc3f0e]
[2018-07-02T21:25:52,933][WARN 
][exchange-worker-#190][GridCacheDatabaseSharedManager] Ignite node stopped in 
the middle of checkpoint. Will restore memory state and finish checkpoint on 
node start.
[2018-07-02T21:25:52,949][INFO 
][grid-nio-worker-tcp-comm-0-#153][TcpCommunicationSpi] Accepted incoming 
communication connection [locAddr=/10.16.133.187:47100, 
rmtAddr=/10.16.133.186:22315]
[2018-07-02T21:25:53,131][INFO 
][grid-nio-worker-tcp-comm-1-#154][TcpCommunicationSpi] Accepted incoming 
communication connection [locAddr=/10.16.133.187:47100, 
rmtAddr=/10.16.133.185:32502]
[2018-07-02T21:25:56,112][ERROR][exchange-worker-#190][GridDhtPartitionsExchangeFuture]
 Failed to reinitialize local partitions (preloading will be stopped): 
GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=4917, 
minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode 
[id=3c06c945-de21-4b7f-8830-344306327643, addrs=[10.16.133.187, 127.0.0.1], 
sockAddrs=[/127.0.0.1:47500, /10.16.133.187:47500], discPort=47500, order=4917, 
intOrder=2496, lastExchangeTime=1530537954950, loc=true, 
ver=2.4.0#20180305-sha1:aa342270, isClient=false], topVer=4917, 
nodeId8=3c06c945, msg=null, type=NODE_JOINED, tstamp=1530537952291], 
nodeId=3c06c945, evt=NODE_JOINED]
org.apache.ignite.IgniteCheckedException: Unknown page IO type: 0
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO.getBPlusIO(PageIO.java:567)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO.getPageIO(PageIO.java:478)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO.getPageIO(PageIO.java:438)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.pagemem.wal.record.delta.DataPageInsertFragmentRecord.applyDelta(DataPageInsertFragmentRecord.java:58)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restoreMemory(GridCacheDatabaseSharedManager.java:1967)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restoreMemory(GridCacheDatabaseSharedManager.java:1827)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readCheckpointAndRestoreMemory(GridCacheDatabaseSharedManager.java:725)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.initCachesOnLocalJoin(GridDhtPartitionsExchangeFuture.java:741)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:626)
 [ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2337)
 [ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
[ignite-core-2.4.0.jar:2.4.0]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45]
[2018-07-02T21:25:56,116][INFO 
][exchange-worker-#190][GridDhtPartitionsExchangeFuture] Finish exchange future 
[startVer=AffinityTopologyVersion [topVer=4917, minorTopVer=0], resVer=null, 
err=class org.apache.ignite.IgniteCheckedException: Unknown page IO type: 0]
[2018-07-02T21:25:56,117][ERROR][main][IgniteKernal] Got exception while 
starting (will rollback startup routine).
org.apache.ignite.IgniteCheckedException: Unknown page IO type: 0
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO.getBPlusIO(PageIO.java:567)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO.getPageIO(PageIO.java:478)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO.getPageIO(PageIO.java:438)
 ~[ignite-core-2

How long Ignite retries upon NODE_FAILED events

2018-07-02 Thread HEWA WIDANA GAMAGE, SUBASH
Hi team,

For example, let's say one of the nodes is not down (the JVM is up), but the network 
from/to it is not reachable. Then the rest of the nodes will see NODE_FAILED and keep 
working as normal with a reduced cluster size. If the network from/to that failed node 
becomes normal again after X minutes, then:
- Will the other nodes discover it, or will that node be able to figure it out?
- How long can X be at most? Is there a max retry or timeout? (I have seen the joinTimeout 
param in discovery, but that seems only applicable at startup, i.e. how long a starting 
node should wait to let it join the others.)


Re: Node stopped automatically

2018-07-02 Thread Denis Mekhanikov
Venkat,

Please don't paste your logs into the body of your message, it makes them
look unreadable.
Use file attachment or provide a link instead.

Node on host 1 got segmented and killed according to the configured
segmentation policy.
It may happen due to a network problem or long GC pause.

Looks like you have persistence enabled and a single node in baseline
topology.
So, all data is actually stored on a single data node, and when it leaves
the grid, all partitions are marked as lost.
Refer to the documentation:
https://apacheignite.readme.io/docs/baseline-topology

You should add all the nodes that you want to store data on to the baseline.
And when a node leaves the topology forever, you should drop it from the
baseline.
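
For reference, a minimal sketch of setting the baseline to the currently alive
server nodes (to be run once the intended topology is assembled):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

// Make every currently running server node part of the baseline topology.
Ignite ignite = Ignition.ignite();
ignite.cluster().setBaselineTopology(ignite.cluster().forServers().nodes());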

Denis

Mon, Jul 2, 2018 at 12:15, kvenkatramtreddy :

> Please could you provide some pointers, so that I can look deeper into the
> issue.
>
> 1) Does PageMemory cause any issues
>
> as per the metrics, one node is contains only 70  and other node is
> PageMemory [pages=390178]
>
> 2) Why whole cluster is going down, when one node is down
>
> 3) Can I restart the ignite programatically if this error occurs.
>
> Thanks & Regards,
> Venkat
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SqlFieldsQuery Cannot create inner bean 'org.apache.ignite.cache.query.SqlFieldsQuery

2018-07-02 Thread Ilya Kasnacheev
Hello!

You can set IGNITE_SQL_LAZY_RESULT_SET=true (as an environment variable or JVM
system property) on all nodes since Apache Ignite 2.5; it makes sure that all
queries run with lazy=true.

It will still not save you from some scenarios such as runaway GROUP BYs,
but from SELECT * it would.
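
For reference, a minimal sketch of the per-query flag (assuming `cache` is an
IgniteCache with SQL configured; the table name is illustrative):

import java.util.List;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

// lazy=true streams the result set instead of materializing it in memory on the server.
SqlFieldsQuery qry = new SqlFieldsQuery("select * from Person").setLazy(true);

try (QueryCursor<List<?>> cur = cache.query(qry)) {
    for (List<?> row : cur)
        System.out.println(row);   // process each row as it arrives
}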

Regards,

-- 
Ilya Kasnacheev

2018-06-28 23:17 GMT+03:00 ApacheUser :

> Evgenii,
> what happens if the user doesn't set that limit or forget to set on client
> tool?,
>
> we set that but some one testing without the lazy=true to prove that Apache
> Ignite is not stable.
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Deadlock during cache loading

2018-07-02 Thread breischl
Ah, I had not thought of that, thanks. 

Interestingly, going to a smaller cluster seems to have worked around the
problem. We were running a 44-node cluster using 3 backups of the data.
Switching to two separate 22-node clusters, each with 1 backup, seems to
work just fine. Is there some limit to how large a cluster should be? 

@smovva - We were using c4.4xl instances, but switched to r4.2xl because we
had spare CPU but kept having memory problems. I suspect that there isn't a
"right" size to use, it just depends on the use case you have. 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: SqlFieldsQuery Cannot create inner bean 'org.apache.ignite.cache.query.SqlFieldsQuery

2018-07-02 Thread Ilya Kasnacheev
Hello again!


It seems that I had a copy-paste problem here: the actual flag name is
IGNITE_SQL_FORCE_LAZY_RESULT_SET

Regards,


-- 
Ilya Kasnacheev

2018-07-02 16:53 GMT+03:00 Ilya Kasnacheev :

> Hello!
>
> You can set IGNITE_SQL_LAZY_RESULT_SET=true (environment variable or JVM
> system property) on all nodes since Apache Ignite 2.5, make sure that all
> queries run with lazy=true.
>
> It will still not save you from some scenarios such as runaway GROUP BYs,
> but from SELECT * it would.
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-06-28 23:17 GMT+03:00 ApacheUser :
>
>> Evgenii,
>> what happens if the user doesn't set that limit or forget to set on client
>> tool?,
>>
>> we set that but some one testing without the lazy=true to prove that
>> Apache
>> Ignite is not stable.
>>
>> Thanks
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>


Re: Information regarding Ignite Web Console

2018-07-02 Thread Denis Mekhanikov
> Does all the tables created in ignite gets converted to binary objects
internally?
Yes, unless you specify a different IgniteConfiguration#marshaller.
But if you want to query data with Ignite SQL, only BinaryMarshaller is
applicable.

> Does all the cache entities like Person gets converted to binary objects
internally?
All entries are serialized with a configured marshaller. It is binary
marshaller by default.

> Is using binary objects better than entity cache objects?
Using POJOs is usually more convenient. But BinaryObject lets you operate
over objects without having the corresponding POJOs on your class path.
Also, by using BinaryObject you skip the (de)serialization step when performing
put/get operations, so you may get better performance.

> Is the a way to deserialize AvroFormat messages from kafka to ignite
sink? Examples are available for String and JSON converters.
You can deserialize any data coming from Kafka.
All you need is to implement a proper StreamSingleTupleExtractor.
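
A rough sketch of such an extractor, assuming the messages arrive as Avro-encoded
byte[] with an "id" field; PERSON_SCHEMA and the field name are illustrative, and how
the Kafka streamer hands you the raw message depends on your wiring:

import java.io.IOException;
import java.util.AbstractMap;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.DecoderFactory;
import org.apache.ignite.IgniteException;
import org.apache.ignite.stream.StreamSingleTupleExtractor;

// Decodes one Avro record per message and uses its "id" field as the cache key.
// PERSON_SCHEMA is a hypothetical org.apache.avro.Schema constant.
StreamSingleTupleExtractor<byte[], String, GenericRecord> avroExtractor = msg -> {
    try {
        DatumReader<GenericRecord> reader = new GenericDatumReader<>(PERSON_SCHEMA);
        GenericRecord rec = reader.read(null, DecoderFactory.get().binaryDecoder(msg, null));
        return new AbstractMap.SimpleEntry<>(rec.get("id").toString(), rec);
    }
    catch (IOException e) {
        throw new IgniteException(e);
    }
};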


Denis

Mon, Jul 2, 2018 at 14:03, Sriveena Mattaparthi <sriveena.mattapar...@ekaplus.com>:

> Denis,
>
>
>
> Have some open questions in ignite
>
> 1.   Does all the tables created in ignite gets converted to binary
> objects internally?
>
> 2.   Does all the cache entities like Person gets converted to binary
> objects internally?
>
> 3.   Is using binary objects better than entity cache objects?
>
> 4.   Is the a way to deserialize AvroFormat messages from kafka to
> ignite sink? Examples are available for String and JSON converters.
>
>
>
> Please help.
>
>
>
> Thanks & Regards,
>
> Sriveena
>
>
>
> *From:* Denis Mekhanikov [mailto:dmekhani...@gmail.com]
> *Sent:* Friday, June 29, 2018 8:09 PM
>
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Information regarding Ignite Web Console
>
>
>
> Sriveena,
>
>
>
> You should configure corresponding query entities to be able to query data
> in cache.
>
> Annotation driven configuration is also available.
>
>
>
> See more:
> https://apacheignite.readme.io/docs/cache-queries#section-query-configuration-by-annotations
> 
>
>
>
> Denis
>
>
>
> Fri, Jun 29, 2018 at 12:43, Sriveena Mattaparthi <sriveena.mattapar...@ekaplus.com>:
>
> Hi Denis,
>
>
>
> I am trying to use the below code to query the binary object
>
>
>
> IgniteCache cache =
> start.getOrCreateCache(cfg).withKeepBinary();
>
> BinaryObjectBuilder builder = start.binary().builder("BinaryTest");
>
> builder.setField("name", "Test");
>
> cache.put(1, builder.build());
>
>
>
> QueryCursor> query = cache.query(new SqlFieldsQuery("select
> name from BinaryTest"));
>
>
>
> But it is failing in latest 2.5 version saying BinaryTest Table does not
> exist.
>
>
>
> How do we query the binary objects in the above example?
>
>
>
> Please help.
>
>
>
> Thanks & Regards,
>
> Sriveena
>
>
>
> *From:* Denis Mekhanikov [mailto:dmekhani...@gmail.com]
> *Sent:* Wednesday, June 27, 2018 6:37 PM
>
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Information regarding Ignite Web Console
>
>
>
> Sriveena,
>
>
>
> You can have objects of different types in one cache, but querying it will
> be tricky.
>
> You will have to configure QueryEntities
> 
>  for
> your data, that will describe, which fields are available for querying.
>
> Annotation based configuration
> 
> is also available.
>
> Querying nested object is also possible, if you configure the query
> entities properly:
> https://apacheignite-sql.readme.io/docs/schema-and-indexes#section-indexing-nested-objects
> 

Re: How long Ignite retries upon NODE_FAILED events

2018-07-02 Thread Evgenii Zhuravlev
Hi,

By default, Ignite uses a failure detection mechanism that can be configured using
failureDetectionTimeout:
https://apacheignite.readme.io/v2.5/docs/tcpip-discovery#section-failure-detection-timeout
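
For reference, a minimal sketch (the value is illustrative; the default is 10 seconds):

import org.apache.ignite.configuration.IgniteConfiguration;

// Give slow or congested networks more time before a node is declared failed.
IgniteConfiguration cfg = new IgniteConfiguration()
    .setFailureDetectionTimeout(30_000);   // 30 s instead of the 10 s default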

Evgenii

2018-07-02 16:40 GMT+03:00 HEWA WIDANA GAMAGE, SUBASH <
subash.hewawidanagam...@fmr.com>:

> Hi team,
>
>
>
> For example, let’s say one of the node is not down(JVM is up), but network
> not reachable from/to it. Then rest of the nodes will see  NODE_FAILED and
> started working as normal with reduced cluster size. If that failed node,
> the network from/to it, becomes normal again  after X minutes. Then,
>
> - will other nodes discover them, or will that node be able to figure it
> out ?
>
> - How long X can be at max? Is there max retry or timeout. (I seen
> joinTimeout param in discovery, but that’s seems only applicable for
> startup, like how long it should pause starting the node to let join others)
>


Re: Information regarding Ignite Web Console

2018-07-02 Thread Denis Mekhanikov
You should specify QueryEntity.valueType corresponding to the type name that you
use when constructing a binary object.

Please find the attached example, which shows how to insert BinaryObjects in a
way that lets you query them from SQL.
I made a named constant *PERSON_TYPE_NAME* to emphasise that QueryEntity#valueType
should match the BinaryObject's type name.

If you want different binary objects with the same name to have different
fields, you should disable BinaryConfiguration#compactFooter.
It will let different BinaryObjects with the same name have different
schemas.
See the following thread for more information about compactFooter:
http://apache-ignite-users.70518.x6.nabble.com/Best-practice-for-class-versioning-marshaller-error-td22294.html

But all fields that you want to access from SQL should be specified in
the QueryEntity, so you should think about it in advance.
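
A minimal sketch of that idea, reusing the "BinaryTest" type and "name" field from
the earlier snippet (the cache name, key type and the `ignite` variable are
illustrative assumptions):

import java.util.Collections;
import java.util.LinkedHashMap;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

// QueryEntity.valueType must match the name passed to binary().builder(...).
QueryEntity qe = new QueryEntity(Integer.class.getName(), "BinaryTest")
    .setFields(new LinkedHashMap<>(Collections.singletonMap("name", String.class.getName())));

CacheConfiguration<Integer, BinaryObject> ccfg =
    new CacheConfiguration<Integer, BinaryObject>("binaryCache")
        .setQueryEntities(Collections.singletonList(qe));

IgniteCache<Integer, BinaryObject> cache = ignite.getOrCreateCache(ccfg).withKeepBinary();

BinaryObjectBuilder builder = ignite.binary().builder("BinaryTest");
builder.setField("name", "Test");
cache.put(1, builder.build());

// The "BinaryTest" table now exists and the "name" field is queryable.
cache.query(new SqlFieldsQuery("select name from BinaryTest")).getAll();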

Denis

Mon, Jul 2, 2018 at 17:54, Denis Mekhanikov :

> > Does all the tables created in ignite gets converted to binary objects
> internally?
> Yes, unless you specify a different IgniteConfiguration#marshaller
> 
> .
> But if you want to query data with Ignite SQL, only BinaryMarshaller is
> applicable.
>
> > Does all the cache entities like Person gets converted to binary objects
> internally?
> All entries are serialized with a configured marshaller. It is binary
> marshaller by default.
>
> > Is using binary objects better than entity cache objects?
> Using POJOs is usually more convenient. But BinaryObject lets you operate
> over objects without having the corresponding POJOs on your class path.
> Also by using BinaryObject you skip (de)serialization step, when
> performing put/get operations, so you may get better performance.
>
> > Is the a way to deserialize AvroFormat messages from kafka to ignite
> sink? Examples are available for String and JSON converters.
> You can deserialize any data, coming from Kafka.
> All you need is to implement a proper StreamSingleTupleExtractor
> 
>
> Denis
>
> Mon, Jul 2, 2018 at 14:03, Sriveena Mattaparthi <sriveena.mattapar...@ekaplus.com>:
>
>> Denis,
>>
>>
>>
>> Have some open questions in ignite
>>
>> 1.   Does all the tables created in ignite gets converted to binary
>> objects internally?
>>
>> 2.   Does all the cache entities like Person gets converted to
>> binary objects internally?
>>
>> 3.   Is using binary objects better than entity cache objects?
>>
>> 4.   Is the a way to deserialize AvroFormat messages from kafka to
>> ignite sink? Examples are available for String and JSON converters.
>>
>>
>>
>> Please help.
>>
>>
>>
>> Thanks & Regards,
>>
>> Sriveena
>>
>>
>>
>> *From:* Denis Mekhanikov [mailto:dmekhani...@gmail.com]
>> *Sent:* Friday, June 29, 2018 8:09 PM
>>
>>
>> *To:* user@ignite.apache.org
>> *Subject:* Re: Information regarding Ignite Web Console
>>
>>
>>
>> Sriveena,
>>
>>
>>
>> You should configure corresponding query entities to be able to query
>> data in cache.
>>
>> Annotation driven configuration is also available.
>>
>>
>>
>> See more:
>> https://apacheignite.readme.io/docs/cache-queries#section-query-configuration-by-annotations
>> 
>>
>>
>>
>> Denis
>>
>>
>>
>> Fri, Jun 29, 2018 at 12:43, Sriveena Mattaparthi <sriveena.mattapar...@ekaplus.com>:
>>
>> Hi Denis,
>>
>>
>>
>> I am trying to use the below code to query the binary object
>>
>>
>>
>> IgniteCache cache =
>> start.getOrCreateCache(cfg).withKeepBinary();
>>
>> BinaryObjectBuilder builder = start.binary().builder("BinaryTest");
>>
>> builder.setField("name", "Test");
>>
>> cache.put(1, builder.build());
>>
>>
>>
>> QueryCursor> query = cache.query(new SqlFieldsQuery("select
>> name from BinaryTest"));
>>
>>
>>
>> But it is failing in latest 2.5 version saying BinaryTest Table does not
>> exist.
>>
>>
>>
>> How do we query the binary objects in the above example?
>>
>>
>>
>> Please help.
>>
>>
>>
>> Thanks & Regards,
>>
>> Sriveena
>>
>>
>>
>> *Fr

Re: Deadlock during cache loading

2018-07-02 Thread Denis Mekhanikov
Why did you decide that the cluster is deadlocked in the first place?

> We've had several deployments in a row fail, apparently due to
deadlocking in the loading process.
What did you see in logs of the failing nodes?

Denis

Mon, Jul 2, 2018 at 17:08, breischl :

> Ah, I had not thought of that, thanks.
>
> Interestingly, going to a smaller cluster seems to have worked around the
> problem. We were running a 44-node cluster using 3 backups of the data.
> Switching to two separate 22-node clusters, each with 1 backup, seems to
> work just fine. Is there some limit to how large a cluster should be?
>
> @smovva - We were using c4.4xl instances, but switched to r4.2xl because we
> had spare CPU but kept having memory problems. I suspect that there isn't a
> "right" size to use, it just depends on the use case you have.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


RE: How long Ignite retries upon NODE_FAILED events

2018-07-02 Thread HEWA WIDANA GAMAGE, SUBASH
Yes, failureDetectionTimeout determines how long the cluster waits to mark a node 
failed. But my question is: after such a node failure has happened, what happens when 
that failed node becomes reachable on the network again (less than 
failureDetectionTimeout)?

From: Evgenii Zhuravlev [mailto:e.zhuravlev...@gmail.com]
Sent: Monday, July 02, 2018 11:05 AM
To: user@ignite.apache.org
Subject: Re: How long Ignite retries upon NODE_FAILED events

Hi,

by default, Ignite uses a mechanism, that can be configured using 
failureDetectionTimeout: 
https://apacheignite.readme.io/v2.5/docs/tcpip-discovery#section-failure-detection-timeout

Evgenii

2018-07-02 16:40 GMT+03:00 HEWA WIDANA GAMAGE, SUBASH <subash.hewawidanagam...@fmr.com>:
Hi team,

For example, let’s say one of the node is not down(JVM is up), but network not 
reachable from/to it. Then rest of the nodes will see  NODE_FAILED and started 
working as normal with reduced cluster size. If that failed node, the network 
from/to it, becomes normal again  after X minutes. Then,
- will other nodes discover them, or will that node be able to figure it out ?
- How long X can be at max? Is there max retry or timeout. (I seen joinTimeout 
param in discovery, but that’s seems only applicable for startup, like how long 
it should pause starting the node to let join others)



Re: Deadlock during cache loading

2018-07-02 Thread David Harvey
Denis does have a point. When we were trying to run using GP2 storage, the cluster
would simply lock up for an hour. Once we moved to local SSDs on i3 instances those
issues went away (but we needed 2.5 to have the streaming rate hold up, as we had a
lot of data loaded). The i3 instances are rated at about 700,000 write IOPS, and we
were only getting about 20-30,000 out of GP2. You could separate or combine the WAL
and storage, and hardly move the needle.
I will describe cluster snapshots on AWS in more detail when we have completed that
work.



On Mon, Jul 2, 2018 at 11:20 AM, Denis Mekhanikov 
wrote:

> Why did you decide, that cluster is deadlocked in the first place?
>
> > We've had several deployments in a row fail, apparently due to
> deadlocking in the loading process.
> What did you see in logs of the failing nodes?
>
> Denis
>
> Mon, Jul 2, 2018 at 17:08, breischl :
>
>> Ah, I had not thought of that, thanks.
>>
>> Interestingly, going to a smaller cluster seems to have worked around the
>> problem. We were running a 44-node cluster using 3 backups of the data.
>> Switching to two separate 22-node clusters, each with 1 backup, seems to
>> work just fine. Is there some limit to how large a cluster should be?
>>
>> @smovva - We were using c4.4xl instances, but switched to r4.2xl because
>> we
>> had spare CPU but kept having memory problems. I suspect that there isn't
>> a
>> "right" size to use, it just depends on the use case you have.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>



Re: org.apache.ignite.IgniteCheckedException: Unknown page IO type: 0

2018-07-02 Thread Denis Mekhanikov
Looks like your persistence files are corrupted.
You configured *LOG_ONLY* WAL mode. It doesn't guarantee survival of OS
crashes and power failures.
How did you restart your node?
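
For reference, a sketch of switching the WAL to full-sync mode, assuming a version
where WALMode.FSYNC is available (older releases call this mode DEFAULT); it trades
write throughput for durability across OS crashes and power failures:

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.WALMode;

// FSYNC forces fsync on every WAL record, so committed data survives power loss.
DataStorageConfiguration storageCfg = new DataStorageConfiguration()
    .setWalMode(WALMode.FSYNC);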

Denis

Mon, Jul 2, 2018 at 16:40, NO <727418...@qq.com>:

> When I restart the node, I get the following error,
> The problem persists after restarting the machine。
>
> ==
> [2018-07-02T21:25:52,932][INFO
> ][exchange-worker-#190][GridCacheDatabaseSharedManager] Read checkpoint
> status
> [startMarker=/data3/apache-ignite-persistence/node00-8c6172fa-0543-4b8d-937e-75ac27ba21ff/cp/1530535766680-f62c2aa7-4a26-45ad-b311-5b5e9ddc3f0e-START.bin,
> endMarker=/data3/apache-ignite-persistence/node00-8c6172fa-0543-4b8d-937e-75ac27ba21ff/cp/1530535612596-2ccb2f7a-9578-44a7-ad29-ff5d6e990ae4-END.bin]
> [2018-07-02T21:25:52,933][INFO
> ][exchange-worker-#190][GridCacheDatabaseSharedManager] Checking memory
> state [lastValidPos=FileWALPointer [idx=845169, fileOff=32892207,
> len=7995], lastMarked=FileWALPointer [idx=845199, fileOff=43729777,
> len=7995], lastCheckpointId=f62c2aa7-4a26-45ad-b311-5b5e9ddc3f0e]
> [2018-07-02T21:25:52,933][WARN
> ][exchange-worker-#190][GridCacheDatabaseSharedManager] Ignite node stopped
> in the middle of checkpoint. Will restore memory state and finish
> checkpoint on node start.
> [2018-07-02T21:25:52,949][INFO
> ][grid-nio-worker-tcp-comm-0-#153][TcpCommunicationSpi] Accepted incoming
> communication connection [locAddr=/10.16.133.187:47100, rmtAddr=/
> 10.16.133.186:22315]
> [2018-07-02T21:25:53,131][INFO
> ][grid-nio-worker-tcp-comm-1-#154][TcpCommunicationSpi] Accepted incoming
> communication connection [locAddr=/10.16.133.187:47100, rmtAddr=/
> 10.16.133.185:32502]
> [2018-07-02T21:25:56,112][ERROR][exchange-worker-#190][GridDhtPartitionsExchangeFuture]
> Failed to reinitialize local partitions (preloading will be stopped):
> GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=4917,
> minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=3c06c945-de21-4b7f-8830-344306327643, addrs=[10.16.133.187, 127.0.0.1],
> sockAddrs=[/127.0.0.1:47500, /10.16.133.187:47500], discPort=47500,
> order=4917, intOrder=2496, lastExchangeTime=1530537954950, loc=true,
> ver=2.4.0#20180305-sha1:aa342270, isClient=false], topVer=4917,
> nodeId8=3c06c945, msg=null, type=NODE_JOINED, tstamp=1530537952291],
> nodeId=3c06c945, evt=NODE_JOINED]
> org.apache.ignite.IgniteCheckedException: Unknown page IO type: 0
> at
> org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO.getBPlusIO(PageIO.java:567)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO.getPageIO(PageIO.java:478)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO.getPageIO(PageIO.java:438)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.pagemem.wal.record.delta.DataPageInsertFragmentRecord.applyDelta(DataPageInsertFragmentRecord.java:58)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restoreMemory(GridCacheDatabaseSharedManager.java:1967)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restoreMemory(GridCacheDatabaseSharedManager.java:1827)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readCheckpointAndRestoreMemory(GridCacheDatabaseSharedManager.java:725)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.initCachesOnLocalJoin(GridDhtPartitionsExchangeFuture.java:741)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:626)
> [ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2337)
> [ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> [ignite-core-2.4.0.jar:2.4.0]
> at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45]
> [2018-07-02T21:25:56,116][INFO
> ][exchange-worker-#190][GridDhtPartitionsExchangeFuture] Finish exchange
> future [startVer=AffinityTopologyVersion [topVer=4917, minorTopVer=0],
> resVer=null, err=class org.apache.ignite.IgniteCheckedException: Unknown
> page IO type: 0]
> [2018-07-02T21:25:56,117][ERROR][main][IgniteKernal] Got exception while
> starting (will rollback startup routine).
> org.apache.ignite.IgniteCheckedException: Unknown page IO type: 0
> at
> org.apache.ignite.internal.processors.cache.pe

Re: Deadlock during cache loading

2018-07-02 Thread breischl
(OT: Sorry about the duplicate posts, for some reason Nabble was refusing to
show me new posts so I thought my earlier ones had been lost.)

>Why did you decide, that cluster is deadlocked in the first place?

Because all of the Datastreamer threads were stuck waiting on locks, and no
progress was being made on loading the cache. We have various logging and
metrics around progress that were all zero, and all the threads trying to
load data were blocked waiting to insert more data. This persisted for an
hour or more with no change. 

>What did you see in logs of the failing nodes?
Nothing that jumped out at me as a smoking gun or even really related,
although I don't have the logs handy anymore as they've aged off our Splunk
servers. 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Is it possible to configure Apache Ignite QueryCursor to be autocloseable in the xml configuration file?

2018-07-02 Thread tizh
Hi Igor, 
I am using version 2.5.0. Thanks
Ti



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How long Ignite retries upon NODE_FAILED events

2018-07-02 Thread Evgenii Zhuravlev
If the cluster has already decided that the node failed, the node will be stopped
after it tries to reconnect to the cluster with the same ID.

2018-07-02 18:37 GMT+03:00 HEWA WIDANA GAMAGE, SUBASH <
subash.hewawidanagam...@fmr.com>:

> Yes failureDetectionTimeout determines the time it wait to mark a node
> failed. But my question is, after such node failed happened, and then what
> happens when that failed node becomes reachable in the network (less that
> failureDetectionTimeout) ?
>
>
>
> *From:* Evgenii Zhuravlev [mailto:e.zhuravlev...@gmail.com]
> *Sent:* Monday, July 02, 2018 11:05 AM
> *To:* user@ignite.apache.org
> *Subject:* Re: How long Ignite retries upon NODE_FAILED events
>
>
>
> Hi,
>
>
>
> by default, Ignite uses a mechanism, that can be configured using
> failureDetectionTimeout: https://apacheignite.readme.io/v2.
> 5/docs/tcpip-discovery#section-failure-detection-timeout
>
>
>
> Evgenii
>
>
>
> 2018-07-02 16:40 GMT+03:00 HEWA WIDANA GAMAGE, SUBASH <
> subash.hewawidanagam...@fmr.com>:
>
> Hi team,
>
>
>
> For example, let’s say one of the node is not down(JVM is up), but network
> not reachable from/to it. Then rest of the nodes will see  NODE_FAILED and
> started working as normal with reduced cluster size. If that failed node,
> the network from/to it, becomes normal again  after X minutes. Then,
>
> - will other nodes discover them, or will that node be able to figure it
> out ?
>
> - How long X can be at max? Is there max retry or timeout. (I seen
> joinTimeout param in discovery, but that’s seems only applicable for
> startup, like how long it should pause starting the node to let join others)
>
>
>


RE: How long Ignite retries upon NODE_FAILED events

2018-07-02 Thread HEWA WIDANA GAMAGE, SUBASH
OK, I did the following PoC real quick.


1.   Two nodes started and joined: topology snapshot servers=2.

2.   On one node, I blocked the Ignite ports (47500, 47100, etc.).

3.   Then, after failureDetectionTimeout, each node logged NODE_FAILED and 
topology snapshot servers=1.

4.   Then, after 10-15 seconds, I unblocked those ports.

5.   Then, after a few seconds, both nodes logged node joined and topology 
snapshot servers=2.

So it's the same node and ID, because the JVM is still up and running, and it looks 
like the cluster doesn't forget it.

Can this "10-15 seconds" be any length of time? Even if the node comes back after 
1-2 hours, can it rejoin?




From: Evgenii Zhuravlev [mailto:e.zhuravlev...@gmail.com]
Sent: Monday, July 02, 2018 1:25 PM
To: user@ignite.apache.org
Subject: Re: How long Ignite retries upon NODE_FAILED events

If cluster already decided that node failed, it will be stopped after it will 
try to reconnect to the cluster with the same id

2018-07-02 18:37 GMT+03:00 HEWA WIDANA GAMAGE, SUBASH <subash.hewawidanagam...@fmr.com>:
Yes failureDetectionTimeout determines the time it wait to mark a node failed. 
But my question is, after such node failed happened, and then what happens when 
that failed node becomes reachable in the network (less that 
failureDetectionTimeout) ?

From: Evgenii Zhuravlev 
[mailto:e.zhuravlev...@gmail.com]
Sent: Monday, July 02, 2018 11:05 AM
To: user@ignite.apache.org
Subject: Re: How long Ignite retries upon NODE_FAILED events

Hi,

by default, Ignite uses a mechanism, that can be configured using 
failureDetectionTimeout: 
https://apacheignite.readme.io/v2.5/docs/tcpip-discovery#section-failure-detection-timeout

Evgenii

2018-07-02 16:40 GMT+03:00 HEWA WIDANA GAMAGE, SUBASH <subash.hewawidanagam...@fmr.com>:
Hi team,

For example, let’s say one of the node is not down(JVM is up), but network not 
reachable from/to it. Then rest of the nodes will see  NODE_FAILED and started 
working as normal with reduced cluster size. If that failed node, the network 
from/to it, becomes normal again  after X minutes. Then,
- will other nodes discover them, or will that node be able to figure it out ?
- How long X can be at max? Is there max retry or timeout. (I seen joinTimeout 
param in discovery, but that’s seems only applicable for startup, like how long 
it should pause starting the node to let join others)




Re: SqlFieldsQuery Cannot create inner bean 'org.apache.ignite.cache.query.SqlFieldsQuery

2018-07-02 Thread ApacheUser
Thanks Ilya,
Could you share any guidelines to control GROUP BY? For example, dedicated client
nodes for connectivity from Tableau and SQL?
Thanks
Bhaskar



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Thin client doesn't support Expiry Policies.

2018-07-02 Thread ysc751206
Hi, 

We are currently using Ignite 2.1/C# in our application and are considering moving
to Ignite 2.5/C# because of the thin client feature. But we noticed that the thin
client doesn't support caching values with an expiry policy. We use that
feature heavily in our application.

May I ask whether you are going to support that in the near future, or whether there
is any workaround for this?

Thanks

Edison



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Affinity calls in stream receiver

2018-07-02 Thread David Harvey
We have a custom stream receiver that makes affinity calls. This all
functions properly, but we see a very large number of the following
messages for the same two  classes.   We also just tripped a 2GB limit on
Metaspace size, which we came close to in the past.

[18:41:50,365][INFO][pub-#6954%GridGainTrial%][GridDeploymentPerVersionStore]
Class was deployed in SHARED or CONTINUOUS mode: class
com.IgniteCallable

So these affinity calls need to load classes that were loaded from client
nodes, which may be related to why this is happening, but my primary suspect
is the fact that both classes are nested. (I had previously hit an issue
where setting the peer-class-loading "userVersion" would cause Ignite to
throw exceptions when the client node attempted to activate the cluster.
In that case, the Ignite call into the cluster was also using a nested
class.)

We will try flattening these classes to see if the problem goes away.



Re: org.apache.ignite.IgniteCheckedException: Unknown page IO type: 0

2018-07-02 Thread 剑剑
The node did not have a hardware fault; the problem appeared after I modified the 
configuration and restarted. Now, how do I take the node offline correctly and then 
rejoin it to the cluster as a new node? The configuration files of all nodes in the 
cluster are the same.

Sent from my iPhone

> On Jul 3, 2018, at 00:16, Denis Mekhanikov wrote:
> 
> Looks like your persistence files are corrupted.
> You configured LOG_ONLY WAL mode. It doesn't guarantee survival of OS crushes 
> and power failures.
> How did you restart your node?
> 
> Denis
> 
>> Mon, Jul 2, 2018 at 16:40, NO <727418...@qq.com>:
>> When I restart the node, I get the following error, 
>> The problem persists after restarting the machine。
>> 
>> ==
>> [2018-07-02T21:25:52,932][INFO 
>> ][exchange-worker-#190][GridCacheDatabaseSharedManager] Read checkpoint 
>> status 
>> [startMarker=/data3/apache-ignite-persistence/node00-8c6172fa-0543-4b8d-937e-75ac27ba21ff/cp/1530535766680-f62c2aa7-4a26-45ad-b311-5b5e9ddc3f0e-START.bin,
>>  
>> endMarker=/data3/apache-ignite-persistence/node00-8c6172fa-0543-4b8d-937e-75ac27ba21ff/cp/1530535612596-2ccb2f7a-9578-44a7-ad29-ff5d6e990ae4-END.bin]
>> [2018-07-02T21:25:52,933][INFO 
>> ][exchange-worker-#190][GridCacheDatabaseSharedManager] Checking memory 
>> state [lastValidPos=FileWALPointer [idx=845169, fileOff=32892207, len=7995], 
>> lastMarked=FileWALPointer [idx=845199, fileOff=43729777, len=7995], 
>> lastCheckpointId=f62c2aa7-4a26-45ad-b311-5b5e9ddc3f0e]
>> [2018-07-02T21:25:52,933][WARN 
>> ][exchange-worker-#190][GridCacheDatabaseSharedManager] Ignite node stopped 
>> in the middle of checkpoint. Will restore memory state and finish checkpoint 
>> on node start.
>> [2018-07-02T21:25:52,949][INFO 
>> ][grid-nio-worker-tcp-comm-0-#153][TcpCommunicationSpi] Accepted incoming 
>> communication connection [locAddr=/10.16.133.187:47100, 
>> rmtAddr=/10.16.133.186:22315]
>> [2018-07-02T21:25:53,131][INFO 
>> ][grid-nio-worker-tcp-comm-1-#154][TcpCommunicationSpi] Accepted incoming 
>> communication connection [locAddr=/10.16.133.187:47100, 
>> rmtAddr=/10.16.133.185:32502]
>> [2018-07-02T21:25:56,112][ERROR][exchange-worker-#190][GridDhtPartitionsExchangeFuture]
>>  Failed to reinitialize local partitions (preloading will be stopped): 
>> GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=4917, 
>> minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode 
>> [id=3c06c945-de21-4b7f-8830-344306327643, addrs=[10.16.133.187, 127.0.0.1], 
>> sockAddrs=[/127.0.0.1:47500, /10.16.133.187:47500], discPort=47500, 
>> order=4917, intOrder=2496, lastExchangeTime=1530537954950, loc=true, 
>> ver=2.4.0#20180305-sha1:aa342270, isClient=false], topVer=4917, 
>> nodeId8=3c06c945, msg=null, type=NODE_JOINED, tstamp=1530537952291], 
>> nodeId=3c06c945, evt=NODE_JOINED]
>> org.apache.ignite.IgniteCheckedException: Unknown page IO type: 0
>> at 
>> org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO.getBPlusIO(PageIO.java:567)
>>  ~[ignite-core-2.4.0.jar:2.4.0]
>> at 
>> org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO.getPageIO(PageIO.java:478)
>>  ~[ignite-core-2.4.0.jar:2.4.0]
>> at 
>> org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO.getPageIO(PageIO.java:438)
>>  ~[ignite-core-2.4.0.jar:2.4.0]
>> at 
>> org.apache.ignite.internal.pagemem.wal.record.delta.DataPageInsertFragmentRecord.applyDelta(DataPageInsertFragmentRecord.java:58)
>>  ~[ignite-core-2.4.0.jar:2.4.0]
>> at 
>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restoreMemory(GridCacheDatabaseSharedManager.java:1967)
>>  ~[ignite-core-2.4.0.jar:2.4.0]
>> at 
>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restoreMemory(GridCacheDatabaseSharedManager.java:1827)
>>  ~[ignite-core-2.4.0.jar:2.4.0]
>> at 
>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readCheckpointAndRestoreMemory(GridCacheDatabaseSharedManager.java:725)
>>  ~[ignite-core-2.4.0.jar:2.4.0]
>> at 
>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.initCachesOnLocalJoin(GridDhtPartitionsExchangeFuture.java:741)
>>  ~[ignite-core-2.4.0.jar:2.4.0]
>> at 
>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:626)
>>  [ignite-core-2.4.0.jar:2.4.0]
>> at 
>> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2337)
>>  [ignite-core-2.4.0.jar:2.4.0]
>> at 
>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
>> [ignite-core-2.4.0.jar:2.4.0]
>> at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45]
>> [2018-07-02T21:25:56,116][INFO 
>> ][exchange-worker-#190][GridDh

RE: Information regarding Ignite Web Console

2018-07-02 Thread Sriveena Mattaparthi
Hi Denis,

Thank you so much for the detailed explanation.
This cleared up so many of our open questions about going forward with Ignite.

Regards,
Sriveena

From: Denis Mekhanikov [mailto:dmekhani...@gmail.com]
Sent: Monday, July 02, 2018 8:39 PM
To: user@ignite.apache.org
Subject: Re: Information regarding Ignite Web Console

You should specify 
QueryEntity.valueType
 corresponding to the type name, that you use when constructing a binary object.

Please find attached example, that shows how to insert BinaryObjects in a way, 
that will let you query them from SQL.
I made a named constant PERSON_TYPE_NAME to emphasise, that 
QueryEntity#valueType
 should match BinaryObject's type name.

If you want different binary objects with the same name to have different 
fields, you should disable 
BinaryConfiguration#compactFooter.
It will let different BinaryObjects with the same name have different schemas.
See the following thread for more information about compactFooter: 
http://apache-ignite-users.70518.x6.nabble.com/Best-practice-for-class-versioning-marshaller-error-td22294.html

But all fields, that you want to access from SQL, should be specified in the 
QueryEntity, so you should think about it in advance.

Denis

Mon, Jul 2, 2018 at 17:54, Denis Mekhanikov <dmekhani...@gmail.com>:
> Does all the tables created in ignite gets converted to binary objects 
> internally?
Yes, unless you specify a different 
IgniteConfiguration#marshaller.
But if you want to query data with Ignite SQL, only BinaryMarshaller is 
applicable.

> Does all the cache entities like Person gets converted to binary objects 
> internally?
All entries are serialized with a configured marshaller. It is binary 
marshaller by default.

> Is using binary objects better than entity cache objects?
Using POJOs is usually more convenient. But BinaryObject lets you operate over 
objects without having the corresponding POJOs on your class path.
Also by using BinaryObject you skip (de)serialization step, when performing 
put/get operations, so you may get better performance.

> Is the a way to deserialize AvroFormat messages from kafka to ignite sink? 
> Examples are available for String and JSON converters.
You can deserialize any data, coming from Kafka.
All you need is to implement a proper 
StreamSingleTupleExtractor

Denis

Mon, Jul 2, 2018 at 14:03, Sriveena Mattaparthi <sriveena.mattapar...@ekaplus.com>:
Denis,

Have some open questions in ignite

1.   Does all the tables

Re: How long Ignite retries upon NODE_FAILED events

2018-07-02 Thread Evgenii Zhuravlev
Can you share the logs?

2018-07-02 20:54 GMT+03:00 HEWA WIDANA GAMAGE, SUBASH <
subash.hewawidanagam...@fmr.com>:

> Ok I did following poc real quick.
>
>
>
> 1.   Two nodes, started. And joined. Topology snapshot servers=2.
>
> 2.   In one node, I blocked the Ignite ports(47500, 47100 etc).
>
> 3.   Then After failureDetecitonTimeout,  it logged NODE_FAILED, and
> Topology snapshot servers=1 in each node.
>
> 4.   Then after 10-15 seconds, I unblock those ports.
>
> 5.   Then after few seconds, both nodes logged, Node joined, and
> topology snapshot server=2
>
>
>
> So it’s the same node, ID, because JVM is still up and running. And looks
> like it doesn’t forget.
>
>
>
> Can this “10-15 seconds” be any time ? Even in 1-2 hours if the node comes
> back, can it rejoin ?
>
>
>
>
>
>
>
>
>
> *From:* Evgenii Zhuravlev [mailto:e.zhuravlev...@gmail.com]
> *Sent:* Monday, July 02, 2018 1:25 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: How long Ignite retries upon NODE_FAILED events
>
>
>
> If cluster already decided that node failed, it will be stopped after it
> will try to reconnect to the cluster with the same id
>
>
>
> 2018-07-02 18:37 GMT+03:00 HEWA WIDANA GAMAGE, SUBASH <
> subash.hewawidanagam...@fmr.com>:
>
> Yes failureDetectionTimeout determines the time it wait to mark a node
> failed. But my question is, after such node failed happened, and then what
> happens when that failed node becomes reachable in the network (less that
> failureDetectionTimeout) ?
>
>
>
> *From:* Evgenii Zhuravlev [mailto:e.zhuravlev...@gmail.com]
> *Sent:* Monday, July 02, 2018 11:05 AM
> *To:* user@ignite.apache.org
> *Subject:* Re: How long Ignite retries upon NODE_FAILED events
>
>
>
> Hi,
>
>
>
> by default, Ignite uses a mechanism, that can be configured using
> failureDetectionTimeout: https://apacheignite.readme.io/v2.
> 5/docs/tcpip-discovery#section-failure-detection-timeout
>
>
>
> Evgenii
>
>
>
> 2018-07-02 16:40 GMT+03:00 HEWA WIDANA GAMAGE, SUBASH <
> subash.hewawidanagam...@fmr.com>:
>
> Hi team,
>
>
>
> For example, let’s say one of the node is not down(JVM is up), but network
> not reachable from/to it. Then rest of the nodes will see  NODE_FAILED and
> started working as normal with reduced cluster size. If that failed node,
> the network from/to it, becomes normal again  after X minutes. Then,
>
> - will other nodes discover them, or will that node be able to figure it
> out ?
>
> - How long X can be at max? Is there max retry or timeout. (I seen
> joinTimeout param in discovery, but that’s seems only applicable for
> startup, like how long it should pause starting the node to let join others)
>
>
>
>
>