Re: cannot unsubscribe

2022-07-14 Thread Mikael

Hi!

No need to do that ;)

I assume you don't get a confirmation email either? Any chance it may 
end up in spam or something? It should not, but you never know. If I 
remember correctly you need to confirm your unsubscription, so you 
should receive an email.


Mikael


On 2022-07-14 10:43, אריאל מסלאטון wrote:

Hi MiKael,

I've verified it multiple times...
I can send screenshots to prove it, if it will help :)

On Thu, Jul 14, 2022 at 11:37 Mikael <mikael.arons...@gmail.com> wrote:


Hi!

It should work ok; make sure you use the same email as you used to
subscribe so there is no hiccup there.

Mikael

On 2022-07-14 10:28, אריאל מסלאטון wrote:

I tried that, multiple times...
Still receiving emails

On Thu, Jul 14, 2022 at 10:29 Pavel Tupitsyn <ptupit...@apache.org> wrote:

Please send any text to user-unsubscr...@ignite.apache.org

On Thu, Jul 14, 2022 at 10:24 AM אריאל מסלאטון wrote:

I have been unsubscribing from the mailing list but I
keep getting emails.
I sent unsubscribe messages to every mailing list and it
didn't help.

Can you please assist?
Regards

-- 
*054-2116997*

*arielmasla...@gmail.com*




Re: cannot unsubscribe

2022-07-14 Thread Mikael

Hi!

It should work ok; make sure you use the same email as you used to 
subscribe so there is no hiccup there.


Mikael

On 2022-07-14 10:28, אריאל מסלאטון wrote:

I tried that, multiple times...
Still receiving emails

On Thu, Jul 14, 2022 at 10:29 Pavel Tupitsyn <ptupit...@apache.org> wrote:


Please send any text to user-unsubscr...@ignite.apache.org

On Thu, Jul 14, 2022 at 10:24 AM אריאל מסלאטון wrote:

I have been unsubscribing from the mailing list but I keep
getting emails.
I sent unsubscribe messages to every mailing list and it
didn't help.

Can you please assist?
Regards

-- 
*054-2116997*

*arielmasla...@gmail.com*




Re: long JVM pauses

2022-01-12 Thread Mikael

Hi!

Ok, it does not sound very extreme. Do you really need a 10 GB heap? As 
you said it takes two weeks before the problems start, so it does sound 
like you have something growing there pretty slowly, and the GC 
eventually has to do an intense pass to clean all that up. Using a 
smaller heap might help if you don't need the big one (less garbage to 
clean up when it gets full, less GC time; it will GC more often but 
will not lock up the JVM for such a long time).


A ~2 second GC pause is not a huge disaster, so I think you should be 
able to solve it with some tweaking and/or a change of collector.


The G1 and ZGC collectors work pretty well these days, but I don't know 
which JVM you use.


Linux or Windows? Setting a large heap (in relation to RAM) on a Linux 
machine with default swappiness can cause problems; it often starts 
paging when you just fill up half the memory or so.


I am no expert on GC, so I hope someone with better knowledge can give 
some ideas, but I think the best approach is to log the GC for a few 
days and see what is going on with the heap. Each application is unique, 
so it is very difficult to give any generic "do this and all will be 
good" tips. Also make sure it actually is GC pauses that are causing 
the problem, and not something else.
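A sketch of how that GC logging could be enabled (the log path, heap size and startup invocation are placeholders; the unified `-Xlog` syntax assumes JDK 9+, while on JDK 8 you would use `-XX:+PrintGCDetails -Xloggc:gc.log` instead):

```shell
# JDK 9+ unified logging: record GC and safepoint events with timestamps
java -Xms10g -Xmx10g -XX:+UseG1GC \
     -Xlog:gc*,safepoint:file=/var/log/ignite/gc.log:time,uptime,level,tags \
     -cp 'libs/*' org.apache.ignite.startup.cmdline.CommandLineStartup config.xml
```

A few days of such a log makes it easy to see whether the heap ramps up slowly and collapses in one long pause, or whether something else is blocking the JVM.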


regards

Mikael


On 2022-01-12 05:52, satyajit.man...@barclays.com wrote:


Hi Mikael,

We are using the settings below and we have default off-heap 
memory enabled. Heap size is 10 GB per node and we are 
running 4 nodes as part of the cluster. Data size is ~25 MB but 
it continuously updates and inserts records every 5 minutes 
and 15 minutes. It's a low-latency application. We haven't enabled 
persistence.


-Dcom.sun.management.jmxremote 
-Dcom.sun.management.jmxremote.port=5 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false -Xms10g -Xmx10g 
-XX:+AlwaysPreTouch -XX:+UseG1GC -XX:+ScavengeBeforeFullGC 
-XX:+DisableExplicitGC -Djava.net.preferIPv4Stack=true


Regards

Satyajit

*From:*Mikael 
*Sent:* Tuesday, January 11, 2022 11:29 PM
*To:* user@ignite.apache.org
*Subject:* Re: long JVM pauses



Hi!

There are no generic settings; each case is unique, and it will require 
some tuning to get it right. How much Java heap do you use? What 
garbage collector are you using? How much data do you have? Is 
persistence enabled? How do you use the caches: is data deleted often, 
or do you keep it around for a longer time?


Mikael

On 2022-01-11 18:23, satyajit.man...@barclays.com wrote:

Hi Team,

We do see long JVM pauses and after that nodes in our
cluster stop. This happens every two weeks usually.
What are the possible solutions to avoid long JVM pauses?
Can someone advise generic settings that are recommended in
such cases?

[23:20:37,824][WARNING][jvm-pause-detector-worker][IgniteKernal]
Possible too long JVM pause: 2262 milliseconds.

Regards

Satyajit


Restricted - Internal

This message is for information purposes only. It is not a
recommendation, advice, offer or solicitation to buy or sell a
product or service, nor an official confirmation of any
transaction. It is directed at persons who are professionals and
is intended for the recipient(s) only. It is not directed at
retail customers. This message is subject to the terms at:
https://www.cib.barclays/disclosures/web-and-email-disclaimer.html



For important disclosures, please see:
https://www.cib.barclays/disclosures/sales-and-trading-disclaimer.html

regarding marketing commentary from Barclays Sales and/or Trading
desks, who are active market participants;

https://www.cib.barclays/disclosures/barclays-global-markets-disclosures.html

regarding our standard terms for Barclays Corporate and Investment
Bank where we trade with you in principal-to-principal wholesale
markets transactions; and in respect to Barclays Research,
including disclosures relating to specific issuers, see:
http://publicresearch.barclays.com


Re: long JVM pauses

2022-01-11 Thread Mikael

Hi!

There are no generic settings; each case is unique, and it will require 
some tuning to get it right. How much Java heap do you use? What garbage 
collector are you using? How much data do you have? Is persistence 
enabled? How do you use the caches: is data deleted often, or do you 
keep it around for a longer time?


Mikael


On 2022-01-11 18:23, satyajit.man...@barclays.com wrote:


Hi Team,

We do see long JVM pauses and after that nodes in our cluster 
stop. This happens every two weeks usually. What are the 
possible solutions to avoid long JVM pauses? Can someone advise 
generic settings that are recommended in such cases?


[23:20:37,824][WARNING][jvm-pause-detector-worker][IgniteKernal] 
Possible too long JVM pause: 2262 milliseconds.


Regards

Satyajit




Re: Swedish manual

2021-10-27 Thread Mikael

Hi!

No, there is no documentation in Swedish for Ignite (at least none 
that I am aware of); I guess because there are not enough of us speaking 
Swedish ;) it's not a widespread language. Most documentation for Ignite 
is only available in English (and Mandarin).


Mikael

On 2021-10-27 11:04, Catherine Walt wrote:

greetings

does Ignite have the Swedish manual or books?
Can it replace HDFS for data storage?

Thanks.


Re: Enable Native persistence only for one node of the cluster

2021-06-14 Thread Mikael
As long as you only have persistent caches on the persistent nodes it 
should be fine; you cannot have a persistent cache on a non-persistent 
node, as far as I am aware.


On 2021-06-14 14:45, Krish wrote:

Is it possible to have a cluster topology where native persistence is enabled
only for one node and all other nodes use an in-memory cache store without
native persistence?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite server not equally distribution load

2021-03-31 Thread Mikael

Hi!

What is it that is not distributed well, is it cache data/memory load or 
computations/cpu load ?


There are no caches in the config, so I assume they are created by the 
application (if there are any). Do you use any custom collocation?

Mikael


On 2021-03-31 15:52, Charlin S wrote:

Hello,
Thanks for your reply. I have attached my configuration files here.
Below code has been used to start ignite.

IgniteConfiguration igniteGridIg = new IgniteConfiguration();
igniteGridIg.SpringConfigUrl = Path.Combine(igniteXmlfilePath);
Ignition.Start(igniteGridIg);

Thanks & Regards,
Charlin



On Wed, 31 Mar 2021 at 17:44, Stephen Darlington 
<stephen.darling...@gridgain.com> wrote:


What are you doing with Ignite? Are you sending compute tasks,
cache operations, both? What’s your configuration?

> On 31 Mar 2021, at 12:31, Charlin S <charli...@hotelhub.com> wrote:
>
> Hi,
>
> I'm running an ASP.Net application with ignite 2.9.1  and node
setup as 2 server nodes and 11 client nodes.
> We are seeing most of the load on one server only; it's supposed
to distribute the load between the two servers.
> How can we get the load equally distributed on both servers?
>
>
> Thanks & Regards,
> Charlin




Re: Could you please help remove my email from the mail list?

2021-02-10 Thread Mikael
You sent email to user-unsubscr...@ignite.apache.org ?


Any chance you have multiple email accounts and that it is using another 
account? Make sure "to" is actually your current email and not some 
other email you also use.



On 2021-02-10 23:20, Ares Zhu wrote:


I have removed my account from the forum, but I can still receive mail 
from http://apache-ignite-users.70518.x6.nabble.com/ ; it seems I am 
still on the list.


Thanks,

Ares



Re: Ignite performance issues...

2020-11-14 Thread Mikael

Hi!

How many nodes will you be running this on ?

What is your use case? What kind of performance is important to you: 
reading, writing? Do you need access to all data or will you just use a 
small subset?


In terms of writing, you can get good performance with a streamer 
instead of using SQL INSERT, if that would be possible.

Mikael


On 2020-11-14 23:35, Wolfgang Meyerle wrote:


Hi,

I have a question in regards to Apache Ignite performance tuning.

This is my config so far:

https://pastebin.com/NWDzY3RK

I plan to store 2 billion entries in the database. The machine that I 
have is not great: 32 GB RAM, nothing more.


I wonder how I can tune the performance to speed things up. I know 
that leaving out indexing would speed things up, but indices 
are necessary. SQL access is also necessary, though not necessarily 
during inserts. As I'm using C++, I don't know if Ignite supports 
creating SQL access later on, after the data has been 
inserted into a cache. Is this possible?


According to my logging I'm able to insert 42,108,964 entries in 66 
minutes, currently at 610 insertions per second, with an average of 
1,057 insertions per second (which is not great in my opinion).



Following Ignite's performance guidelines, I have already separated the 
WAL store from the persistence storage drive.


I'm also getting Apache ignite errors from time to time:

[21:24:15] Possible failure suppressed accordingly to a configured 
handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, 
super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet 
[SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], 
failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class 
o.a.i.IgniteException: GridWorker [name=db-checkpoint-thread, 
igniteInstanceName=null, finished=false, heartbeatTs=1605385441327]]]


I tried to find something online and in the Apache Ignite documentation 
but struggled, so hopefully some geek out there can drop a comment...


Regards,


Wolfgang





3.0 and messaging

2020-10-15 Thread Mikael

Hi!

It sounds like the messaging API will be removed in 3.0; is that 
correct? I use messaging a lot, so in that case I guess I have to 
work out some other solution for sending messages between nodes. Any 
idea of the best solution for this?





Re: Ignite Configuration file modify

2020-06-30 Thread Mikael
Ok, well, modifying the configuration might not be a good solution: what 
if you have multiple nodes and the configuration files no longer match? 
I guess you could create a service that runs on all nodes and have it 
update the XML configuration files when you add caches. There is no 
need to "reload" the configuration while Ignite is running, as you 
create the caches manually from code AND update the XML configuration; 
on a restart the caches will be in the modified configuration.


Any chance you could enable persistence on the system cache? That would 
keep your created caches across restarts.


The reason you cannot reload the configuration while running is that 
many settings cannot be changed on a running instance: page size, 
off-heap memory region sizes, and so on.


On 2020-06-30 at 11:15, kay wrote:

Hello :)

Because if I add a cache in code, the cache disappears when the node is
restarted, so I want to modify the configuration file directly.

Thank you!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Configuration file modify

2020-06-30 Thread Mikael

Hi!

No, you need to restart Ignite if you change the XML configuration, as 
far as I know.


But you can add caches after Ignite is started without any problems, 
both from code and from SQL. Is there any reason you cannot use one of 
these methods?
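Both runtime options might look roughly like this (the cache name and the table definition are illustrative only; this assumes a node can be started in-process):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class AddCacheAtRuntime {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // From code: create (or get) a cache on the already-running cluster
            IgniteCache<Integer, String> fromCode =
                ignite.getOrCreateCache(new CacheConfiguration<>("myNewCache"));

            // From SQL: CREATE TABLE also creates a backing cache dynamically
            fromCode.query(new SqlFieldsQuery(
                "CREATE TABLE IF NOT EXISTS city (id INT PRIMARY KEY, name VARCHAR)"))
                .getAll();
        }
    }
}
```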



On 2020-06-30 at 10:47, kay wrote:

Hello,
Is there a way to modify the configuration file after the Ignite node starts?
Should I use FTP directly to create a page to modify the configuration file?

I know I can add a cache at Ignite run time, but after a restart the
cache added at runtime no longer exists.
So I need to find a way to modify the configuration file..

Thank you so much





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Check service location possible ?

2020-06-11 Thread Mikael

Ah, of course, thanks.

On 2020-06-11 at 10:41, Alex Plehanov wrote:

Hello,

The IgniteServices.service(String name) method returns the service 
instance if it's deployed locally, or null otherwise.

On Thu, 11 Jun 2020 at 11:16, Mikael <mikael.arons...@gmail.com> wrote:


Hi!

Is there any good way to detect if a service is running locally or
remotely? I guess I could check the instance of the returned proxy
and see if it is the implementation class or not, but it feels a bit
ugly; is there some other, nicer way to do this?




Check service location possible ?

2020-06-11 Thread Mikael

Hi!

Is there any good way to detect if a service is running locally or 
remotely? I guess I could check the instance of the returned proxy and 
see if it is the implementation class or not, but it feels a bit ugly; 
is there some other, nicer way to do this?





Re: Ignite memory memory-architecture with cache partitioned mode

2020-06-02 Thread Mikael

Hi!

You may get worse performance with a thin client compared to an 
ordinary client, because the thin client works through an intermediary 
node: your request will always go to node A in your case and then be 
handled there. A normal client would go straight to the node where the 
data is, while the thin client first has to go to node A and then from 
there to node B where the data is.


The index data is stored where the data is (B, C), so if your primary 
and backup data is not on node A it will have to pass the request on to 
B. Ignite may not have index data on node A, but it does know on which 
node your data is located, so it will go from A to B; it does not have 
to request it from C as well.


So with an ordinary client your request goes straight to B, with the 
response going back to the client.

With a thin client your request goes to A and from there to B, with the 
response going back to A and then to your client.


Mikael

On 2020-06-03 at 07:22, kay wrote:

Hello, I read this page,

https://apacheignite.readme.io/docs/memory-architecture.

and I would like to know what will happen if there are 3 remote
server nodes (A, B, C), the cache mode is partitioned, and backups is 1.

If I want to get the '1' cache entry and my application is connected to
node A using a thin client, but the '1' data is allocated on node B
(primary) and C (backup).

In this case, I want to know Ignite's operating principle for getting
the data.

Do all nodes have an index page for all data?
I want to know..

I look forward to your reply.
Thank you so much.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Service Node vs Data Node

2020-05-16 Thread Mikael

Hi!

There is nothing like service vs data nodes in Ignite; you have server 
nodes and client nodes. If you want to separate data and service nodes 
you can do that, but that is up to you; it is not a requirement.


You have server nodes that can have caches that store data, run compute 
grid jobs, host services and so on. Then you have client nodes; these 
are more "lightweight" and do not have any caches on them, do not 
perform computations and so on, but they can connect to server nodes 
and access all caches and services.


You have control over where the caches are created, you can have them on 
all or some server nodes, it's up to you.


You can deploy a service on any "data" node, (a data node is a server 
node).


You can connect using a client and use the compute grid, the client node 
can access all features of Ignite server nodes.


Ignite is very flexible in terms of where data is stored and where and 
how many services are run, and also where compute grid jobs are 
executed, using properties, affinity keys and a few other ways.


The documentation on services and caches is very good and explains 
everything about how to use them.


Mikael

On 2020-05-16 at 16:38, nithin91 wrote:

Hi

We are exploring the iginite Service Grid.

Can anyone explain the difference between a service node and a data node?

i.e. currently I have 2 data nodes.

If I need to have a service deployed, should I have a new node
which doesn't store data (based on the explanation given in the video
https://www.youtube.com/watch?v=nZ57o330yD0 ), or can I have the services
deployed on any of the data nodes directly?

Also, if we want to use the compute grid, can we connect in client mode
or do we need to start as a server node?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: [ANNOUNCEMENT] Ignite New Website is Live

2020-03-27 Thread Mikael

Hi!

It looks great; I have only had a quick look so far, but no problems yet.

Mikael

On 2020-03-27 at 18:46, Denis Magda wrote:

Dear Community,

I've just merged and released a new version of the website. Take your 
time and enjoy the new look & feel, structure, content, and navigation 
menu:

https://ignite.apache.org

Thanks to everyone who took part in the most recent Ignite survey [1] 
and helped us pinpoint key use cases of the project. Those are now 
engraved on the front page to let new Ignite users figure out our 
advantages more quickly. Thanks to all of you who took part in the new 
website review [2], sharing feedback publicly and privately. Finally, 
let me applaud Mauricio and Ignacio, who were in charge of the most 
heavyweight tasks and rebuilt the website from scratch using the new 
design and structure.


Send us a note if you catch any issues. The next step is to migrate 
the source code to Git...


[1] 
http://apache-ignite-developers.2346864.n4.nabble.com/Ignite-Evolution-Direction-short-questionary-td44577.html
[2] 
http://apache-ignite-developers.2346864.n4.nabble.com/Ignite-Website-New-Look-td46324.html


-
Denis


Re: Thick vs Thin client

2020-03-02 Thread Mikael
A thick client is an Ignite node set to client mode; it works just like 
a server node but it does not keep any data, so no caches are created 
on the client node, and so on. And yes, your link [1] describes the 
difference between a server and a (thick) client.


[1] https://apacheignite.readme.io/docs/clients-vs-servers
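Starting a thick client is just starting a node with client mode enabled; a minimal sketch (the cache name is invented, and a reachable server node is assumed):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ThickClientStart {
    public static void main(String[] args) {
        // Client mode: joins the cluster topology but stores no cache data
        IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);
        try (Ignite client = Ignition.start(cfg)) {
            // Full Ignite API is available: caches, compute, services, messaging...
            client.getOrCreateCache("demo").put(1, "hello");
        }
    }
}
```

A thin client, by contrast, is a lightweight socket connection that never joins the topology.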

On 2020-03-02 at 10:02, scriptnull wrote:

Hi,

While going through the threads in this user list, I noticed that these
terms are getting used a lot "thick" and "thin" client.

For thin clients, there is a clear documentation (
https://apacheignite.readme.io/docs/thin-clients ), but for thick clients
there is no reference of what exactly it is in the documentation or
elsewhere.

It would be really helpful if someone could describe this here or at least
point to a resource which defines it clearly (I tried my best searching
about it, but couldn't arrive at it)

Further, I wonder if thick client is just another name for server node
mentioned in this document (
https://apacheignite.readme.io/docs/clients-vs-servers ) and thin client is
the client that people use it in the application layer to access data from
the cache. In that case, we might at least modify that document to state
this fact.

Thanks in advance!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache ExpirationPolicy

2020-02-28 Thread Mikael

Hi!

Are you talking about the contents of the cache or the cache itself? 
Ignite does not delete the cache, just the entries.




On 2020-02-29 at 03:31, Mahesh Renduchintala wrote:

Hi

I have a cache template marked as below in the defaul_config.xml.
I was expecting that a table created from it would automatically get 
deleted from off-heap and backup.

This is because of the expiration policy set as below.

However, I observe that these tables are not getting deleted. What am 
I doing wrong?




[The cache template XML did not survive the archive; it was a
CacheConfiguration bean whose expiryPolicyFactory was created via a
bean with factory-method="factoryOf".]



Re: Cores on a node

2020-02-09 Thread Mikael

Hi!

int numCores = Runtime.getRuntime().availableProcessors();

It will return an OK value; it may not return the actual number of 
cores, depending on affinity settings and things like that, but it will 
return the number of cores the JVM has access to.
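A self-contained sketch of the local call; to collect the value from every node in a cluster you could broadcast it, e.g. with `ignite.compute().broadcast(...)` (not shown here, as it needs a running cluster):

```java
public class CoreCount {
    // Logical processors visible to this JVM (may be limited by CPU affinity)
    public static int localCores() {
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.out.println("Logical cores visible to this JVM: " + localCores());
    }
}
```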


Mikael

On 2020-02-09 at 12:44, F.D. wrote:

Hi,
I'd like to know if it is possible to find the number of 
logical/physical cores on a node in a cluster.


Thanks,
   F.D.


Re: Getting No space left on device exception when persistence is enabled

2020-02-04 Thread Mikael

Hi!

Well, the message says no space left on device. Are you sure the 
persistence/WAL data is stored on the drive where you have 100 GB free 
space? You don't have any user limitations on disk usage or anything 
like that?


Mikael


On 2020-02-05 at 07:18, adipro wrote:

Can someone please help?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Class loader

2020-02-03 Thread Mikael

Hi!

Yes, that should do the trick, I will have a look at it.

Thanks.


On 2020-02-03 at 12:05, Ilya Kasnacheev wrote:

Hello!

We have a thing called URI deployment:

https://apacheignite.readme.io/docs/deployment-spi

It allows you to specify a location to be scanned at regular intervals; 
if there are any new JARs they will be reloaded, with versioning support.


Sounds like the exact thing that you are describing.
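Configuring it might look roughly like this (the directory is a placeholder, and `freq=...` is the scan interval in milliseconds):

```java
import java.util.Collections;

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.deployment.uri.UriDeploymentSpi;

public class UriDeploymentSketch {
    public static void main(String[] args) {
        UriDeploymentSpi spi = new UriDeploymentSpi();
        // Scan this directory for new/updated deployment JARs every 5 seconds
        spi.setUriList(Collections.singletonList("file:///opt/ignite/deploy?freq=5000"));

        IgniteConfiguration cfg = new IgniteConfiguration().setDeploymentSpi(spi);
        Ignition.start(cfg);
    }
}
```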

Regards,
--
Ilya Kasnacheev


On Sat, 1 Feb 2020 at 14:49, Mikael <mikael-arons...@telia.com> wrote:


Hi!

Will the Ignite class loader find classes/jar files added to the lib
directory after it is started, or do all classes/jar files have to be
there at startup (running Ignite from ignite.sh)?

It looks like it will not find any classes added after startup, as far
as I can tell after a quick test, but maybe there is a way around that?

Why? Well, the idea was to be able to replace services without any
restart. The service is a wrapper class that loads the actual
services, and it does that by searching for a specific class name with
a version number included in the name, so by adding a new class with a
different name it would be possible to load the new updated class
without having to restart Ignite; you don't even need to restart the
service itself.

But for that to work the class loader must of course be able to
find the newly added class file or jar.

Mikael




Re: DataStreamer as a Service

2020-02-02 Thread Mikael

Hi!

Not as far as I know; I have a number of services using streamers 
without any problems. Do you have any specific problem with it?


Mikael


On 2020-02-02 at 22:33, narges saleh wrote:

Hi All,

Is there a problem with running the DataStreamer as a service, 
instantiated in the init method? Or with loading the data via a JDBC 
connection with streaming mode enabled?

In either case, the deployment is affinity based.

thanks.


Class loader

2020-02-01 Thread Mikael

Hi!

Will the Ignite class loader find classes/jar files added to the lib 
directory after it is started, or do all classes/jar files have to be 
there at startup (running Ignite from ignite.sh)?


It looks like it will not find any classes added after startup, as far 
as I can tell after a quick test, but maybe there is a way around that?


Why? Well, the idea was to be able to replace services without any 
restart. The service is a wrapper class that loads the actual services, 
and it does that by searching for a specific class name with a version 
number included in the name, so by adding a new class with a different 
name it would be possible to load the new updated class without having 
to restart Ignite; you don't even need to restart the service itself.


But for that to work the class loader must of course be able to find 
the newly added class file or jar.


Mikael




Re: Doubt regarding CACHE Replication.

2020-01-30 Thread Mikael

Hi!

Well, if the new server is not part of the baseline topology there will 
not be any rebalancing; I would think you have to add the new server to 
the baseline first, at least that is how I think it works.


Mikael

On 2020-01-30 at 08:20, adipro wrote:

I am using Ignite with persistence mode enabled. I created a cache and let it
grow on one server node until the data was up to 2 GB. After some time, if I
start a new server node, and since the cache is in replicated mode, why is
the data not replicated to the new server node? Any idea? It's showing 0 MB
used both in off-heap and persistence regions on the new server node. Please
help.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Server side cache configuration only

2020-01-27 Thread Mikael

Hi!

1) You do not need to have the cache configuration on the client side; 
I do that all the time.


2) I do not know.

Mikael

On 2020-01-27 at 21:42, Sabar Banks wrote:

Hello Ignite Community,

My questions are:

1) Is it possible to only define cache configurations on the server side,
via XML, and avoid defining caches on the client side?

2) Is it possible for the data bean classes listed in a cache config
to ONLY exist physically on the server side? I am trying to have the
bean definitions in only one place: the server.

Let me know

Thanks.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data Load to Ignite cache is very slow from Oracle Table

2020-01-27 Thread Mikael

Hi!

If you use put() to insert the data, it's not the fastest way; using 
putAll(), IgniteCache.loadCache() or a streamer is usually much faster. 
But it depends a little on how you use your data: a streamer is fast, 
but you can't expect all data to be available until you close or flush 
the streamer. There are many examples in the documentation.
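A sketch of the batched putAll() alternative (the cache name, batch size and the generated rows stand in for the Oracle ResultSet loop; a running node is assumed):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class BatchedLoad {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("oracleData");
            Map<Integer, String> batch = new HashMap<>();
            for (int id = 0; id < 100_000; id++) {   // stands in for the ResultSet loop
                batch.put(id, "row-" + id);
                if (batch.size() == 1_000) {          // flush in batches, not put() per row
                    cache.putAll(batch);
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) cache.putAll(batch); // flush the final partial batch
        }
    }
}
```

Each putAll() amortizes the per-operation network round trip over a whole batch, which is where most of the speedup over per-row put() comes from.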


Mikael

On 2020-01-27 at 09:36, nithin91 wrote:

Hi

I am trying to load data from an Oracle table into an Ignite cache using
the CacheStore loadCache method.

The following logic is implemented in the loadCache method to load the
data from the Oracle table into the Ignite cache:

1. A JDBC connection is used to query the Oracle table, and the data is
available in a ResultSet cursor.
2. A while loop is used to iterate over the ResultSet object and insert
the data into the cache.

Is there any other way to insert the data from the Oracle table into
the Ignite cache? If possible, please share sample code.
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Java 13 ?

2020-01-21 Thread Mikael

Hi!

So far I have been using Java 8 for Ignite, but I guess sooner or later 
I have to switch. It looks like it works fine on Java 11, but how about 
13? I can't find anything about that.


Mikael




Re: Use custom Data Region or custom Cache for IgniteAtomicReference - Ignite 2.3

2020-01-17 Thread Mikael

Hi!

If you set the default region to be persistent, all services will be 
persistent as well; I guess this goes for all the sets, queues and 
counters too. If you want to use the security system it has to be 
persistent. I use it all the time, just for the fact that all services 
will survive a restart even if all nodes go down.
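Making the default data region persistent is a one-flag change; a minimal sketch (persistent clusters start inactive, hence the explicit activation):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistentDefaultRegion {
    public static void main(String[] args) {
        DataStorageConfiguration storage = new DataStorageConfiguration();
        // Everything placed in the default region now survives restarts
        storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storage);
        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.cluster().active(true); // persistent clusters need activation
        }
    }
}
```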


Mikael

On 2020-01-17 at 18:49, j_recuerda wrote:

I am using Ignite 2.7.0



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How best to push continuously updating ordered data on Apache ignite cache

2020-01-17 Thread Mikael

Hi!

Trusting that things come in the correct order could be tricky; the 
only way to do it, I guess, is to use put(). If you need fast writes 
you will have to use multiple threads, and any ordering will be lost. 
Can't you include some ordering data when you write them, a counter or 
whatever, and use that to sort them in client 2? That would be much 
safer.


You might need to include some kind of waiting process in case one 
entry has not arrived yet, but if you use a counter when writing them 
you know that after 1 there will always be a 2, and so on without any 
gaps, so it should not be that messy.
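One way to implement the counter idea (names invented; a running cluster is assumed) is Ignite's distributed atomic sequence. Note that a sequence reserves number ranges per node, so values are unique and increasing but may have gaps when several nodes publish, which is why the consumer should sort rather than expect strictly consecutive numbers:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicSequence;
import org.apache.ignite.Ignition;

public class OrderedPublish {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Cluster-wide counter: create if absent, starting at 0
            IgniteAtomicSequence seq = ignite.atomicSequence("marketDataSeq", 0, true);

            // Tag each update with a sequence number; client 2 buffers
            // out-of-order entries and emits them sorted by seqNo
            long seqNo = seq.incrementAndGet();
            ignite.getOrCreateCache("marketData").put("INSTR-1#" + seqNo, 101.0);
        }
    }
}
```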


Mikael

On 2020-01-17 at 04:51, trans wrote:

*Usecase*

Here is the topology we are working on

*Server - 1* --> marketData cache, holds different share prices information

*Client - 1* --> Pushing data on the cache

*Client - 2* --> Continuous query, listening updates coming on marketData
cache per key basis

I want data to follow the order in which it was received and pushed on
the queue. The reason is that client - 2 should not get old data. For
example, the last price for an instrument moved from 100 to 101 then
102. Client - 2 should get it in the same order and **not** in an order
like 100 to 102 then 101.
So for a key in my cache I want the messages to be pushed in order.

Ways of pushing data on cache:

  1. *put* seems to be the safest way, but it looks slow: a full cache update
happens before the thread moves on to the next statement. This might not be
suitable for pushing 2 updates per second.

  2. *putAsync* seems to be a good way; my understanding is that it uses the
striped pool to push data into the cache. As the striped pool
<https://apacheignite.readme.io/docs/thread-pools#section-striped-pool>
uses max(8, no. of cores), it should be faster. But as multiple threads
are processed in parallel, does it preserve the ordering in which the data
was pushed?

  3. *DataStreamer* seems to be the best way as it also processes things in
parallel, but again the problem is the order of putting data into the cache.
The API documentation
<https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Datastream.IDataStreamer-2.html>
also mentions that ordering of data is not guaranteed.


Can someone please clarify the above? I could not find a document giving a
clear, deeper understanding of these approaches.
*What is the best of the above ways to push continuously updating data like
market data?*





Re: Caused by: org.apache.ignite.IgniteCheckedException: Requested DataRegion is not configured: Data_Region

2019-12-26 Thread Mikael
The exception is thrown when the region name cannot be found in a map,
so that would indicate that the region does not exist in the cluster.
Are you 100% sure that your region configuration is executed on the
cluster?


Any chance you could run an ignite.dataRegionMetrics() call and see
what regions are defined in the cluster before you try to create the
cache?


Mikael

Den 2019-12-26 kl. 12:44, skrev ashishb888:

Quotes are there, I just removed them by mistake while pasting here






Re: Caused by: org.apache.ignite.IgniteCheckedException: Requested DataRegion is not configured: Data_Region

2019-12-26 Thread Mikael
Ok, can't see anything wrong with it. I have never tried to create a
cache from the client side; even though the cache is never created on
the client side, I guess it's possible you need to have the regions
configured on the client side as well. But getOrCreateCache does almost
nothing on the client side, everything is performed on the server side.


Den 2019-12-26 kl. 12:44, skrev ashishb888:

Quotes are there, I just removed them by mistake while pasting here






Re: Caused by: org.apache.ignite.IgniteCheckedException: Requested DataRegion is not configured: Data_Region

2019-12-26 Thread Mikael

Hi!

Is that intentional? You have:

dataRegion.setName(Data_Region);

I would assume you should have quotes around the name, unless you have
a String variable named Data_Region that contains the correct name?

Mikael

Den 2019-12-26 kl. 11:39, skrev ashishb888:

*Server node configuration*

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
DataRegionConfiguration defaultRegion = new DataRegionConfiguration();
defaultRegion.setName("Default_Region");
storageCfg.setDefaultDataRegionConfiguration(defaultRegion);

DataRegionConfiguration dataRegion = new DataRegionConfiguration();
dataRegion.setName(Data_Region);
dataRegion.setPersistenceEnabled(Boolean.valueOf(ip.getDataRegion().get("persistence")));
storageCfg.setDataRegionConfigurations(dataRegion);

*Cache creation from client node*

CacheConfiguration<Integer, Person> personCacheConfig = new
CacheConfiguration<>(cacheName);
personCacheConfig.setIndexedTypes(Integer.class, Person.class);
personCacheConfig.setCacheMode(CacheMode.PARTITIONED);
personCacheConfig.setSqlSchema(schema);
personCacheConfig.setDataRegionName(regionName);

ignite.addCacheConfiguration(personCacheConfig);
return ignite.getOrCreateCache(cacheName);







Re: Number of backups of the cache named [ignite sys cache]

2019-12-23 Thread Mikael

Hi!

It is not the number of backups; I am not sure why it says that value,
but you always get that for any replicated cache. A quick guess is that
it is set to the max integer value when you use a replicated instead of
a partitioned cache.


Mikael


Den 2019-12-23 kl. 15:00, skrev 李玉珏@163:

hi,

In version 2.7.6, start an Ignite node through ignite.sh with the
default configuration, and then execute the following command:


./control.sh --cache list [.]*

The following output will be found:

[cacheName=ignite-sys-cache, cacheId=-2100569601, grpName=null, 
grpId=-2100569601, prim=100, mapped=100, mode=REPLICATED, 
atomicity=TRANSACTIONAL, backups=2147483647, 
affCls=RendezvousAffinityFunction]


It shows: backups = 2147483647
This is a huge number of backups. Why?




Re: How to reload a cache being queried with minimum downtime

2019-12-11 Thread Mikael

Hi!

Any chance you could do it at the "client" end, telling the client(s)
to use another cache instead of relying on an alias? I guess that all
depends on how you query the data.


So create another cache, load it with the data and tell "client" to use 
the other cache instead and then delete the old cache.


Another way would be messing around with the keys: load the new data
with a different "key" (some kind of version number), switch to using
the new one when it's loaded, and then delete the old.


But all of these solutions require some kind of cooperation from the 
query client so I am not sure it is possible for you to do that.
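
The "create another cache, load it, switch, delete the old one" approach can
be sketched with plain Java collections standing in for the two caches:
readers always dereference an atomic pointer, and the loader swaps it only
after the fresh data is fully loaded. With Ignite the reference would hold
the name of the active cache instead; all names here are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

// Blue/green reload: readers go through the AtomicReference, the loader
// fills a fresh map and swaps it in atomically, then the old map is dropped.
public class CacheSwitcher {
    private final AtomicReference<Map<String, String>> active =
        new AtomicReference<>(new ConcurrentHashMap<>());

    public String get(String key) {
        return active.get().get(key);   // readers see one consistent snapshot
    }

    public void reload(Map<String, String> freshData) {
        Map<String, String> next = new ConcurrentHashMap<>(freshData);
        Map<String, String> old = active.getAndSet(next);   // the "switch"
        old.clear();                    // equivalent of deleting the old cache
    }

    public static void main(String[] args) {
        CacheSwitcher s = new CacheSwitcher();
        s.reload(Map.of("AAPL", "101"));
        System.out.println(s.get("AAPL"));  // 101
    }
}
```

Downtime shrinks to the single getAndSet; the cost is temporarily holding two
copies of the data, exactly as in the cache-based variant described above.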


Mikael

Den 2019-12-12 kl. 03:50, skrev Renato Melo:

Hello,

Periodically I need to reload a cache that is being massively queried by
a stream application.


In my current solution, on every reload I drop the cache, recreate it and
reload the data. Caches are persisted in the file system.


I am looking for alternatives to minimize downtime when reloading caches.

What alternatives does ignite offer?

Some relational databases offer CREATE TABLE ALIAS option. Using an 
alias I would be able to create a new table, reload data and when 
ready, pointing the alias to the new table having minimum downtime.


I read that Ignite does not allow cache renaming, nor does it offer
support for "CREATE ALIAS".


What alternatives do you suggest?

Best regards,

Renato de Melo







Re: GridGain Web Console is available free of charge for Apache Ignite

2019-12-10 Thread Mikael

Hi!

I guess you should forward that information to GridGain as web console 
is not part of Apache Ignite.


Mikael

Den 2019-12-10 kl. 13:10, skrev Prasad Bhalerao:

Hi,

We found 3 vulnerabilities while scanning Grid Gain Web console 
application.


We are using HTTP and not HTTPS due to some issues on our side.
Although the vulnerabilities are of lower severity, we thought of
reporting them here.


1) HTTP TRACE / TRACK Methods Enabled. (CVE-2004-2320 
<https://nvd.nist.gov/vuln/detail/CVE-2004-2320>, CVE-2010-0386 
<https://nvd.nist.gov/vuln/detail/CVE-2010-0386>, CVE-2003-1567 
<https://nvd.nist.gov/vuln/detail/CVE-2003-1567>)

2) Session Cookie Does Not Contain the "Secure" Attribute.
3) Web Server HTTP Trace/Track Method Support Cross-Site Tracing 
Vulnerability. (CVE-2004-2320 
<https://nvd.nist.gov/vuln/detail/CVE-2004-2320>, CVE-2007-3008 
<https://nvd.nist.gov/vuln/detail/CVE-2007-3008>)


Can these be fixed?

Thanks,
Prasad


On Tue, Dec 10, 2019 at 4:39 PM Denis Magda <mailto:dma...@apache.org>> wrote:


It's free software without limitations. Just download and use it.

-
Denis


On Tue, Dec 10, 2019 at 1:21 PM Prasad Bhalerao
mailto:prasadbhalerao1...@gmail.com>> wrote:

Hi,

Can apache ignite users use it for free in their production
environments?
What license does it fall under?

Thanks,
Prasad

On Fri, Oct 4, 2019 at 5:33 AM Denis Magda mailto:dma...@apache.org>> wrote:

Igniters,

There is good news. GridGain made its distribution of Web
Console
completely free. It goes with advanced monitoring and
management dashboard
and other handy screens. More details are here:

https://www.gridgain.com/resources/blog/gridgain-road-simplicity-new-docs-and-free-tools-apache-ignite

-
Denis



Re: Failed to start near cache (a cache with the same name without near cache is already started)

2019-12-09 Thread Mikael

Hi!

You need to be careful: with copyOnRead disabled, a local read can
return a reference to the actual stored object rather than a copy, and
you can cause pretty weird bugs if you modify that object. So be careful
with that; best is to only use it for read-only cases.


If you see that much performance improvement I would think that you run
it locally or on one node; when you fetch remote data, copyOnRead would
not make any difference.
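
For reference, copyOnRead is a per-cache flag. A minimal configuration
sketch — the cache name and the Person value type are hypothetical:

```java
import org.apache.ignite.configuration.CacheConfiguration;

// Read-mostly cache: skip the defensive copy on each local on-heap read.
// Only safe if callers never mutate the returned objects.
CacheConfiguration<Integer, Person> cfg =
    new CacheConfiguration<>("readOnlyCache");
cfg.setOnheapCacheEnabled(true);  // copyOnRead only matters for on-heap reads
cfg.setCopyOnRead(false);
```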


Mikael

Den 2019-12-09 kl. 12:43, skrev Hemambara:

I have disabled copyOnRead and the time taken by getAll() dropped to a
third: earlier it was 300 ms for 10k entries and now it is 100 ms. Is
disabling copyOnRead fine???






Re: about memory configuration

2019-12-08 Thread Mikael

Hi!

Not so easy to answer: if you also run your own code, for example, it
will use memory. I would suggest you create an application or a test
application, run it, and see how much memory it uses (just run JConsole
or something); that will give you a pretty clear view of how much you
will need. Add a little to that and you should be fine, but make sure
there is enough: depending on what your application does there might be
spikes at times, so give it at least 200-300 MB of extra space to work
with.


Each cache uses heap memory (20 MB or so), services use heap memory,
and so on; there is no answer that fits all. Looking at one of my Ignite
applications with 14 caches and 12 services, it runs at around 800 MB of
heap. As I said before, have a peek at the capacity planning
documentation; it gives a lot of good information on this.


Mikael

Den 2019-12-08 kl. 14:54, skrev c c:

Thanks for your reply.
I mean, if we store data in off-heap memory, how much heap memory
should we give via the Ignite JVM start option (-Xmx)? We set
CacheConfiguration.setOnheapCacheEnabled(false). How much memory
should we usually configure for the Ignite JVM start option (-Xmx)?


Mikael mailto:mikael-arons...@telia.com>> 
于2019年12月8日周日 下午3:40写道:


Hi!

I would not expect any big difference, withKeepBinary allows you
to work with object without having to deserialize the entire
object and you do not need the actual class to be available, and
you can also add/remove fields from objects, but from a heap point
of view I do not think you will notice much difference, the
entries are still stored in the same way internally.

Mikael

Den 2019-12-08 kl. 04:03, skrev c c:

By reading document we know it need read object from off-heap to
on-heap when do some reads on server node. We do some timer job
that would query cache(igniteCache.withKeepBinary().query(new
ScanQuery())) , Does this operation need more on-heap memory? we
setup CacheConfiguration.setOnheapCacheEnabled (false)
And we also want to know using EntryProcessor(withKeepBinary or
not) need more on-heap memory?

c c mailto:yeahch2...@gmail.com>>
于2019年12月8日周日 上午10:24写道:

thank you very much.

    Mikael mailto:mikael-arons...@telia.com>> 于2019年12月8日周日
上午1:24写道:

Hi!

The data regions are always off-heap, you just configure
the Java heap for on-heap usage with -Xmx and so on as
usual, have a look in the ignite.sh/ignite.bat
<http://ignite.sh/ignite.bat>, it depends on how you run
your application, just configure this any way you like if
you use embedded Ignite instance, also read the section
about capacity planning.

The java heap is just for java objects, services and any
on-heap data, all caches are stored in data regions and
they are off heap and have nothing to do with -Xmx except
when you use on-heap caching.

The Ignite documentation is very good and explains all
you need to know on how to configure it.


https://apacheignite.readme.io/docs/memory-configuration#section-on-heap-caching

https://apacheignite.readme.io/docs/memory-configuration

Mikael

Den 2019-12-07 kl. 17:41, skrev c c:

HI,
    According to document we can setup memory size by
org.apache.ignite.configuration.DataStorageConfiguration.
But we do not know this works for off-heap or on-heap
memory. We want to know how to setup ignite jvm startup
option(xms, xmx). Shoud jvm heap memory be great than
maxSixe in DataStorageConfiguration. We know some hot
data would be deserialized from off-heap to on-heap.
Would you mind giving me some advice? thanks very much!




Re: about memory configuration

2019-12-07 Thread Mikael

Hi!

I would not expect any big difference. withKeepBinary allows you to
work with objects without having to deserialize the entire object, and
you do not need the actual class to be available; you can also
add/remove fields from objects. But from a heap point of view I do not
think you will notice much difference; the entries are still stored in
the same way internally.


Mikael

Den 2019-12-08 kl. 04:03, skrev c c:
By reading the document we know it needs to read objects from off-heap
to on-heap when doing some reads on a server node. We run some timer
jobs that query the cache (igniteCache.withKeepBinary().query(new
ScanQuery<>())). Does this operation need more on-heap memory? We set
CacheConfiguration.setOnheapCacheEnabled(false).
And we also want to know whether using an EntryProcessor (withKeepBinary
or not) needs more on-heap memory?


c c mailto:yeahch2...@gmail.com>> 
于2019年12月8日周日 上午10:24写道:


thank you very much.

Mikael mailto:mikael-arons...@telia.com>> 于2019年12月8日周日 上午1:24写道:

Hi!

The data regions are always off-heap, you just configure the
Java heap for on-heap usage with -Xmx and so on as usual, have
a look in the ignite.sh/ignite.bat
<http://ignite.sh/ignite.bat>, it depends on how you run your
application, just configure this any way you like if you use
embedded Ignite instance, also read the section about capacity
planning.

The java heap is just for java objects, services and any
on-heap data, all caches are stored in data regions and they
are off heap and have nothing to do with -Xmx except when you
use on-heap caching.

The Ignite documentation is very good and explains all you
need to know on how to configure it.


https://apacheignite.readme.io/docs/memory-configuration#section-on-heap-caching

https://apacheignite.readme.io/docs/memory-configuration

Mikael

Den 2019-12-07 kl. 17:41, skrev c c:

HI,
    According to document we can setup memory size by
org.apache.ignite.configuration.DataStorageConfiguration. But
we do not know this works for off-heap or on-heap memory. We
want to know how to setup ignite jvm startup option(xms,
xmx). Shoud jvm heap memory be great than maxSixe in
DataStorageConfiguration. We know some hot data would be
deserialized from off-heap to on-heap. Would you mind giving
me some advice? thanks very much!




Re: about memory configuration

2019-12-07 Thread Mikael

Hi!

The data regions are always off-heap; you just configure the Java heap
for on-heap usage with -Xmx and so on as usual. Have a look in
ignite.sh/ignite.bat; it depends on how you run your application, so
just configure this any way you like if you use an embedded Ignite
instance, and also read the section about capacity planning.


The Java heap is just for Java objects, services and any on-heap data;
all caches are stored in data regions, which are off-heap and have
nothing to do with -Xmx, except when you use on-heap caching.


The Ignite documentation is very good and explains all you need to know 
on how to configure it.


https://apacheignite.readme.io/docs/memory-configuration#section-on-heap-caching

https://apacheignite.readme.io/docs/memory-configuration
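
Putting the two knobs side by side — the off-heap region size lives in
DataStorageConfiguration, while the on-heap size is set with the usual JVM
flags when launching the process. The sizes below are illustrative examples,
not recommendations:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Off-heap: a 4 GiB default data region, controlled here, not by -Xmx.
DataStorageConfiguration storage = new DataStorageConfiguration();
storage.getDefaultDataRegionConfiguration()
       .setMaxSize(4L * 1024 * 1024 * 1024);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storage);
Ignition.start(cfg);

// On-heap (Java objects, services, on-heap caching) is set separately
// when launching the JVM, e.g.:  java -Xms1g -Xmx1g ...
```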

Mikael

Den 2019-12-07 kl. 17:41, skrev c c:

HI,
    According to the document we can set up the memory size via
org.apache.ignite.configuration.DataStorageConfiguration. But we do
not know whether this works for off-heap or on-heap memory. We want to
know how to set the Ignite JVM startup options (-Xms, -Xmx). Should the
JVM heap memory be greater than maxSize in DataStorageConfiguration? We
know some hot data would be deserialized from off-heap to on-heap. Would
you mind giving me some advice? Thanks very much!


Re: Improving Get operation performance

2019-11-26 Thread Mikael

Hi!

The numbers sound very low. I run on hardware close to yours (3 nodes
(X5660*5) and 1 client) and I get way more than 1500/sec; not sure how
much, I will have to check. But as long as you do single gets there is
not much you can do: each get is one round trip over the network, and
with single gets latency can have a huge impact. I modified my code so
that most of the time I batch all gets within a 100 ms window into a
getAll, and that makes a huge difference to performance.


Not that much to change in the configuration; the number of backups
doesn't have much impact on reads (unless you use a replicated cache,
of course).


I am not sure how the traffic works, but if there is only one TCP
connection to each node you will not have much use for more than 3
threads, I would think.


Did you read 500K unique entries, or the same ones multiple times?

Mikael

Den 2019-11-26 kl. 21:38, skrev Victor:

I am running some comparison tests (ignite vs cassandra) to check how to
improve the performance of 'get' operation. The data is fairly
straightforward. A simple Employee Object(10 odd fields), being stored as
BinaryObject in the cache as

IgniteCache empCache;

The cache is configured with, Write Sync Mode - FULL_SYNC, Atomicity -
TRANSACTIONAL, Backup - 1 & Persistence - Enabled

Cluster config, 3 server + 1 client node. Setup on 2 machine, server machine
(Intel(R) Xeon(R) CPU X5675  @ 3.07GHz) & client machine (Intel(R) Xeon(R)
CPU X5560  @ 2.80GHz).

Client has multiple threads(configurable) making concurrent 'get' calls.
Using 'get' on purpose due to use case requirements.

For about 500k requests, I am getting a throughput of about 1500/sec,
given that all of the data is off-heap with a cache-hit percentage of
100%. Interestingly, with Cassandra I am getting similar performance,
with key cache and limited row cache.
I've tried running with 10/20/30 threads; the performance is more or
less the same.

Leaving the defaults for most of the data configuration. For this test I
turned persistence off; ideally, for gets it shouldn't really matter.
The performance is the same.


Data Regions Configured:
[19:35:58]   ^-- default [initSize=256.0 MiB, maxSize=14.1 GiB,
persistence=false]

Topology snapshot [ver=4, locNode=038f99b3, servers=3, clients=1,
state=ACTIVE, CPUs=40, offheap=42.0GB, heap=63.0GB]


Additionally ran top on both the machines to check if they are hitting the
resources,
-- Server
PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
14159 root  20   0   29.7g   3.2g  15216 S  10.3  4.5   1:35.69 java
14565 root  20   0   29.4g   2.9g  15224 S   8.3  4.2   1:33.41 java
13770 root  20   0   30.0g   2.9g  15184 S   6.3  4.2   1:36.99 java

- Client
3731 root  20   0   27.8g   1.1g  15304 S 136.5  1.5   2:39.16 java

As you can see everything is well under.

Frankly, I was expecting Ignite gets to be pretty fast, given all data is in
cache. At least looking at this test
https://www.gridgain.com/resources/blog/apacher-ignitetm-and-apacher-cassandratm-benchmarks-power-in-memory-computing
<https://www.gridgain.com/resources/blog/apacher-ignitetm-and-apacher-cassandratm-benchmarks-power-in-memory-computing>

Planning to run one more test tomorrow with no-persistence and setting near
cache (on heap) to see if it helps.

Let me know if you guys see any obvious configurations that should be set.






Re: Active tasks in cluster

2019-11-23 Thread Mikael
The whole idea with a future is that it is a small, lightweight, compact
object, and you still have Igor's suggestion:


Collection<Collection<ComputeTaskFuture<?>>> result =
ignite.compute().broadcast(() -> ignite.compute().activeTaskFutures());


If you had to implement a cluster-wide listening mechanism in the
futures you would add a terrible amount of overhead, and you would
cause a lot of problems. What if you try to deserialize a future on a
computer that is in another cluster? It may not even be an Ignite
application. What if you deserialize a future that was created 2 years
ago and the "id" of the future is now being reused for another future
that has nothing to do with the original one? What if you deserialize it
in a different cluster where that id means something different and not
what you submitted on the other cluster? Yes, all these things can be
handled, but once again you would turn a small, nice, simple object into
a complex beast.


Den 2019-11-23 kl. 15:00, skrev Prasad Bhalerao:

By member you meant the output of the thread, right?

If yes, can we keep the member at a centralised location, like an
internal cache?
(Maybe we can provide a flag; if turned on, the member can be
broadcast to whoever is listening, or to a centralised cache location.)
I am considering the future as a handle to the task which can be used to
cancel the task even if the submitter node goes down.




On Sat 23 Nov, 2019, 7:21 PM Mikael <mailto:mikael-arons...@telia.com> wrote:


Well, the thread has to set a member in the future when it is
finished; if you serialize the future and send it somewhere else,
how is the thread going to be able to tell the future it has
finished?

Den 2019-11-23 kl. 14:31, skrev Prasad Bhalerao:

Can someone please explain why active task futures can't be
serialized?

If we lose the future then we don't have a way to cancel the
active task if it's taking too long. I think this is an important
feature.



Thanks,
Prasad

On Thu 21 Nov, 2019, 5:16 AM Denis Magda mailto:dma...@apache.org> wrote:

I think that you should broadcast another task that will
simply ask every node if taskA is already running or not
every time the topology changes. If the response from all the
nodes is empty then you need to reschedule taskA, otherwise,
you will skip this procedure.

-
Denis


On Wed, Nov 20, 2019 at 9:28 AM Prasad Bhalerao
mailto:prasadbhalerao1...@gmail.com>> wrote:

That means I can't do this..

Collection<Collection<ComputeTaskFuture<?>>> result =
ignite.compute().broadcast(() ->
ignite.compute().activeTaskFutures());

Is there any way to get list futures of all active tasks
running on all nodes of the cluster?

Thanks,
    Prasad


On Wed 20 Nov, 2019, 10:51 PM Mikael
mailto:mikael-arons...@telia.com> wrote:

Hi!

    No you cannot serialize any future object.

Mikael


Den 2019-11-20 kl. 17:59, skrev Prasad Bhalerao:

Thank you for the suggestion. I will try this.

I am thinking to store the task future object in a
(replicated)cache against a jobId. If the node goes
down as described in case (b), I will get the task's
future object from this cache using a jobId and will
invoke the get method on it.

But I am not sure about this approach, whether a
future object can be serialized and send it over the
wire to another node and deserialize it and then
invoke the get API on it.

I will try to implement it tomorrow.

Thanks,
Prasad


On Wed 20 Nov, 2019, 9:06 PM Igor Belyakov
mailto:igor.belyako...@gmail.com> wrote:

Hi Prasad,

I think that you can use compute().broadcast()
for collecting results of activeTaskFutures()
from all the nodes:
Collection<Collection<ComputeTaskFuture<?>>> result =
ignite.compute().broadcast(() ->
ignite.compute().activeTaskFutures());

Regards,
Igor Belyakov

On Wed, Nov 20, 2019 at 5:27 PM Prasad Bhalerao
mailto:prasadbhalerao1...@gmail.com>> wrote:

Hi,

I want to get the active tasks running in
cluster (tasks running on all nodes in cluster)

IgniteCompute interface has method
"activeTaskFutures" which returns tasks
future 

Re: Active tasks in cluster

2019-11-23 Thread Mikael
Well, the thread has to set a member in the future when it is finished;
if you serialize the future and send it somewhere else, how is the
thread going to be able to tell the future it has finished?


Den 2019-11-23 kl. 14:31, skrev Prasad Bhalerao:

Can someone please explain why active task futures can't be serialized?

If we lose the future then we don't have a way to cancel the active
task if it's taking too long. I think this is an important feature.




Thanks,
Prasad

On Thu 21 Nov, 2019, 5:16 AM Denis Magda <mailto:dma...@apache.org> wrote:


I think that you should broadcast another task that will simply
ask every node if taskA is already running or not every time the
topology changes. If the response from all the nodes is empty then
you need to reschedule taskA, otherwise, you will skip this
procedure.

-
Denis


On Wed, Nov 20, 2019 at 9:28 AM Prasad Bhalerao
mailto:prasadbhalerao1...@gmail.com>> wrote:

That means I can't do this..

Collection<Collection<ComputeTaskFuture<?>>>
result = ignite.compute().broadcast(() ->
ignite.compute().activeTaskFutures());

Is there any way to get list futures of all active tasks
running on all nodes of the cluster?

Thanks,
Prasad


    On Wed 20 Nov, 2019, 10:51 PM Mikael
mailto:mikael-arons...@telia.com>
wrote:

Hi!

No you cannot serialize any future object.

Mikael


Den 2019-11-20 kl. 17:59, skrev Prasad Bhalerao:

Thank you for the suggestion. I will try this.

I am thinking to store the task future object in a
(replicated)cache against a jobId. If the node goes down
as described in case (b), I will get the task's future
object from this  cache using a jobId and will invoke the
get method on it.

But I am not sure about this approach, whether a future
object can be serialized and send it over the wire to
another node and deserialize it and then invoke the get
API on it.

I will try to implement it tomorrow.

Thanks,
Prasad


On Wed 20 Nov, 2019, 9:06 PM Igor Belyakov
mailto:igor.belyako...@gmail.com> wrote:

Hi Prasad,

I think that you can use compute().broadcast() for
collecting results of activeTaskFutures() from all
the nodes:
Collection<Collection<ComputeTaskFuture<?>>> result =
ignite.compute().broadcast(() ->
ignite.compute().activeTaskFutures());

Regards,
Igor Belyakov

On Wed, Nov 20, 2019 at 5:27 PM Prasad Bhalerao
mailto:prasadbhalerao1...@gmail.com>> wrote:

Hi,

I want to get the active tasks running in cluster
(tasks running on all nodes in cluster)

IgniteCompute interface has method
"activeTaskFutures" which returns tasks future
for active tasks started on local node.

Is there anyway to get the task futures for all
active tasks of whole cluster?

My use case is as follows.

a) The node submits the affinity task and task
runs on some other node in the cluster and the
node which submitted the task dies.

b) The node submits the affinity task and the
task runs on the same node and the same node dies.

The task consumers running on all ignite grid
nodes consumes tasks from kafka topic. If the
node which submitted the affinity task dies,
kafka re-assigns the partitions to another
consumer (running on different node) as part of
its partition rebalance process. In this case my
job gets consumed one more time,

But in this scenario that job might be already
running on one of the node case (a) or already
died as mentioned case (b).

So I want to check if the job is still running on
one of the node or it is already died. For this I
need the active job list running on all nodes.

Can someone please advise?

Thanks,
Prasad







Thanks,
Prasad




Invoke and serialization

2019-11-23 Thread Mikael

Hi!

I use invoke a lot on some caches and the code is not that small. Would
it be OK to put some of the code in a static method in some other class,
so I don't need to send it over the network all the time and the actual
invoke is very small and just calls a static method in another class? It
looks like it is working fine, and static methods should be handled as
transient I guess, but I thought I would ask so I don't run into some
nasty problem later.
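
A plain-Java sketch of that pattern, with a hand-rolled Entry interface
standing in for javax.cache.processor.MutableEntry so nothing here depends
on Ignite: the processor that gets serialized stays tiny, while the real
logic lives in a static method resolved from the local classpath on the node
that runs the invoke. All names below are hypothetical.

```java
import java.util.concurrent.atomic.AtomicReference;

public class InvokeHelper {
    // Stand-in for javax.cache.processor.MutableEntry<K, V>.
    interface Entry<K, V> {
        V getValue();
        void setValue(V v);
    }

    // The non-trivial code: loaded locally, never shipped over the wire.
    static void applyDiscount(Entry<String, Double> e, double pct) {
        Double price = e.getValue();
        if (price != null)
            e.setValue(price * (1.0 - pct));
    }

    public static void main(String[] args) {
        // A map-cell-backed entry simulates what invoke() would hand us.
        AtomicReference<Double> cell = new AtomicReference<>(100.0);
        Entry<String, Double> entry = new Entry<>() {
            public Double getValue() { return cell.get(); }
            public void setValue(Double v) { cell.set(v); }
        };
        applyDiscount(entry, 0.5);
        System.out.println(cell.get()); // 50.0
    }
}
```

With Ignite the EntryProcessor body would just be the one-line call to
applyDiscount, so only that thin shell is serialized per invoke.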


Mikael




Re: Actual size of Ignite caches

2019-11-22 Thread Mikael
I don't think there is any direct way to do it, but you can read out the
metrics for your data regions and see how much memory is used and how
much is free; that would give you a good indication of memory usage.


Den 2019-11-22 kl. 13:01, skrev ashishb888:

Yes I am talking about physical size in bytes. How to get it?






Re: Actual size of Ignite caches

2019-11-22 Thread Mikael

Hi!

Are you talking about size as in the number of entries, or physical
size in bytes?


Ignite puts most of the cache contents off-heap: you create data regions
(off-heap) where the cache contents are allocated (there is a default
data region). Depending on configuration, some contents of the cache can
be on-heap as well (to improve serialization/deserialization
performance).


Mikael

Den 2019-11-22 kl. 10:48, skrev ashishb888:

How do I get the actual size of caches? And does an Ignite cache use the
heap size given to the application at startup?






Re: Does ignite suite for large data search without index?

2019-11-21 Thread Mikael

Hi!

One idea would be to have one cache for each column, so the key is the
name and the value is the hobby, for example; you get an index on the
key for "free" and create one index on the value.


If the cache does not contain a name, that person does not have a hobby;
only names that do have a hobby are in the cache. It would complicate
the query a bit and you would need to run multiple queries, one per
column, but updating the indexes is fast, as you only need to update one
index per cache if you only update a few columns. If you need to update
all of them, it will of course still need to update the index for every
cache. I am not sure if that would work for you; it depends on what kind
of queries you need.
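
As a toy illustration of the per-column layout, with plain Java maps standing
in for the per-column caches: each predicate yields the set of matching keys,
and a multi-column filter is just the intersection of those sets. All names
are illustrative.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// One "cache" per column, keyed by member name. A multi-column filter is
// answered by intersecting the key sets matching each per-column predicate.
public class ColumnFilter {
    // Keys whose column value equals the wanted value (a full scan here;
    // with Ignite this would be an indexed query on the value).
    public static Set<String> match(Map<String, String> column, String wanted) {
        Set<String> keys = new HashSet<>();
        for (Map.Entry<String, String> e : column.entrySet())
            if (e.getValue().equals(wanted))
                keys.add(e.getKey());
        return keys;
    }

    public static Set<String> intersect(Set<String> a, Set<String> b) {
        Set<String> r = new HashSet<>(a);
        r.retainAll(b);
        return r;
    }

    public static void main(String[] args) {
        Map<String, String> hobby = new HashMap<>();
        hobby.put("alice", "chess");
        hobby.put("bob", "chess");
        Map<String, String> location = new HashMap<>();
        location.put("alice", "NY");
        location.put("bob", "LA");
        // hobby = chess AND location = NY
        System.out.println(
            intersect(match(hobby, "chess"), match(location, "NY"))); // [alice]
    }
}
```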


In theory you could have 15 nodes with one cache on each node and run
queries in parallel.


I am not at all sure it will work well, it's just an idea.

Mikael


Den 2019-11-21 kl. 12:17, skrev c c:
Yes, we may add more columns in the future. You mean creating an index
on one column or on multiple columns? Also, the value spread of some
columns is not big, so many indexes would not be efficient and would
cost a lot of RAM and decrease update or insert performance (this table
may update in real time). So we think just traversing a collection in
memory is good. And a scalable cache gets rid of the RAM limit and makes
filtering quicker.


Mikael mailto:mikael-arons...@telia.com>> 
于2019年11月21日周四 下午7:06写道:


Hi!

Are the queries limited to something like "select name from ...
where hobby=x and location=y..." or you need more complex queries ?

If the columns are fixed to 15, I don't see why you could not
create 15 indices, it would use lots of ram and I don't think it's
the best solution either but it should work.

Is it fixed to 15 columns ? or will you have to add more columns
in the future ?

Den 2019-11-21 kl. 10:56, skrev c c:


HI, Mikael,
     Thanks for your reply very much!
     The type of data is like this:
 member [name, location, age, gender, hobby, level, credits,
expense ...]
     We need to filter data by arbitrary field combinations, so
creating indexes is not of much use. We thought traversing all the data
in memory would work better.
     We can keep all data in RAM, but the data may grow progressively
and a single node is not scalable. So we plan to use a distributed
memory cache.
     We store data off-heap and all in RAM with default Ignite
serialization. We just create the table, then populate data with the
default configuration in Ignite, and query by SQL (one node, 4 million
records).
     Is there any way to improve query performance?

Mikael mailto:mikael-arons...@telia.com>> 于2019年11月21日周四
下午5:02写道:

Hi!

The comparison is not of much use; when you talk about Ignite, it's
not just searching a list, there is serialization/deserialization
and other things to consider that will make it slower compared to a
simple list search. A linear search on an Ignite cache depends on
how you store data (off heap/on heap, in ram/partially on disk,
type of serialization and so on).

If you cannot keep all data in ram you are going to need some
index to
do a fast lookup, there is no way around it.

If you can have all the data in ram, why do you need Ignite ?
do you
have some other requirements for it that Ignite gives you ?
otherwise it
might be simpler to just use a list in ram and go with that ?

Is memory a limitation (cluster or single node ?) ? if not,
could you
explain why is it difficult to create an index on the data ?

Could you explain what type of data it is ? maybe it is
possible to
arrange the data in some other way to improve everything

Did you test with a single node or a cluster of nodes ? with
more nodes
you can improve performance as any search can be split up
between the
nodes, still, some kind of index will help a lot.

Mikael

Den 2019-11-21 kl. 08:49, skrev c c:
> HI,
>  We have a table with about 30 million records and 15
fields. We
> need implement function that user can filter record by
arbitrary 12
> fields( one,two, three...of them) with very low latency. It's
> difficult to create index. We think ignite is a grid memory
cache and
> test it with 4 million records(one node) without creating
index. It
> took about 5 seconds to find a record match one field filter
> condition. We have tested just travel a java List(10
million elements)
> with 3 filter condition. It took about 0.1 second. We just
want to
> know whether ignite suit this use case? Thanks very much.
>



Re: Apache Ignite High Memory Usage

2019-11-21 Thread Mikael

Hi!

Difficult to say without knowing anything about the data, you say "the 
object", I assume you have lots of them ?


How big is a serialized object, how many do you have ?

Each entry in the cache has an overhead of around 200 bytes or so, so 
1M entries have an overhead of about 200MB; pages are 4KB by default, so if your 
objects are big and use multiple pages you get some overhead there too.
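As a back-of-envelope check, the arithmetic above can be sketched like this (the 200-byte per-entry overhead and the 10KB object size are rough assumptions, not exact figures):

```java
public class OverheadEstimate {
    public static void main(String[] args) {
        long entries = 1_000_000L;      // 1M cache entries
        long perEntryOverhead = 200;    // ~200 bytes overhead per entry (approximate)
        long pageSize = 4 * 1024;       // 4KB default page size
        long objectSize = 10 * 1024;    // hypothetical 10KB serialized object

        long totalOverheadMb = entries * perEntryOverhead / 1_000_000; // ~200 MB
        long pagesPerObject = (objectSize + pageSize - 1) / pageSize;  // ceiling division

        System.out.println("entry overhead ~" + totalOverheadMb + " MB");
        System.out.println("pages per object: " + pagesPerObject);
    }
}
```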


You say it use 8.5GB memory, how much is java heap and how much is other 
memory ?


Mikael

Den 2019-11-21 kl. 11:01, skrev xabush:

I have a serialized object that I load into memory to use it as a cache for
retrieving data. I am thinking of using Apache Ignite to scale my
application so that I can distribute the object over multiple nodes.
However, when trying to save the object in a single ignite cache, my program
uses around 8.5GB of memory. Without Apache Ignite (i.e loading it directly
to memory and running computations on the object) my program uses only 1.5GB
of memory. So I am wondering how using Ignite increases the memory
usage by 8x, as that seems very high. And what can I do to reduce the
memory usage?

Here is my ignite configuration xml
https://pastebin.com/tejnqVK2

P.S I am using ignite with spring data.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Does ignite suite for large data search without index?

2019-11-21 Thread Mikael

Hi!

Are the queries limited to something like "select name from ... where 
hobby=x and location=y..." or you need more complex queries ?


If the columns are fixed to 15, I don't see why you could not create 15 
indices, it would use lots of ram and I don't think it's the best 
solution either but it should work.


Is it fixed to 15 columns ? or will you have to add more columns in the 
future ?


Den 2019-11-21 kl. 10:56, skrev c c:


HI, Mikael
 Thanks for your reply very much!
 The type of data is like this:
 member [name, location, age, gender, hobby, level, credits, 
expense ...]
 We need to filter data by arbitrary field combinations, so creating 
an index is not of much use. We thought traversing all data in memory 
works better.
 We can keep all data in ram, but the data may increase 
progressively; a single node is not scalable. So we plan to use a 
distributed memory cache.
 We store data off heap and all in ram with default Ignite 
serialization. We just create the table, then populate data with the 
default configuration in Ignite, and query by sql (one node, 4 million records).

 Is there any way to improve query performance ?

Mikael mailto:mikael-arons...@telia.com>> 
于2019年11月21日周四 下午5:02写道:


Hi!

The comparison is not of much use; when you talk about Ignite,
it's not just searching a list, there is
serialization/deserialization and other things to consider that
will make it slower compared to a simple list search. A linear
search on an Ignite cache depends on how you store data (off
heap/on heap, in ram/partially on disk, type of serialization
and so on).

If you cannot keep all data in ram you are going to need some
index to
do a fast lookup, there is no way around it.

If you can have all the data in ram, why do you need Ignite ? do you
have some other requirements for it that Ignite gives you ?
otherwise it
might be simpler to just use a list in ram and go with that ?

Is memory a limitation (cluster or single node ?) ? if not, could you
explain why is it difficult to create an index on the data ?

Could you explain what type of data it is ? maybe it is possible to
arrange the data in some other way to improve everything

Did you test with a single node or a cluster of nodes ? with more
nodes
you can improve performance as any search can be split up between the
nodes, still, some kind of index will help a lot.

Mikael

Den 2019-11-21 kl. 08:49, skrev c c:
> HI,
>  We have a table with about 30 million records and 15
fields. We
> need implement function that user can filter record by arbitrary 12
> fields( one,two, three...of them) with very low latency. It's
> difficult to create index. We think ignite is a grid memory
cache and
> test it with 4 million records(one node) without creating index. It
> took about 5 seconds to find a record match one field filter
> condition. We have tested just travel a java List(10 million
elements)
> with 3 filter condition. It took about 0.1 second. We just want to
> know whether ignite suit this use case? Thanks very much.
>



Re: Does ignite suite for large data search without index?

2019-11-21 Thread Mikael

Hi!

The comparison is not of much use; when you talk about Ignite, it's not 
just searching a list, there is serialization/deserialization and other 
things to consider that will make it slower compared to a simple list 
search. A linear search on an Ignite cache depends on how you store data 
(off heap/on heap, in ram/partially on disk, type of serialization and 
so on).


If you cannot keep all data in ram you are going to need some index to 
do a fast lookup, there is no way around it.


If you can have all the data in ram, why do you need Ignite ? do you 
have some other requirements for it that Ignite gives you ? otherwise it 
might be simpler to just use a list in ram and go with that ?
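For reference, the plain-list approach could look something like this; the Member record and its fields are hypothetical, loosely modeled on the columns mentioned in this thread:

```java
import java.util.List;
import java.util.function.Predicate;

public class LinearFilter {
    // Hypothetical record standing in for one row of the 15-column table.
    record Member(String name, String location, int age, String hobby) {}

    // Linear scan with whatever combination of field predicates the user picked.
    static List<Member> filter(List<Member> data, Predicate<Member> p) {
        return data.stream().filter(p).toList();
    }

    public static void main(String[] args) {
        List<Member> members = List.of(
                new Member("alice", "NY", 30, "chess"),
                new Member("bob",   "LA", 25, "chess"),
                new Member("carol", "NY", 25, "golf"));

        // Compose only the filters that were actually selected.
        Predicate<Member> p = ((Predicate<Member>) m -> "chess".equals(m.hobby()))
                .and(m -> "NY".equals(m.location()));

        System.out.println(filter(members, p)); // only alice matches
    }
}
```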


Is memory a limitation (cluster or single node ?) ? if not, could you 
explain why is it difficult to create an index on the data ?


Could you explain what type of data it is ? maybe it is possible to 
arrange the data in some other way to improve everything


Did you test with a single node or a cluster of nodes ? with more nodes 
you can improve performance as any search can be split up between the 
nodes, still, some kind of index will help a lot.


Mikael

Den 2019-11-21 kl. 08:49, skrev c c:

HI,
 We have a table with about 30 million records and 15 fields. We 
need to implement a function so that users can filter records by any 
combination of 12 fields (one, two, three... of them) with very low 
latency. It's difficult to create an index. We think Ignite is a grid 
memory cache and tested it with 4 million records (one node) without 
creating an index. It took about 5 seconds to find a record matching a 
one-field filter condition. We have also tested just traversing a Java 
List (10 million elements) with 3 filter conditions; it took about 0.1 
second. We just want to know whether Ignite suits this use case? Thanks 
very much.




Streaming exception

2019-11-20 Thread Mikael

Hi!

When I get timeout exceptions on the striping threads (like below) when 
streaming data, what is the best way around it ? Should I increase the 
thread pool size? I would guess the reason is that the HD is not that 
fast and both WAL and storage are on the same drive (it's a persistent 
cache), but I would like some kind of setup that does not have to be 
tuned all the time to work without exceptions even if persistent storage 
is not so fast. I do use:

[data streamer configuration stripped by the list archive]

So the question is what to modify that would help most: more threads, a 
bigger checkpointPageBufferSize (128MB on a 2GB data region) or 
something else ? 11 seconds is a long time, so increasing timeouts does 
not sound like a good idea ?
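For context, the knobs being discussed live on the data region configuration; a hedged sketch (the region name is made up, the sizes are the ones mentioned above):

```xml
<!-- Illustrative fragment only: region name is hypothetical,
     sizes match the 2GB region / 128MB checkpoint buffer above. -->
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
  <property name="name" value="persistent-region"/>
  <property name="persistenceEnabled" value="true"/>
  <property name="maxSize" value="#{2L * 1024 * 1024 * 1024}"/>
  <property name="checkpointPageBufferSize" value="#{128L * 1024 * 1024}"/>
</bean>
```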


[2019-11-20T21:36:05,471][ERROR][tcp-disco-msg-worker-#2][G] Blocked 
system-critical thread has been detected. This can lead to cluster-wide 
undefined behaviour [threadName=data-streamer-stripe-0, blockedFor=11s]
[2019-11-20T21:36:05,471][ERROR][tcp-disco-msg-worker-#2][] Critical 
system error detected. Will be handled accordingly to configured handler 
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, 
super=AbstractFailureHandler 
[ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, 
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
[type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker 
[name=data-streamer-stripe-0, igniteInstanceName=null, finished=false, 
heartbeatTs=1574282154412]]]
org.apache.ignite.IgniteException: GridWorker 
[name=data-streamer-stripe-0, igniteInstanceName=null, finished=false, 
heartbeatTs=1574282154412]
    at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831) 
~[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826) 
~[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:233) 
~[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297) 
~[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.lambda$new$0(ServerImpl.java:2663) 
~[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:7181) 
[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2700) 
[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) 
[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7119) 
[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62) 
[ignite-core-2.7.6.jar:2.7.6]
[2019-11-20T21:36:05,810][ERROR][tcp-disco-msg-worker-#2][G] Blocked 
system-critical thread has been detected. This can lead to cluster-wide 
undefined behaviour [threadName=data-streamer-stripe-1, blockedFor=11s]
[2019-11-20T21:36:05,810][ERROR][tcp-disco-msg-worker-#2][] Critical 
system error detected. Will be handled accordingly to configured handler 
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, 
super=AbstractFailureHandler 
[ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, 
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
[type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker 
[name=data-streamer-stripe-1, igniteInstanceName=null, finished=false, 
heartbeatTs=1574282154310]]]
org.apache.ignite.IgniteException: GridWorker 
[name=data-streamer-stripe-1, igniteInstanceName=null, finished=false, 
heartbeatTs=1574282154310]
    at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831) 
~[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826) 
~[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:233) 
~[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297) 
~[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.lambda$new$0(ServerImpl.java:2663) 
~[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:7181) 
[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2700) 
[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) 
[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7119) 
[ignite-core-2.7.6.jar:2.7.6]
    at 
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62) 
[ignite-core-2.7.6.jar:2.7.6]


Mikael




Re: Active tasks in cluster

2019-11-20 Thread Mikael

Hi!

No, you cannot serialize a future object.

Mikael


Den 2019-11-20 kl. 17:59, skrev Prasad Bhalerao:

Thank you for the suggestion. I will try this.

I am thinking to store the task future object in a (replicated)cache 
against a jobId. If the node goes down as described in case (b), I 
will get the task's future object from this  cache using a jobId and 
will invoke the get method on it.


But I am not sure about this approach, whether a future object can be 
serialized and send it over the wire to another node and deserialize 
it and then invoke the get API on it.


I will try to implement it tomorrow.

Thanks,
Prasad


On Wed 20 Nov, 2019, 9:06 PM Igor Belyakov <mailto:igor.belyako...@gmail.com> wrote:


Hi Prasad,

I think that you can use compute().broadcast() for collecting
results of activeTaskFutures() from all the nodes:
Collection>> result =
ignite.compute().broadcast(() ->
ignite.compute().activeTaskFutures());

Regards,
Igor Belyakov

On Wed, Nov 20, 2019 at 5:27 PM Prasad Bhalerao
mailto:prasadbhalerao1...@gmail.com>> wrote:

Hi,

I want to get the active tasks running in cluster (tasks
running on all nodes in cluster)

IgniteCompute interface has method "activeTaskFutures" which
returns tasks future for active tasks started on local node.

Is there anyway to get the task futures for all active tasks
of whole cluster?

My use case is as follows.

a) The node submits the affinity task and task runs on some
other node in the cluster and the node which submitted the
task dies.

b) The node submits the affinity task and the task runs on the
same node and the same node dies.

The task consumers running on all ignite grid nodes consumes
tasks from kafka topic. If the node which submitted the
affinity task dies, kafka re-assigns the partitions to another
consumer (running on different node) as part of its partition
rebalance process. In this case my job gets consumed one more
time,

But in this scenario that job might be already running on one
of the node case (a) or already died as mentioned case (b).

So I want to check if the job is still running on one of the
node or it is already died. For this I need the active job
list running on all nodes.

Can someone please advise?

Thanks,
Prasad







Thanks,
Prasad




Re: Ignite data loss

2019-11-15 Thread Mikael

Hi!

So it does not matter which node you restart ? It's always the first one 
that keeps the data ?


Are all 4 nodes part of the baseline topology ?

I pretty much have the same setup but with 3 nodes and have not had any 
problems at all, not sure what it could be ?


If you turn on all logging you will see everything it does at startup and 
can see if there is something weird going on; it's a lot of information but 
it usually gives a good indication if there is a problem. Nothing else 
in the logs ? If the nodes clear everything at startup there should be 
something in the logs.


Mikael

Den 2019-11-16 kl. 04:16, skrev KR Kumar:
Hi Guys - I have a four node cluster with native persistence enabled. 
It's a partitioned cache and sync rebalance is enabled. When I restart 
the cluster, the first node that starts retains its data and all the 
other nodes' data is deleted; all their Ignite data files are turned 
into 4096-byte files. Am I missing something, or is there some 
configuration that I am missing?


Following is the cache configuration:

CacheConfiguration cacheConfig = new CacheConfiguration();

cacheConfig.setCacheMode(CacheMode.PARTITIONED);
cacheConfig.setRebalanceMode(CacheRebalanceMode.SYNC);
//cacheConfig.setRebalanceDelay(30);
cacheConfig.setName("eventCache-" + tenantRunId + "-" + tenantId);
cacheConfig.setBackups(1);
cacheConfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheConfig.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

IgniteCache cache = IgniteContextWrapper.getInstance().getEngine()
        .getOrCreateCache(cacheConfig);


Here is the configuration of ignite


[Spring XML configuration stripped by the list archive]
Any quick pointers ??

Thanx and Regards,
KR Kumar


Re: query if queue exist and get it like getOrCreateCache

2019-11-14 Thread Mikael

Hi!

If there is a queue already created with the name "abc" it will just 
return that queue; if it cannot find it and you have included a 
configuration it will create it.


Mikael

Den 2019-11-14 kl. 13:30, skrev Narsi Reddy Nallamilli:

Hi Mikael,

Do you mean the 4th statement will not create a new queue, and ig1 
would point to the same queue as ig?

CollectionConfiguration colCfg = new CollectionConfiguration();
colCfg.setCollocated(true);
IgniteQueue ig = ignite.queue("abc", 0, colCfg);
IgniteQueue ig1 = ignite.queue("abc", 0, colCfg);


On Thu, Nov 14, 2019 at 5:42 PM Mikael <mailto:mikael-arons...@telia.com>> wrote:


What is wrong with Ignite.queue() ?

queue(String name, int cap, CollectionConfiguration cfg)

(javadoc:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/Ignite.html)

Will get a named queue from cache and create one if it has not
been created yet and `cfg` is not `null`.

Mikael

Den 2019-11-14 kl. 13:02, skrev Narsi Reddy Nallamilli:

Hi!

I am looking for some method just like getOrCreateCache available
for queue. Any help!




Re: query if queue exist and get it like getOrCreateCache

2019-11-14 Thread Mikael

What is wrong with Ignite.queue() ?

queue(String name, int cap, CollectionConfiguration cfg)

(javadoc:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/Ignite.html)

Will get a named queue from cache and create one if it has not been 
created yet and `cfg` is not `null`.


Mikael

Den 2019-11-14 kl. 13:02, skrev Narsi Reddy Nallamilli:

Hi!

I am looking for some method just like getOrCreateCache available for 
queue. Any help!


Re: Question about memory when uploading CSV using .NET DataStreamer

2019-11-14 Thread Mikael

Hi!

If each row is stored as an entry in the cache you can expect an 
overhead of around 200 bytes per entry, so about 200MB just for the 
actual entries (1M), not counting your data (more if you have any index).


You can control the streamer: how much data it buffers and when it should be 
flushed. I have no idea how this works on the .NET client though, so 
maybe something there; you could try to manually call flush on the 
streamer at intervals (this is not needed, but just to see if it makes 
any difference). I use a lot of streamers (from Java) and have never had 
any problems with them, so maybe it is something on the .NET side.


Mikael

Den 2019-11-14 kl. 12:14, skrev Pavel Tupitsyn:

Sounds nasty, can you share a reproducer please?

On Thu, Nov 14, 2019 at 10:12 AM camer314 
<mailto:cameron.mur...@towerswatson.com>> wrote:


I have a large CSV file (50 million rows) that i wish to upload to
a cache. I
am using .NET and a DataStreamer from my application which is
designated as
a client only node.

What i dont understand is i quickly run out of memory on my C#
streaming
(client) application while my data node (an instance of
Apache.Ignite.exe)
slowly increases RAM usage but not at the rate as my client app does.

So it would seem that either (A) my client IS actually being used
to cache
data or (B) there is a memory leak where data that has been sent
to the
cache is not released.

As for figures, Apache.Ignite.exe when first started uses 165Mb. After
loading in 1 million records and letting it all settle down,
Apache.Ignite.exe now sits at 450Mb while my client app (the one
streaming)
sits at 1.5Gb.

The total size of the input file is 5Gb so 1 million records
should really
only be about 100Mb so i dont know how my client even gets to
1.5Gb to begin
with. If i comment out the AddData() then my client never gets
past 200Mb so
its certainly something happening in the cache.

Is this expected behaviour? If so then i dont know how to import
huge CSV
files without memory issues on the streaming machine.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Ignite.Net Read through approach - error : could not create .Net cachestore

2019-11-13 Thread Mikael

Hi!

The client should not have to pass any cache configuration; once the 
cache is created on the server, you should just be able to get the 
cache. I do not see any reason that should not work, but I have never 
used .NET so it is possible there is some hiccup there that I am not seeing.


You do not have any other Ignite servers running on the same subnet or 
something ? Verify that the client connects to the correct server (sounds 
silly but I have managed to mess that up a number of times ;).


Is it just that cache ? could you create another cache as simple as 
possible (no cache store) and see if it will find that one ?


It sounds like it cannot create the actual cache store, but in that case 
I would think you would get the exception on the server and not on the 
client; is there any more useful information in the exception ?


Mikael


Den 2019-11-14 kl. 02:41, skrev Sudhir Patil:

Hi All,

Server node is started & cache is created with configuration like
var cacheCfg = new CacheConfiguration
 {
 Name = "cars",
 CacheStoreFactory = new AdoNetCacheStoreFactory(),
 KeepBinaryInStore = true,
 ReadThrough = true,
 WriteThrough = true
 };

 ICache cars = ignite.GetOrCreateCache(cacheCfg).WithKeepBinary();
Now another node with clientmode = true is started and when I access same cache 
like
ICache carsFromCache = igniteClient.GetCache().WithKeepBinary();
This line is throwing IgniteException with message - could not create 
.Net cachestore
Question is - while accessing cache like above from client node do I 
also need to pass cachestore configuration like below


var cacheCfg = new CacheConfiguration
 {
 Name = "cars",
 CacheStoreFactory = new AdoNetCacheStoreFactory(),
 KeepBinaryInStore = true,
 ReadThrough = true,
 WriteThrough = true
 };
ICache carsFromCache = igniteClient.GetCache(cacheCfg).WithKeepBinary();

Regards,
Sudhir

On Wednesday, November 13, 2019, Sudhir Patil <mailto:patilsudhi...@gmail.com>> wrote:


Hi All,

I am really struggling with this exception / error message.

I am using same configurations on client & server node but still
it is giving same error / exception - could not create .Net
CacheStore.

Can someone please help / provide what is issue here??

Regards,
Sudhir

On Wednesday, November 13, 2019, Sudhir Patil
mailto:patilsudhi...@gmail.com>> wrote:

Alexandr,

Can you share the code u used + configuration ?

I am not able to send snippets as  i have to type in and can't
send those from my work.

On Wednesday, November 13, 2019, Alexandr Shapkin
mailto:lexw...@gmail.com>> wrote:

I configured 1 server node with :

Name = "cars",

CacheStoreFactory = new AdoNetCacheStoreFactory(),

KeepBinaryInStore = true,

ReadThrough = true,

WriteThrough = true

The second one with ClientMode = true just accesses the
cache by its name and performs all operations.

*From: *Sudhir Patil <mailto:patilsudhi...@gmail.com>
*Sent: *Wednesday, November 13, 2019 4:56 PM
*To: *user@ignite.apache.org <mailto:user@ignite.apache.org>
*Subject: *Re: Ignite.Net Read through approach - error :
could not create .Net cachestore

Alexandr,

Ok. But what is type of those 2 nodes ???

On Wednesday, November 13, 2019, Alexandr Shapkin
mailto:lexw...@gmail.com>> wrote:

Sudhir,

> Are there any samples for ReadThrough operation done
from normal ignite client node?

I tried the AspNetCachestore example with two nodes
and it worked well for me.

> I can not share snippet now but I will try to type
it later...:(

That’s ok. Feel free to add details whenever you get
ready.

*From: *Sudhir Patil <mailto:patilsudhi...@gmail.com>
*Sent: *Wednesday, November 13, 2019 4:30 PM
*To: *user@ignite.apache.org
<mailto:user@ignite.apache.org>
*Subject: *Re: Ignite.Net Read through approach -
error : could not create .Net cachestore

Alexandr,

Thanks for details.

I can not share snippet now but I will try to type it
later...:(

I am getting cacheexception with message as -
could not create .Net CacheStore.

From ignite samples i

Wait for streamer to "finish" ?

2019-11-12 Thread Mikael

Hi!

If I use flush() on a streamer it looks like it is waiting for all tags 
to be written, but if I call size() on the cache after flush is complete 
it is still zero (primary and backup); it takes some time for the entries 
to show up even on a single node. Is there a way to wait for the streamer 
to finish so that the entries are actually in the cache before I continue ?


I just realized I have not checked if the entries are actually there, is 
it possible that the entries are there and it's just not the size that 
is reported correct yet ?


Mikael





Re:

2019-11-01 Thread Mikael
That is up to you, you can use anything you want; the most obvious 
choice would be the built-in user attributes feature in the Ignite 
configuration:

[XML example stripped by the list archive]

You will find a lot of information under the "Affinity Function" and 
"Affinity Key Mapper" sections here:


https://apacheignite.readme.io/docs/affinity-collocation
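Since the archive stripped the XML above, here is a minimal hypothetical sketch of the user attributes approach (the attribute name and value are illustrative):

```xml
<!-- Illustrative only: tag a node with a custom attribute, which a node
     filter or custom affinity function can then inspect. -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="userAttributes">
    <map>
      <entry key="node.group" value="big-memory"/>
    </map>
  </property>
</bean>
```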

Did you consider the idea of using multiple caches ? that would be much 
easier to implement if it is a possible solution for you ?


Mikael


Den 2019-11-01 kl. 03:01, skrev BorisBelozerov:

How can I choose node? By IP, MAC, or other criteria??
Thank you!!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re:

2019-10-31 Thread Mikael

Hi!

You can do almost anything messing around with affinity keys and the 
AffinityKeyMapper API, but if you can I would suggest you use two 
different caches one is used on the "low" memory nodes and the other on 
the "lots of memory" nodes, it would be easier to work with, it's easy 
to mess up the affinity key mapping but it can be done.


You can find some ideas here:

http://apache-ignite-users.70518.x6.nabble.com/How-to-use-Affinity-Function-to-Map-a-set-of-Keys-to-a-particular-node-in-a-cluster-td21995.html

Mikael


Den 2019-10-31 kl. 09:27, skrev BorisBelozerov:

I don't understand, can you explain for me?? Thank you!!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: How does Apache Ignite distribute???

2019-10-31 Thread Mikael

Hi!

You have a good description at:

https://apacheignite.readme.io/docs/affinity-collocation

Long story short, it uses hash codes on the keys to control distribution 
of the entries; affinity keys let you control what is used to 
calculate the hash codes. You can for example have a string and an 
integer as a key where only the integer is used as the affinity key, so 
all keys with the same integer value (and different strings) will end up 
on the same node.
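The idea can be pictured with a toy sketch; this is only an illustration (real Ignite uses a rendezvous affinity function over partitions, and the class and key names here are made up):

```java
public class AffinityDemo {
    // Composite key where only the integer id drives the hash, mimicking an
    // affinity key: same id => same partition, whatever the string part is.
    record Key(String name, int id) {
        @Override public int hashCode() { return Integer.hashCode(id); }
    }

    // Toy hash-modulo partition mapping, not Ignite's actual function.
    static int partition(Key key, int parts) {
        return Math.abs(key.hashCode() % parts);
    }

    public static void main(String[] args) {
        int p1 = partition(new Key("alice", 42), 1024);
        int p2 = partition(new Key("bob",   42), 1024);
        System.out.println(p1 == p2); // prints true: same id, same partition
    }
}
```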


Mikael

Den 2019-10-31 kl. 08:42, skrev BorisBelozerov:

Hello, I want to ask a question:
Apache Ignite distribute data equal each nodes, and use Affinity
How does Apache Ignite distribute and how does Affinity work??
Thank you!!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Streamer question

2019-10-30 Thread Mikael

Hi!

If a data streamer has allowOverwrite set and I write the same key 
multiple times to the streamer, is it clever enough to notice this and 
just replace the key with the last value before it flushes the data, or 
will it write the same key multiple times ?
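Independent of what the streamer actually does internally, the last-write-wins behaviour being asked about can be sketched with a plain map buffer (illustration only, not Ignite's implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StreamerBufferSketch {
    // Last-write-wins staging buffer: re-adding a key before flush just
    // replaces the pending value, so only one write per key goes out.
    private final Map<String, String> buffer = new LinkedHashMap<>();

    public void add(String key, String value) { buffer.put(key, value); }

    public Map<String, String> flush() {
        Map<String, String> out = new LinkedHashMap<>(buffer);
        buffer.clear();
        return out;
    }

    public static void main(String[] args) {
        StreamerBufferSketch s = new StreamerBufferSketch();
        s.add("k1", "v1");
        s.add("k1", "v2");             // overwrites the pending v1
        System.out.println(s.flush()); // prints {k1=v2}
    }
}
```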


Mikael




Re: How to control the distribution??

2019-10-24 Thread Mikael

Hi!

You can always use a node filter to control where the cache lives. I am not 
sure there is a way to control the distribution itself, though; I guess it might 
be possible to use affinity keys and some kind of custom affinity 
function that distributes the keys depending on some custom 
criteria, node attributes for example.


Mikael


Den 2019-10-25 kl. 04:47, skrev BorisBelozerov:

Normally, Apache Ignite distribute data equally to nodes
How can I control this distribution?
I want to distribute more data to some nodes, and less data to other nodes
I also want to choose node to store cache
Thank you!!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: "Failed to deserialize object" in Client Mode

2019-10-22 Thread Mikael

Did you check everything in the error message ?

"Caused by: java.io.IOException: Unexpected error occurred during
unmarshalling of an instance of the class:
org.apache.ignite.internal.processors.cache.CacheMetricsSnapshot. Check that
all nodes are running the same version of Ignite and that all nodes have
GridOptimizedMarshaller configured with identical optimized classes lists,
if any (see setClassNames and setClassNamesPath methods). If your serialized
classes implement java.io.Externalizable interface, verify that
serialization logic is correct."

Mikael

Den 2019-10-23 kl. 08:44, skrev j_recuerda:


Hello, I am trying to modify the baseline topology by using
setBaselineTopology
<https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCluster.html#setBaselineTopology-java.util.Collection->
. When I run it from a Server Node it works but when I run it form a Cient
node I am getting a deserialization error. What is the difference between
running the command in a server node and running it on a client node?

Thank you!!

This is the error printed by other node in the cluster:

08:48:45.649 [org.apache.ignite.internal.binary.BinaryContext] Failed to
deserialize object
[typeName=o.a.i.i.processors.closure.GridClosureProcessor$C4]
org.apache.ignite.binary.BinaryObjectException: Failed to unmarshal object
with optimized marshaller
at
org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1765)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1964)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
at
org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1778)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.readObject(BinaryReaderExImpl.java:1331)
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C4.readBinary(GridClosureProcessor.java:1959)
at
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:865)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
at
org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:313)
at
org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:102)
at
org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
at
org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10140)
at
org.apache.ignite.internal.processors.job.GridJobWorker.initialize(GridJobWorker.java:440)
at
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1119)
at
org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1923)
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)
at
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
at
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.ignite.IgniteCheckedException: Failed to deserialize
object with given class loader:
[clsLdr=sun.misc.Launcher$AppClassLoader@7852e922, err=Failed to deserialize
object
[typeName=org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest]]
at
org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller.unmarshal0(OptimizedMarshaller.java:237)
at
org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
at
org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1762)
... 22 common frames omitted
Caused by: java.io.IOException: Failed to deserialize object
[typeName=org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest]
at
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:350)
at
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:198)
at java.io.ObjectInputStream.readObject(Object

Re: Performance Question

2019-10-20 Thread Mikael

Hi!

Well, it depends on what you are doing I guess ;)

The example uses put(), which is the slowest way of putting anything in 
the cache compared to, for example, streamers. And if you run your 
code in a big loop that does put 2M times, it will be slower with two 
nodes: half of the entries have to be sent to the other node, so 1M 
entries go over the network, whereas with one node you don't need to 
send anything over the network.


You say 10 sec is slow; I'm not sure what you compare it with, but to me 
that sounds like a good time. Ignite will by default put your objects off 
heap, so the objects have to be serialized and moved off heap, and there 
are a lot of things going on behind the scenes that take time in case 
you compare with a HashMap or something.


Ignite likes to do things in parallel; having one thread doing all the 
puts is the worst way to put a lot of entries in a cache. Try to create 
a streamer, run your example with two nodes on that and see what you 
get. I am not sure what you intend to use it for, but you can feed Ignite 
a huge number of entries per second; you just won't get there pushing all 
of them through one single thread on one node.
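The streamer approach above can be sketched like this (a minimal sketch; the cache name "myCache" and the embedded node start are assumptions, not part of the original example):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start(); // or reuse an existing node handle
        ignite.getOrCreateCache("myCache"); // hypothetical cache name

        // The streamer batches entries and routes each batch to the node
        // that owns it, instead of one round trip per put().
        try (IgniteDataStreamer<Integer, String> streamer =
                 ignite.dataStreamer("myCache")) {
            for (int i = 0; i < 2_000_000; i++)
                streamer.addData(i, "value-" + i);
        } // close() flushes any remaining buffered entries
    }
}
```

Requires ignite-core on the classpath; with several server nodes running, the streamer loads them in parallel rather than funneling everything through one thread.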


How did you configure the cache ? number of backups ? persistence ? 
partitioned/replicated ?


Mikael


Den 2019-10-20 kl. 19:02, skrev Simin Eftekhari:

Hello Everyone,

I am new to apache ignite. I'd appreciate your help with the following 
question.


I modified the CacheApiExample, to insert 2,000,000 entries into the 
cache.


https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheApiExample.java

If I run the program as is, inserting all the entries takes less than 10 
seconds (which I think is still too long). If I start an additional 
ignite node (on the same vm), the inserts take over 3 minutes. This is 
a huge discrepancy. I'd have thought that an additional instance would 
actually speed up the inserts.


Can someone please explain what is going on?

thanks

SK


Re: LifeCycleBean problem/question

2019-10-09 Thread Mikael

Hi!

I figured it out: when I test I use CTRL+C to stop the node, and that 
makes the log4j2 built-in shutdown hook execute; it looks like it is 
executed in parallel with the JVM shutdown hooks and before Ignite is 
finished with its shutdown. So I disabled log4j2's shutdown hook 
(-Dlog4j.shutdownHookEnabled=false) and then do a LogManager.shutdown() 
at the end, once Ignite's AFTER_NODE_STOP event is complete, and now it 
works just fine.
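A minimal sketch of that workaround (assuming log4j2's own shutdown hook is disabled with -Dlog4j.shutdownHookEnabled=false on the JVM command line; the bean class name is made up):

```java
import org.apache.ignite.IgniteException;
import org.apache.ignite.lifecycle.LifecycleBean;
import org.apache.ignite.lifecycle.LifecycleEventType;
import org.apache.logging.log4j.LogManager;

public class LoggingLifecycleBean implements LifecycleBean {
    @Override public void onLifecycleEvent(LifecycleEventType evt) throws IgniteException {
        if (evt == LifecycleEventType.AFTER_NODE_STOP) {
            // ... any remaining application cleanup that still wants to log ...

            // Ignite is fully stopped at this point, so it is now safe
            // to tear the logging system down ourselves.
            LogManager.shutdown();
        }
    }
}
```

The bean is registered via IgniteConfiguration.setLifecycleBeans(), so it runs inside Ignite's own shutdown sequence rather than racing it.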


Mikael


Den 2019-10-09 kl. 10:54, skrev Ilya Kasnacheev:

Hello!

I don't think this should happen? Can you add breakpoint to this 
method (LogManager.shutdown), share stack trace with us once it gets hit?


Regards,
--
Ilya Kasnacheev


Tue, Oct 8, 2019 at 23:44, Mikael <mikael-arons...@telia.com>:


Hi!

I gave up on the JUL logging so I went back to log4j 2, this works
better but there is still very weird behavior.

I run my code as a LifeCycleBean and all works fine, but the moment
the LifeCycleBean::BEFORE_NODE_STOP event is called, the log4j 2
logger is shut down. If I turn on log4j2.debug=true I see:

"DEBUG StatusLogger Stopping LoggerContext[name=,
org.apache.logging.log4j.core.LoggerContext@543588e6]" just when the
BEFORE_NODE_STOP event is executed, and any logging after that is
lost. Looking at the source code, it looks like the shutdown hook
log4j 2 installs is executed at that time, but how is that possible?
I would assume I am doing something silly, as I have not heard
anyone else complain about this; any idea what it could be?

This is all standard log4j2 logging out of the box. I do have a
Cassandra client running with an slf4j->log4j2 setup in the same
application, but I do not think that has anything to do with it,
and I shut down the Cassandra client in the AFTER_NODE_STOP event
later on.

Mikael






LifeCycleBean problem/question

2019-10-08 Thread Mikael

Hi!

I gave up on the JUL logging so I went back to log4j 2, this works 
better but there is still very weird behavior.


I run my code as a LifeCycleBean and all works fine, but the moment the 
LifeCycleBean::BEFORE_NODE_STOP event is called, the log4j 2 logger is 
shut down. If I turn on log4j2.debug=true I see:


"DEBUG StatusLogger Stopping LoggerContext[name=, 
org.apache.logging.log4j.core.LoggerContext@543588e6]" just when the 
BEFORE_NODE_STOP event is executed, and any logging after that is lost. 
Looking at the source code, it looks like the shutdown hook log4j 2 
installs is executed at that time, but how is that possible? I would 
assume I am doing something silly, as I have not heard anyone else 
complain about this; any idea what it could be?


This is all standard log4j2 logging out of the box. I do have a Cassandra 
client running with an slf4j->log4j2 setup in the same application, but I 
do not think that has anything to do with it, and I shut down the 
Cassandra client in the AFTER_NODE_STOP event later on.


Mikael






Re: JUL logging problem

2019-10-07 Thread Mikael

Hi again!

It is a bit weird: I use the standard Ignite property file and it says:

handlers=java.util.logging.ConsoleHandler, 
org.apache.ignite.logger.java.JavaLoggerFileHandler


I cannot see any reason why there should not be two handlers (it does 
use the property file; I tried to change the name of the log file and 
it started to log to that file instead), but here comes the weird part:


I create my logger with Logger log = Logger.getLogger("..");

Then I do logger.getParent().getHandlers() and the returned list only 
has one handler, the JavaLoggerFileHandler; the console handler is 
gone. It looks like Ignite removes the console handler when 
IGNITE_QUIET=true, or could I be doing something else wrong ?
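A small stdlib-only snippet to reproduce that check, walking up to the root logger and listing its handlers (the logger name "my.app" is made up; under the JDK's default logging.properties this prints a single ConsoleHandler):

```java
import java.util.logging.Handler;
import java.util.logging.Logger;

public class HandlerCheck {
    public static void main(String[] args) {
        Logger log = Logger.getLogger("my.app"); // hypothetical logger name

        // Walk up the parent chain to the root logger ("").
        Logger root = log;
        while (root.getParent() != null)
            root = root.getParent();

        // Print every handler attached to the root logger.
        for (Handler h : root.getHandlers())
            System.out.println(h.getClass().getName());
    }
}
```

Running this after Ignite has started (with IGNITE_QUIET=true) would show whether the ConsoleHandler from the property file is still attached.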


Mikael


Den 2019-10-07 kl. 12:02, skrev Ilya Kasnacheev:

Hello!

When Ignite is in verbose mode, it adds IGNITE_CONSOLE_APPENDER 
implicitly.
You probably don't have any appenders to console set up, hence you 
can't see anything.


Regards,
--
Ilya Kasnacheev


Sat, Oct 5, 2019 at 13:49, Mikael <mikael-arons...@telia.com>:


Hi!

I must have made some silly mistake but I can't figure it out.

All standard Ignite logging using JUL, standard
java.util.logging.properties.

I create a logger in my own code Logger log = Logger.getLogger(
"..");

When I use that logger even with warning or severe I never get any
output to the console, everything is logged ok to the files but
nothing
on the console, but if I set IGNITE_QUIET=false, then it starts
logging
to console also, but IGNITE_QUIET=true should only disable INFO and
DEBUG as far as I understand.

Any idea what I could have messed up ?

Mikael




JUL logging problem

2019-10-05 Thread Mikael

Hi!

I must have made some silly mistake but I can't figure it out.

All standard Ignite logging using JUL, standard 
java.util.logging.properties.


I create a logger in my own code Logger log = Logger.getLogger( "..");

When I use that logger even with warning or severe I never get any 
output to the console, everything is logged ok to the files but nothing 
on the console, but if I set IGNITE_QUIET=false, then it starts logging 
to console also, but IGNITE_QUIET=true should only disable INFO and 
DEBUG as far as I understand.


Any idea what I could have messed up ?

Mikael




Re: Thick client multiple clusters

2019-09-17 Thread Mikael
Can't you just start multiple clients in the same JVM, one for each 
Ignite cluster ? I assume you will have your own code running there anyway.


Mikael
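A sketch of that idea, two client nodes in one JVM, one per cluster (the instance names and discovery addresses are assumptions for illustration):

```java
import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class TwoClusters {
    static IgniteConfiguration clientCfg(String name, String addr) {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList(addr));

        return new IgniteConfiguration()
            .setIgniteInstanceName(name) // must be unique per node in one JVM
            .setClientMode(true)
            .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
    }

    public static void main(String[] args) {
        // One client node per cluster, each pointed at that cluster's
        // discovery address range.
        Ignite clusterA = Ignition.start(clientCfg("clientA", "host-a:47500..47509"));
        Ignite clusterB = Ignition.start(clientCfg("clientB", "host-b:47500..47509"));
    }
}
```

Each Ignite handle then sees only its own cluster's caches, so the application code decides which handle to use per request.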


Den 2019-09-17 kl. 13:28, skrev reachtovishal:

Hi,
We are trying to see if Apache Ignite can be used in our org. As part 
of this, we wanted to create multiple clusters, and each of these 
clusters could have different caches. Now we need client 
connectivity, and it looks like the thick client is the option that 
provides most of the capabilities we need, like eventing, transactions etc.


How do we get a thick client to join multiple clusters? The reason we 
want this is that our client would want to access data from 
different caches hosted in different clusters.


Is there a way that this can be achieved ?




Sent from my Samsung Galaxy smartphone.


Iterator for keys only ?

2019-09-11 Thread Mikael

Hi!

What is the best way to iterate over only the keys of a cache ? I 
don't want the values, because these are long text strings and would 
slow down the iterator a lot.


Pretty much the Ignite way of doing map.keySet().iterator();

Mikael
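One possible approach (a sketch, not an authoritative answer): a ScanQuery with a transformer, so the transformer runs on the remote nodes and only the keys travel back over the network. The cache name "texts" and embedded node start are assumptions:

```java
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class KeysOnly {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("texts");

        // The transformer extracts the key on the node that owns the entry,
        // so the long text values are never serialized to the caller.
        try (QueryCursor<Integer> keys =
                 cache.query(new ScanQuery<Integer, String>(), Cache.Entry::getKey)) {
            for (Integer k : keys)
                System.out.println(k);
        }
    }
}
```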




Re: Ignite Performance - Adding new node is not improving SQL query performance

2019-09-09 Thread Mikael

Hi!

Well, as I said do not take my word for 100% truth, I could be wrong.

Yes, nodes that are not part of the baseline will still handle 
everything except persisted data, so you can still use them for the 
compute grid, reading/writing KV, ML, services and so on. But in your 
case you are running SQL queries on persisted data, so you have no use 
for the new node unless it is part of the baseline; that is my 
understanding of it.


Hopefully someone with better knowledge than me will step in and give 
you a more detailed answer (and correct me if I am wrong).


Mikael


Den 2019-09-09 kl. 12:05, skrev Muhammed Favas:


Thanks Mikael for the response.

So in that case, is it necessary to add all the new nodes to the 
baseline to make use of the resources efficiently ? The Ignite docs do 
not put it that way; a subset of the nodes in a cluster can be part 
of the baseline.


What I thought was that when I run a query, the data would be loaded 
into the memory of all 5 nodes and use the computing power of all of 
them. But now it seems it does not work like that.


*Regards,*

*Favas*

*From:*Mikael 
*Sent:* Monday, September 9, 2019 3:21 PM
*To:* user@ignite.apache.org
*Subject:* Re: Ignite Performance - Adding new node is not improving 
SQL query performance


Hi!

If the new node is not part of the baseline topology it will not have 
any persisted data stored so any SQL query will not be of any use on 
the node as it does not have any of the data (at least that is how I 
understand it, I could be wrong here).


If so you would need to add the new node to the baseline topology to 
see any performance improvement, and of course wait for a complete 
rebalance of the data.


From docs:

"The same tools and APIs can be used to adjust the baseline topology 
throughout the cluster lifetime. It's required if you decide to scale 
out or scale in an existing topology by setting more or fewer nodes 
that will store the data. The sections below show how to use the APIs 
and tools."


Mikael

Den 2019-09-09 kl. 11:31, skrev Muhammed Favas:

Hi,

I have an ignite cluster with 4 nodes (each with 8 cores, 32 GB RAM
and 30 GB disk), with native persistence enabled and added to the
baseline topology. There are two SQL tables created and loaded with
120 GB of data.

One of my test SQL queries takes 8 seconds with this setup.
Currently I am trying various options to reduce the execution time
of the query.

For that, I have added one more node (with the same configuration) to
the cluster (non-baselined) with the impression that it would
reduce the execution time, but it didn't. When I checked the CPU
utilization of each node, the CPUs of all 4 previously added nodes
are fully utilized, but the CPU of the newly added node is not
used much.

Can you please help me figure out why that is, and how I can
make sure the CPUs of all nodes are utilized when I run a distributed
query, so that my query runs faster?

Also, what additional things do I need to do to make my query
run faster?

*Regards,*

*Favas *



Re: Ignite Performance - Adding new node is not improving SQL query performance

2019-09-09 Thread Mikael

Hi!

If the new node is not part of the baseline topology it will not have 
any persisted data stored so any SQL query will not be of any use on the 
node as it does not have any of the data (at least that is how I 
understand it, I could be wrong here).


If so you would need to add the new node to the baseline topology to see 
any performance improvement, and of course wait for a complete rebalance 
of the data.


From docs:

"The same tools and APIs can be used to adjust the baseline topology 
throughout the cluster lifetime. It's required if you decide to scale 
out or scale in an existing topology by setting more or fewer nodes that 
will store the data. The sections below show how to use the APIs and tools."
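A sketch of extending the baseline from code; control.sh can do the same (e.g. `control.sh --baseline add <consistentId>`). The embedded node start is an assumption:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ExtendBaseline {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start(); // assumes this node joins the running cluster

        // Set the baseline to the current topology version, i.e. include
        // every server node that is alive right now. Rebalancing starts
        // afterwards; query performance only improves once it finishes.
        ignite.cluster().setBaselineTopology(ignite.cluster().topologyVersion());
    }
}
```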


Mikael

Den 2019-09-09 kl. 11:31, skrev Muhammed Favas:


Hi,

I have an ignite cluster with 4 nodes (each with 8 cores, 32 GB RAM and 
30 GB disk), with native persistence enabled and added to the baseline 
topology. There are two SQL tables created and loaded with 120 GB of data.


One of my test SQL queries takes 8 seconds with this setup. 
Currently I am trying various options to reduce the execution time of 
the query.


For that, I have added one more node (with the same configuration) to the 
cluster (non-baselined) with the impression that it would reduce the 
execution time, but it didn't. When I checked the CPU utilization of 
each node, the CPUs of all 4 previously added nodes are fully utilized, 
but the CPU of the newly added node is not used much.


Can you please help me figure out why that is, and how I can make 
sure the CPUs of all nodes are utilized when I run a distributed query, 
so that my query runs faster?


Also, what additional things do I need to do to make my query 
run faster?


*Regards,*

*Favas*



Re: ML stable and performance

2019-09-06 Thread Mikael

Hi!

I have never used it myself, but it's been there for a long time and I 
would expect it to be stable, and yes, it will run distributed. I can't 
say anything about performance as I have never used it.


You will find a lot of more information at:

https://apacheignite.readme.io/docs/machine-learning

Mikael


Den 2019-09-06 kl. 11:50, skrev David Williams:



I am evaluating ML frameworks for the Java platform. I know Ignite has an
ML package, but I would like to know about its stability and performance
in production. Can Ignite ML code run in a distributed way?

Apart from its own ML package, which ML packages are the best options for
Ignite?


Re: What do master node and worker node mean?

2019-08-28 Thread Mikael

The master node is the node the class originates from.

Den 2019-08-28 kl. 11:51, skrev 李玉珏:



hi,
In the following documents:
https://apacheignite.readme.io/docs/deployment-modes#overview
What do master node and worker node mean?


Log4j2 problem

2019-07-17 Thread Mikael

Hi!

I have set up the log4j2 logger and everything works fine until I shut 
down the application (2.7.5). I run it with ignite.bat and I have my 
own code in a LifeCycleBean, with some code in beforeNodeStop() and 
some code in afterNodeStop(), and it looks like the log4j2 logger stops 
working between beforeNodeStop and afterNodeStop. Does Ignite shut down 
the logger before afterNodeStop is called, or something like that ?


I get all logging information from the code in beforeNodeStop() but any 
logging calls in afterNodeStop do nothing at all.






Re: Ignite authentication without persistence enabled?

2019-06-04 Thread Mikael

Hi!

You only need persistence for the default data region; you can put your 
caches in a different data region without persistence. That's what I do.


Mikael

Like (the XML was garbled in the archive; this is a reconstruction of its
shape, with the elided region names left as "..."):

<property name="dataStorageConfiguration">
  <bean class="org.apache.ignite.configuration.DataStorageConfiguration">

    <!-- Default region: persistence enabled. -->
    <property name="defaultDataRegionConfiguration">
      <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
        <property name="persistenceEnabled" value="true"/>
        <property name="maxSize" value="#{32L * 1024 * 1024}"/>
      </bean>
    </property>

    <!-- Additional regions without persistence. -->
    <property name="dataRegionConfigurations">
      <list>
        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
          <property name="name" value="..."/>
          <property name="maxSize" value="#{128L * 1024 * 1024}"/>
        </bean>
        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
          <property name="name" value="..."/>
          <property name="maxSize" value="#{10L * 1024 * 1024}"/>
        </bean>
        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
          <property name="name" value="..."/>
          <property name="maxSize" value="#{10L * 1024 * 1024}"/>
        </bean>
      </list>
    </property>
  </bean>
</property>

Den 2019-06-04 kl. 10:34, skrev Jeff Jiao:

Hi Igniters,

We want to enable the authentication feature for our Ignite cluster, but
currently, it still requires us to enable Ignite native persistence which is
not suitable for our use case.

Is there a way to enable persistence in IgniteConfiguration but have it
disabled for all the caches inside?
If not, what's the plan to relax this requirement? Will it be in an
upcoming release? We are looking forward to this...


Thanks,
Jeff



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Streamer for updates ?

2019-02-28 Thread Mikael

Hi!

If I have to update a large number of items in a cache (same keys, new 
values every few seconds), I need to have allow-overwrite enabled since 
the keys repeat. Is there any advantage to using a streamer for this, 
or is it better to just collect the entries in a map and use putAll ?


Mikael




User authentication and persistence

2019-02-26 Thread Mikael

Hi!

It says in the docs that I need to have persistence enabled to use user 
authentication. I assume this means that I need to have persistence 
enabled on the defaultDataRegion; can I still have other regions without 
persistence enabled ?


Mikael




Re: After upgrading 2.7 getting Unexpected error occurred during unmarshalling

2019-01-08 Thread Mikael

Hi!

Any chance you might have one node running 2.6 or something like that ?

It looks like it gets a different object that does not match the one 
expected in 2.7.


Mikael

Den 2019-01-08 kl. 12:21, skrev Akash Shinde:
Before submitting the affinity task, Ignite first gets the cached 
affinity function (AffinityInfo) by submitting the cluster-wide task 
"AffinityJob". While retrieving the output of this AffinityJob, Ignite 
deserializes the output, and that is where I get the exception: in the 
TcpDiscoveryNode.readExternal() method, while deserializing the 
CacheMetrics object from the input stream, on the 14th iteration I get 
the following exception. The complete stack trace is given in this mail 
chain.


Caused by: java.io.IOException: Unexpected error occurred during 
unmarshalling of an instance of the class: 
org.apache.ignite.internal.processors.cache.CacheMetricsSnapshot. //


This is working fine on Ignite 2.6 version but giving problem on 2.7.

Is this a bug or am I doing something wrong?

Can someone please help?

On Mon, Jan 7, 2019 at 9:41 PM Akash Shinde <akashshi...@gmail.com> wrote:


Hi,

When execute affinity.partition(key), I am getting following
exception on Ignite  2.7.

Stacktrace:


2019-01-07 21:23:03,093 6699878 [mgmt-#67%springDataNode%] ERROR
o.a.i.i.p.task.GridTaskWorker - Error deserializing job response:
GridJobExecuteResponse
[nodeId=c0c832cb-33b0-4139-b11d-5cafab2fd046,
sesId=4778e982861-31445139-523d-4d44-b071-9ca1eb2d73df,
jobId=5778e982861-31445139-523d-4d44-b071-9ca1eb2d73df,
gridEx=null, isCancelled=false, retry=null]
org.apache.ignite.IgniteCheckedException: Failed to unmarshal
object with optimized marshaller
 at

org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10146)
 at

org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:831)
 at

org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1081)
 at

org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1316)
 at

org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
 at

org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)
 at

org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
 at

org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093)
 at

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.ignite.binary.BinaryObjectException: Failed
to unmarshal object with optimized marshaller
 at

org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1765)
 at

org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1964)
 at

org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
 at

org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:313)
 at

org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:102)
 at

org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
 at

org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10140)
 ... 10 common frames omitted
Caused by: org.apache.ignite.IgniteCheckedException: Failed to
deserialize object with given class loader:
[clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2, err=Failed to
deserialize object
[typeName=org.apache.ignite.internal.util.lang.GridTuple3]]
 at

org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller.unmarshal0(OptimizedMarshaller.java:237)
 at

org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
 at

org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1762)
 ... 16 common frames omitted
Caused by: java.io.IOException: Failed to deserialize object
[typeName=org.apache.ignite.internal.util.lang.GridTuple3]
 at

org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:350)
 at

org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:198)
 at java.io.ObjectInputStream.readObject(ObjectInputStrea

Re: Failing to create index on Ignite table column

2019-01-07 Thread Mikael

Hi!

It does look very strange, because it has a null value for the index 
name. It looks like the code does an "x = map.put(indexName, entry)" 
and if x is not null (index already exists) you get that exception; 
but if the name of the index is null, it's a bit strange that the map 
returns a non-null value. Maybe I am looking at the wrong place in 
the source, though.


Does this happen every time you do it ? Is this code executed just after 
you create the cache ? What version of Ignite ?


Have you checked that you did not get any exception when you created 
the cache ? The only thing I can think of is the binary marshaller 
complaining about your column names: it does not store the names, just 
a hash code, so in theory you could get a name collision, but if that 
were the case I think Ignite would complain when you created the cache.


Mikael

Den 2019-01-07 kl. 08:42, skrev Shravya Nethula:

Hi,

I successfully created an Ignite table named User. Now I want to create 3
indexes on the id, cityId and age columns of the User table. Indexes were
successfully created on the id and cityId columns, but I am getting the
following "duplicate index found" error even though I am trying to create
the index on the age column for the first time:


Generated sql : CREATE INDEX  ON User (id asc)
Successfully created index on id column
Generated sql : CREATE INDEX  ON User (cityId asc)
Successfully created index on cityId column
Generated sql : CREATE INDEX  ON User (age asc)
Failed to create index on age column
Invalid schema state (duplicate index found): null

[12:34:52] (err) DDL operation failureSchemaOperationException [code=0,
msg=Invalid schema state (duplicate index found): null]
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.prepareChangeOnNotStartedCache(GridQueryProcessor.java:1104)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.startSchemaChange(GridQueryProcessor.java:627)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.onSchemaPropose(GridQueryProcessor.java:488)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCustomExchangeTask(GridCacheProcessor.java:378)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.processCustomTask(GridCachePartitionExchangeManager.java:2475)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2620)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2539)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)
javax.cache.CacheException: Invalid schema state (duplicate index found):
null
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:697)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:636)
at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:388)
at
net.aline.cloudedh.base.database.IgniteTable._createIndex(IgniteTable.java:99)
at 
net.aline.cloudedh.base.database.BigTable.createIndex(BigTable.java:234)
at 
net.aline.cloudedh.base.database.BigTable.createIndex(BigTable.java:246)
at
net.aline.cloudedh.base.framework.DACEngine.createIndexOnTable(DACEngine.java:502)
at
net.aline.cloudedh.base.framework.DACOperationsTest.main(DACOperationsTest.java:24)
Caused by: class
org.apache.ignite.internal.processors.query.IgniteSQLException: Invalid
schema state (duplicate index found): null
at
org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor.convert(DdlStatementsProcessor.java:644)
at
org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor.runDdlStatement(DdlStatementsProcessor.java:505)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunPrepared(IgniteH2Indexing.java:2293)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:2209)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2135)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2130)
at
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2707)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2144)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(Igni

Re: ignite questions

2019-01-02 Thread Mikael

Hi!

By default you cannot assign a specific affinity key to a specific node, 
but I think that could be done with a custom affinity function; you can 
do pretty much whatever you want with that. For example, set an 
attribute in the XML file and use it to match a specific affinity key 
value, so a node with attribute x will be assigned all affinity keys 
with value y.


I never tried it but I do not see any reason why it would not work.
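A rough sketch of that idea (untried, as the text says; the attribute name, the reserved partition, and the pinned key value are all assumptions):

```java
import java.util.List;
import java.util.stream.Collectors;
import org.apache.ignite.cache.affinity.AffinityFunctionContext;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.cluster.ClusterNode;

public class PinnedAffinityFunction extends RendezvousAffinityFunction {
    /** Partition reserved for the "big" affinity key value. */
    private static final int PINNED_PART = 0;

    @Override public int partition(Object key) {
        if ("India".equals(key)) // hypothetical affinity key value to pin
            return PINNED_PART;

        int part = super.partition(key);
        return part == PINNED_PART ? PINNED_PART + 1 : part; // keep others out
    }

    @Override public List<List<ClusterNode>> assignPartitions(AffinityFunctionContext ctx) {
        List<List<ClusterNode>> assignment = super.assignPartitions(ctx);

        // Nodes that declare the user attribute "big.node=true"
        // (IgniteConfiguration.setUserAttributes, or the XML equivalent)
        // host the pinned partition.
        List<ClusterNode> bigNodes = ctx.currentTopologySnapshot().stream()
            .filter(n -> "true".equals(n.attribute("big.node")))
            .collect(Collectors.toList());

        if (!bigNodes.isEmpty())
            assignment.set(PINNED_PART, bigNodes);

        return assignment;
    }
}
```

A single reserved partition is the simplest version; a real implementation would probably reserve several partitions so the pinned data can still be spread across more than one big node.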

Mikael


Den 2019-01-02 kl. 17:13, skrev Clay Teahouse:

Thanks Mikael.

I did come across that link before, but I am not sure it addresses my 
concern. I want to see how I need to size my physical VMs based on 
affinity keys. How would I say "for the India affinity key use this 
super-size VM and for the others use the smaller ones", so the data 
doesn't get shuffled around? Maybe there is no way, and I just have to 
wait for Ignite to rebalance the partitions and fit things where they 
should be based on the affinity key.


On Wed, Jan 2, 2019 at 8:32 AM Mikael <mikael-arons...@telia.com> wrote:


You can find some information about capacity planning here:

https://apacheignite.readme.io/docs/capacity-planning

About your India example you can use affinity keys to keep data
together in groups to avoid network traffic.

https://apacheignite.readme.io/docs/affinity-collocation

Mikael

Den 2019-01-02 kl. 14:44, skrev Clay Teahouse:

Thanks Naveen.

-- Cache Groups: When would I start considering cache groups? My
system is growing, and sooner or later I will have to add to
my caches, so I need to know 1) should I start grouping now
(I'd think yes), 2) if not, at what number of caches?
-- Capacity Planning: So there are no guidelines on how to size
the nodes and the physical storage the nodes reside on? How do I
make sure all the related data fits the same VM? It can't be the
case that I have to come up with 100s of super-size VMs just
because I have one instance with a huge set of entries. For
example, if I have millions of entries for India and only a few
for other countries, how do I make sure all the India-related data
fits on the same VM (to avoid the network) and the data for all
the small countries fits on the same VM?
-- Pinning the data to cache: does data pinned to the on-heap
cache never get evicted from memory? I want to see if there is
something similar to Oracle's memory pinning.
-- Read through: How do I know if something is in the cache or on
disk (using native persistence)?
5) Service chaining: Is there an example of service chaining that
you can point me to?

6) How do I implement service pipelining in apache ignite? Would
continuous query be the mechanism? Any examples?

7) Streaming: Are there examples on how to define watermarks,
i.e., input completeness with regard to the event timestamp?

thank you
Clay

On Tue, Jan 1, 2019 at 11:29 PM Naveen <naveen.band...@gmail.com> wrote:

Hello
A couple of things I would like to share from my experience:

1. Cache Groups: at around 100 caches, I do not think we need
to go for cache
groups; as you mentioned, cache groups will have an impact on your
read/writes.
However, changing the partition count to 128 from the default
1024 would improve
your cluster restart time.

2. I doubt Ignite has any settings for this.

3. The only thing I can think of is to keep the data on-heap if
the data size
is not so huge.

4. Read through: with native persistence enabled, a read that goes
to the disk
will load the cache, but such a read is much slower compared
with a read from
RAM; by default Ignite does not pre-load the data. If you want to
avoid this you
can pre-load the data programmatically into memory, which is good
even for SQL
SELECT as well. But with 3rd party persistence, we need
to pre-load the
data to make your reads work for SQL SELECT.

Thanks
Naveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: ignite questions

2019-01-02 Thread Mikael

You can find some information about capacity planning here:

https://apacheignite.readme.io/docs/capacity-planning
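A rough back-of-the-envelope version of the kind of math that page walks through (all numbers below are illustrative assumptions, not figures from this thread):

```java
// Rough capacity estimate: entries * (key + value + per-entry overhead),
// plus ~30% headroom for indexes and fragmentation.
// The overhead and headroom constants are illustrative assumptions.
public class CapacitySketch {
    static long estimateBytes(long entries, int keyBytes, int valueBytes) {
        int perEntryOverhead = 200;            // assumed bookkeeping bytes per entry
        long raw = entries * (keyBytes + valueBytes + perEntryOverhead);
        return (long) (raw * 1.3);             // assumed ~30% extra for indexes etc.
    }

    public static void main(String[] args) {
        // e.g. 10 million entries, 32-byte keys, 1 KB values
        long bytes = estimateBytes(10_000_000L, 32, 1024);
        System.out.println(bytes / (1024 * 1024) + " MB");
    }
}
```

Run such an estimate per affinity group (e.g. per country) and you get a first idea of how much RAM each node needs for the data that will land on it.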

About your India example you can use affinity keys to keep data together 
in groups to avoid network traffic.


https://apacheignite.readme.io/docs/affinity-collocation

Mikael

On 2019-01-02 at 14:44, Clay Teahouse wrote:

Thanks Naveen.

-- Cache Groups: When would I start considering cache groups, if my 
system is growing, and sooner or later I will have to add to my caches 
and I need to know 1) should I start grouping now (I'd think yes), 
2) if not now, then at what number of caches?
-- Capacity Planning: So, there are no guidelines on how to size the 
nodes and the physical storage nodes reside on? How do I make sure all 
the related data fit the same VM? It can't be the case that I have to 
come up with 100s of super size VMs just because I have one instance 
with a huge set of entries. For example, if I have millions of entries 
for India and only a few for other countries, how do I make sure all 
the India related data fits the same VM (to avoid the network) and 
have the data for all the small countries fit on the same VM?
-- Pinning the data to cache: the data pinned to on-heap cache does 
not get evicted from the memory? I want to see if there is something 
similar to Oracle's memory pinning.
-- Read through: How do I know if something is in cache or on disk (using 
native persistence)?
5) Service chaining: Is there an example of service chaining that you 
can point me to?


6) How do I implement service pipelining in apache ignite? Would 
continuous query be the mechanism? Any examples?


7) Streaming: Are there examples on how to define watermarks, i.e., 
input completeness with regard to the event timestamp?


thank you
Clay

On Tue, Jan 1, 2019 at 11:29 PM Naveen <naveen.band...@gmail.com> wrote:


Hello
A couple of things I would like to share from my experience:

1. Cache Groups: at around 100 caches, I do not think we need to go
for cache
groups; as you mentioned, cache groups will have an impact on your
read/writes.
However, changing the partition count to 128 from the default 1024
would improve
your cluster restart time.

2. I doubt Ignite has any settings for this.

3. The only thing I can think of is to keep the data on-heap if the
data size
is not so huge.

4. Read through: with native persistence enabled, a read that goes to
the disk
will load the cache, but such a read is much slower compared with
a read from
RAM; by default Ignite does not pre-load the data. If you want to
avoid this you
can pre-load the data programmatically into memory, which is good
even for SQL
SELECT as well. But with 3rd party persistence, we need to
pre-load the
data to make your reads work for SQL SELECT.

Thanks
Naveen






Re: ignite questions

2018-12-31 Thread Mikael

Hi!

You have to try. If you just have a few caches (<10) you may not need to 
go for any cache groups at all, but the more caches you have, the more the 
need for cache groups pops up: many caches create lots of file handles and 
use lots of memory, which can be kept under control with cache groups.
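For reference, putting caches into a group is just a matter of giving them the same groupName in the cache configuration; a minimal Spring XML sketch (the cache and group names here are made up):

```xml
<!-- Two caches sharing one cache group, so they share partition files/metadata. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="orders"/>
    <property name="groupName" value="myGroup"/>
</bean>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="customers"/>
    <property name="groupName" value="myGroup"/>
</bean>
```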


Mikael

On 2018-12-31 at 14:01, Clay Teahouse wrote:

Hello All,
I am new to ignite and have several general questions. I'd appreciate 
your feedback.


1) Cache groups: according to the ignite documentation, cache groups 
help with scaling and performance but might hurt reads. Where is the 
balance?


2) Capacity planning: If I am reading the docs correctly, with native 
persistence enabled, you do not need to specify cache eviction. If so, 
assuming I have data and compute affinity enabled, how do I size my 
nodes, to make sure my data stays in cache, considering volume 
discrepancy in different class of data? Say for example, I have the 
data for Canada and India, with India data being 10 times the data for 
Canada. How do I size my nodes to make sure the last month data for 
India and Canada stay in cache?


3) Data pin to cache: How do I make sure certain data never gets 
evicted (with native persistence enabled)? For example, I want my 
dimension data to always stay in cache.


4) Read through: If I am using native persistence, do I need to 
explicitly load the cache, once the data is on disk and no longer in 
cache, or will doing a read of the data on the disk load the cache? 
If yes, is that true for SQL select as well? Is this possible with 
3rd party persistence as well, say, PostgreSQL?


5) Service chaining: Is there an example of service chaining that you 
can point me to?


6) How do I implement service pipelining in apache ignite? Would 
continuous query be the mechanism? Any examples?


7) Streaming: Are there examples on how to define watermarks, i.e., 
input completeness with regard to the event timestamp?




thank you,
Clay


Re: Question about write speed.

2018-12-29 Thread Mikael

Hi!

5-6k records per what ? Second ? Anyway, it's not easy to answer: what 
kind of index if any, how big your values are and so on. 6000 records 
per second sounds ok but could be faster.


Any normal usage would be with multiple nodes, if you are going to use 
just 1 node you do not have much use of Ignite, there are faster solutions.


Not sure I get the idea with the settings, a replicated cache will only 
have one copy anyway if you only have 1 node.


What is it you want to do and what kind of performance do you need ? can 
you use multiple nodes and load balance the writing between them ?


I have an application that writes around 15000+ records per second on a 
single node with streaming without problems (affinity string keys and 
small values, <100 bytes), and it can handle around 5000 entries per 
second with put/set, all with persistence enabled.


Mikael

On 2018-12-29 at 05:01, yangjiajun wrote:

Hello.

I use Ignite 2.6 as a SQL database and enable persistence (FULL_SYNC). I only
start one node. All tables are replicated. I use streaming while inserting
data. The write speed is about 5~6k records per table. Is such speed
normal? Can I improve it?

Here is my config file:
example-default.xml
<http://apache-ignite-users.70518.x6.nabble.com/file/t2059/example-default.xml>






Re: Did anyone write custom Affinity function?

2018-12-18 Thread Mikael

Hi!

3 different affinity keys on 3 nodes is not enough, it's a hash code 
that is used so you may end up with 2 groups (or even all 3) on one 
node, if you need it to work that way you will need to create your own 
affinity function.


Is there any way you might be able to increase the number of group IDs? 
If you for example had 30 group IDs you would have a much better 
distribution of the entries.
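To make the hash argument concrete, here is a tiny plain-Java simulation (no Ignite involved; the modulo assignment below is a deliberately simplified stand-in for Ignite's rendezvous affinity function, and the group names are made up): with only a few distinct affinity keys nothing guarantees one key per node, while with many keys the spread evens out statistically.

```java
import java.util.*;

public class AffinityDistributionSketch {
    // Simplified stand-in for an affinity function: hash a group id to a node.
    static Map<Integer, List<String>> assign(int nodes, List<String> groupIds) {
        Map<Integer, List<String>> byNode = new TreeMap<>();
        for (String id : groupIds) {
            int node = Math.abs(id.hashCode()) % nodes;
            byNode.computeIfAbsent(node, k -> new ArrayList<>()).add(id);
        }
        return byNode;
    }

    public static void main(String[] args) {
        // With only 3 group ids the split across 3 nodes is luck of the hash;
        // two ids can land on the same node, leaving one node empty.
        System.out.println(assign(3, Arrays.asList("east", "west", "north")));

        // With 30 group ids each node is very likely to get a decent share.
        List<String> many = new ArrayList<>();
        for (int i = 1; i <= 30; i++) many.add("group-" + i);
        System.out.println(assign(3, many));
    }
}
```

If the skew still matters with few keys, that is exactly the case where a custom affinity function (pinning each group id to a chosen node) is the way out.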


Mikael

On 2018-12-18 at 15:14, ashishb008 wrote:

Hello,

As of now, we have 3 nodes. We use group ID as the affinity key, and we have 3
group IDs (1, 2 and 3). And we limit cache partitions to group IDs. Overall,
nodes = group IDs = cache partitions, so that each node has an equal number of
partitions.

But it doesn't distribute cache partitions across the nodes. What it does is
put 2 partitions on one node, 1 partition on another node, and nothing on the
remaining one.

Thanks








Page size

2018-12-18 Thread Mikael

Hi!

Can I only set the page size on the DataStorageConfiguration ? Is it not 
possible to have a different page size per DataRegionConfiguration ?


All my caches have small values, less than 100 bytes, but I have one 
cache with larger items, some a few hundred kB up to 2-3 MB. I would 
think performance will not be the best with a 4 kB page size for this, but 
maybe I should try to store them some other way. Would IGFS be a better 
choice for very large values ? IGFS just stores the files in a k/v cache 
anyway ?
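As far as the DataStorageConfiguration API goes, the page size is indeed a single node-wide setting; a minimal sketch (the 8 KB value is just an example, not a recommendation):

```xml
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <!-- One page size for the whole node; a per-data-region page size
         is not exposed on DataRegionConfiguration. -->
    <property name="pageSize" value="8192"/>
</bean>
```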


Mikael




Re: Did anyone write custom Affinity function?

2018-12-18 Thread Mikael

Hi!

How many nodes do you have and what are you using for the affinity key ? 
There is of course a small chance that the distribution is not even, but 
in most cases it will work fine as long as it has enough different 
values to choose from. It depends a little bit on what you use for the 
affinity key: say you have 10 nodes and you use a "customer id" as the 
affinity key and you only have 8 customers, it will not work of course.


Mikael

On 2018-12-18 at 06:24, ashishb008 wrote:

I want all nodes in a cluster to have an equal number of cache partitions. With
the default Affinity function it is not happening; why does the default not use
all nodes in a cluster? Will it be okay to write a custom Affinity function? And
what will we lose doing so? Did anyone write a custom Affinity function?






Re: Best Practises for upgrading Ignite versions

2018-12-12 Thread Mikael

Hi!

Yes, either that or start up another cluster and switch over to that 
before shutting down the old one. As far as I understand, GridGain has a 
solution for rolling upgrades.


Mikael


On 2018-12-12 at 01:15, Anand Mehta wrote:

Hi,
We are going to upgrade our Ignite distributed clusters from version 2.1 to
version 2.5.3.
Are there any guidelines on the best practices of upgrading the cluster with
minimal downtime / data loss?
Is the standard practice to shut down the entire cluster, upgrade and
restart?

Thanks,
Anand






Re: Infinispan vs Ignite?

2018-12-10 Thread Mikael

Hi!

It may not be better, it depends on what you are going to use it for. If 
you just need a JCache then try them both and compare. Do you need 
persistence, SQL, messages, affinity collocation, off-heap storage, 
compute grid, machine learning, distributed queues, semaphores, services 
or any of the other features Ignite offers ? Ignite is so much more than 
just a JCache implementation, and Infinispan has its features. And even 
if you just need a JCache you really do have to test, because one might 
be better (faster ?) for your use case (number of nodes, type of data, 
how you access the data and so on).


Mikael

On 2018-12-11 at 05:04, the_palakkaran wrote:

Andrey,

I am trying to set up a distributed data grid for my application. With
Ignite, I can create a cluster of cache nodes for this purpose.

This can also be achieved on Infinispan, I believe.

So what really makes Ignite better than Infinispan was my concern.






Re: SQL support for (insert if not exists, update if exists) operation.

2018-12-05 Thread Mikael
I guess you have to use UPDATE/INSERT; if the record exists most of the 
time it's not a huge performance killer:

if (update ... == 0)
    insert ...

Mikael

On 2018-12-05 at 11:37, Ilya Kasnacheev wrote:

Hello!

Then answer seems to be "no" then, I guess.

Regards,
--
Ilya Kasnacheev


On Wed, Dec 5, 2018 at 12:49, Ray <ray...@cisco.com> wrote:


Hello Ilya,

Thanks for the reply.
I've tried "Merge into", it does not satisfy my needs.

For example,
I create a table with
create table a(a varchar, b varchar,c varchar, primary key(a));

And I inserted one record into table a with
merge into a(a,b) values('1','1');

Then I want to update this record by adding value to column c.
But when I try this sql

merge into a(a,c) values('1','1');

The value of column b is deleted.







Re: Ignite distribution configuration.

2018-12-02 Thread Mikael

Hi!

I don't think there is an easy answer; the configuration depends on so 
many things. Try the default configuration, see how it goes and work 
your way from there. The documentation is great and explains well what 
all the options do, so it's easy to play around with.


Just configure a cache with 1 backup and see how it goes.
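A minimal sketch of such a cache configuration (the cache name is a placeholder; Mikael's suggestion is backups=1, while the question's "backup on the other two nodes" would be backups=2):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="cacheMode" value="PARTITIONED"/>
    <!-- Each partition gets a primary copy plus this many backup copies
         on other nodes. -->
    <property name="backups" value="2"/>
</bean>
```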

There are a lot of things to consider: can you use affinity keys, will 
all data fit in RAM, do you need transactions, are you going to query 
data with SQL, how are you going to access the data and so on. I think 
it is more important that you get a data model you can work with; you can 
always play around with the cache configuration later.


Mikael


On 2018-12-03 at 07:59, Viraj Rathod wrote:

I’m a new user of Apache Ignite.

My data is supposed to be partitioned amongst 3 
nodes, and the data of each node is supposed to have a backup on the other 
two nodes. The data is JSON key-value pairs in 150 columns and 1 
million rows.

How will the configuration file look like?
Can anyone explain so that I can configure it for my project.

Thanks.
--
Regards,
Viraj Rathod


Re: Use case difference between CacheLoader, CacheWriter and CacheStore?

2018-11-27 Thread Mikael

Hi!

CacheLoader and CacheWriter are JCache interfaces, CacheStore is an 
Ignite interface that is just there for convenience to put all in the 
same class.


https://apacheignite.readme.io/v2.6/docs/3rd-party-store

"While Ignite allows you to configure the CacheLoader and 
CacheWriter separately, it is very awkward to implement a 
transactional store within 2 separate classes, as multiple load and 
put operations have to share the same connection within the same 
transaction. To mitigate that, Ignite provides the 
org.apache.ignite.cache.store.CacheStore interface which extends both 
CacheLoader and CacheWriter."
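To illustrate the design point in plain Java (these tiny interfaces are deliberately simplified stand-ins, NOT the real javax.cache API): one class implementing both halves can share a single backing connection or transaction, which is exactly why the combined CacheStore interface is convenient.

```java
// Simplified stand-ins for the loader/writer split.
interface Loader<K, V> { V load(K key); }
interface Writer<K, V> { void write(K key, V value); }

// One class implementing both sides; the shared map plays the role of
// the shared JDBC connection a real store would reuse across operations.
public class MapStore implements Loader<String, String>, Writer<String, String> {
    private final java.util.Map<String, String> backing = new java.util.HashMap<>();

    @Override public String load(String key) { return backing.get(key); }
    @Override public void write(String key, String value) { backing.put(key, value); }

    public static void main(String[] args) {
        MapStore store = new MapStore();
        store.write("k1", "v1");
        System.out.println(store.load("k1"));
    }
}
```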


Mikael

On 2018-11-27 at 10:35, the_palakkaran wrote:

Hi,

What is the difference in use case between CacheLoader, CacheWriter and
CacheStore ? CacheStore contains implementations for the methods in the other
two, so why I should use them is what makes me confused.

When loading a record using a cache store, the query was getting executed on
all the nodes; will this be the same if I use CacheLoader?






Re: Question about add new nodes to ignite cluster.

2018-11-26 Thread Mikael

Hi!

You have it at the first paragraph in the documentation

"If Ignite persistence is enabled, Ignite enforces the /baseline 
topology/ concept which represents a set of server nodes in the cluster 
that will persist data on disk."


The baseline topology is needed for persistence; if your nodes do not 
store data they do not need to be part of the topology, and they can still 
do everything else.
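For reference, the baseline can be inspected and changed with the control script that ships with Ignite; roughly like this (the consistent ids are placeholders for the ones shown in your output):

```
# Show the current baseline and the "other" nodes:
./control.sh --baseline

# Add the new nodes to the baseline by consistent id:
./control.sh --baseline add id1,id2,id3
```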


Mikael

On 2018-11-26 at 10:20, Justin Ji wrote:

I added 3 nodes to existing cluster, but not add them to topology, like
below:

Cluster state: active
Current topology version: 93

Baseline nodes:
 ConsistentID=0ded99c1-b19c-4ced-ba3a-06abe233c6c8, STATE=ONLINE
 ConsistentID=2c252722-a3c4-4f55-8d26-d216329fddbb, STATE=ONLINE
 ConsistentID=4ac614ce-4e4c-4bb4-a479-2b47f71138bf, STATE=ONLINE

Number of baseline nodes: 3

Other nodes:
 ConsistentID=47ae0f2e-ae8e-468f-93a8-85fbb6ff8000
 ConsistentID=751da65f-99f7-488c-91d1-71d0365ffa4d
 ConsistentID=8a6b8281-50d8-4d0d-8933-0e51b67994a2
Number of other nodes: 3


We can see that the new nodes were not added to the topology. But what
puzzles me is that these three new nodes are starting to process data
requests, even though I didn't add them to the topology.

So what is the meaning of this topology? And what is the difference between
the nodes in and not in the topology?






Re: Long activate time with tens of millions data

2018-11-21 Thread Mikael

Hi!

Do you use persistence ? If so, do you have 150 different caches ? If 
that is the case, I would think trying to use cache groups would help a 
lot, if that is possible for you.


Mikael

On 2018-11-21 at 09:03, yangjiajun wrote:

I have an Ignite node which is version 2.6. I use it as a database. It stores
tens of millions of records in about 150 tables. Every time I try to reactivate
it after a restart, it takes a very long time, but the CPU usage is very low. Is
there any way to improve it?






Re: Ignite startup is very slow

2018-11-15 Thread Mikael

Hi!

This one looks a bit fishy to me:

"Failed to wait for initial partition map exchange. Possible reasons are:..."

Do you use transactions ?

Mikael

On 2018-11-16 at 07:09, kvenkatramtreddy wrote:

Hi Team,

I am using a snapshot version of 2.7 (Ignite ver.
2.7.0.20180806#19700101-sha1:DEV). Startup time is between 8-30 mins.

My complete data set is also very small, around 3 GB to 4 GB on disk.



Please find attached logs and configuration in it.

example-ignite.xml
<http://apache-ignite-users.70518.x6.nabble.com/file/t1700/example-ignite.xml>
ignite.log
<http://apache-ignite-users.70518.x6.nabble.com/file/t1700/ignite.log>

FYI: I am using this snapshot because 2.6 was giving page corrupted or
Upperbound errors. They stopped as soon as I updated to the 2.7
developer snapshot build.

Thanks & Regards,
Venkat








Events question

2018-11-09 Thread Mikael

Hi!

The event documentation says you can query events, so they are stored 
locally. How long does it store them ? Can I control this in any way ? 
Say I just want to use event listeners and am not interested in querying 
them, so there is no use keeping them around once they have been caught 
by the listener; is that possible ?


It sounds like keeping lots of events around after they have been 
triggered would be wasting memory, or maybe it would not make any 
difference ?
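A sketch of the relevant configuration knobs (assuming the in-memory event storage SPI and the Spring util namespace being declared; the concrete event type and expireCount value are illustrative):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Record only the event types you actually listen for;
         unlisted types are not recorded at all. -->
    <property name="includeEventTypes">
        <list>
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
        </list>
    </property>
    <!-- Bound how many recorded events are kept around for querying. -->
    <property name="eventStorageSpi">
        <bean class="org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi">
            <property name="expireCount" value="1000"/>
        </bean>
    </property>
</bean>
```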


Mikael




Re: Why Ignite use so many heap space?

2018-11-08 Thread Mikael

Hi!

You said: "And it still uses 80%-85% of heap memory after I stop my 
application", I assume you mean your client application ?


So after a GC it will still stay at 80% java heap usage ?

I assume you are using off-heap memory ?

I am not sure what the problem is, I am running an application that 
updates around 10.000 cache entries every 10 seconds (persistence 
enabled) and run a lot of other code that generates java heap garbage, 
and the application generates around 40MB garbage per second, it fills 
up the heap in a minute or so but goes down to around 20% after every GC.


How often does your application GC ? If you have 80% of the heap filled it 
should GC pretty often, I would think ?


There is garbage generated, and there is not much you can do about that if 
it's not your own code that generates the garbage. Ignite is not bad, and 
with the cache keys, values and even the indexes off-heap it should work 
fine, but I guess it all depends on what your application does.
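When the heap stays this full, it can also be worth double-checking the JVM options; an illustrative starting point in the spirit of the Ignite JVM tuning advice (values are examples, not a recommendation for this workload):

```
-Xms12g -Xmx12g
-XX:+UseG1GC
-XX:+ScavengeBeforeFullGC
-XX:+DisableExplicitGC
```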


Mikael

On 2018-11-08 at 14:07, yangjiajun wrote:

Hi.

I have an Ignite node which is version 2.6 and has a fixed 12GB heap
and a 30GB data region. I use it as a database with persistence. It uses
90%-95% of heap memory when my application is busy (my application uses
jdbc thin connections). And it still uses 80%-85% of heap memory after I
stop my application. Ignite performs badly when its heap usage is high.

How to reduce ignite heap usage?









Re: Does ignite provide a Comparator for Sort?

2018-11-01 Thread Mikael

Hi!

I don't think so but can't you use an index and an SQL query instead ?

Mikael


On 2018-11-01 at 06:33, Ignite Enthusiast wrote:
I am new to Apache Ignite. I have used Hazelcast extensively, and one 
of the features I really liked about it is the Comparator that it 
provides on the cache entries.


Does Apache Ignite have one readily available? If not, is it in the works?






Re: how to do Hibernate configuration for the Apache Ignite database

2018-10-30 Thread Mikael

Hi!

Are there any information missing in the documentation ?

https://apacheignite-mix.readme.io/docs/hibernate-l2-cache
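For a quick start, the core of it is pointing Hibernate's L2 cache at Ignite's region factory; a minimal hibernate.properties sketch (the factory class comes from the ignite-hibernate module, which must be on the classpath):

```
hibernate.cache.use_second_level_cache=true
hibernate.cache.region.factory_class=org.apache.ignite.cache.hibernate.HibernateRegionFactory
```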

Mikael



On 2018-10-31 at 06:24, Malashree wrote:

How to do Hibernate configuration using the Ignite database?







