Re: .NET thin client multithreaded?

2019-08-23 Thread Denis Magda
Please continue using the Ignite.NET thick client until we release partition awareness for the thin one. That feature has already been developed and will be released in Ignite 2.8. Presently, the thin client sends all requests via a proxy, which is one of the server nodes it's connected to. While
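A minimal sketch of what enabling partition awareness could look like once 2.8 ships, shown with the Java thin client API for illustration (the thread concerns the .NET client, which gains an equivalent option); the addresses, cache name, and exact setter name are assumptions, not taken from the thread:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientPartitionAwareness {
    public static void main(String[] args) {
        // List several server endpoints so the client can route each request
        // to the primary node for the key instead of going through one proxy node.
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("server1:10800", "server2:10800", "server3:10800")
            // Assumed setter name; the option becomes available in Ignite 2.8.
            .setPartitionAwarenessEnabled(true);

        try (IgniteClient client = Ignition.startClient(cfg)) {
            System.out.println(client.getOrCreateCache("myCache").get(1));
        }
    }
}
```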

Re: IgniteQueue.removeAll() throwing NPE

2019-08-23 Thread dmagda
The issue is being discussed on SO: https://stackoverflow.com/questions/57473783/ignite-2-5-ignitequeue-removeall-throwing-npe Created a ticket: https://issues.apache.org/jira/browse/IGNITE-12101 -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Memory usage metric doesn't come down when memory is freed up

2019-08-23 Thread colinc
@Ilya Kasnacheev - in reference to your comment about the IOOM condition - is there any acceptable way to stop a full cache from killing the node that it lives on? Or is this always recommended against? A custom data region, perhaps? -- Sent from:
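One hedged way to keep a full in-memory cache from bringing its node down is to place it in a bounded data region with page eviction enabled; a minimal sketch (region name and size are arbitrary, and page eviction applies to non-persistent regions only):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class EvictionRegionExample {
    public static void main(String[] args) {
        // A dedicated in-memory region that evicts cold pages instead of
        // failing with IgniteOutOfMemoryException when it fills up.
        DataRegionConfiguration boundedRegion = new DataRegionConfiguration()
            .setName("bounded_region")
            .setMaxSize(512L * 1024 * 1024)                        // 512 MB cap, example value
            .setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);

        DataStorageConfiguration storageCfg = new DataStorageConfiguration()
            .setDataRegionConfigurations(boundedRegion);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Caches assigned to "bounded_region" via CacheConfiguration#setDataRegionName
            // will evict entries rather than kill the node when the region is full.
        }
    }
}
```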

Re: Node failure with "Failed to write buffer." error

2019-08-23 Thread Maxim Muzafarov
It seems to me that it is a bug in the implementation when mmap is set to `false`. I'll try to check. Just out of curiosity, can you clarify why the `false` value is used? According to the comment [1], using mmap=true with the LOG_ONLY mode shows the best performance results. [1]
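For reference, a minimal sketch of the setup being discussed: persistence with WALMode.LOG_ONLY plus the IGNITE_WAL_MMAP system property (true by default, set to false in the reporter's environment); the values shown are illustrative:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;

public class WalMmapExample {
    public static void main(String[] args) {
        // IGNITE_WAL_MMAP defaults to true; the thread discusses behaviour when it is switched off.
        System.setProperty("IGNITE_WAL_MMAP", "true");

        DataStorageConfiguration storageCfg = new DataStorageConfiguration()
            .setWalMode(WALMode.LOG_ONLY)   // mode referenced in the linked comment [1]
            .setDefaultDataRegionConfiguration(
                new DataRegionConfiguration().setPersistenceEnabled(true));

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.cluster().active(true); // activate the persistent cluster
        }
    }
}
```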

Re: Cache Miss Using Thin Client

2019-08-23 Thread Stanislav Lukyanov
Hi, I'm thinking this could be related to differences in the binary marshaller configuration. Are you using Java thin client? What version? What is the cache key type? Are you setting a BinaryConfiguration explicitly on the client or server? Thanks, Stan On Fri, Aug 23, 2019 at 3:38 PM wrote:
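If an explicit BinaryConfiguration is involved, one hedged way to rule out a marshaller mismatch is to apply the same binary settings on both the cluster and the thin client; a sketch with an assumed address and compact-footer setting (not taken from the thread):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.ClientConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class BinaryConfigConsistency {
    public static void main(String[] args) {
        // Keep binary settings identical on both sides; a mismatch is the kind
        // of difference the reply above suggests could cause cache misses.
        BinaryConfiguration binCfg = new BinaryConfiguration()
            .setCompactFooter(true);

        // Server / thick-client side (would be used when starting the node).
        IgniteConfiguration serverCfg = new IgniteConfiguration()
            .setBinaryConfiguration(binCfg);

        // Thin-client side.
        ClientConfiguration thinCfg = new ClientConfiguration()
            .setAddresses("127.0.0.1:10800")
            .setBinaryConfiguration(binCfg);

        try (IgniteClient client = Ignition.startClient(thinCfg)) {
            // ... use the same key class and serialization settings as the thick client
        }
    }
}
```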

Re: Question on submitted post

2019-08-23 Thread Stanislav Lukyanov
Hi, It looks like the issue is that you're ending up sending an instance of your gRPC server inside your service. This approach is generally incorrect. What you should do is: don't pass gRPC to the service instance; add an init() method implementation to your service; in your init(), start your
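A rough sketch of that shape, assuming the standard gRPC Java ServerBuilder API and a hypothetical generated service implementation (MyGrpcEndpoint) and port:

```java
import io.grpc.Server;
import io.grpc.ServerBuilder;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class GrpcIgniteService implements Service {
    // Built inside the service lifecycle instead of being passed in as a
    // (non-serializable) pre-built server instance.
    private transient Server grpcServer;

    @Override
    public void init(ServiceContext ctx) throws Exception {
        grpcServer = ServerBuilder.forPort(50051)       // port is an arbitrary example
            .addService(new MyGrpcEndpoint())           // hypothetical generated gRPC service impl
            .build()
            .start();
    }

    @Override
    public void execute(ServiceContext ctx) {
        // Nothing to do here; the gRPC server runs on its own threads.
    }

    @Override
    public void cancel(ServiceContext ctx) {
        if (grpcServer != null)
            grpcServer.shutdown();
    }
}
```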

Re: ZooKeeper Discovery - Handling large number of znodes and their cleanup

2019-08-23 Thread Stanislav Lukyanov
Hi Abhishek, What's your Ignite version? Anything else to note about the cluster? E.g. frequent topology changes (clients or servers joining and leaving, caches starting and stopping)? What was the topology version when this happened? Regarding the GC. Try adding

Re: Node failure with "Failed to write buffer." error

2019-08-23 Thread ihalilaltun
Hi Mmuzaf, IGNITE_WAL_MMAP is false in our environment. Here is the configuration; <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans

Re: Rebalancing only to backups

2019-08-23 Thread Maxim Muzafarov
Hello, I would not recommend using the NONE rebalance mode, since it is not stable enough (it is true that a cache in this mode will not be rebalanced, but you should always keep in mind that when your 6th node returns to the cluster you have to trigger rebalancing manually -- via the rebalance() method on the cache API).
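For reference, a minimal sketch of that combination: a cache configured with CacheRebalanceMode.NONE plus an explicit rebalance() call after the node rejoins (cache name is arbitrary):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class ManualRebalanceExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> cacheCfg =
                new CacheConfiguration<Integer, String>("noAutoRebalance")
                    .setRebalanceMode(CacheRebalanceMode.NONE); // no automatic rebalancing

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cacheCfg);

            // After a node returns to the cluster, rebalancing has to be
            // triggered explicitly, as the reply above points out.
            cache.rebalance().get();
        }
    }
}
```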

Re: Node failure with "Failed to write buffer." error

2019-08-23 Thread Maxim Muzafarov
Hello, Did you change the IGNITE_WAL_MMAP system variable? (It is true by default.) Can you also attach your Ignite configuration file? I've checked the log you provided and it seems to me that during the WAL file rollover procedure the current wal-file is closed, but the WalWriter thread corresponding to

Re: questions

2019-08-23 Thread narges saleh
Hello Ilya, There are parallel streams inserting data for all the countries into different nodes (and caches), and there are parallel queries against the distributed database for different countries, aggregating the data, in some cases inserting the data back, and in others returning results. Yes, for

Cache Miss Using Thin Client

2019-08-23 Thread simon.keen
Hello, I have one Spring Boot app running as a client node which uses SpringCacheManager + @Cacheable annotation on a service call. This is demonstrating expected read-through behaviour. I have a second app where I'm trying to implement the same behaviour using the thin-client. This is able
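A minimal sketch of double-checking the entry from the thin-client side, with a placeholder cache name and key (the real cache name and the key type produced by @Cacheable matter here, as the reply earlier in this digest asks):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientReadCheck {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("127.0.0.1:10800");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            // "bookCache" and the String key are placeholders for the cache and
            // key type used by the @Cacheable service in the first app.
            ClientCache<String, Object> cache = client.cache("bookCache");
            Object cached = cache.get("some-key");
            System.out.println(cached != null ? "hit: " + cached : "miss");
        }
    }
}
```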

Re: Capacity planning for production deployment on kubernetes

2019-08-23 Thread Denis Mekhanikov
Shiva, What version of Ignite do you use? Before version 2.7, Ignite used a different mechanism to limit the size of the WAL history: the DataStorageConfiguration#walHistorySize property, which is currently deprecated. This is what’s explained on the internal documentation page. In Ignite
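For illustration, a hedged sketch of the two styles side by side: the deprecated walHistorySize versus the 2.7+ size-based cap (setter names as in the 2.7 API, value arbitrary):

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class WalArchiveSizing {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(
                new DataRegionConfiguration().setPersistenceEnabled(true))
            // Pre-2.7 style (deprecated): WAL history measured in checkpoints.
            // .setWalHistorySize(20)
            // 2.7+ style: cap the WAL archive by size instead.
            .setMaxWalArchiveSize(2L * 1024 * 1024 * 1024); // 2 GB, example value

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);
    }
}
```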

Re: Memory usage metric doesn't come down when memory is freed up

2019-08-23 Thread colinc
An update on this - the test works as expected on Ignite versions 2.6 and earlier. It appears to be a bug introduced in Ignite 2.7. I have raised the following jira ticket to track: https://issues.apache.org/jira/browse/IGNITE-12096 -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/

RE: .NET thin client multithreaded?

2019-08-23 Thread Alex Shapkin
Hello, Is it possible that the huge increase in the response times comes from contention when multiple gRPC threads are using the same thin client (thus, the same ClientSocket) to communicate with the cluster? Yes, that’s correct. Threads will share the same TCP connection by default. But

Re: Using Ignite as blob store?

2019-08-23 Thread colinc
From anecdotal experience of storing larger objects (up to, say, 10 MB) in Ignite, I find that the overall access performance is significantly better than storing lots of small objects. The main thing to watch out for is that very large objects can cause unbalanced data distribution. Similar to

Re: questions

2019-08-23 Thread Ilya Kasnacheev
Hello! I don't think that partitioning by country or city is a good idea, since the distribution will be very uneven. You can have different ways of minimizing network hops while keeping the distributed nature of your database. A database isn't really distributed when, for a given city query, only

Re: questions

2019-08-23 Thread narges saleh
Hello Ilya, I agree with you that partitioning based on month was a bad example, because most partitions will be idle. Country or customer are better examples of my case. There are a limited number of them, but they are disproportionate and they are always active. Let's take the country example. I need to

Re: Using Ignite as blob store?

2019-08-23 Thread Pavel Kovalenko
Denis, You can't set a page size greater than 16 KB due to our page memory limitations. Thu, Aug 22, 2019 at 22:34, Denis Magda : > How about setting page size to more KBs or MBs based on the average value? > That should work perfectly fine. > > - > Denis > > > On Thu, Aug 22, 2019 at 8:11 AM

Re: questions

2019-08-23 Thread Ilya Kasnacheev
Hello! Partitioning based on, let's say, user id is usually fair, because there are usually hundreds of thousands of users and none of them owns a disproportionate amount of data. Partitioning by month is especially bad, since in a given month all of the partitions will be basically idle save for one, and
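A minimal sketch of that fine-grained approach: an affinity key on user id, so one user's data is collocated in a single partition without the skew of a coarse key like country (class and field names are made up for illustration):

```java
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

/** Example cache key class; equality is computed over its fields by the binary marshaller. */
public class OrderKey {
    // Unique part of the key.
    private long orderId;

    // All entries with the same userId land in the same partition, so a
    // per-user query touches one node, while the overall distribution stays even.
    @AffinityKeyMapped
    private long userId;

    public OrderKey(long orderId, long userId) {
        this.orderId = orderId;
        this.userId = userId;
    }
}
```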

Re: Using Ignite as blob store?

2019-08-23 Thread colinc
I understand from this post: https://stackoverflow.com/questions/50116444/unable-to-increase-pagesize/50121410#50121410 that the maximum page size is 16K. Is that still true? -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
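For reference, a hedged sketch of setting the page size to that 16 KB ceiling (per the replies in this thread, larger values are not accepted by the page memory):

```java
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PageSizeExample {
    public static void main(String[] args) {
        // 16 KB is the largest page size the page memory currently accepts.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration()
            .setPageSize(16 * 1024);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);
    }
}
```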

Re: Memory usage metric doesn't come down when memory is freed up

2019-08-23 Thread colinc
Yes - avoiding an Ignite out-of-memory condition is exactly what I'm trying to do. The question is - how can I do this if memory metrics aren't reliable? Does anyone have experience of successfully monitoring Ignite memory consumption as it contracts? Or does anyone have any more general thoughts on
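For completeness, a minimal sketch of reading the data region metrics in question programmatically; metrics have to be enabled on the region first (output format is illustrative):

```java
import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RegionMetricsProbe {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(new DataStorageConfiguration()
                .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
                    .setMetricsEnabled(true)));     // region metrics are off by default

        try (Ignite ignite = Ignition.start(cfg)) {
            // These are the metrics the thread reports as not decreasing after entries are removed.
            for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
                System.out.printf("region=%s allocatedPages=%d fillFactor=%.2f%n",
                    m.getName(), m.getTotalAllocatedPages(), m.getPagesFillFactor());
            }
        }
    }
}
```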

Re: Node failure with "Failed to write buffer." error

2019-08-23 Thread ihalilaltun
Hi Dmagda, Here are all the log files that I could get from the server; ignite.zip gc.zip gc-logs-continnued

.NET thin client multithreaded?

2019-08-23 Thread Eduard Llull
Hello everyone, We have just developed a gRPC service in .NET Core that performs a bunch of cache gets for every RPC. We've been using the Apache.NET NuGet, starting the Ignite node in client mode (thick client), but we just changed it to use the thin client and we see much, much worse response