Re: .NET thin client multithreaded?

2019-08-23 Thread Denis Magda
Please continue using the Ignite.NET thick client until we release
partition awareness for the thin one. That feature has already been
developed and will be released in Ignite 2.8.

Presently, the thin client sends all requests via a proxy, which is one
of the server nodes it's connected to, while the thick client always goes
directly to the node that keeps a given key. The proxy is a bottleneck, and
that's why you see such a performance drop.
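
For illustration, once 2.8 is out the Java thin client is expected to expose this
via ClientConfiguration. A minimal sketch, assuming the 2.8 API names
setAddresses/setPartitionAwarenessEnabled, a cache called "myCache", and servers
on the default client port 10800:

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class PartitionAwareClientSketch {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration()
            // List every server node so the client can open a connection per node
            // and route each request to the node that owns the key.
            .setAddresses("server1:10800", "server2:10800", "server3:10800")
            // Assumed 2.8+ setting; not available in 2.7 clients.
            .setPartitionAwarenessEnabled(true);

        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");
            cache.put(1, "value");
            // With partition awareness the get goes straight to the primary node
            // instead of through a single proxy connection.
            System.out.println(cache.get(1));
        }
    }
}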

-
Denis


On Thu, Aug 22, 2019 at 11:49 PM Eduard Llull  wrote:

> Hello everyone,
>
> We have just developed a gRPC service in .NET core that performs a bunch
> of cache gets for every RPC. We've been using the Apache.NET NuGet, starting
> the Ignite node in client mode (thick client), but we just changed it to use
> the thin client and we see much, much worse response times: from a 4ms average
> and 15ms 95th percentile, to a 21ms average and 86ms 95th percentile, and the
> times get even worse under load: peaks of 115ms average, 1s 95th
> percentile.
>
> We were expecting some level of degradation in the response times when
> changing from the thick to the thin client, but not as much. In fact,
> trying to reduce the impact, we've deployed an Ignite node in client mode on
> every host where we have our gRPC service deployed, and the gRPC service
> connects to the local Ignite node.
>
> The gRPC service receives several tens of concurrent requests when under
> load, but we instantiate one single ApacheClient (Ignition.StartClient())
> shared by all the threads that are serving the RPC requests. I've seen in
> the Java Thin Client documentation (
> https://apacheignite.readme.io/docs/java-thin-client-initialization-and-configuration#section-multithreading)
> the following:
>
> Thin client is single-threaded and thread-safe. The only shared resource
> is the underlying communication channel, and only one thread reads/writes
> to the channel while other threads are waiting for the operation to
> complete.
> Use multiple threads with thin client connection pool to improve
> performance
>
> Presently thin client has no feature to create multiple threads to improve
> throughput. You can create multiple threads by getting thin client
> connection from a pool in your application to improve throughput.
>
> But there is no such warning in the .NET thin client documentation (
> https://apacheignite-net.readme.io/docs/thin-client).
>
> Is it possible that the huge increase in the response times comes from
> contention when multiple gRPC threads are using the same thin client (thus,
> the same ClientSocket) to communicate with the cluster?
>
> In the meantime we will use a thin client pool, as recommended in the Java
> documentation, to see if it improves the performance.
>
>
> Thank you very much.
>


Re: IgniteQueue.removeAll() throwing NPE

2019-08-23 Thread dmagda
The issue is being discussed on SO:
https://stackoverflow.com/questions/57473783/ignite-2-5-ignitequeue-removeall-throwing-npe

Created a ticket:
https://issues.apache.org/jira/browse/IGNITE-12101



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Memory usage metric doesn't come down when memory is freed up

2019-08-23 Thread colinc
@Ilya Kasnacheev - in reference to your comment about the IOOM condition:
is there any acceptable way to stop a full cache from killing the node
that it lives on? Or is this always recommended against? A custom data
region, perhaps?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Node failure with "Failed to write buffer." error

2019-08-23 Thread Maxim Muzafarov
It seems to me that it is a bug in the implementation when mmap is set to
`false`. I'll try to check.

Just out of curiosity, can you clarify why the `false` value is used?
According to the comment [1], using mmap=true with the LOG_ONLY mode
shows the best performance results.

[1] 
https://issues.apache.org/jira/browse/IGNITE-6339?focusedCommentId=16281803&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16281803

On Fri, 23 Aug 2019 at 18:15, ihalilaltun  wrote:
>
> Hi Mmuzaf
>
> IGNITE_WAL_MMAP is false in our environment.
>
> Here is the configuration;
> [The XML configuration from the original message was stripped of its tags by
> the mail archive. The surviving fragments reference an IgniteConfiguration
> bean with log4j2 logging (/etc/apache-ignite/ignite-log4j2.xml),
> TcpCommunicationSpi, a CacheConfiguration, TcpDiscoverySpi with
> TcpDiscoveryVmIpFinder, and a DataStorageConfiguration with a custom
> DataRegionConfiguration.]
>
> -
> İbrahim Halil Altun
> Senior Software Engineer @ Segmentify
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache Miss Using Thin Client

2019-08-23 Thread Stanislav Lukyanov
Hi,

I'm thinking this could be related to differences in the binary marshaller
configuration.
Are you using Java thin client? What version? What is the cache key type?
Are you setting a BinaryConfiguration explicitly on the client or server?

Thanks,
Stan

On Fri, Aug 23, 2019 at 3:38 PM  wrote:

> Hello,
>
> I have one Spring Boot app running as a client node which uses
> SpringCacheManager + @Cacheable annotation on a service call. This is
> demonstrating expected read-through behaviour.
>
> I have a second app where I'm trying to implement the same behaviour using
> the thin-client. This is able to successfully "get" entries put in the
> cache through this application but not those using the application above,
> even if the key appears to be the same.
>
> Both applications are using a key class from the same dependency and are
> obviously populated with the same attributes. I've used the "query" method
> on the cache to retrieve all the cache entries, have verified they're using
> the same server node, the entries are there and so on.
>
> Any ideas why the "get" method from thin-client cannot find entries "put"
> by the client node? Or, any suggestions on appropriate logging to assist
> diagnosis?
>
> Thanks,
>
> Simon.
>


Re: Question on submitted post

2019-08-23 Thread Stanislav Lukyanov
Hi,

It looks like the issue is that you end up sending (serializing) an instance of
your gRPC server inside your service. This approach is generally incorrect.
What you should do is:
- not pass the gRPC server to the service instance
- add an init() method implementation to your service
- start your gRPC server in that init(), as in the sketch below
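
A minimal Java sketch of that shape (the gRPC wiring is schematic: ServerBuilder is
the stock io.grpc API, and the commented-out MyGrpcEndpoint stands in for your own
application-specific handler):

import io.grpc.Server;
import io.grpc.ServerBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class GrpcHostingService implements Service {
    /** Injected on the node the service is deployed to; never serialized. */
    @IgniteInstanceResource
    private transient Ignite ignite;

    /** Only simple, serializable state travels with the service. */
    private final int port;

    /** Built in init(), so it is never part of the serialized service instance. */
    private transient Server server;

    public GrpcHostingService(int port) {
        this.port = port;
    }

    @Override public void init(ServiceContext ctx) throws Exception {
        // Construct and start the server here, on the deployment node,
        // so it is never marshalled with the service.
        server = ServerBuilder.forPort(port)
            // .addService(new MyGrpcEndpoint(ignite)) // application-specific, omitted here
            .build()
            .start();
    }

    @Override public void execute(ServiceContext ctx) {
        // The gRPC server manages its own threads; nothing to do here.
    }

    @Override public void cancel(ServiceContext ctx) {
        if (server != null)
            server.shutdown();
    }
}

It could then be deployed with, e.g.,
ignite.services().deployClusterSingleton("grpc-host", new GrpcHostingService(9090));
since the io.grpc Server is transient and created in init(), nothing
non-serializable travels with the service.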

Stan

On Thu, Aug 22, 2019 at 10:52 AM Pascoe Scholle 
wrote:

> Hi there,
>
> How long does it usually take for a post to be seen on the forum? Maybe I
> made a mistake so I will provide my question here. Excuse me if I am being
> impatient:
>
>
> =
> Good afternoon everyone,
>
> I have recently run into an issue and I think the problem lies in the
> server node configuration. I will attach the output of the stack trace at
> the bottom; however, I first wish to explain what the software does and how
> we are using Ignite.
>
> I start multiple server nodes with peerClassLoadingEnabled set to true, using a
> TcpDiscoveryVmIpFinder, and don't set anything other than a port range for
> the ipFinder.
>
> Using the REST protocol, a ComputeTaskAdapter task is executed which starts
> a service, and this in turn starts a gRPC server. I have placed some Scala
> code below to show what I mean.
>
> class StartService extends ComputeTaskAdapter[String, Any] {
>   var ignite: Ignite = null;
>
>   @IgniteInstanceResource
>   def setIgnite(someIgnite: Ignite): Unit = {
>     ignite = someIgnite
>   }
>
>   def map(...) = {
>     ...
>     // port is an integer
>     val server = new GrpcServer(ignite, port);
>
>     val service = new ServiceImpl(name, server);
>     /*
>      within the method execute of the Service interface, server.start() is
>      called
>     */
>
>     val serviceconfig = new ServiceConfiguration();
>     serviceconfig.setName(name);
>     serviceconfig.setTotalCount(1);
>     serviceconfig.setMaxPerNodeCount(1);
>     ignite.services().deploy(serviceconfig);
>     ...
>   }
> }
>
> this task returns a map with some non important variables.
>
> The gRPC server takes the Ignite instance created within the above-mentioned
> compute task as a variable; I am not sure if this could be the cause of the
> issue.
>
> Using the gRPC protocol, we create a ComputeTask which is executed by the
> gRPC server; some more code is below:
>
> class GrpcServer(val ignite: Ignite, val port: Int) extends ..Some Grpc stuff.. {
>
>   def someGrpcProtocol(request: Message): Future[String] = {
>     val newTask = new SomeTask();
>
>     ignite.compute(ignite.cluster()).execute(newTask, someinput);
>     Future("Request is being processed");
>   }
> }
>
>
> If a single server node is started, the program runs without problems.
> However, adding more nodes and trying to execute the new tasks on a remote
> node, or on a node that has a certain attribute, throws a massive stack
> trace. Basically, if I want to execute a task on a node where the service
> and gRPC server do not reside, the exception happens.
>
> I have placed all custom classes within a jar that lies in the libs folder
> of the ignite-bin project.
> We are currently on version 2.7.
>
> If you require anything else just let me know, I'll be on it ASAP.
>
> Thanks for any help that may come my way.
>
> Cheers!
>
> Here is most of the stack trace:
> class org.apache.ignite.binary.BinaryObjectException: Failed to read field
> [name=server]
> at
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:192)
> at
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:875)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1984)
> at
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:703)
> at
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:188)
> at
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:875)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1984)
> at
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:703)
> at
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:188)
> at
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:875)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)

Re: ZooKeeper Discovery - Handling large number of znodes and their cleanup

2019-08-23 Thread Stanislav Lukyanov
Hi Abhishek,

What's your Ignite version? Anything else to note about the cluster? E.g.
frequent topology changes (clients or servers joining and leaving, caches
starting and stopping)? What was the topology version when this happened?

Regarding the GC. Try adding -XX:+PrintGCApplicationStoppedTime
-XX:+PrintGCApplicationConcurrentTime to your logging options, and share
the GC logs. Sometimes there are long pauses which can be seen in the logs
which are not GC pauses. Check the "Total time for which application
threads were stopped" and "Stopping threads took".

Stan

On Wed, Aug 21, 2019 at 7:17 PM Abhishek Gupta (BLOOMBERG/ 731 LEX) <
agupta...@bloomberg.net> wrote:

> Hello,
> I'm using ZK-based discovery for my 6-node grid. It had been working smoothly
> for a while until suddenly my ZK node went OOM. It turns out there were thousands
> of znodes, many holding about ~1 MB of data, and there was suddenly a lot of
> ZK request traffic (the tx log was huge).
>
> One symptom on the grid to note is that when this happened my nodes were
> stalling heavily (this is a separate issue to discuss - they're stalling
> with lots of long JVM pauses, but the GC logs appear alright) and were also
> taking heavy writes from DataStreamers.
>
> I see the joinData znode having many thousands of persistent children. I'd
> like to understand why so many znodes were created under 'jd', and what's
> the best way to prevent this and clean up these child nodes under jd.
>
>
> Thanks,
> Abhishek
>
>
>
>
>


Re: Node failure with "Failed to write buffer." error

2019-08-23 Thread ihalilaltun
Hi Mmuzaf

IGNITE_WAL_MMAP is false in our environment.

Here is the configuration;


[The XML configuration attached here was stripped of its tags by the mail
archive; only the Spring beans schema header survived. It is the same
configuration quoted in Maxim Muzafarov's reply above.]
-
İbrahim Halil Altun
Senior Software Engineer @ Segmentify
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Rebalancing only to backups

2019-08-23 Thread Maxim Muzafarov
Hello,

I would not recommend using the NONE rebalance mode since it is not stable
enough (it is true that a cache in this mode will not be rebalanced, but you
should keep in mind that when your 6th node returns to the cluster you have
to trigger rebalancing manually -- the rebalance() method on the cache API).

Currently, it is not possible to cleanly achieve your case without
baseline topology (native persistence must be enabled -- can you enable it?),
but you can try these things:
- configure a page eviction policy [1] to avoid OOM
- use the rebalanceDelay property [2] -- a timeout in milliseconds before the
rebalance procedure starts for a particular cache; a sketch of both settings
follows the links below


[1] https://apacheignite.readme.io/docs/evictions
[2] https://apacheignite.readme.io/v1.1/docs/rebalancing
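
A minimal Java sketch combining both suggestions (the cache name, region size
and delay are illustrative only, not taken from your setup):

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RebalanceConfigSketch {
    public static IgniteConfiguration configure() {
        // In-memory data region with page eviction enabled to avoid Ignite OOM.
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("default")
            .setMaxSize(4L * 1024 * 1024 * 1024) // 4 GB, illustrative
            .setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);

        // Cache that waits 10 minutes before rebalancing after a topology change,
        // so a briefly restarted node does not trigger a full data move.
        CacheConfiguration<Object, Object> cache = new CacheConfiguration<>("myCache")
            .setCacheMode(CacheMode.PARTITIONED)
            .setBackups(1)
            .setRebalanceDelay(10 * 60 * 1000L);

        return new IgniteConfiguration()
            .setDataStorageConfiguration(
                new DataStorageConfiguration().setDefaultDataRegionConfiguration(region))
            .setCacheConfiguration(cache);
    }
}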

On Thu, 15 Aug 2019 at 19:06, Abhishek Gupta (BLOOMBERG/ 731 LEX)
 wrote:
>
> Exactly - what is available for persistence, I was wondering if it is 
> available for in-mem only. So for now I'll just need to configure rebalance 
> mode to non-NONE and live with point 1.
>
> Thanks evgenii!
>
>
> From: e.zhuravlev...@gmail.com At: 08/15/19 11:37:16
> To: Abhishek Gupta (BLOOMBERG/ 731 LEX ) , user@ignite.apache.org
> Subject: Re: Rebalancing only to backups
>
> Hi Abhishek,
>
> That's how it works now if you have enabled Persistence. Actually, that's the 
> main reason why BaselineTopology was introduced - we don't want to move a lot 
> of data between nodes if we know that node will return soon after a failure: 
> https://apacheignite.readme.io/docs/baseline-topology
> If you want to force rebalance, you can manually change the BaselineTopology.
>
> As far as I know, BaselineTopology concept was also introduced for in-memory 
> caches and will be released as a part of Apache Ignite 2.8. Also, there will 
> be some configurable timeout, after which baseline topology will be changed 
> automatically if you want it.
>
> Best Regards,
> Evgenii
>
> Thu, 15 Aug 2019 at 17:16, Abhishek Gupta (BLOOMBERG/ 731 LEX) 
> :
>>
>> (pardon me if this mail is by chance a duplicate - it got bounced back when 
>> I sent it earlier from nabble)
>>
>> Hello,
>> I have 6 node grid and I've configured it with 1 backup. I want to have
>> partition rebalancing but only in the following way.
>> If one of the six nodes goes down, then some primary and backup partitions
>> go down with it but there is no data loss since the backup for those are
>> present on one of the other five nodes. So there is only a single copy of
>> these partitions.
>>
>> i. At this point I do not want the 5 nodes to rebalance all the partitions
>> amongst themselves such that each one has 1 primary and 1 backup
>> ii. When this 6th node comes back up, I want the partitions in the other
>> nodes which are living with only a single copy to hydrate this fresh 6th
>> one with copies of the partitions that had only one copy before.
>>
>> Why i? Because I don't want a situation of cascading OOMs
>> Why ii? Obvious reason so as to have the 2nd copy for all partitions.
>>
>> Is this possible? If not what's the best way to come close to this?
>>
>> Thanks,
>> Abhi
>
>


Re: Node failure with "Failed to write buffer." error

2019-08-23 Thread Maxim Muzafarov
Hello,

Did you change the IGNITE_WAL_MMAP system variable? (It is true by default.)
Can you also attach your Ignite configuration file?

I've checked the log you provided, and it seems to me that during the
WAL file rollover procedure the current WAL file is closed, but the
WalWriter thread corresponding to this file is not stopped. Usually
this works fine; why it does not work in your case I don't know.


On Fri, 23 Aug 2019 at 10:13, ihalilaltun  wrote:
>
> Hi Dmagda
>
> Here are all the log files that I can get from the server:
> ignite.zip
> gc.zip
> gc-logs-continued
>
>
>
> -
> İbrahim Halil Altun
> Senior Software Engineer @ Segmentify
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: questions

2019-08-23 Thread narges saleh
Hello Ilya

There are parallel streams inserting data for all the countries into
different nodes (and caches), and there are parallel queries against the
distributed database for different countries, aggregating the data, in
some cases inserting the data back, and in others returning results. Yes, for
a given query, only one or two caches might get hit. But if the volume of
data for a given city is too big, the query might hit multiple caches, and
hence my question: how do I keep these caches as close as possible to each
other?

What would be some of the ways to minimize the network hops? How can I keep
the data with the same affinity as close as possible to each other,
preferably on the same physical node or neighboring nodes (but across
multiple Ignite nodes and caches)?

Thanks, and I am sorry for dragging this on.


On Fri, Aug 23, 2019 at 5:19 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> I don't think that partitioning by country or city is a good idea, since
> this distribution will be very uneven.
>
> You can have different ways of minimizing network hops, while keeping
> distributed nature of your database. Database isn't really distributed when
> for a given city query, only one node is taking all the load and the rest
> is idle.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, 23 Aug 2019 at 13:15, narges saleh :
>
>> Hello Ilya,
>>  I agree with you that partitioning based on month was a bad example,
>> because most will be idle. Country or customer are better examples of my
>> case. There are limited number of them, but they are disproportionate and
>> they are always active. Let's take the country example. I need to search
>> and aggregate the volume of sales in each city and by country. I have a
>> couple of hundreds countries.
>> Let me ask a basic question.  If my queries/aggregations are based on
>> cities and countries, do I need to partition based on countries (or even
>> cities)?  I want to avoid network hops for my searches and aggregations as
>> much as possible (I do not want to slow down writes either, but I am aware of the
>> trade-off between reads/writes and replication and partitioning). What do I define
>> my affinity key on and what do I partition on?
>>
>> thanks again for your help.
>>
>> On Fri, Aug 23, 2019 at 4:03 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> Partitioning based on let's say user id is usually fair, because there
>>> usually are 100,000ths of users and neither of those owns disproportionate
>>> amount of data.
>>>
>>> Partitioning by month is especially bad, since in a given months, all of
>>> partitions will be basically idle save for one, and there would be a lot of
>>> contention.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Thu, 22 Aug 2019 at 19:31, narges saleh :
>>>
 I am not sure you can find real world examples where caches can be
 evenly partitioned, if the partitioning factor is an affinity key. I am
 comparing with the partitioning case in relational databases, say
 partitioning based on the month of the year. I definitely don't have 100s of
 departments but I do have 10s of departments, but departments are very
 disproportional in size.
 As for rebalancing case, the pods will be added to the system as the
 volume increases, so I'd assume that would prompt ignite to rebalance.

 On Thu, Aug 22, 2019 at 11:00 AM Ilya Kasnacheev <
 ilya.kasnach...@gmail.com> wrote:

> Hello!
>
> 1) No. Ignite only rebalances data when nodes are joining or leaving
> cluster.
> 2) Ignite's affinity is not really well suited to such detailed manual
> assignment. It is assumed that your cache has large number of partitions
> (e.g. 1024) and data is distributed evenly between all partitions. Having
> department as affinity key is suboptimal because there's not many
> departments and they usually vary in size. That's the kind of distribution
> that you want to avoid.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, 22 Aug 2019 at 18:37, narges saleh :
>
>> Thanks Ilya for replies.
>> 1)  Doesn't ignite rebalance the nodes if there are additional nodes
>> available and the data doesn't fit the cache current ignite node? 
>> Consider
>> a scenario where I have 100 pods on a physical node, assuming pod = 
>> ignite
>> node.
>> 2)  I am not sure what you mean by confining half of cache to one
>> cluster and another half to another node. If my affinity key is 
>> department
>> id, why can't I have department A on a partitioned cache, one partition 
>> on
>> one node in cluster A, and the other partition on another node on another
>> cluster.
>>
>> I might be misunderstanding the whole, and I'd appreciate
>> clarification.
>>
>> On Thu, Aug 22, 2019 at 6:52 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> 1) When there 

Cache Miss Using Thin Client

2019-08-23 Thread simon.keen
Hello,

I have one Spring Boot app running as a client node which uses 
SpringCacheManager + @Cacheable annotation on a service call. This is 
demonstrating expected read-through behaviour.

I have a second app where I'm trying to implement the same behaviour using the 
thin-client. This is able to successfully "get" entries put in the cache 
through this application but not those using the application above, even if the 
key appears to be the same.

Both applications are using a key class from the same dependency and are 
obviously populated with the same attributes. I've used the "query" method on 
the cache to retrieve all the cache entries, have verified they're using the 
same server node, the entries are there and so on.

Any ideas why the "get" method from thin-client cannot find entries "put" by 
the client node? Or, any suggestions on appropriate logging to assist diagnosis?

Thanks,

Simon.




Re: Capacity planning for production deployment on kubernetes

2019-08-23 Thread Denis Mekhanikov
Shiva,

What version of Ignite do you use?
Before version 2.7, Ignite used a different mechanism to limit the size of the 
WAL history. It used the DataStorageConfiguration#walHistorySize property, which is 
now deprecated. This is what's explained on the internal documentation 
page.

In Ignite versions starting from 2.7, the DataStorageConfiguration#maxWalArchiveSize 
property is used. It specifies the maximum size of the WAL history in bytes.
As you said, it’s 4 times the checkpoint buffer size. Since you didn’t change 
the size of the checkpoint buffer, by default it’s 1 GB in your case (rules for 
its calculation can be found here: 
https://apacheignite.readme.io/docs/durable-memory-tuning#section-checkpointing-buffer-size)
So, maximum WAL history size is 4 GB.
Plus you need to add WAL size itself which is (64 MB per segment) x (10 
segments) = 640 megabytes by default.
If you use Ignite 2.7 or newer, then 10 GB should be enough for WAL with 
archive. Failures should be investigated in this case.

But Ignite 2.6 and older use a different approach, so there is no strict 
limitation on the WAL archive size.
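
To make the arithmetic concrete, here is a sketch of setting those limits
explicitly on Ignite 2.7+ (the values mirror the defaults discussed above and
are illustrative, not a sizing recommendation):

import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;

public class WalSizingSketch {
    public static DataStorageConfiguration storage() {
        long gb = 1024L * 1024 * 1024;

        return new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
                .setName("default")
                .setPersistenceEnabled(true)
                .setMaxSize(4 * gb))                // 4 GB region, as in the test above
            // 10 segments x 64 MB = 640 MB for the active WAL (these are the defaults).
            .setWalSegments(10)
            .setWalSegmentSize(64 * 1024 * 1024)
            // Ignite 2.7+: cap the archive explicitly instead of relying on the
            // 4 x checkpoint-buffer default described above.
            .setMaxWalArchiveSize(4 * gb);
    }
}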

Denis
On 22 Aug 2019, 22:35 +0300, Denis Magda , wrote:
> Please share the whole log file. It might be the case that something goes 
> wrong with volumes you attached to Ignite pods.
>
> -
> Denis
>
>
> > On Thu, Aug 22, 2019 at 8:07 AM Shiva Kumar  
> > wrote:
> > > Hi Denis,
> > >
> > > Thanks for your response,
> > > yes in our test also we have seen OOM errors and pod crash.
> > > so we will follow the recommendation for RAM requirements and also I was 
> > > checking the Ignite documentation on disk space required for WAL + WAL 
> > > archive.
> > > here in this link  
> > > https://apacheignite.readme.io/docs/write-ahead-log#section-wal-archive
> > >
> > > it says: archive size is defined as 4 times the size of the checkpointing 
> > > buffer and checkpointing buffer is a function of the data region 
> > > (https://apacheignite.readme.io/docs/durable-memory-tuning#section-checkpointing-buffer-size)
> > >
> > > but in this link 
> > > https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-SubfoldersGeneration
> > >
> > > under the "Estimating disk space" section it explains how to estimate the 
> > > disk space required for WAL, but it is not clear; can you please help me 
> > > with the correct recommendation for calculating the disk space required for 
> > > WAL+WAL archive.
> > >
> > > In one of my testing, I configured 4GB for data region and 10GB for 
> > > WAL+WAL archive but our pods crashing as disk mounted for WAL+WAL archive 
> > > runs out of space.
> > >
> > > [ignite@ignite-cluster-ignite-node-2 ignite]$ df -h
> > > Filesystem      Size  Used Avail Use% Mounted on
> > > overlay         158G   39G  112G  26% /
> > > tmpfs            63G     0   63G   0% /dev
> > > tmpfs            63G     0   63G   0% /sys/fs/cgroup
> > > /dev/vda1       158G   39G  112G  26% /etc/hosts
> > > shm              64M     0   64M   0% /dev/shm
> > > /dev/vdq        9.8G  9.7G   44M 100% /opt/ignite/wal
> > > /dev/vdr         50G  1.4G   48G   3% /opt/ignite/persistence
> > > tmpfs            63G   12K   63G   1% 
> > > /run/secrets/kubernetes.io/serviceaccount
> > > tmpfs            63G     0   63G   0% /proc/acpi
> > > tmpfs            63G     0   63G   0% /proc/scsi
> > > tmpfs            63G     0   63G   0% /sys/firmware
> > >
> > >
> > > and this is the error message in ignite node:
> > >
> > > "ERROR","JVM will be halted immediately due to the failure: 
> > > [failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=class 
> > > o.a.i.IgniteCheckedException: Failed to archive WAL segment 
> > > [srcFile=/opt/ignite/wal/node00-37ea8ba6-3198-46a1-9e9e-38aff27ed9c9/0006.wal,
> > >  
> > > dstFile=/opt/ignite/wal/archive/node00-37ea8ba6-3198-46a1-9e9e-38aff27ed9c9/0236.wal.tmp]]]"
> > >
> > > > On Thu, Aug 22, 2019 at 8:04 PM Denis Mekhanikov 
> > > >  wrote:
> > > > > Shivakumar,
> > > > >
> > > > > Such allocation doesn’t allow full memory utilization, so it’s 
> > > > > possible, that nodes will crash because of out of memory errors.
> > > > > So, it’s better to follow the given recommendation.
> > > > >
> > > > > If you want us to investigate reasons of the failures, please provide 
> > > > > logs and configuration of the failed nodes.
> > > > >
> > > > > Denis
> > > > > On 21 Aug 2019, 16:17 +0300, Shiva Kumar , 
> > > > > wrote:
> > > > > > Hi all,
> > > > > > we are testing field use case before deploying in the field and we 
> > > > > > want to know whether below resource limits are suitable in 
> > > > > > production.
> > > > > > There are 3 nodes (3 pods on kubernetes) running. Each having below 
> > > > > > configuration
> > > > > >
> > > > > >                            DefaultDataRegion: 60GB
> > > > > >                                                 JVM: 32GB
> > > > > > Resource allocated for each contain

Re: Memory usage metric doesn't come down when memory is freed up

2019-08-23 Thread colinc
An update on this - the test works as expected on Ignite versions 2.6 and
earlier. It appears to be a bug introduced in Ignite 2.7. I have raised the
following jira ticket to track:

https://issues.apache.org/jira/browse/IGNITE-12096



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: .NET thin client multithreaded?

2019-08-23 Thread Alex Shapkin
Hello,

Is it possible that the huge increase in the response times comes from 
contention when multiple gRPC threads are using the same thin client (thus, the 
same ClientSocket) to communicate with the cluster?

Yes, that’s correct. Threads will share the same TCP connection by default.

But there is no such warning in the .NET thin client documentation 
(https://apacheignite-net.readme.io/docs/thin-client).

I think we need to update the docs to include that warning.

In the mean time we will use a thin client pool as recommended in the Java 
documentation to see if it improves the performance.

Well, in general yes, it should help you increase the performance.
Also, it's worth knowing how many server nodes you have in the cluster. Is your
data well-collocated?

A thin client utilizes a single connection to a single node, but the requested
data could be located on a different one, which can cause additional
network overhead.
Please, refer to the affinity awareness wiki page [1].

[1] - 
https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients
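
For illustration, a very small round-robin pool over the Java thin client could
look like the sketch below (pool size, addresses and error handling are up to
the application; the same idea applies to the .NET thin client):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

/** Minimal round-robin pool: each borrow returns one of N independent connections. */
public class ThinClientPool implements AutoCloseable {
    private final List<IgniteClient> clients = new ArrayList<>();
    private final AtomicInteger next = new AtomicInteger();

    public ThinClientPool(int size, String... addresses) {
        // Each IgniteClient owns its own socket, so requests on different
        // clients no longer serialize on a single communication channel.
        for (int i = 0; i < size; i++)
            clients.add(Ignition.startClient(new ClientConfiguration().setAddresses(addresses)));
    }

    /** Threads share the pool; each call is spread over the connections in turn. */
    public IgniteClient get() {
        return clients.get(Math.floorMod(next.getAndIncrement(), clients.size()));
    }

    @Override public void close() {
        for (IgniteClient client : clients) {
            try {
                client.close();
            } catch (Exception ignored) {
                // Best-effort shutdown.
            }
        }
    }
}

A caller would then do pool.get().cache("myCache").get(key), so concurrent
requests use several sockets instead of waiting on one.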

From: Eduard Llull
Sent: Friday, August 23, 2019 9:49 AM
To: user@ignite.apache.org
Subject: .NET thin client multithreaded?

Hello everyone,

We have just developed a gRPC service in .NET core that performs a bunch of 
cache gets for every RPC. We've been using the Apache.NET NuGet, starting the 
Ignite node in client mode (thick client), but we just changed it to use the 
thin client and we see much, much worse response times: from a 4ms average and 
15ms 95th percentile, to a 21ms average and 86ms 95th percentile, and the times 
get even worse under load: peaks of 115ms average, 1s 95th percentile.

We were expecting some level of degradation in the response times when 
changing from the thick to the thin client, but not as much. In fact, trying to 
reduce the impact, we've deployed an Ignite node in client mode on every host 
where we have our gRPC service deployed, and the gRPC service connects to the 
local Ignite node.

The gRPC service receives several tens of concurrent requests when under load, 
but we instantiate one single ApacheClient (Ignition.StartClient()) shared by 
all the threads that are serving the RPC requests. I've seen in the Java Thin 
Client documentation 
(https://apacheignite.readme.io/docs/java-thin-client-initialization-and-configuration#section-multithreading)
 the following:

Thin client is single-threaded and thread-safe. The only shared resource is the 
underlying communication channel, and only one thread reads/writes to the 
channel while other threads are waiting for the operation to complete.
Use multiple threads with thin client connection pool to improve performance
Presently thin client has no feature to create multiple threads to improve 
throughput. You can create multiple threads by getting thin client connection 
from a pool in your application to improve throughput.
But there is no such warning in the .NET thin client documentation 
(https://apacheignite-net.readme.io/docs/thin-client).

Is it possible that the huge increase in the response times comes from 
contention when multiple gRPC threads are using the same thin client (thus, the 
same ClientSocket) to communicate with the cluster?

In the meantime we will use a thin client pool, as recommended in the Java 
documentation, to see if it improves the performance.


Thank you very much.



Re: Using Ignite as blob store?

2019-08-23 Thread colinc
From anecdotal experience of storing larger objects (up to, say, 10MB) in
Ignite, I find that the overall access performance is significantly better
than storing lots of small objects. The main thing to watch out for is that
very large objects can cause unbalanced data distribution, similar to
over-use of affinity.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: questions

2019-08-23 Thread Ilya Kasnacheev
Hello!

I don't think that partitioning by country or city is a good idea, since
this distribution will be very uneven.

You can have different ways of minimizing network hops while keeping the
distributed nature of your database. The database isn't really distributed when,
for a given city's query, only one node is taking all the load and the rest
are idle.

Regards,
-- 
Ilya Kasnacheev


Fri, 23 Aug 2019 at 13:15, narges saleh :

> Hello Ilya,
>  I agree with you that partitioning based on month was a bad example,
> because most will be idle. Country or customer are better examples of my
> case. There are limited number of them, but they are disproportionate and
> they are always active. Let's take the country example. I need to search
> and aggregate the volume of sales in each city and by country. I have a
> couple of hundreds countries.
> Let me ask a basic question.  If my queries/aggregations are based on
> cities and countries, do I need to partition based on countries (or even
> cities)?  I want to avoid network hops for my searches and aggregations as
> much as possible (I do not want to slow down writes either, but I am aware of the
> trade-off between reads/writes and replication and partitioning). What do I define
> my affinity key on and what do I partition on?
>
> thanks again for your help.
>
> On Fri, Aug 23, 2019 at 4:03 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> Partitioning based on let's say user id is usually fair, because there
>> usually are 100,000ths of users and neither of those owns disproportionate
>> amount of data.
>>
>> Partitioning by month is especially bad, since in a given months, all of
>> partitions will be basically idle save for one, and there would be a lot of
>> contention.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Thu, 22 Aug 2019 at 19:31, narges saleh :
>>
>>> I am not sure you can find real world examples where caches can be
>>> evenly partitioned, if the partitioning factor is an affinity key. I am
>>> comparing with the partitioning case in relational databases, say
>>> partitioning based on the month of the year. I definitely don't have 100s of
>>> departments but I do have 10s of departments, but departments are very
>>> disproportional in size.
>>> As for rebalancing case, the pods will be added to the system as the
>>> volume increases, so I'd assume that would prompt ignite to rebalance.
>>>
>>> On Thu, Aug 22, 2019 at 11:00 AM Ilya Kasnacheev <
>>> ilya.kasnach...@gmail.com> wrote:
>>>
 Hello!

 1) No. Ignite only rebalances data when nodes are joining or leaving
 cluster.
 2) Ignite's affinity is not really well suited to such detailed manual
 assignment. It is assumed that your cache has large number of partitions
 (e.g. 1024) and data is distributed evenly between all partitions. Having
 department as affinity key is suboptimal because there's not many
 departments and they usually vary in size. That's the kind of distribution
 that you want to avoid.

 Regards,
 --
 Ilya Kasnacheev


 Thu, 22 Aug 2019 at 18:37, narges saleh :

> Thanks Ilya for replies.
> 1)  Doesn't ignite rebalance the nodes if there are additional nodes
> available and the data doesn't fit the cache current ignite node? Consider
> a scenario where I have 100 pods on a physical node, assuming pod = ignite
> node.
> 2)  I am not sure what you mean by confining half of cache to one
> cluster and another half to another node. If my affinity key is department
> id, why can't I have department A on a partitioned cache, one partition on
> one node in cluster A, and the other partition on another node on another
> cluster.
>
> I might be misunderstanding the whole, and I'd appreciate
> clarification.
>
> On Thu, Aug 22, 2019 at 6:52 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> 1) When there is an overflow, either page eviction kicks in, or, if
>> it is disabled, you get an IgniteOOM, after which the node is no longer
>> usable. Please avoid overflowing any data regions since there's no 
>> graceful
>> handling currently.
>> 2) I don't think so. You can't easily confine half of cache's data to
>> one cluster group and another half to other group.
>>
>> Such scenarios are not recommended. We expect that all partitions
>> have same amount of data. Not that there are a few gargantuan partitions
>> that don't fit in a single node.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Tue, 20 Aug 2019 at 06:29, narges saleh :
>>
>>> Hello All,
>>>
>>> I'd appreciate your answers to my questions.
>>>
>>> 1) Assuming I use affinity key among 4 caches, and they all end up
>>> on the same ignite node. What happens where is an overflow? Does the
>>> overflow data end up on a joined node? How do I keep the related data 
>>> from
>>> a

Re: questions

2019-08-23 Thread narges saleh
Hello Ilya,
 I agree with you that partitioning based on month was a bad example,
because most will be idle. Country or customer are better examples of my
case. There are limited number of them, but they are disproportionate and
they are always active. Let's take the country example. I need to search
and aggregate the volume of sales in each city and by country. I have a
couple of hundreds countries.
Let me ask a basic question.  If my queries/aggregations are based on
cities and countries, do I need to partition based on countries (or even
cities)?  I want to avoid network hops for my searches and aggregations as
much as possible (I do not want to slow down writes either, but I am aware of the
trade-off between reads/writes and replication and partitioning). What do I define
my affinity key on and what do I partition on?

thanks again for your help.

On Fri, Aug 23, 2019 at 4:03 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> Partitioning based on let's say user id is usually fair, because there
> usually are 100,000ths of users and neither of those owns disproportionate
> amount of data.
>
> Partitioning by month is especially bad, since in a given months, all of
> partitions will be basically idle save for one, and there would be a lot of
> contention.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, 22 Aug 2019 at 19:31, narges saleh :
>
>> I am not sure you can find real world examples where caches can be evenly
> partitioned, if the partitioning factor is an affinity key. I am comparing
> with the partitioning case in relational databases, say partitioning based on
> the month of the year. I definitely don't have 100s of departments but I do
>> have 10s of departments, but departments are very disproportional in size.
>> As for rebalancing case, the pods will be added to the system as the
>> volume increases, so I'd assume that would prompt ignite to rebalance.
>>
>> On Thu, Aug 22, 2019 at 11:00 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> 1) No. Ignite only rebalances data when nodes are joining or leaving
>>> cluster.
>>> 2) Ignite's affinity is not really well suited to such detailed manual
>>> assignment. It is assumed that your cache has large number of partitions
>>> (e.g. 1024) and data is distributed evenly between all partitions. Having
>>> department as affinity key is suboptimal because there's not many
>>> departments and they usually vary in size. That's the kind of distribution
>>> that you want to avoid.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Thu, 22 Aug 2019 at 18:37, narges saleh :
>>>
 Thanks Ilya for replies.
 1)  Doesn't ignite rebalance the nodes if there are additional nodes
 available and the data doesn't fit the cache current ignite node? Consider
 a scenario where I have 100 pods on a physical node, assuming pod = ignite
 node.
 2)  I am not sure what you mean by confining half of cache to one
 cluster and another half to another node. If my affinity key is department
 id, why can't I have department A on a partitioned cache, one partition on
 one node in cluster A, and the other partition on another node on another
 cluster.

 I might be misunderstanding the whole, and I'd appreciate clarification.

 On Thu, Aug 22, 2019 at 6:52 AM Ilya Kasnacheev <
 ilya.kasnach...@gmail.com> wrote:

> Hello!
>
> 1) When there is an overflow, either page eviction kicks in, or, if it
> is disabled, you get an IgniteOOM, after which the node is no longer
> usable. Please avoid overflowing any data regions since there's no 
> graceful
> handling currently.
> 2) I don't think so. You can't easily confine half of cache's data to
> one cluster group and another half to other group.
>
> Such scenarios are not recommended. We expect that all partitions have
> same amount of data. Not that there are a few gargantuan partitions that
> don't fit in a single node.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Tue, 20 Aug 2019 at 06:29, narges saleh :
>
>> Hello All,
>>
>> I'd appreciate your answers to my questions.
>>
>> 1) Assuming I use affinity key among 4 caches, and they all end up on
>> the same ignite node. What happens where is an overflow? Does the 
>> overflow
>> data end up on a joined node? How do I keep the related data from all the
>> caches close to each other when the volume of exceeds a single node?
>>
>> 2) Is there a concept of cluster affinity, meaning having a cluster
>> group defined based on some affinity key? For example, if I have two
>> departments A and B, can I have a cluster group for department A and
>> another for department B?
>>
>> Thanks,
>> Narges
>>
>


Re: Using Ignite as blob store?

2019-08-23 Thread Pavel Kovalenko
Denis,

You can't set the page size greater than 16 KB due to our page memory
limitations.
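
For reference, the page size is configured on DataStorageConfiguration; a sketch
of setting it to that maximum (values larger than 16 KB are rejected at startup):

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PageSizeSketch {
    public static IgniteConfiguration configure() {
        return new IgniteConfiguration()
            .setDataStorageConfiguration(new DataStorageConfiguration()
                // 16 KB is the current maximum accepted page size.
                .setPageSize(16 * 1024));
    }
}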

Thu, 22 Aug 2019 at 22:34, Denis Magda :

> How about setting page size to more KBs or MBs based on the average value?
> That should work perfectly fine.
>
> -
> Denis
>
>
> On Thu, Aug 22, 2019 at 8:11 AM Shane Duan  wrote:
>
>> Thanks, Ilya. The blob size varies from a few KBs to a few MBs.
>>
>> Cheers,
>> Shane
>>
>>
>> On Thu, Aug 22, 2019 at 5:02 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> How large are these blobs? Ignite is going to divide blobs into <4k
>>> chunks. We have no special optimizations for storing large key-value pairs.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Thu, 22 Aug 2019 at 02:53, Shane Duan :
>>>
 Hi Igniters, is it a good idea to use Ignite(with persistence) as a
 blob store? I did run some testing with a small dataset, and it looks
 performing okay, even with a small off-heap mem for the data region.

 Thanks!

 Shane

>>>


Re: questions

2019-08-23 Thread Ilya Kasnacheev
Hello!

Partitioning based on let's say user id is usually fair, because there
usually are 100,000ths of users and neither of those owns disproportionate
amount of data.

Partitioning by month is especially bad, since in a given months, all of
partitions will be basically idle save for one, and there would be a lot of
contention.

Regards,
-- 
Ilya Kasnacheev


Thu, 22 Aug 2019 at 19:31, narges saleh :

> I am not sure you can find real world examples where caches can be evenly
> partitioned, if the partitioning factor is an affinity key. I am comparing
> with the partitioning case in relational databases, say partitioning based on
> the month of the year. I definitely don't have 100s of departments but I do
> have 10s of departments, but departments are very disproportional in size.
> As for rebalancing case, the pods will be added to the system as the
> volume increases, so I'd assume that would prompt ignite to rebalance.
>
> On Thu, Aug 22, 2019 at 11:00 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> 1) No. Ignite only rebalances data when nodes are joining or leaving
>> cluster.
>> 2) Ignite's affinity is not really well suited to such detailed manual
>> assignment. It is assumed that your cache has large number of partitions
>> (e.g. 1024) and data is distributed evenly between all partitions. Having
>> department as affinity key is suboptimal because there's not many
>> departments and they usually vary in size. That's the kind of distribution
>> that you want to avoid.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Thu, 22 Aug 2019 at 18:37, narges saleh :
>>
>>> Thanks Ilya for replies.
>>> 1)  Doesn't ignite rebalance the nodes if there are additional nodes
>>> available and the data doesn't fit the cache current ignite node? Consider
>>> a scenario where I have 100 pods on a physical node, assuming pod = ignite
>>> node.
>>> 2)  I am not sure what you mean by confining half of cache to one
>>> cluster and another half to another node. If my affinity key is department
>>> id, why can't I have department A on a partitioned cache, one partition on
>>> one node in cluster A, and the other partition on another node on another
>>> cluster.
>>>
>>> I might be misunderstanding the whole, and I'd appreciate clarification.
>>>
>>> On Thu, Aug 22, 2019 at 6:52 AM Ilya Kasnacheev <
>>> ilya.kasnach...@gmail.com> wrote:
>>>
 Hello!

 1) When there is an overflow, either page eviction kicks in, or, if it
 is disabled, you get an IgniteOOM, after which the node is no longer
 usable. Please avoid overflowing any data regions since there's no graceful
 handling currently.
 2) I don't think so. You can't easily confine half of cache's data to
 one cluster group and another half to other group.

 Such scenarios are not recommended. We expect that all partitions have
 same amount of data. Not that there are a few gargantuan partitions that
 don't fit in a single node.

 Regards,
 --
 Ilya Kasnacheev


 Tue, 20 Aug 2019 at 06:29, narges saleh :

> Hello All,
>
> I'd appreciate your answers to my questions.
>
> 1) Assuming I use affinity key among 4 caches, and they all end up on
> the same ignite node. What happens where is an overflow? Does the overflow
> data end up on a joined node? How do I keep the related data from all the
> caches close to each other when the volume of exceeds a single node?
>
> 2) Is there a concept of cluster affinity, meaning having a cluster
> group defined based on some affinity key? For example, if I have two
> departments A and B, can I have a cluster group for department A and
> another for department B?
>
> Thanks,
> Narges
>



Re: Using Ignite as blob store?

2019-08-23 Thread colinc
I understand from this post:
https://stackoverflow.com/questions/50116444/unable-to-increase-pagesize/50121410#50121410

that the maximum page size is 16K. Is that still true?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Memory usage metric doesn't come down when memory is freed up

2019-08-23 Thread colinc
Yes - avoiding an Ignite out of memory condition is exactly what I'm trying
to do. The question is - how can I do this if memory metrics aren't
reliable?

Does anyone have experience of successfully monitoring Ignite memory
consumption as it contracts? Or does anyone have any more general thoughts on
how to avoid IOOM other than using native persistence?

To be clear, the problem with the metrics is not caused by the IOOM
condition. See the amended example below:

package mytest;

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.failure.NoOpFailureHandler;
import org.junit.Test;

public class MemoryTest2 {

    private static final String CACHE_NAME = "cache";
    private static final String DEFAULT_MEMORY_REGION = "Default_Region";
    private static final long MEM_SIZE = 100L * 1024 * 1024;

    @Test
    public void testOOM() throws InterruptedException {
        try (Ignite ignite = startIgnite("IgniteMemoryMonitorTest1")) {
            fillDataRegion(ignite);
            CacheConfiguration<String, String> cfg = new CacheConfiguration<>(CACHE_NAME);
            cfg.setStatisticsEnabled(true);
            IgniteCache<String, String> cache = ignite.getOrCreateCache(cfg);

            // Clear all entries from the cache to free up memory
            memUsed(ignite);
            cache.clear();
            cache.removeAll();
            cache.put("Key", "Value");
            memUsed(ignite);
            cache.destroy();
            Thread.sleep(5000);

            // Should now report close to 0% but reports 59% still
            memUsed(ignite);
        }
    }

    private Ignite startIgnite(String instanceName) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setIgniteInstanceName(instanceName);
        cfg.setDataStorageConfiguration(createDataStorageConfiguration());
        cfg.setFailureHandler(new NoOpFailureHandler());
        return Ignition.start(cfg);
    }

    private DataStorageConfiguration createDataStorageConfiguration() {
        return new DataStorageConfiguration()
                .setDefaultDataRegionConfiguration(
                        new DataRegionConfiguration()
                                .setName(DEFAULT_MEMORY_REGION)
                                .setInitialSize(MEM_SIZE)
                                .setMaxSize(MEM_SIZE)
                                .setMetricsEnabled(true));
    }

    private void fillDataRegion(Ignite ignite) {
        byte[] megabyte = new byte[1024 * 1024];
        IgniteCache<Integer, byte[]> cache = ignite.getOrCreateCache(CACHE_NAME);
        for (int i = 0; i < 50; i++) {
            cache.put(i, megabyte);
            memUsed(ignite);
        }
    }

    private void memUsed(Ignite ignite) {
        DataRegionConfiguration defaultDataRegionCfg = ignite.configuration()
                .getDataStorageConfiguration()
                .getDefaultDataRegionConfiguration();
        String regionName = defaultDataRegionCfg.getName();
        DataRegionMetrics metrics = ignite.dataRegionMetrics(regionName);
        float usedMem = metrics.getPagesFillFactor()
                * metrics.getTotalAllocatedPages() * metrics.getPageSize();
        float pctUsed = 100 * usedMem / defaultDataRegionCfg.getMaxSize();
        System.out.println("Memory used: " + pctUsed + "%");
    }
}



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Node failure with "Failed to write buffer." error

2019-08-23 Thread ihalilaltun
Hi Dmagda

Here are all the log files that I can get from the server:
ignite.zip
gc.zip
gc-logs-continued



-
İbrahim Halil Altun
Senior Software Engineer @ Segmentify
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/