You can try to create the thread on the
developer user list:
http://apache-ignite-developers.2346864.n4.nabble.com/Apache-Ignite-2-7-release-td34076i40.html
BR,
Andrei
On 1/29/2020 1:58 AM, Abhishek Gupta (BLOOMBERG/ 919 3RD A) wrote:
Hello!
I've got a 6 node Ignite 2.7.5 grid. I had this strange issue where
multiple nodes hit the following exception -
[ERROR] [sys-stripe-53-#54] GridCacheIoManager - Failed to process message
[senderId=f4a736b6-cfff-4548-a8b4-358d54d19ac6, messageType=class
o.a.i.i.processors.cache.dis
The community would welcome a pull request.
Regards,
Stephen
On 28 Oct 2019, at 12:14, Abhishek Gupta (BLOOMBERG/ 919 3RD A)
wrote:
Thanks Ilya for your response.
Even if my value objects were not large, nothing stops clients from doing a
getAll with say 100,000 keys. Having some kind of throttling would still be
useful.
Ilya Kasnacheev
On Mon, 21 Oct 2019 at 23:17, Abhishek Gupta (BLOOMBERG/ 731 LEX) wrote:
In my otherwise stably running grid (on 2.7.5) I sometimes see intermittent
GridDhtPartitionsExchangeFuture warnings. This warning occurs periodically
and then goes away after some time. I couldn't find any documentation or other
threads about this warning and its implications.
Thanks Ilya for your response.
Even if my value objects were not large, nothing stops clients from doing a
getAll with say 100,000 keys. Having some kind of throttling would still be
useful.
-Abhishek
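Absent server-side throttling, the batching discussed above can be done on the client. The sketch below is illustrative plain Java, not an Ignite API: it splits an oversized key set into bounded chunks so no single getAll call carries more than maxBatch keys, then merges the partial results. All names (BatchedGet, getAllBatched, maxBatch) are made up for this example.

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

// Hypothetical client-side batching helper. The getAll parameter stands in
// for the cache's bulk-read call (e.g. a method reference to cache::getAll).
public class BatchedGet {
    public static <K, V> Map<K, V> getAllBatched(
            Function<Set<K>, Map<K, V>> getAll, Collection<K> keys, int maxBatch) {
        Map<K, V> result = new HashMap<>();
        Set<K> chunk = new LinkedHashSet<>();
        for (K k : keys) {
            chunk.add(k);
            if (chunk.size() == maxBatch) {      // chunk full: fetch and reset
                result.putAll(getAll.apply(chunk));
                chunk = new LinkedHashSet<>();
            }
        }
        if (!chunk.isEmpty())                    // fetch the remainder
            result.putAll(getAll.apply(chunk));
        return result;
    }
}
```

With maxBatch tuned to what the grid was benchmarked for (100 keys in this thread), an errant caller asking for 100,000 keys produces many small requests instead of one huge response buffer.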
- Original Message -
From: Ilya Kasnacheev
To: ABHISHEK GUPTA
CC: user
Hello,
I've benchmarked my grid for users (clients) to do getAll with up to 100
keys at a time. My value objects tend to be quite large, and my worry is that
errant clients might at times do a getAll with a larger number of
keys - say 1,000. If that happens I worry about GC issues/hum
o Map.
Regards,
--
Ilya Kasnacheev
On Mon, 21 Oct 2019 at 19:15, Abhishek Gupta (BLOOMBERG/ 731 LEX) wrote:
I should have mentioned I'm using String->BinaryObject in my cache. My binary
object itself has a large number of field->value pairs (a few thousand). As I
run my ingestion jobs using d
In my otherwise stably running grid (on 2.7.5) I sometimes see intermittent
GridDhtPartitionsExchangeFuture warnings. This warning occurs periodically
and then goes away after some time. I couldn't find any documentation or other
threads about this warning and its implications.
* What is t
rg/apache/ignite/IgniteDataStreamer.html
Regards,
Anton
From: Abhishek Gupta (BLOOMBERG/ 731 LEX)
Sent: Thursday, October 17, 2019 1:29 AM
To: user@ignite.apache.org
Subject: Pending Requests queue bloating
Hello,
I'm using G1GC with a 24G heap on each of the 6 nodes in my grid. I saw an
issue while ingesting
g the minimal
changes I need to make to patch 2.7.5. Or work around it?
Thanks,
Abhishek
From: user@ignite.apache.org At: 10/16/19 21:14:10To: Abhishek Gupta
(BLOOMBERG/ 731 LEX ) , user@ignite.apache.org
Subject: RE: Pending Requests queue bloating
Hello,
First of all, what is the
Hello,
I'm using datastreamers to ingest large amounts of data in batches, so the
load on the grid is pretty spiky. Sometimes I see pretty heavy GC activity, and
that causes the ingestion to slow down on the grid, but the client continues to
pump data, which makes the GC pauses worse because I
Thanks,
Abhishek
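The backpressure idea in this thread (keep the client from outrunning a server that is stalled in GC) can be sketched with a plain semaphore. This is an illustrative pattern, not an Ignite API; all names are made up. (IgniteDataStreamer itself bounds outstanding work through settings such as its per-node buffer size, which is the usual first knob to try.)

```java
import java.util.concurrent.Semaphore;

// Hypothetical sender-side throttle: a permit is taken per batch before
// sending and returned when the server acknowledges it, so at most
// maxInFlight batches are ever outstanding. When the server stalls (e.g.
// in a GC pause) acks stop, permits run out, and submit() blocks the
// producer instead of letting unacknowledged data pile up.
public class IngestThrottle {
    private final Semaphore permits;

    public IngestThrottle(int maxInFlight) {
        permits = new Semaphore(maxInFlight);
    }

    /** Blocks until a permit is free, then sends the batch. */
    public void submit(Runnable sendBatch) throws InterruptedException {
        permits.acquire();
        sendBatch.run();
    }

    /** Called when the server acknowledges a batch. */
    public void ack() {
        permits.release();
    }

    public int remainingCapacity() {
        return permits.availablePermits();
    }
}
```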
From: ilya.kasnach...@gmail.com At: 09/26/19 11:33:36To: Abhishek Gupta
(BLOOMBERG/ 731 LEX )
Cc: user@ignite.apache.org
Subject: Re: Grid suddenly went in bad state
Hello!
"Failed to read data from remote connection" in absence of other errors points
to potential
Attached now.
From: Abhishek Gupta (BLOOMBERG/ 731 LEX) At: 09/19/19 18:17:18To:
user@ignite.apache.org
Subject: Grid suddenly went in bad state
Hello,
I've got a 6 node grid with maxSize (dataregionconfig) set to 300G each.
The grid seemed to be performing normally until at one point it started logging
"Partition states validation has failed for group" warnings.
Hello,
I've got a 6 node grid with maxSize (dataregionconfig) set to 300G each.
The grid seemed to be performing normally until at one point it started logging
"Partition states validation has failed for group" warning - see attached log
file. This kept happening for about 3 minutes and t
Thanks Ilya! Good to have confirmation.
From: ilya.kasnach...@gmail.com At: 09/11/19 11:57:16To: Abhishek Gupta
(BLOOMBERG/ 731 LEX )
Cc: user@ignite.apache.org
Subject: Re: Grid failure on frequent cache creation/destroying
Hello!
I'm afraid that's https://issues.apache.org/j
Hello,
We have a grid of 6 nodes with a main cache. We noticed something
interesting today while regular ingestion was running against the mainCache.
We have an operational tool that creates and destroys a cache
(tempCacheByExplorerApp) using the REST API on each of the 6 nodes. While doing
Hello,
I'm using ZK-based discovery for my 6 node grid. It had been working smoothly
for a while until suddenly my ZK node went OOM. It turns out there were
thousands of znodes, many holding ~1M of data each, plus there was suddenly a
flood of ZK requests (the tx log was huge).
One symptom on the grid to
Exactly - that's what is available with persistence; I was wondering whether it
is available for in-memory only. So for now I'll just need to configure the
rebalance mode to non-NONE and live with point 1.
Thanks, Evgenii!
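For the archive, a sketch of the cache-side settings this thread touches on, assuming the Ignite 2.x Java configuration API; the cache name and delay value here are made up:

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch (Ignite 2.x): one backup per partition, with rebalancing left
// enabled (ASYNC) so a lost node's data is re-created elsewhere. With
// CacheRebalanceMode.NONE no rebalancing happens at all, which is why the
// thread settles on a non-NONE mode. setRebalanceDelay() can postpone
// rebalancing so a short node restart doesn't trigger a full reshuffle.
CacheConfiguration<String, Object> ccfg = new CacheConfiguration<>("mainCache");
ccfg.setCacheMode(CacheMode.PARTITIONED);
ccfg.setBackups(1);
ccfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
ccfg.setRebalanceDelay(60_000); // wait 60 s after a topology change
```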
From: e.zhuravlev...@gmail.com At: 08/15/19 11:37:16To: Abhishek Gupta
(BLOOMBERG
(pardon me if this mail is by chance a duplicate - it got bounced back when I
sent it earlier from nabble)
Hello,
I have 6 node grid and I've configured it with 1 backup. I want to have
partition rebalancing but only in the following way.
If one of the six nodes goes down, then some primary
I'm using datastreamers to ingest data from files into the cache. I need
to do an 'upsert' on the data being ingested, therefore I'm using a
streamReceiver too.
See the attached Java class and log snippet. When we run the code calling
addData on the datastreamer, after a while we start seeing
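The per-key 'upsert' logic a stream receiver would apply can be sketched as a plain merge function. This is illustrative Java, not the Ignite StreamReceiver interface itself, and the class name is made up: incoming field/value pairs overwrite matching fields while fields the update doesn't mention survive.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative upsert merge, the kind of logic a stream receiver runs per
// key: start from the existing value (or empty if the key is new), then let
// incoming fields overwrite. The inputs themselves are never mutated.
public class UpsertMerge {
    public static Map<String, Object> upsert(Map<String, Object> existing,
                                             Map<String, Object> incoming) {
        Map<String, Object> merged =
            existing == null ? new HashMap<>() : new HashMap<>(existing);
        merged.putAll(incoming);
        return merged;
    }
}
```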
Yes, I do have a server node hosted at IP address/port
10.133.71.210:47500; I started it by running ignite.sh.
It does not seem to be working. Any ideas?
-Abhishek
On Fri, Jul 7, 2017 at 4:24 AM, vkulichenko
wrote:
> You're starting a client node and it connects to the server cluster
Hi,
Please verify me