nested classes.
On Tue, Jul 3, 2018, 4:20 AM Denis Mekhanikov wrote:
> David,
>
> So, the problem is that the same class is loaded multiple times and it
> wastes the metaspace, right?
> Could you share a reproducer?
>
> Denis
>
> Tue, 3 Jul 2018 at 0:58, David Harv
We have a custom stream receiver that makes affinity calls. This all
functions properly, but we see a very large number of the following
messages for the same two classes. We also just tripped a 2GB limit on
Metaspace size, which we had come close to in the past.
[18:41:50,365][INFO][pub-#6954%Grid
Denis does have a point. When we were trying to run using GP2 storage,
the cluster would simply lock up for an hour. Once we moved to local SSDs
on i3 instances those issues went away (but we needed 2.5 for the streaming
rate to hold up, since we had a lot of data loaded). The i3
instances a
transactions are easy to use: see the examples under
org.apache.ignite.examples.datagrid.store.auto.
We use them in the stream receiver. You simply bracket the get/put in
the transaction, but use a timeout, then bracket that with an "until done"
while loop, perhaps adding a sleep to back off.
We ended up
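A minimal sketch of that retry pattern, assuming the cache is TRANSACTIONAL; the class name, the 5 s timeout, the 100 ms backoff, and the counter-style update are all illustrative, not from the original code:

import java.util.Collection;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.stream.StreamReceiver;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class RetryingReceiver implements StreamReceiver<Long, Long> {
    @Override public void receive(IgniteCache<Long, Long> cache,
        Collection<Map.Entry<Long, Long>> entries) {
        Ignite ignite = cache.unwrap(Ignite.class);

        for (Map.Entry<Long, Long> e : entries) {
            boolean done = false;

            // "Until done" loop: retry the whole transaction if it fails or times out.
            while (!done) {
                try (Transaction tx = ignite.transactions().txStart(
                    TransactionConcurrency.PESSIMISTIC,
                    TransactionIsolation.REPEATABLE_READ,
                    5_000,   // transaction timeout, ms
                    0)) {
                    // Bracket the get/put inside the transaction.
                    Long cur = cache.get(e.getKey());
                    cache.put(e.getKey(), cur == null ? e.getValue() : cur + e.getValue());

                    tx.commit();
                    done = true;
                }
                catch (Exception ex) {
                    // Back off briefly before retrying.
                    try {
                        Thread.sleep(100);
                    }
                    catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        }
    }
}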
You can start a transaction in the stream receiver to make it atomic.
On Fri, Jun 29, 2018, 1:02 PM breischl wrote:
> StreamTransformer does an invoke() pretty much exactly like what I'm doing,
> so that would not seem to change anything.
>
>
> https://github.com/apache/ignite/blob/master/module
You must increase the Linux NOFILE ulimit when running Ignite. The
documentation describes how to do this.
On Sun, Jun 24, 2018, 12:47 PM 胡海麟 wrote:
> Hi,
>
> Re-posting this message because my previous attempt to paste my logs failed.
>
> I have been getting repeated "Too many open files" exceptions for some time.
> ==
Is https://issues.apache.org/jira/browse/IGNITE-8780 a regression in 2.5?
On Thu, Jun 14, 2018 at 7:03 AM, Pavel Kovalenko wrote:
> DocDVZ,
>
> Most probably you have hit the following issue:
> https://issues.apache.org/jira/browse/IGNITE-8780.
> You can try to remove the END file marker, in this
Certain kinds of requests (e.g., remote calls) carry enough information for
the receiver of the message to do peer class loading. The main purpose of
peer class loading is to avoid the need to restart the server to install
software, and the peer class loading is therefore done by the server
reque
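For reference, peer class loading is opt-in on every node; a minimal sketch of enabling it in code (the CONTINUOUS deployment mode here is illustrative, not something the messages above specified):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DeploymentMode;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PeerClassLoadingConfig {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration()
            // Must be enabled on all nodes for closures/tasks to be peer-loaded.
            .setPeerClassLoadingEnabled(true)
            .setDeploymentMode(DeploymentMode.CONTINUOUS);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Remote calls sent from this node can now carry the class
            // metadata needed for the receiver to peer-load the code.
        }
    }
}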
We have a 2.4 Ignite persistence cluster on AWS with 8 i3.xlarge nodes,
where the default memory region size is 160GB. When loading a large
amount of data through a custom StreamReceiver, over time we see the
throughput decrease, as well as network and SSD I/O, and one node with
higher CPU usage
We built a cluster with 8 nodes using Ignite persistence and 1 backup, and
had two nodes fail at different times: the first because its storage did not
get mounted and it ran out of space early, and the second because an SSD
failed. There are some things that we could have done better, but this event
brings up th
The biggest thing for us using Ignite Persistence on AWS was finding that
we needed to use i3 instances for the SSD write performance. Ignite
persistence on GP2 was a poor experience.
On Tue, May 15, 2018 at 2:43 AM, John Gardner
wrote:
> We've been running the GridGain version of Ignite 8
Are you trying to stop the cluster or a node? If the former, then you can
use control.sh to deactivate the cluster before stopping the nodes.
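The same deactivation can be done from code; a minimal sketch, assuming the 2.x cluster API and an illustrative client config file name:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class DeactivateCluster {
    public static void main(String[] args) {
        Ignition.setClientMode(true);

        // "client-config.xml" is a placeholder for your client configuration.
        try (Ignite ignite = Ignition.start("client-config.xml")) {
            // Deactivate before stopping the server nodes so the final
            // checkpoint completes and the nodes shut down cleanly.
            ignite.cluster().active(false);
        }
    }
}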
On Tue, May 8, 2018, 7:39 PM Randall Harmon
wrote:
> @dharvey - no, cluster is not deactivated.
>
> @anton - 26gb of data is under ignite data directory
Have you deactivated the cluster, e.g., using visor?
On Mon, May 7, 2018 at 9:08 PM, Randall Harmon
wrote:
>
> What is the recommended way to signal Ignite to shutdown cleanly, so that
> WAL file doesn't have to be replayed (?) when Ignite restarts? We are
> loading a large set of data from mul
exception if at least one future has failed.
>
> Could you please clarify what kind of flaw you suspect?
>
> As for callback added by listen(), it is called right _after_ the future
> gets to 'done' state, thus flush() completion does not guarantee callback's
>
orks.
>
> Yakov Zhdanov
>
> Thu, 26 Apr 2018, 0:30 David Harvey :
>
>> We had assumed that if we did, in one thread:
>> IgniteFuture f = streamer.addData(...);
>> f.listen(...)
>>
>> streamer.flush();
>>
>> assert( a
We had assumed that if we did, in one thread:
IgniteFuture f = streamer.addData(...);
f.listen(...)
streamer.flush();
assert( all the prior futures have completed); << this triggered.
I can't determine if this is a bug, or just that the description could use
improvement.
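One way to make that assertion hold is to track the listen() callbacks explicitly, since flush() waits for the addData() futures but not for their callbacks; a minimal sketch with an illustrative cache name (not from the original thread):

import java.util.concurrent.CountDownLatch;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteFuture;

public class FlushVsListen {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("myCache");

            int batches = 10;
            CountDownLatch callbacksDone = new CountDownLatch(batches);

            try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
                for (long i = 0; i < batches; i++) {
                    IgniteFuture<?> f = streamer.addData(i, "value-" + i);

                    // The callback runs right after the future completes,
                    // which may be after flush() has already returned.
                    f.listen(fut -> callbacksDone.countDown());
                }

                streamer.flush();
            }

            // flush() alone does not guarantee the callbacks have run.
            callbacksDone.await();
        }
    }
}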
I've had problems with the visor cache command failing when a client is
connected, which was related to TCP port firewalls. Visor was trying to
make a connection that was blocked. I don't remember whether top failed.
On Mon, Apr 9, 2018 at 7:10 AM, vbm wrote:
> Hi Dave
>
> In the ignite pod
You have 4 nodes running, but are they part of a single cluster? If you
look at the log file for one of the server nodes, has it connected to 4
servers or only one?
PS. I've found that "bin/ignitevisor.sh -cfg=$CONFIG_URI" avoids the need to
look up the URL.
On Sat, Apr 7, 2018 at 8:40 AM, vbm
Assuming Ignite Persistence, you can create a cache in a specific Data
Region, but I'm unclear whether these properties can be set per region. We
are setting them in
org.apache.ignite.configuration.DataStorageConfiguration. What you seem
to be asking for is to set these per Data Region.
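A minimal sketch of the per-region vs. storage-wide split (names and sizes are illustrative; whether a particular property is per-region depends on whether its setter lives on DataRegionConfiguration or on DataStorageConfiguration):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DataRegionExample {
    public static void main(String[] args) {
        // Per-region settings live on DataRegionConfiguration.
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("persistentRegion")
            .setPersistenceEnabled(true)
            .setMaxSize(4L * 1024 * 1024 * 1024);

        // Storage-wide settings live on DataStorageConfiguration.
        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDataRegionConfigurations(region);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storage);

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.cluster().active(true);

            // The cache is placed in the region by name.
            CacheConfiguration<Long, String> cacheCfg =
                new CacheConfiguration<Long, String>("myCache")
                    .setDataRegionName("persistentRegion");

            ignite.getOrCreateCache(cacheCfg);
        }
    }
}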
Nakandala
wrote:
> I verified the java version is the same.
>
> The interesting thing here is the code runs correctly if I disable
> persistence.
>
>
>
> I wonder whether there is an issue in enabling persistence or I am doing
> something wrong here.
>
> On Tue, A
PS. I'm actually remembering it was due to a mismatch of Java7 vs 8, which
does not make that much sense, because we have been running Java7 clients
recently with no issues.
On Tue, Apr 3, 2018 at 3:05 PM, David Harvey wrote:
> I had those "UNKNOWN PAIR" issues early on, a
I had those "UNKNOWN PAIR" issues early on, and it seemed to be some kind
of version problem, like the client was not running at the same rev.
On Tue, Apr 3, 2018 at 2:15 PM, Supun Nakandala
wrote:
> Hi all,
>
> I am trying to setup Apache Ignite with persistence enabled and facing a
> deseriali
When using Ignite Persistence, I have found that you can simply replicate
the folders, if you deactivate the cluster first.
You need the wal and store, as well as two folders under $IGNITE_HOME/work
which are named something like marshaller* and binary*
On Tue, Apr 3, 2018 at 2:52 AM, Naveen wro
When I did this, I found that with Ignite Persistence there is a lot of
write amplification (many times more bytes written to SSD than data bytes
written) during the checkpoints, which makes sense because Ignite
writes whole pages, and each record written dirties pieces of many pages.
The SS
Note that 2.4 has significant improvements WRT rebalancing, including
baselines: https://apacheignite.readme.io/docs/cluster-activation
On Mon, Apr 2, 2018 at 7:39 AM, Raymond Wilson
wrote:
> I’ve been reading how Ignite manages rebalancing as a result of topology
> changes here: https://apache
By default, the DataStreamer will only send a full buffer, unless you
explicitly flush or close it, or, as suggested, implicitly close it. You
can also set a flush timer.
The last time I looked (2.3), the flush timer is implemented to flush
periodically, and this is unaffected by when data was last added
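A minimal sketch of those three flush mechanisms (the cache name and the 1 s period are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerFlushExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("myCache");

            try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
                // Flush timer: send buffered data at least once a second,
                // even if the per-node buffers never fill.
                streamer.autoFlushFrequency(1_000);

                streamer.addData(1L, "one");

                // Explicit flush; close() (here via try-with-resources)
                // also flushes implicitly.
                streamer.flush();
            }
        }
    }
}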
I had stopped at BinaryObject, and didn't follow the indirection through
type() to BinaryType. I think I assumed that this had only information at
the higher level, and wouldn't drill down into the fields.
This also answers a question about how to enumerate the fields.
Thanks.
-DH
On Wed, Mar 28,
As you already know, you cannot have the config file in the discovery
bucket, because discovery enumerates the contents of the discovery bucket.
But the credentials in the config bucket are not yet visible, so they
cannot be used. You can make the config bucket public, but not with the
key stor
1. You can call ignite.start() twice in the same JVM, and you get two
Ignite nodes (there are some restrictions around the same cluster I don't
understand). An Ignite node may be a client or a server. I believe you
can have two client nodes in the same JVM connected to two different
clusters.
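A minimal sketch of two nodes in one JVM (instance names are illustrative; each node just needs a distinct igniteInstanceName):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class TwoNodesOneJvm {
    public static void main(String[] args) {
        IgniteConfiguration serverCfg = new IgniteConfiguration()
            .setIgniteInstanceName("server-node");

        IgniteConfiguration clientCfg = new IgniteConfiguration()
            .setIgniteInstanceName("client-node")
            .setClientMode(true);

        // Two distinct Ignite nodes running in the same JVM.
        try (Ignite server = Ignition.start(serverCfg);
             Ignite client = Ignition.start(clientCfg)) {
            System.out.println("server=" + server.name() + ", client=" + client.name());
        }
    }
}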
1. I had created IGNITE-7905 for this interaction with userVersion and
perhaps only ignite.Activate() issued from the client. I think it deals
with the code for Activate() using nested classes. The server decided that
it already had a version of the ignite code loaded, and wasn't goin
For MySQL we are reading the binlog in row format and sending the updates,
with the LSN appended, on a data streamer with a custom StreamReceiver that
uses the LSN to detect out-of-order cases. We get eventually consistent
results only.
On Fri, Mar 23, 2018, 3:48 AM Naveen wrote:
> Hi
>
> We a
Based on this latest description, simply not specifying an affinity key here
would be sufficient. But presumably you were specifying the affinity key
to cause co-location. The reason a lookup on the K of the KV pair is fast
is that it can hash to a node. If the affinity key was not included
What is your goal? If you have a unique key that does not contain the
affinity key, and the primary key contains both fields, you can create an
index on that unique key, so that you could have fast node-local lookups
using SqlQuery(). If you want to find an object only by that original
unique
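A minimal sketch of that layout with hypothetical class and field names: the key carries both the unique id, indexed for SQL lookups, and the affinity field used for co-location:

import org.apache.ignite.cache.affinity.AffinityKeyMapped;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class OrderKey {
    // Unique field, indexed so SQL lookups by orderId stay fast.
    @QuerySqlField(index = true)
    private long orderId;

    // Affinity field: co-locates all of a customer's orders on one node.
    @AffinityKeyMapped
    @QuerySqlField(index = true)
    private long customerId;

    public OrderKey(long orderId, long customerId) {
        this.orderId = orderId;
        this.customerId = customerId;
    }

    // equals()/hashCode() omitted for brevity; real key classes need them.
}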
Ignite persistence is per data region
https://apacheignite.readme.io/docs/memory-configuration#section-data-regions
On Mon, Mar 19, 2018 at 10:11 AM, Naveen wrote:
> Hi
>
> Looks like dual persistence - 3rd party (Oracle) and native persistence -
> both are supported
>
> "*Starting with Apache
Yes, the affinity key must be part of the primary key. Welcome to my
world
On Fri, Mar 16, 2018 at 3:23 AM, Naveen wrote:
> Hi Mike
>
> I have created a table called CITY
>
> : jdbc:ignite:thin://127.0.0.1> CREATE TABLE City ( city_id LONG PRIMARY
> KEY, name VARCHAR) WITH "template=repl
You can give it a Map also, but that will not be substantially
faster, because the data streamer routes by affinity key: it has to process
each record and route it to the buffer for the node with that key. The
data streamer will collect 8192 records on the client node for each server
node befo
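A minimal sketch of the two points above, with an illustrative cache name; the 8192 figure is just repeated from the message, not necessarily the default in your version:

import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerBatchExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("myCache");

            try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
                // Entries buffered per server node before a batch is sent.
                streamer.perNodeBufferSize(8192);

                // Adding a Map is possible, but each entry is still routed
                // individually by affinity key into a per-node buffer.
                Map<Long, String> batch = new HashMap<>();
                for (long i = 0; i < 1_000; i++)
                    batch.put(i, "value-" + i);

                streamer.addData(batch);
            } // close() flushes the remaining buffers.
        }
    }
}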
I have a question about cancelling an SQL query issued via JDBC, as well as
a performance question.
I've connected to Ignite from SQL/Workbench using the Thin driver, though via
an ssh -L proxy, and I issued
"select count(*) from table1 where indexedField = x;" and that took 1:40
(m:ss)
"select count(*) from t
When I hit something like this, the logs suggested increasing
socketWriteTimeout, and that helped for me.
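A minimal sketch of where that setting lives (the 30 s value is illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class CommTimeoutConfig {
    public static void main(String[] args) {
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();

        // Raise the write timeout so slow peers are not dropped
        // prematurely under heavy load.
        commSpi.setSocketWriteTimeout(30_000);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setCommunicationSpi(commSpi);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Node started with the larger socket write timeout.
        }
    }
}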
On Fri, Mar 2, 2018 at 7:58 AM, KR Kumar wrote:
> Hi - I have a three node cluster with c4.2xlarge machines with ignite
> persistence enabled. While doing a load testing of the cluster, afte
ElasticSearch has this feature (where you exclude a node) and we use it all
the time in production.
For example, if AWS has planned maintenance on an instance with local SSDs,
we want to add a new node and remove the old node without a temporary
decrease in the number of backups. If you are runn
re
> of the underlying physical host?
>
>
>
> *From:* David Harvey [mailto:dhar...@jobcase.com]
> *Sent:* Friday, February 23, 2018 10:03 AM
> *To:* user@ignite.apache.org
> *Subject:* Re: Large durable caches
>
>
>
> Using the local SSDs, which are radically faster tha
Using the local SSDs, which are radically faster than EBS.
On Thu, Feb 22, 2018 at 10:34 AM, lawrencefinn wrote:
> I could try a different AWS instance. I'm running these tests on r4.8xlarge
> boxes, which are pretty beefy and "ebs optimized". I tried the same tests
> using IO1 disks at 20,000
I have had more success with i3 instances.
On Feb 21, 2018 10:01 PM, "KR Kumar" wrote:
Hi All - I am using Ignite Persistence ( 2.3) for storing events that we
receive from different sources. Currently the write and read throughput is
2k to 4k per second on t2.xlarge and c4.2xlarge machines on
We are pulling in a large number of records from an RDB, and reorganizing
the data so that our analytics will be much faster.
I'm running Sumo, and have looked at all of the log files from all the
nodes, and the only things are checkpoints and GC logs. The checkpoints
are fast, and occur at a low
I have an 8-node cluster with 244GB/node, and I see a behavior I don't have
any insight into, and which doesn't make sense.
I'm using a custom StreamReceiver which starts a transaction and updates 4
partitioned caches, 2 of which should be local updates. Ignite Persisten