Hello, I started an Ignite node and checked the log file.
I found these TCP ports in the logs:
>>> Local ports : TCP:8080 TCP:11213 TCP:47102 TCP:49100 TCP:49200
I set ports 49100 and 49200 in the configuration file for the Ignite node and
the client connector port,
but I don't know exactly what the other ports are for.
I found a summary
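For reference, the usual places those ports are configured can be pinned down in code. This is a minimal sketch, assuming a programmatic IgniteConfiguration; the port values shown are illustrative, not a statement about which port in the log maps to which SPI:

```java
import org.apache.ignite.configuration.ClientConnectorConfiguration;
import org.apache.ignite.configuration.ConnectorConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

// Sketch: where common Ignite ports are set (values are illustrative).
// Typical defaults: discovery 47500, communication 47100 (incremented
// per extra local node), REST/legacy connector 11211, thin clients 10800.
IgniteConfiguration cfg = new IgniteConfiguration();

// Node discovery port:
cfg.setDiscoverySpi(new TcpDiscoverySpi().setLocalPort(49100));

// Node-to-node communication port:
cfg.setCommunicationSpi(new TcpCommunicationSpi().setLocalPort(49200));

// REST/legacy client connector (11213 in a log often means the default
// 11211 was taken and the port was auto-incremented):
cfg.setConnectorConfiguration(new ConnectorConfiguration().setPort(11211));

// Thin-client connector:
cfg.setClientConnectorConfiguration(new ClientConnectorConfiguration().setPort(10800));
```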
Hi,
My question is specifically about clo.apply(key, data), which I invoke in my
CacheStoreAdapter.loadCache
method.
So, does this method (clo.apply) override the value for keys that are
already present in the cache, or does it just skip them?
My observation is that it is not overriding the value for the keys which are
Hello, I have two kinds of questions.
[1] Is there a reference for setting up Data Region & Cache configuration?
I don't know exactly how to configure the data, so I am looking for a
reference or best practices for setting up cache data regions.
Is it normal to divide data regions by domain/business area?
Hi,
I have the following log.
[2020-05-13 09:33:49] [pub-#14505] INFO
c.n.b.e.l.ActiveSyncServiceEventListener - Event [id=CommandEvent] has
been handled on ContextKey{key='0021fc3b-b293-4f8a-b62d-25c51ec52586'}.
What does [pub-#14505] mean? Is it the number of the thread in the public
pool? Or the
Thank you so much!
Hi Evgenii,
The storage used is not SSD.
We will use different versions of Ignite for further testing, such as
Ignite 2.8.
Ignite is configured as follows:
Hi,
Can you share the full logs and configuration? What disk do you use?
Evgenii
Tue, May 12, 2020 at 06:49, 38797715 <38797...@qq.com>:
> Among them:
> CO_CO_NEW: ~48 minutes (partitioned, backup=1, 33M)
>
> Ignite sys cache: ~27 minutes
>
> PLM_ITEM: ~3 minutes (replicated, 1.9K)
>
>
> On 2020/5/12
Hi,
Attach the logs and I'll take a look. Include the details of your SQL
request: number of rows, size of objects,
indexes, SQL queries, etc.
It might be using the disk and swapping memory back and forth.
https://apacheignite.readme.io/docs/durable-memory-tuning#pages-writes-throttling
Hi,
1) loadCache() is implementation-dependent; by default it just adds new
records to the cache.
see example:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/store/CacheLoadOnlyStoreExample.java
Take a look at jdbc example as well:
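To make the loadCache()/clo.apply() relationship concrete, here is a minimal hedged sketch of a custom store. The class name and generated data are hypothetical; only the CacheStoreAdapter/IgniteBiInClosure shapes come from the Ignite API:

```java
import javax.cache.Cache;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.lang.IgniteBiInClosure;

// Hypothetical store: loadCache() hands records to Ignite via clo.apply().
public class DemoStore extends CacheStoreAdapter<Long, String> {
    @Override
    public void loadCache(IgniteBiInClosure<Long, String> clo, Object... args) {
        // Each clo.apply(key, value) streams one record into the cache.
        // As noted above, by default the loader adds new records rather
        // than replacing entries that are already present in the cache.
        for (long i = 0; i < 100; i++)
            clo.apply(i, "value-" + i);
    }

    @Override public String load(Long key) { return null; }

    @Override public void write(Cache.Entry<? extends Long, ? extends String> e) { /* no-op */ }

    @Override public void delete(Object key) { /* no-op */ }
}
```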
Hi,
Use these tips for memory planning:
https://apacheignite.readme.io/docs/capacity-planning#memory-capacity-planning-example
https://apacheignite.readme.io/docs/capacity-planning#capacity-planning-faq
(this has a spreadsheet with a capacity calculator)
You could make your cache partitioned,
There is no way to define a nested collection of addresses as SQL fields.
The problem is that there are no such types in JDBC, so it just won't work.
So, if you want to use SQL, just keep separate tables for these objects.
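As an illustration of the separate-tables approach, here is a hedged sketch; the table and column names are made up, and `cache` stands for any cache with SQL enabled:

```java
import org.apache.ignite.cache.query.SqlFieldsQuery;

// Sketch: model addresses as their own table with a foreign-key column,
// instead of a nested collection field on Person.
cache.query(new SqlFieldsQuery(
    "CREATE TABLE Person (id BIGINT PRIMARY KEY, name VARCHAR)")).getAll();

cache.query(new SqlFieldsQuery(
    "CREATE TABLE Address (id BIGINT PRIMARY KEY, personId BIGINT, " +
    "kind VARCHAR, street VARCHAR)")).getAll();

// Join them at query time instead of nesting them in one object:
cache.query(new SqlFieldsQuery(
    "SELECT p.name, a.street FROM Person p JOIN Address a " +
    "ON a.personId = p.id WHERE a.kind = 'primary'")).getAll();
```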
Tue, May 12, 2020 at 06:07, narges saleh:
> Thanks Evgenii.
> My next
Hi,
I am using Ignite 2.6. I am trying to refresh a few caches which are already
loaded during server startup using cache loaders.
To refresh the caches I am invoking the IgniteCache.loadCache method, but it
seems that it is not updating the data in the caches.
1) Is it the expected behavior?
2) Do I have to clear the
We've already tried that, but we are receiving these errors. We are using
the index in the correct way.
We are using Ignite persistence.
```
[15:32:19,481][WARNING][long-qry-#110][LongRunningQueryManager] Query
execution is too long [duration=3898ms, type=MAP, distributedJoin=false,
Hi Pavel,
The reproducer is not the actual use case (which is too big to share); it's a
small example using the same mechanisms. I have not used a data streamer
before, I'll read up on it.
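For reference, the data-streamer approach mentioned in this thread usually looks like the sketch below. This is hedged: `ignite` is assumed to be an already-started node, and the cache name and generated data are illustrative:

```java
import org.apache.ignite.IgniteDataStreamer;

// Sketch: bulk loading with IgniteDataStreamer instead of per-entry puts
// such as PutIfAbsent. "myCache" is an illustrative cache name.
try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
    streamer.allowOverwrite(false); // leave existing keys untouched

    for (long i = 0; i < 1_000_000; i++)
        streamer.addData(i, "value-" + i); // buffered and sent in batches
} // close() flushes any remaining buffered entries
```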
I'll try running the reproducer again against 2.8 (I used 2.7.6 for the
reproducer).
Thanks,
Raymond.
On
Hi folks,
We have several servers with persistence enabled, and two of them have
little memory, not enough to store the amount of data
allocated to them.
Now the problem is that SQL queries take a long time to execute. We
monitored the disk I/O of these two servers and used the following
Among them:
CO_CO_NEW: ~48 minutes (partitioned, backup=1, 33M)
Ignite sys cache: ~27 minutes
PLM_ITEM: ~3 minutes (replicated, 1.9K)
On 2020/5/12 9:08 PM, 38797715 wrote:
Hi community,
We have 5 servers with 16 cores, 256 GB memory, and 200 GB off-heap memory.
We have 7 tables to test, and the data volumes are,
respectively: 31.8M, 495.2M, 552.3M, 33M, 873.3K, 28M, 1.9K (replicated);
the others are partitioned (backup = 1).
VM args: -server -Xms20g -Xmx20g -XX:+AlwaysPreTouch
Thanks Evgenii.
My next two questions are, assuming I go with option 1.1:
1) How do I define these nested addresses via query entities, assuming I'd
use BinaryObjects when inserting? There can be multiple primary addresses
and secondary addresses. E.g., {john,{primary-address:[addr1, addr2],
Hi Raymond,
First, I could not reproduce the issue. The attached program runs to
completion on my machine.
Second, I see a few issues with the attached code:
- Cache.PutIfAbsent is used instead of DataStreamer
- ICacheEntryEventFilter is used to remove cache entries, and is called
twice - on add and
Hello!
An Ignite SQL table with an index on score descending would fit this nicely.
You will have to convert the JSON into a BinaryObject, or just extract the
score as a column.
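A hedged sketch of what that could look like; the table, column, and index names are made up, and `cache` stands for any cache with SQL enabled:

```java
import org.apache.ignite.cache.query.SqlFieldsQuery;

// Sketch: keep score as a plain column and index it in descending order.
cache.query(new SqlFieldsQuery(
    "CREATE TABLE Scores (id VARCHAR PRIMARY KEY, score DOUBLE, payload VARCHAR)")).getAll();

cache.query(new SqlFieldsQuery(
    "CREATE INDEX scores_by_score ON Scores (score DESC)")).getAll();

// Top-N lookup that the descending index can serve:
cache.query(new SqlFieldsQuery(
    "SELECT id, score FROM Scores ORDER BY score DESC LIMIT 10")).getAll();
```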
Regards,
--
Ilya Kasnacheev
Mon, May 11, 2020 at 19:11, adipro:
> The K is key with type String.
> The V is the value with type
Hello,
I've created the ticket for this issue:
https://issues.apache.org/jira/browse/IGNITE-13000
Thanks,
Pavel
Sun, May 10, 2020 at 04:54, Emmanuel Neri:
> Hello, I'm an Ignite user and I have a problem using Ignite with SQL at
> version 2.7.5.
>
> I use vertx-jdbc-client dependency at my
Well, it appears I was wrong. It reappeared. :(
I thought I had sent a reply to this thread but cannot find it, so I am
resending it now.
Attached is a C# reproducer that throws Ignite out-of-memory errors in the
situation I outlined above, where cache operations against a small cache
with
Hi Alex,
Are you saying that if we disable WAL archiving, there will be a problem?
Let's say I disable WAL archiving and 6 out of 10 WAL segments are
filled; then what happens during an OS crash?
When we restart the server, will all the operations in those 6 segments be
applied to the disk data