I believe this is not supported in Ignite 2.x due to native buffer access. It will only be part of Ignite 3, which is still in beta. I haven't tested it myself, though, so my comment should be confirmed by others.
On 2/29/24, 14:14 Dinakar Devineni wrote:
Hi,
Has anyone tried
Why can you not then do the same for the Ignite nodes? You create a
start-ignite-nodes.sh script that has all the host addresses and then
run a command like "ssh start-ignite.sh" for all host IP
addresses. This is of course most efficient if you have exchanged SSH
keys between the servers so you
. Please consider sharing more info. FWIW, that's the other
side of the old "bike shed" parable if you are seeking input from others.
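The script idea described above can be sketched as follows. The host names and the remote start-ignite.sh path are placeholders, and the echo-based dry run is an assumption for illustration, not a tested deployment script:

```shell
#!/usr/bin/env bash
# Hypothetical start-ignite-nodes.sh: fan out a start command to every host.
# Host names and the remote start-ignite.sh script are placeholders.
HOSTS=("node1.example.com" "node2.example.com" "node3.example.com")

# RUN defaults to echo so this sketch is a dry run; set RUN=ssh for real use
# (which assumes SSH keys have already been exchanged between the servers).
RUN=${RUN:-echo}

for host in "${HOSTS[@]}"; do
  "$RUN" "$host" 'start-ignite.sh'
done
```

With RUN=ssh this runs the start script on each host in turn; backgrounding each command with & and a final wait would start the nodes roughly in parallel.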
On Fri, Mar 24, 2023, 06:55 Thomas Kramer wrote:
Hi all,
do you have any feedback on this? Or is this rather a question for
StackOverflow?
Thanks.
On 21.03.23 10:00, Thomas Kramer wrote:
I'm considering Ignite for an on-demand-scalable, microservice-oriented
architecture. I'd use the memory cache for shared data across the
microservices. Maybe I would also use Ignite compute for distributed
tasks; however, I believe the MOA philosophy would recommend REST for this.
My question is
are sensitive to situation and outcomes.
Are you trying to cull training data into training, tuning, and
validation subsets?
Maybe there's a colocation approach that would suffice.
On Thu, Sep 29, 2022, 12:26 Thomas Kramer wrote:
Right, I don't want to use CacheMode.LOCAL because it's
Kramer wrote:
Coming back to my original question:
CacheConfiguration cfg = new CacheConfiguration<>();
cfg.setCacheMode(CacheMode.REPLICATED);
cfg.setAffinity(new LocalAffinityFunction());
Will the above code still create a cluster-wide lock with a partition
map exchange event even though the cache will be hosted on the local
node only?
Sent: Tuesday, 27 September 2022 at 18:50
From: "Thomas Kramer"
To: user@ignite.apache.org
Subject: Re: Creating local cache without cluster-wide lock
I'm using CacheBasedDataset to filter a subset from a distributed cache
of a
of the key.
On 27 Sep 2022, at 15:36, Thomas Kramer wrote:
I understand creating a new cache dynamically requires a cluster-wide
lock with partition map exchange event to create the cache on all nodes.
This is unnecessary traffic when only working with local caches.
For local-only caches I assume this wouldn't happen. But CacheMode.LOCAL
is deprecated.
locking (in
which case, one of your transactions will fail on the commit.)
On 15 Sep 2022, at 10:56, Thomas Kramer wrote:
Modifying previous example. Would this still potentially result in
deadlock?
First:
#1 tx.start();
#2 cacheA.put(1, 1);
#3 cacheB.put(2, 2);
#4 tx.commit();
Second:
#5
in this scenario for the deadlock or will the cache
be locked on any key value?
Thanks!
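The hazard in the two-transaction example above is acquisition order, not the number of caches. Ignite's pessimistic per-key transaction locks behave like mutexes, so the situation can be sketched with plain JDK locks (this is an illustration, not Ignite API): if both transactions acquire keys in the same global order, the hold-and-wait cycle from the log cannot form.

```java
import java.util.Arrays;
import java.util.concurrent.locks.ReentrantLock;

// Illustration only: per-key locks standing in for Ignite's per-key
// transaction locks. Acquiring them in one global order (here: sorted
// by key) prevents the TX1-holds-K1 / TX2-holds-K2 cycle.
public class LockOrdering {
    static final ReentrantLock[] KEY_LOCKS = { new ReentrantLock(), new ReentrantLock() };

    // Lock the given keys in ascending order, run the action, then unlock.
    static void inOrder(int[] keys, Runnable action) {
        int[] sorted = keys.clone();
        Arrays.sort(sorted);
        for (int k : sorted) KEY_LOCKS[k].lock();
        try {
            action.run();
        } finally {
            for (int k : sorted) KEY_LOCKS[k].unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Both "transactions" touch keys {0,1} but list them in opposite
        // order; sorting inside inOrder() makes the acquisition order equal,
        // so both threads always finish.
        Thread tx1 = new Thread(() -> inOrder(new int[] {0, 1}, () -> {}));
        Thread tx2 = new Thread(() -> inOrder(new int[] {1, 0}, () -> {}));
        tx1.start(); tx2.start();
        tx1.join(); tx2.join();
        System.out.println("no deadlock");
    }
}
```

In real Ignite code the equivalent rule is to touch the two caches' keys in the same order in every transaction, or use optimistic/serializable transactions and retry on commit failure.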
On 15.09.22 11:36, Stephen Darlington wrote:
The important part is that they’re both waiting for each other to
complete. Whether it’s one cache or ten is not significant.
On 14 Sep 2022, at 12:44, Thomas Kramer wrote:
) respectively.
On 14 Sep 2022, at 12:39, Thomas Kramer wrote:
Hi,
does the group here have any suggestion on this? I'm trying to find
the root of the deadlock we're getting on the production servers from
time to time.
So I'm trying to better understand why this can happen, and maybe
looking to better understand.
Thanks!
On 05.09.22 22:48, Thomas Kramer wrote:
I'm experiencing a transaction deadlock and would like to understand how
to find out the cause of it.
Snipped from the log I get:
/Deadlock detected:
K1: TX1 holds lock, TX2 waits lock.
K2: TX2 holds lock, TX1 waits lock.
Transactions:
TX1 [txId=GridCacheVersion [topVer=273263429,
I am storing my ML training data in a cache with binary data:
IgniteCache cache = ignite.cache("cache").withKeepBinary();
I can't seem to understand how to use the Ignite ML classes like
Preprocessor and DatasetBuilder with this cache? Do I first have to
create another IgniteCache or Map and
Thanks a lot. That makes it very clear!
On 10.06.22 13:27, Pavel Tupitsyn wrote:
To be honest, that was surprising to me too. Not documented anywhere.
I've filed a ticket to rectify this [1]
> how would I get the update count for this type of SQL queries?
Update count is returned as the only
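Completing the pattern Pavel describes: for DML statements run through SqlFieldsQuery, the result cursor contains a single row whose single column is the update count. The Ignite calls below are shown only in a comment (assumed usage); the extraction itself is plain Java:

```java
import java.util.Collections;
import java.util.List;

// For a DML statement the query result has exactly one row with one
// column holding the number of updated rows.
public class UpdateCount {
    // e.g. (assumed Ignite usage):
    //   List<List<?>> rows =
    //       cache.query(new SqlFieldsQuery("UPDATE T SET v = 1")).getAll();
    static long updateCount(List<List<?>> rows) {
        return (Long) rows.get(0).get(0);
    }

    public static void main(String[] args) {
        // Stub standing in for a real cursor result.
        List<List<?>> rows = Collections.singletonList(Collections.singletonList(3L));
        System.out.println(updateCount(rows)); // prints 3
    }
}
```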
Has anyone solved this or identified its severity?
Thanks,
Thomas.
On 01.02.22 11:03, Thomas Kramer wrote:
Hi,
Hi,
I am switching from 2.8.1 to 2.12.0 and encounter an issue when closing
thin JDBC connection. This is my test case:
public static void main(String[] args) throws Exception
{
    try (Ignite ignite = Ignition.start())
    {
If you only need to process the data, but not store it, I would suggest using IgniteCompute.
Yes, sending byte[] is efficient. 30MB is not that much and should be fine.
On Wed, Nov 17, 2021 at 12:34 PM Thomas Kramer <don.tequ...@gmx.de> wrote:
I'll need to transfer large amounts of binary data (in ~30MB chunks)
from the compute job sender to the nodes that run the compute jobs. The
data will be needed in the compute jobs but each chunk is only needed on
one node, while another chunk is needed on another node that computes
the same job.
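Pavel's suggestion can be sketched like this: each compute job instance carries only its own chunk as a field, so each node receives just the bytes it needs when the job is serialized to it. Plain java.util.concurrent.Callable stands in for Ignite's IgniteCallable here, and the compute submission is only hinted at in a comment (assumed usage):

```java
import java.io.Serializable;
import java.util.concurrent.Callable;

// Sketch: one job per chunk, each serialized with only its own ~30MB
// payload. Callable stands in for IgniteCallable (illustration only).
public class ChunkJob implements Callable<Integer>, Serializable {
    private final byte[] chunk; // only this job's data travels with it

    public ChunkJob(byte[] chunk) {
        this.chunk = chunk;
    }

    @Override
    public Integer call() {
        // Real work would process the bytes; here we just report the size.
        return chunk.length;
    }

    // With Ignite (assumed usage), each job would be submitted so a given
    // chunk reaches exactly one node, e.g.:
    //   ignite.compute().call(new ChunkJob(chunkForThatNode));

    public static void main(String[] args) throws Exception {
        System.out.println(new ChunkJob(new byte[1024]).call()); // prints 1024
    }
}
```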
Hi,
thanks for the hint about using the control.sh script. It looks like the
server node with the .111 IP address has some very old transactions:
Matching transactions:
TcpDiscoveryNode [id=78b6df9f-7b77-46ea-9aef-a1b61918b258,
addrs=[110.10.123.111], order=1, ver=2.8.1#20200521-sha1:86422096,
Hi all,
I have the situation that at least one DB row of my persistent cache
seems locked and I can't change it anymore. Every time I want to change
it using SQL, a TransactionTimeoutException occurs, like this:
class org.apache.ignite.transactions.TransactionTimeoutException: Failed
to acquire
The idea is that the server, just before it broadcasts the callable
tasks to all nodes, creates the semaphore with the desired number of
parallel executions. Then the code below makes sure that exactly this
number of nodes only run the logic while all other nodes will fail on
tryAcquire and
Hi Krish,
what I do in this case is not block on the cluster nodes when trying to
acquire the semaphore. So if the semaphore cannot be acquired, which
will be the case for all nodes except one, the callable method just
returns immediately.
So you may call something like this:
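A sketch of that non-blocking pattern, using java.util.concurrent.Semaphore in place of the distributed IgniteSemaphore (which exposes the same tryAcquire/release style API, obtained via ignite.semaphore(...) in real code); the task body and permit count are placeholders:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Every node's callable tries the semaphore without blocking; only the
// permitted number run the logic, the rest return immediately.
public class LimitedRun {
    static final AtomicInteger RUNS = new AtomicInteger();

    static boolean runIfPermitted(Semaphore sem) {
        if (!sem.tryAcquire())
            return false; // another node holds the permit; just return
        try {
            RUNS.incrementAndGet(); // placeholder for the real task
            return true;
        } finally {
            sem.release();
        }
    }

    public static void main(String[] args) {
        Semaphore sem = new Semaphore(1); // one parallel execution allowed
        System.out.println(runIfPermitted(sem)); // prints true
    }
}
```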
Hi Ilya,
unfortunately this also didn't help improve query performance. Not
sure what else I can try. Or maybe it is expected? In my opinion it
shouldn't take that long, as the query without the ORDER BY clause is
super fast. Since there is an index on the order field, I would expect
this should
Hi,
when running Ignite inside a Docker container, and the container is
being stopped, what is the safest way to handle the shutdown event?
Currently I'm using a shutdown hook registered with the Java runtime and
try to quickly deactivate and disconnect the cluster. But I get an
exception like this:
class
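For reference, the hook registration itself looks like the sketch below. The Ignite-specific calls are left as comments (assumed usage), and `docker stop` only triggers the hook if the JVM actually receives the SIGTERM (i.e. java runs as the container's main process):

```java
// Sketch of a JVM shutdown hook for a containerized node. The hook
// mechanics are plain JDK; the Ignite calls are assumptions shown as
// comments, not verified API usage for any particular 2.x version.
public class GracefulStop {
    static final Thread HOOK = new Thread(() -> {
        // e.g. (assumed Ignite usage):
        //   ignite.cluster().state(ClusterState.INACTIVE);
        //   Ignition.stop(true); // cancel=true: don't wait for running jobs
        System.out.println("shutting down");
    }, "ignite-shutdown-hook");

    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(HOOK);
        // ... start the node and serve traffic ...
    }
}
```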