Re: L2-cache slow/not working as intended

2020-11-12 Thread Bastien Durel
On Tuesday, 10 November 2020 at 17:39 +0300, Ilya Kasnacheev wrote:
> Hello!
> 
> You can make it semi-persistent by changing the internal Ignite node
> type inside Hibernate to client (property clientMode=true) and
> starting a few stand-alone nodes (one per VM?).
> 
> This way, this client will just connect to the existing cluster
> with the data already there.
> 
> You can also enable Ignite persistence, but I assume that's not what
> you want.

Hello.

Ignite is already started in client mode before initializing Hibernate,
and is connected to a few stand-alone servers.
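
For reference, roughly how this is wired up (the instance name and the
Hibernate property keys below are illustrative and should be checked against
the actual configuration):

import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientModeBootstrap {
    static Map<String, Object> startClientAndBuildHibernateProps() {
        // Client node: connects to the stand-alone servers, holds no cache data.
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setIgniteInstanceName("hibernate-grid")
            .setClientMode(true);
        Ignite ignite = Ignition.start(cfg);

        Map<String, Object> props = new HashMap<>();
        props.put("hibernate.cache.use_second_level_cache", "true");
        props.put("hibernate.cache.region.factory_class",
            "org.apache.ignite.cache.hibernate.HibernateRegionFactory");
        // Point the region factory at the already-started client node
        // (property key as documented for ignite-hibernate; verify for your version).
        props.put("org.apache.ignite.hibernate.ignite_instance_name", ignite.name());
        return props;
    }
}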

Regards,

-- 
Bastien Durel
DATA
Intégration des données de l'entreprise,
Systèmes d'information décisionnels.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
45 avenue Carnot, 94230 CACHAN France
www.data.fr




Re: L2-cache slow/not working as intended

2020-11-09 Thread Bastien Durel
On Monday, 9 November 2020 at 19:11 +0300, Ilya Kasnacheev wrote:
> Hello!
> 
> Why Hibernate won't use it for reads of that user, I don't know; that's
> outside the scope of Ignite.
> 
> Putting 1,000,000 records in 5 minutes sounds reasonable, especially
> since L2 population is optimized for latency, not throughput (as
> opposed to e.g. CacheLoader).

Hello,

I'm OK with the L2C taking 5 minutes to load (as I said, there will
probably never be such a query in the real application); the real
problem here is that this cache does not persist between Sessions, and
is therefore recreated each time.

It may be a configuration problem, but reading [1], I cannot find why.

Regards,

[1] 
https://ignite.apache.org/docs/latest/extensions-and-integrations/hibernate-l2-cache

-- 
Bastien Durel
DATA
Intégration des données de l'entreprise,
Systèmes d'information décisionnels.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
45 avenue Carnot, 94230 CACHAN France
www.data.fr




Re: L2-cache slow/not working as intended

2020-11-09 Thread Bastien Durel
On Monday, 9 November 2020 at 14:09 +0300, Ilya Kasnacheev wrote:
> Hello!
> Putting 1 million entries of a single query in L2 cache does not
> sound like a reasonable use of L2 cache.

Hello.

No one will probably read the whole Event database at once with the
product, but it was a read-speed test, so we needed a big chunk of data
to see the problems ... You can see in the other post I made that the L2C
does not even cache the single User I have in my test DB.

Regards,

-- 
Bastien Durel
DATA
Intégration des données de l'entreprise,
Systèmes d'information décisionnels.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
45 avenue Carnot, 94230 CACHAN France
www.data.fr




Re: L2-cache slow/not working as intended

2020-11-09 Thread Bastien Durel
ntId=null], 
val=CacheEntry(fr.data.wa.db.hb.Event)]
DEBUG [2020-11-09 10:00:06,364] 
org.apache.ignite.cache.hibernate.HibernateTransactionalAccessStrategy: Put 
[cache=fr.data.wa.db.hb.Event, key=HibernateKeyWrapper 
[entry=fr.data.wa.db.hb.Event, tenantId=null], 
val=CacheEntry(fr.data.wa.db.hb.Event)]
DEBUG [2020-11-09 10:00:06,365] 
org.apache.ignite.cache.hibernate.HibernateTransactionalAccessStrategy: Put 
[cache=fr.data.wa.db.hb.Event, key=HibernateKeyWrapper 
[entry=fr.data.wa.db.hb.Event, tenantId=null], 
val=CacheEntry(fr.data.wa.db.hb.Event)]
DEBUG [2020-11-09 10:00:06,365] 
org.apache.ignite.cache.hibernate.HibernateTransactionalAccessStrategy: Put 
[cache=fr.data.wa.db.hb.Event, key=HibernateKeyWrapper 
[entry=fr.data.wa.db.hb.Event, tenantId=null], 
val=CacheEntry(fr.data.wa.db.hb.Event)]
DEBUG [2020-11-09 10:00:06,365] 
org.apache.ignite.cache.hibernate.HibernateTransactionalAccessStrategy: Put 
[cache=fr.data.wa.db.hb.Event, key=HibernateKeyWrapper 
[entry=fr.data.wa.db.hb.Event, tenantId=null], 
val=CacheEntry(fr.data.wa.db.hb.Event)]
DEBUG [2020-11-09 10:00:06,366] 
org.apache.ignite.cache.hibernate.HibernateTransactionalAccessStrategy: Put 
[cache=fr.data.wa.db.hb.Event, key=HibernateKeyWrapper 
[entry=fr.data.wa.db.hb.Event, tenantId=null], 
val=CacheEntry(fr.data.wa.db.hb.Event)]
DEBUG [2020-11-09 10:00:06,366] 
org.apache.ignite.cache.hibernate.HibernateTransactionalAccessStrategy: Put 
[cache=fr.data.wa.db.hb.Event, key=HibernateKeyWrapper 
[entry=fr.data.wa.db.hb.Event, tenantId=null], 
val=CacheEntry(fr.data.wa.db.hb.Event)]
DEBUG [2020-11-09 10:00:06,366] 
org.apache.ignite.cache.hibernate.HibernateTransactionalAccessStrategy: Put 
[cache=fr.data.wa.db.hb.Event, key=HibernateKeyWrapper 
[entry=fr.data.wa.db.hb.Event, tenantId=null], 
val=CacheEntry(fr.data.wa.db.hb.Event)]
DEBUG [2020-11-09 10:00:06,366] 
org.apache.ignite.cache.hibernate.HibernateTransactionalAccessStrategy: Put 
[cache=fr.data.wa.db.hb.Event, key=HibernateKeyWrapper 
[entry=fr.data.wa.db.hb.Event, tenantId=null], 
val=CacheEntry(fr.data.wa.db.hb.Event)]
DEBUG [2020-11-09 10:00:06,389] fr.data.wa.resources.TestResource: got 
element#65536
INFO  [2020-11-09 10:00:06,465] 
org.hibernate.engine.internal.StatisticalLoggingSessionEventListener: Session 
Metrics {
24135 nanoseconds spent acquiring 1 JDBC connections;
26856 nanoseconds spent releasing 1 JDBC connections;
44297 nanoseconds spent preparing 1 JDBC statements;
112090383 nanoseconds spent executing 1 JDBC statements;
0 nanoseconds spent executing 0 JDBC batches;
30735991895 nanoseconds spent performing 10 L2C puts;
1426335 nanoseconds spent performing 1 L2C hits;
0 nanoseconds spent performing 0 L2C misses;
69531442 nanoseconds spent executing 1 flushes (flushing a total of 11 
entities and 2 collections);
3555 nanoseconds spent executing 1 partial-flushes (flushing a total of 0 
entities and 0 collections)
}

The first session is the authentication, which misses the User, and the
second one is the Event get ... This is the second time I have called the
same function on this server since it started.

Regards,


On Thu, 5 Nov 2020 at 01:59, Bastien Durel wrote:
> Hello,
> 
> I'm using an Ignite cluster to back a Hibernate-based application. I
> configured the L2 cache as explained in
> https://ignite.apache.org/docs/latest/extensions-and-integrations/hibernate-l2-cache
> 
> (config below)
> 
> I've run a test reading a 1M-element cache with a consumer counting
> elements. It's very slow: more than 5 minutes to run.
> 
> Session metrics say it was the L2C puts that took most of the time (5
> minutes and 3 seconds of a 5:12 operation)
> 
> INFO  [2020-11-05 09:51:15,694]
> org.hibernate.engine.internal.StatisticalLoggingSessionEventListener: Session Metrics {
>     33350 nanoseconds spent acquiring 1 JDBC connections;
>     25370 nanoseconds spent releasing 1 JDBC connections;
>     571572 nanoseconds spent preparing 1 JDBC statements;
>     1153110307 nanoseconds spent executing 1 JDBC statements;
>     0 nanoseconds spent executing 0 JDBC batches;
>     303191158712 nanoseconds spent performing 100 L2C puts;
>     23593547 nanoseconds spent performing 1 L2C hits;
>     0 nanoseconds spent performing 0 L2C misses;
>     370656057 nanoseconds spent executing 1 flushes (flushing a
> total of 101 entities and 2 collections);
>     4684 nanoseconds spent executing 1 partial-flushes (flushing a
> total of 0 entities and 0 collections)
> }
> 
> It seems long, even for 1M puts, but OK, let's say the L2C is
> initialized now and it will be better next time? So I ran the query
> again, but it took 5+ minutes again ...
> 
> INFO  [2020-11-05 09:58:02,538]
> org.hibernate.engine.internal.StatisticalLoggingSessionEventListener: Session Metrics {
>     28982 nanoseconds spent acquiring 1 JDBC

L2-cache slow/not working as intended

2020-11-05 Thread Bastien Durel
ld();
}
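
A rough reconstruction of the read-test code that was truncated above
(sessionFactory and countEvents are placeholder names, not the actual code):

package fr.data.wa.db.hb;

import java.util.concurrent.atomic.AtomicLong;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class ReadTest {
    /** Streams every Event through a counting consumer in a single session. */
    static long countEvents(SessionFactory sessionFactory) {
        AtomicLong count = new AtomicLong();
        try (Session session = sessionFactory.openSession()) {
            Transaction tx = session.beginTransaction();
            session.createQuery("from event", Event.class)
                   .stream()
                   .forEach(e -> count.incrementAndGet());
            tx.commit();
        }
        return count.get();
    }
}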

Event cache config:

[The XML cache configuration was stripped by the mailing-list archive; only the
field names survived: clientId, eventDate, num in the first list (presumably the
key fields) and clientId, eventDate, num, type, source in the second.]
Hibernate Event entity:

package fr.data.wa.db.hb;

import java.util.Date;

import javax.persistence.Cacheable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity(name = "event")
@Table(name = "\"EventCache\".event")
@Cacheable
@org.hibernate.annotations.Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL)
public class Event implements java.io.Serializable {
    private static final long serialVersionUID = -3536013579696669860L;

    // Composite key: client_id + event_date + num
    @Id
    @Column(name = "client_id", insertable = false, updatable = false)
    private long clientId;

    @Id
    @Column(name = "event_date")
    private Date eventDate;

    @Id
    @Column
    private long num;

    /* [...] */
}

-- 
Bastien Durel
DATA
Intégration des données de l'entreprise,
Systèmes d'information décisionnels.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
45 avenue Carnot, 94230 CACHAN France
www.data.fr




Re: Tracing configuration

2020-10-30 Thread Bastien Durel
On Friday, 30 October 2020 at 18:03 +0300, Maxim Muzafarov wrote:
> Hello Bastien,
> 
> Is the issue [1] the same one you are facing?
> It seems to me the fix will be available in 2.9.1 (or 2.10).
> 
> [1] https://issues.apache.org/jira/browse/IGNITE-13640

Hello,

It may be, but I don't know enough about Maven to understand what the patch
actually changes.

Regards,

-- 
Bastien Durel
DATA
Intégration des données de l'entreprise,
Systèmes d'information décisionnels.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
45 avenue Carnot, 94230 CACHAN France
www.data.fr




Tracing configuration

2020-10-30 Thread Bastien Durel
Hello,

I'd like to activate tracing to investigate a slowdown. I have
succeeded (I think) in activating trace gathering by linking
optional/ignite-opencensus/ into libs, then using control.sh:

Command [TRACING-CONFIGURATION] started
Arguments: --tracing-configuration

Scope, Label, Sampling Rate, included scopes
DISCOVERY,,1.0,[]
EXCHANGE,,0.0,[]
COMMUNICATION,,1.0,[]
TX,,1.0,[]
Command [TRACING-CONFIGURATION] finished with code: 0

But I can't find how to direct the traces to my collector.

I put a call to 

io.opencensus.exporter.trace.zipkin.ZipkinTraceExporter.createAndRegister(url, serviceName);

in a static initializer of a class loaded by the server at start, but I
get some NoClassDefFoundError.
I am missing at least zipkin-reporter-2.7.14.jar, zipkin-2.12.0.jar and
zipkin-sender-urlconnection-2.7.14.jar for the Zipkin part, but now I
get an error for io/opencensus/exporter/trace/util/TimeLimitedHandler.
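
For reference, the registration roughly looks like this (the class name, URL
and service name are placeholders, not the actual code):

import io.opencensus.exporter.trace.zipkin.ZipkinTraceExporter;

public class TracingBootstrap {
    static {
        // Needs opencensus-exporter-trace-zipkin and its Zipkin dependencies
        // (the jars mentioned above) on the node's classpath.
        ZipkinTraceExporter.createAndRegister(
            "http://zipkin.example.org:9411/api/v2/spans", "ignite-server");
    }
}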

Is it normal that Zipkin and part of OpenCensus are missing from the
released distribution (I'm using the Debian package), or must I use a
totally different way of configuring collection?

Thanks. Best regards,

-- 
Bastien Durel
DATA
Intégration des données de l'entreprise,
Systèmes d'information décisionnels.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
45 avenue Carnot, 94230 CACHAN France
www.data.fr




Re: removing ControlCenterAgent

2020-10-30 Thread Bastien Durel
On Thursday, 29 October 2020 at 12:07 +, Mekhanikov Denis wrote:
> Hi!
> 
> The issue is that Control Center Agent puts its configuration into the
> meta-storage.
> Ignite has an issue with processing data in the meta-storage with a class
> that is not present on all nodes:
> https://issues.apache.org/jira/browse/IGNITE-13642
> Effectively it means that you can't remove control-center-agent from
> a cluster that worked with it previously.
> 
> You have a few options for solving it:
> - Add control-center-agent to the classpath of all nodes and disable it
> using management.sh --off. Classes and configuration will be there,
> but it won't do anything. You'll be able to remove the library after
> an upgrade to the version that doesn't have this bug. Hopefully, it
> will be fixed in Ignite 2.9.1.
> 
> - Remove the metastorage directory from the persistence directory on
> all nodes. It will lead to removal of the Control Center Agent
> configuration along with the Baseline Topology history.
> You will need to do that together with removal of the control-center-
> agent library.
> NOTE that removal of the metastorage is a dangerous operation and can
> lead to data loss. I recommend using the first option if it works for
> you.
> Make a copy of the persistence directories before removing anything.
> After the removal and a restart the baseline topology will be reset.
> Make sure that the first activation leads to the same BLT as before
> the restart, to avoid data loss.
> 
Hello,

Thanks for the info. I've removed the db directory on all nodes, as most
of my data is in third-party storage, and I can live without the event
logs that use Ignite storage, as we're not in production.

We'll keep this in mind to avoid future problems.

Regards,

-- 
Bastien Durel
DATA
Intégration des données de l'entreprise,
Systèmes d'information décisionnels.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
45 avenue Carnot, 94230 CACHAN France
www.data.fr




Re: removing ControlCenterAgent

2020-10-28 Thread Bastien Durel
I forgot to attach my configuration (I removed the cache config details).
I'm using the Debian package, so I run the cluster with an XML
configuration.

Regards,


-- 
Bastien Durel
DATA
Intégration des données de l'entreprise,
Systèmes d'information décisionnels.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
45 avenue Carnot, 94230 CACHAN France
www.data.fr



ignite.xml
Description: XML document


removing ControlCenterAgent

2020-10-28 Thread Bastien Durel
(GridManagerAdapter.java:302)
at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:967)
at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1935)
... 11 more
Caused by: class org.apache.ignite.spi.IgniteSpiException: Unable to unmarshal key=metastorage.cluster.id.tag
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:2018)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1189)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:462)
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2120)
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:299)
... 13 more
[17:52:45,271][INFO][tcp-disco-sock-reader-[2f3f6f3a 
192.168.43.29:39675]-#6%ClusterWA%-#50%ClusterWA%][TcpDiscoverySpi] Finished 
serving remote node connection [rmtAddr=/192.168.43.29:39675, rmtPort=39675

And the running node logs this:

[17:52:45,223][INFO][tcp-disco-sock-reader-[9a3233c6 
192.168.43.30:54951]-#4%ClusterWA%-#55%ClusterWA%][TcpDiscoverySpi] Finished 
serving remote node connection [rmtAddr=/192.168.43.30:54951, rmtPort=54951
[17:52:45,246][INFO][tcp-disco-msg-worker-[crd]-#2%ClusterWA%-#46%ClusterWA%][GridEncryptionManager]
 Joining node doesn't have stored group keys 
[node=9a3233c6-3a6c-4be0-b5e7-19cdff30f69e]
[17:52:45,266][WARNING][disco-pool-#56%ClusterWA%][TcpDiscoverySpi] Unable to 
unmarshal key=metastorage.cluster.id.tag

If I start the nodes in the reverse order, it logs this:

[17:56:52,426][INFO][tcp-disco-sock-reader-[4b8b92f5 
192.168.43.29:42557]-#4%ClusterWA%-#53%ClusterWA%][TcpDiscoverySpi] Finished 
serving remote node connection [rmtAddr=/192.168.43.29:42557, rmtPort=42557
[17:56:52,446][INFO][tcp-disco-msg-worker-[crd]-#2%ClusterWA%-#46%ClusterWA%][GridEncryptionManager]
 Joining node doesn't have stored group keys 
[node=4b8b92f5-1753-4b1b-9902-476c925fa49d]
[17:56:52,466][WARNING][disco-pool-#54%ClusterWA%][TcpDiscoverySpi] Unable to 
unmarshal key=metastorage.cluster.id.tag

Is there a way to recover?

Thanks,

-- 
Bastien Durel
DATA
Intégration des données de l'entreprise,
Systèmes d'information décisionnels.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
45 avenue Carnot, 94230 CACHAN France
www.data.fr




Shared counter

2020-08-26 Thread Bastien Durel
Hello,

I wish to know if there is a supported way to implement some kind of
shared counter in Ignite, where any node could increment or decrement a
value, and where a node's contribution would be decremented automatically
when it leaves the cluster.
I know I can use an AtomicInteger, but there will be no decrement on
exit, I guess?

Should I use a cache with one counter entry per node (summing all counters)
and manually evict rows when I get an EVT_NODE_FAILED/EVT_NODE_LEFT event,
or is there a better way?
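
Roughly what I have in mind with the cache-based approach (class and cache
names are made up, and the discovery events would need to be enabled with
setIncludeEventTypes() on each node):

import java.util.UUID;

import javax.cache.Cache;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.events.DiscoveryEvent;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class PerNodeCounter {
    private final Ignite ignite;
    private final IgniteCache<UUID, Long> counters;

    public PerNodeCounter(Ignite ignite) {
        this.ignite = ignite;
        this.counters = ignite.getOrCreateCache("perNodeCounters");

        // Drop a node's contribution when it leaves or fails.
        IgnitePredicate<Event> onNodeGone = evt -> {
            counters.remove(((DiscoveryEvent) evt).eventNode().id());
            return true; // keep listening
        };
        ignite.events().localListen(onNodeGone,
            EventType.EVT_NODE_LEFT, EventType.EVT_NODE_FAILED);
    }

    /** Adds delta (possibly negative) to this node's own entry. */
    public void add(long delta) {
        UUID me = ignite.cluster().localNode().id();
        Long cur = counters.get(me);
        // Only the local node writes its own entry, so get/put is enough here;
        // an EntryProcessor or a transaction would make it strictly atomic.
        counters.put(me, (cur == null ? 0L : cur) + delta);
    }

    /** Sums all per-node counters. */
    public long total() {
        long sum = 0;
        for (Cache.Entry<UUID, Long> e : counters)
            sum += e.getValue();
        return sum;
    }
}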

Thanks,

-- 
Bastien Durel
DATA
Intégration des données de l'entreprise,
Systèmes d'information décisionnels.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
12 avenue Raspail, 94250 GENTILLY France
www.data.fr



Re: third-party persistence and junction table

2020-07-24 Thread Bastien Durel
Hello,

OK, so I'll stick with my "dummy char(1) column as value" hack.

Thanks,

On Thursday, 23 July 2020 at 17:41 +0300, Andrei Aleksandrov wrote:
> Hi,
> 
> Unfortunately, Ignite doesn't support that kind of relation out of the
> box. Ignite just translates it to the third-party data storage that is
> used as the cache store.
> 
> It's expected that inserts and updates will be rejected if they break
> some rules.
> 
> BR,
> Andrei
> On 7/21/2020 11:16 AM, Bastien Durel wrote:
> > Hello,
> > 
> > I have a junction table in my model, and used the web console to
> > generate ignite config and classes from my SQL database
> > 
> > -> There is a table user with id (long) and some data
> > -> There is a table role with id (long) and some data
> > -> There is a table user_role with user_id (fk) and role_id (fk)
> > 
> > Reading cache from table works, I can query ignite with jdbc and I
> > get
> > my relations as expected.
> > 
> > But if I want to add a new relation, the query :
> > insert into "UserRoleCache".user_role(USER_ID, ROLE_ID)
> > values(6003, 2)
> > is translated into this one, sent to postgresql :
> > UPDATE public.user_role SET  WHERE (user_id=$1 AND role_id=$2)
> > 
> > Which obviously is rejected.
> > 
> > The web console generated a cache for this table, with UserRole
> > & UserRoleKey types, which each contains userId and roleId Long's.
> > 
> > Is there a better (correct) way to handle these many-to-many
> > relations
> > in ignite (backed by RDBMS) ?
> > 
> > Regards,
> > 
-- 
Bastien Durel
DATA
Intégration des données de l'entreprise,
Systèmes d'information décisionnels.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
12 avenue Raspail, 94250 GENTILLY France
www.data.fr



third-party persistence and junction table

2020-07-21 Thread Bastien Durel
Hello,

I have a junction table in my model, and I used the web console to
generate the Ignite config and classes from my SQL database:

-> There is a table user with id (long) and some data
-> There is a table role with id (long) and some data
-> There is a table user_role with user_id (fk) and role_id (fk)

Reading the cache from the table works: I can query Ignite with JDBC and
I get my relations as expected.

But if I want to add a new relation, the query:
insert into "UserRoleCache".user_role(USER_ID, ROLE_ID) values(6003, 2)
is translated into this one, sent to PostgreSQL:
UPDATE public.user_role SET  WHERE (user_id=$1 AND role_id=$2)

Which obviously is rejected.

The web console generated a cache for this table, with UserRole and
UserRoleKey types, each of which contains userId and roleId Longs.

Is there a better (correct) way to handle these many-to-many relations
in Ignite (backed by an RDBMS)?

Regards,

-- 
Bastien Durel
DATA
Intégration des données de l'entreprise,
Systèmes d'information décisionnels.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
12 avenue Raspail, 94250 GENTILLY France
www.data.fr