Re: Inserting date into ignite with spark jdbc

2020-11-05 Thread Humphrey
Hello, I made a reproducer here.

There are 2 classes to run:
* nl/hlopez/ignitesparkjdbc/server/ServerApplication.kt
* nl/hlopez/ignitesparkjdbc/spark/SparkApplication.kt

ServerApplication starts an Ignite server node.
SparkApplication starts the Spark application that connects to the Ignite
server.

You can see that the method writeUsingSpringConfig works but the method
writeUsingJdbc throws an exception.



aealexsandrov wrote
> Hi,
> 
> It will be great if you share the reproducer.
> 
> BR,
> Andrei





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ReadFromBackup, Primary_SYNC, Backups

2020-11-05 Thread Mahesh Renduchintala
Hi,
Can you please give some feedback on the below?



From: Mahesh Renduchintala 
Sent: Tuesday, November 3, 2020 8:20 AM
To: user@ignite.apache.org 
Subject: ReadFromBackup, Primary_SYNC, Backups

Hi

I have a large SQL table (12 million records) in partitioned cacheMode.
This table is distributed over two server nodes.


-1-

When running a large SELECT from a thick client node, could the data be
fetched from the backup partitions instead of the primary partitions?

Below is the configuration (the XML snippet did not survive the list
archive).

We are seeing some performance improvement since we set readFromBackup=false
and changed a few other things.
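Since the XML did not survive the archive, here is a minimal Java sketch of
the settings under discussion (the cache name "sqlTable" is an assumption;
the setters are the standard CacheConfiguration ones):

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("sqlTable");
ccfg.setCacheMode(CacheMode.PARTITIONED); // rows spread across the two server nodes
ccfg.setBackups(1);                       // one backup copy of each primary partition
ccfg.setReadFromBackup(false);            // route reads to primary partitions only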

-2-
In a two-server system, if readFromBackup = false for a cache and one
server fails, would the 2nd server stop serving client requests, since some
partition data is in the backup copies?


-3-
Is it possible that readFromBackup=true and PRIMARY_SYNC mode for a cache
could give inconsistent data for cacheMode REPLICATED in a 2-server cluster?


-4-
If I set backups=10 in a 2-server system, would it mean that there are 10
backups?

I am guessing Ignite would keep a single backup on each server, not 5 and 5
on each server, and that a new backup copy is created for that cache on any
new node joining the cluster.
Is this the right understanding?





regards
mahesh



[2.9.0]Entryprocessor cannot be hot deployed properly via UriDeploymentSpi

2020-11-05 Thread 18624049226

Hi community,

An EntryProcessor cannot be hot-deployed properly via UriDeploymentSpi. The
steps to reproduce are as follows:


1. Put the jar in the folder specified in uriList;

2. Use example-deploy.xml to start two Ignite nodes;

3. Use the DeployClient to deploy the service named "deployService";

4. Execute the test through ThickClientTest; the result is correct;

5. Modify the code of DeployServiceImpl and DeployEntryProcessor (for
example, change "Hello" to "Hi"), then repackage it and put it into the
folder specified in uriList;

6. Redeploy the service with RedeployClient;

7. Execute the test again through ThickClientTest; this time the result is
incorrect: if the EntryProcessor invoked by the service runs on another
node, it uses the old version of the class definition. (A sketch of the
deployment configuration involved follows below.)
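For context, a minimal sketch of how UriDeploymentSpi is typically wired up
in Java (the deploy folder URI here is an assumption; the report above
configures it through example-deploy.xml):

import java.util.Collections;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.deployment.uri.UriDeploymentSpi;

UriDeploymentSpi deploymentSpi = new UriDeploymentSpi();
// Periodically scan this folder for packages to (re)deploy; path assumed.
deploymentSpi.setUriList(Collections.singletonList("file:///home/test/deploy"));

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDeploymentSpi(deploymentSpi);

Ignite ignite = Ignition.start(cfg);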





Re: Connecting to RDBMS - Failed to instantiate Spring XML application context

2020-11-05 Thread akorensh
Hi,
  If you look at the message, at the very bottom it tells you:

  Caused by: java.lang.ClassNotFoundException:
com.mysql.jdbc.jdbc2.optional.MysqlDataSource

  This means that the class in question is not on the classpath of the Java
VM.

  Add the relevant library to the classpath and it should work.

  To diagnose (a quick classpath probe follows these steps):
   1. Start w/out this bean, and use visualvm/jinfo/jps or the logs inside
Ignite to see what the classpath is.
   2. If you see that the relevant library is not there, add it to the place
where Ignite will look for it, usually the ${ignite_home}/libs dir.
   3. Restart w/out that bean and verify that the classpath contains your
jar file.
   4. Once you have verified that the classpath is correct, start w/the bean
in question.
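As a quick sanity check (a hypothetical helper, not part of the steps
above), you can probe for the driver class with a trivial Java program run
with the same classpath as the node:

public class DriverCheck {
    public static void main(String[] args) throws Exception {
        // Throws ClassNotFoundException when the MySQL connector jar
        // is missing from the classpath.
        Class<?> cls = Class.forName("com.mysql.jdbc.jdbc2.optional.MysqlDataSource");
        System.out.println("Found: " + cls.getName());
    }
}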

 see:
https://ignite.apache.org/docs/latest/installation/installing-using-docker#deploying-user-libraries

https://stackoverflow.com/questions/17151058/how-can-i-view-the-classpath-and-jvm-args-of-an-executing-java-program-in-window

Also Ignite will show you the classpath of the process:
 look for: [INFO ][main][IgniteKernal%node1] Classpath value:
 (for this you need to start Ignite w/the VM system property IGNITE_QUIET=false)
   https://ignite.apache.org/docs/latest/logging#overview

Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: WAL and WAL Archive volume size recommendation

2020-11-05 Thread Denis Magda
Hello Facundo,

Just go ahead and disable the WAL archive. You only need the archive for the
point-in-time-recovery feature that is supported by GridGain. I'll check
with the community why we have the archive enabled by default in a
separate discussion.
https://ignite.apache.org/docs/latest/persistence/native-persistence#disabling-wal-archive
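For completeness, a minimal Java sketch of that doc page's approach (per
the page, pointing walArchivePath at the same directory as walPath disables
archiving; the paths here are assumptions):

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.setWalPath("/opt/work/wal");        // assumed path
storageCfg.setWalArchivePath("/opt/work/wal"); // same dir => archiving disabled

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storageCfg);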

-
Denis


On Thu, Nov 5, 2020 at 11:37 AM facundo.maldonado <
maldonadofacu...@gmail.com> wrote:

> Well, I found some useful numbers between two pages in the documentation.
> [...]


Re: WAL and WAL Archive volume size recommendation

2020-11-05 Thread facundo.maldonado
Well, I found some useful numbers between two pages in the documentation.

"By default, there are 10 active segments."  wal ref

  

"The number of segments kept in the archive is such that the total size of
all segments does not exceed the specified size of the WAL archive.
By default, the maximum size of the WAL archive (total space it occupies on
disk) is defined as 4 times the size of the checkpointing buffer." 
wal-archive ref

  

"The default buffer size is calculated as a function of the data region
size:

Data Region Size   Default Checkpointing Buffer Size
< 1 GB MIN (256 MB, Data_Region_Size)
between 1 GB and 8 GB Data_Region_Size / 4
> 8 GB 2 GB"   checkpoint buffer size
> 
>   

So, if I have:
data region max size: 5 GB
storage vol size: 10 Gi
I can set:
WAL vol size: 1 GB  # WAL size is 10 active segments * 64 MB WAL segment
WAL archive vol size: 5 Gi
# 4 times the checkpoint buffer size
# region < 8 GB: checkpoint buffer is region/4 --> WAL archive size equals
the region size
# region > 8 GB: checkpoint buffer is 2 GB --> WAL archive is at least
4*2 GB == 8 GB
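
The same sizing expressed as a hedged DataStorageConfiguration sketch
(numbers taken from this thread; whether they are sufficient is exactly the
open question):

import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.setWalSegmentSize(64 * 1024 * 1024);           // default 64 MB segments, 10 active
storageCfg.setMaxWalArchiveSize(5L * 1024 * 1024 * 1024); // 4 x 1.25 GB checkpoint buffer

DataRegionConfiguration regionCfg = new DataRegionConfiguration();
regionCfg.setName("default");
regionCfg.setMaxSize(5L * 1024 * 1024 * 1024);            // 5 GB data region
regionCfg.setPersistenceEnabled(true);
// 5 GB region => default checkpoint buffer is region / 4 = 1.25 GB.
regionCfg.setCheckpointPageBufferSize(1280L * 1024 * 1024);

storageCfg.setDefaultDataRegionConfiguration(regionCfg);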

With those settings, I can keep the test running some more time, but the pod
keeps crashing.
At least it seems that I'm not getting the same error as before.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Connecting to RDBMS - Failed to instantiate Spring XML application context

2020-11-05 Thread ABDumalagan
Hello all,

This is my first time using Apache Ignite, and I am working directly with
the given example configuration files in the binary release folder. 

Currently I am using PuTTY, and my objective is to load data from an
existing Oracle database into a cache. The first and only step I've taken
so far is to create my own XML file with the CacheJdbcBlobStore
configurations, and I pass the file as a parameter to the script: bash
ignite [file path].

The error I get is class org.apache.ignite.IgniteException: Failed to
instantiate Spring XML application context, and I think it's caused by
org.springframework.beans.factory.CannotLoadBeanClassException: Cannot
find class.

I have tried to set and export USER_LIBS to the
apache-ignite-2.8.1-bin/benchmarks/libs file path, however I still get the
same error.

Below I have my XML file config and error log - any advice or help is
appreciated! 


XML File

(The XML configuration was mangled by the list archive. What survives shows
a Spring beans document declaring a mysqlDataSource bean, an
IgniteConfiguration bean whose cache configuration uses a CacheJdbcBlobStore
factory, and TCP discovery with the static IP finder address range
127.0.0.1:47500..47509.)


Error

class org.apache.ignite.IgniteException: Failed to instantiate Spring XML
application context (make sure all classes used in Spring configuration are
present at CLASSPATH)
[springUrl=file:/app/apache/apache-ignite-2.8.1-bin/examples/config/example-cache-blob-store.xml]
        at
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:1067)
        at org.apache.ignite.Ignition.start(Ignition.java:349)
        at
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:300)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to
instantiate Spring XML application context (make sure all classes used in
Spring configuration are present at CLASSPATH)
[springUrl=file:/app/apache/apache-ignite-2.8.1-bin/examples/config/example-cache-blob-store.xml]
        at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:387)
        at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:104)
        at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:98)
        at
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:710)
        at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:911)
        at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:820)
        at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690)
        at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:659)
        at org.apache.ignite.Ignition.start(Ignition.java:346)
        ... 1 more
Caused by: org.springframework.beans.factory.CannotLoadBeanClassException:
Cannot find class [com.mysql.jdbc.jdbc2.optional.MysqlDataSource] for bean
with name 'mysqlDataSource' defined in URL
[file:/app/apache/apache-ignite-2.8.1-bin/examples/config/example-cache-blob-store.xml];
nested exception is java.lang.ClassNotFoundException:
com.mysql.jdbc.jdbc2.optional.MysqlDataSource
        at
org.springframework.beans.factory.support.AbstractBeanFactory.resolveBeanClass(AbstractBeanFactory.java:1397)
        at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.determineTargetType(AbstractAutowireCapableBeanFactory.java:638)
        at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.predictBeanType(AbstractAutowireCapableBeanFactory.java:607)
        at
org.springframework.beans.factory.support.AbstractBeanFactory.isFactoryBean(AbstractBeanFactory.java:1496)
        at
org.springframework.beans.factory.support.AbstractBeanFactory.isFactoryBean(AbstractBeanFactory.java:1018)
        at
org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:737)
        at
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)

WAL and WAL Archive volume size recommendation

2020-11-05 Thread Pelado
Hi everyone, I'm running a POC on a small deployment in a kubernetes
environment and after a few minutes of load testing, the data node fails
with this message:

class o.a.i.i.processors.cache.persistence.StorageException: Failed to
archive WAL segment
[srcFile=/opt/work/wal/node00-ef1e49d3-1c67-4527-9a24-bae580a5ed91/0005.wal,
dstFile=/opt/work/walarchive/node00-ef1e49d3-1c67-4527-9a24-bae580a5ed91/0065.wal.tmp]]]
org.apache.ignite.internal.processors.cache.persistence.StorageException:
Failed to archive WAL segment
[srcFile=/opt/work/wal/node00-ef1e49d3-1c67-4527-9a24-bae580a5ed91/0005.wal,
dstFile=/opt/work/walarchive/node00-ef1e49d3-1c67-4527-9a24-bae580a5ed91/0065.wal.tmp]
.
Caused by: java.nio.file.FileSystemException:
/opt/work/wal/node00-ef1e49d3-1c67-4527-9a24-bae580a5ed91/0005.wal
->
/opt/work/walarchive/node00-ef1e49d3-1c67-4527-9a24-bae580a5ed91/0065.wal.tmp:
No space left on device

I have one data node with a cache, persistence enabled, and 3 PVCs, one
each for storage, WAL, and WAL archive.
I load data from a Kafka topic using a Kafka Streamer running in a
different pod.
The incoming load (at the topic) is about 5K records per second.
The average record size is 1.8 KB.

The data region is configured with a maxSize of 5 GB,
the storage volume with 10 GB,
the WAL volume with 2 GB,
and the WAL archive with 2 GB (also tried 3 and 4).

The rest of the settings (page size, WAL segment size, etc.) have default
values.
Ignite version is 2.9.0.

My question is: is there some recommendation on how large these volumes
should be relative to the storage size, record size, or some other factor?
Maybe the WAL segment? If I increase the WAL segment size from 64 MB (the
default) to, let's say, 512 MB, how much should I increase the WAL and WAL
archive volumes?

Thanks,
-- 
Facundo Maldonado


Ignite Client Node Stopped : OOM Error

2020-11-05 Thread Ravi Makwana
Hi,

We are using the Apache Ignite 2.7.0 binary; the servers are running Linux
and the app servers are running Windows. We are using the Apache Ignite
.NET APIs.

Recently we have noticed that our application's Ignite client node stopped
with an OOM error.

The app server has 32 GB RAM and we are specifying JVM heap = 8 GB.

I am sharing the client log; could you please suggest why the client node
is not responding, what the possible cause is, and how to resolve this
issue?


Thanks & Regards,


logs.rar
Description: Binary data


Re: Large Heap with lots of BinaryMetaDataHolders

2020-11-05 Thread ssansoy
Hi Andrew, any thoughts on this? Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Client App Object Allocation Rate

2020-11-05 Thread ssansoy
Hi, was there any update on this? Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to get column names for a query in Ignite thin client mode

2020-11-05 Thread Shravya Nethula
Hi Alex,

Thank you for the information.
Is there a possibility of getting the datatypes in thick client mode?


Regards,

Shravya Nethula,

BigData Developer,


Hyderabad.


From: Alex Plehanov 
Sent: Thursday, November 5, 2020 12:03 PM
To: user@ignite.apache.org 
Cc: Bhargav Kunamneni 
Subject: Re: How to get column names for a query in Ignite thin client mode

Currently, only field names can be obtained; there is no information about
field data types in the thin client protocol.

Wed, 4 Nov 2020 at 13:58, Shravya Nethula
<shravya.neth...@aline-consulting.com>:
Ilya and Alex,

Thank you for the information.
Can you please also suggest how to get the datatypes of those columns obtained 
from the query?



Regards,

Shravya Nethula,

BigData Developer,


Hyderabad.


From: Alex Plehanov <plehanov.a...@gmail.com>
Sent: Tuesday, November 3, 2020 12:13 PM
To: user@ignite.apache.org <user@ignite.apache.org>
Subject: Re: How to get column names for a query in Ignite thin client mode

Columns information is read by thin-client only after the first data request, 
so you need to read at least one row to get columns.

Tue, 3 Nov 2020 at 09:31, Ilya Kazakov
<kazakov.i...@gmail.com>:

Hello, Shravya! It is very interesting! I tried to reproduce your case, and
here is what I see: I can see column names in the thin client only after
query execution.

For example:

ClientConfiguration clientConfig = new
    ClientConfiguration().setAddresses("127.0.0.1");
try (IgniteClient thinClient = Ignition.startClient(clientConfig)) {
    SqlFieldsQuery sql = new SqlFieldsQuery("SELECT * FROM T1");
    FieldsQueryCursor<List<?>> cursor = thinClient.query(sql);
    // Column metadata arrives with the first data page, so fetch first.
    cursor.getAll();
    int count = cursor.getColumnsCount();
    System.out.println(count);
    List<String> columnNames = new ArrayList<>();
    for (int i = 0; i < count; i++) {
        String columnName = cursor.getFieldName(i);
        columnNames.add(columnName);
    }
    System.out.println("columnNames:::" + columnNames);
}

Whether this is the correct behavior I do not know yet; I will try to find out.


Ilya Kazakov

Tue, 3 Nov 2020 at 12:51, Shravya Nethula
<shravya.neth...@aline-consulting.com>:
Hi,

For the Ignite thick client, the column names for a given SQL query come up
as expected with the following code:

public class ClientNode {

    public static void main(String[] args) {
        IgniteConfiguration igniteCfg = new IgniteConfiguration();
        igniteCfg.setClientMode(true);

        Ignite ignite = Ignition.start(igniteCfg);
        IgniteCache<Object, Object> foo = ignite.getOrCreateCache("foo");

        SqlFieldsQuery sql = new SqlFieldsQuery("SELECT * FROM person");
        FieldsQueryCursor<List<?>> cursor = foo.query(sql);
        int count = cursor.getColumnsCount();
        List<String> columnNames = new ArrayList<>();

        for (int i = 0; i < count; i++) {
            String columnName = cursor.getFieldName(i);
            columnNames.add(columnName);
        }
        System.out.println("columnNames:::" + columnNames);
    }
}

 Output:
 columnNames:::[ID, NAME, LAST_NAME, AGE, CITY_ID, EMAIL_ID]



On the other hand, for the thin client, the column names come up as an
empty list.
The following is the code:

public class ClientNode {

    public static void main(String[] args) {
        ClientConfiguration clientConfig = new ClientConfiguration();
        clientConfig.setUserName("username");
        clientConfig.setUserPassword("password");

        IgniteClient thinClient = Ignition.startClient(clientConfig);

        SqlFieldsQuery sql = new SqlFieldsQuery("SELECT * FROM person");
        FieldsQueryCursor<List<?>> cursor = thinClient.query(sql);
        int count = cursor.getColumnsCount();
        List<String> columnNames = new ArrayList<>();

        for (int i = 0; i < count; i++) {
            String columnName = cursor.getFieldName(i);
            columnNames.add(columnName);
        }
        System.out.println("columnNames:::" + columnNames);
    }
}

Output:
columnNames:::[ ]

While using IgniteCache.query(SqlFieldsQuery), the column names come up,
but while using IgniteClient.query(SqlFieldsQuery), they do not. Are we
missing any configuration? Is there something wrong in the code? Also, is
there any way we can identify the datatypes of the columns given in the
query? We are looking for the datatypes of the columns in the query, not
the datatypes of the columns in the table.

Any help here will be much appreciated!
Thanks in advance!
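
Per Alex's and Ilya's replies above, the minimal fix for the thin-client
snippet is to consume the cursor before reading the field names. A fragment
sketch (to drop into the main method above):

FieldsQueryCursor<List<?>> cursor = thinClient.query(sql);
cursor.getAll(); // forces the first data request, which carries column metadata
int count = cursor.getColumnsCount(); // now returns the real column count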



Regards,

Shravya Nethula,

BigData Developer,


Hyderabad.


Re: New Node - Rebalancing

2020-11-05 Thread Maxim Muzafarov
Hi Mahesh,

In addition to Denis's answer, please check the new rebalancing
metrics added for each cache group (2.8 and 2.9 releases):
https://issues.apache.org/jira/browse/IGNITE-12193
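
A minimal Java sketch of polling these metrics (assuming an already-started
Ignite instance and a hypothetical cache name "myCache"; cache metrics must
be enabled for meaningful values):

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.CacheMetrics;

CacheMetrics metrics = ignite.cache("myCache").metrics();
// Zero partitions left to rebalance means rebalancing has finished
// for this cache on the queried node.
if (metrics.getRebalancingPartitionsCount() == 0)
    System.out.println("Rebalancing finished for myCache");
else
    System.out.println("Estimated rebalancing finish time: "
        + metrics.getEstimatedRebalancingFinishTime());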

On Tue, 3 Nov 2020 at 22:35, Denis Magda  wrote:
>
> Hi Mahesh,
>
> Use these metrics to monitor the progress:
>
> JMX: 
> https://ignite.apache.org/docs/latest/monitoring-metrics/metrics#monitoring-rebalancing
> Rebalancing widget of Control Center: 
> https://www.gridgain.com/docs/control-center/latest/monitoring/configuring-widgets#rebalance-widget
>
>
> -
> Denis
>
>
> On Tue, Nov 3, 2020 at 11:14 AM Mahesh Renduchintala 
>  wrote:
>>
>> Hi,
>>
>> As soon as we add a new server node into the cluster, rebalancing starts
>> - this is clear.
>> Is there a way to know when the rebalancing successfully ends on the new
>> server node?
>> Caches in the cluster are both replicated and partitioned.
>>
>> regards
>> Mahesh
>>
>>


Re: [ignite 2.9.0] thin clients cannot access the Ignite Service deployed through UriDeploymentSpi( java.lang.ClassNotFoundException)

2020-11-05 Thread Alex Plehanov
Hello,

Thanks for the report, I will try to fix it shortly.

Wed, 4 Nov 2020 at 12:35, 18624049226 <18624049...@163.com>:

> Hi community,
>
> The operation steps are as follows:
>
> 1. Use ignite.sh example-deploy.xml to start a server node
>
> 2. Put the service jar package in the /home/test/deploy directory
>
> 3. Deploy services using DeployClient
>
> 4. If you use ThickClientTest and ThinClientTest to access the service
> respectively, you will find that the ThickClientTest access is
> successful, but the ThinClientTest access fails. The error is
> java.lang.ClassNotFoundException.
>
> See ticket below for details:
>
> https://issues.apache.org/jira/browse/IGNITE-13633
>
>
>


L2-cache slow/not working as intended

2020-11-05 Thread Bastien Durel
Hello,

I'm using an Ignite cluster to back a Hibernate-based application. I
configured the L2 cache as explained in
https://ignite.apache.org/docs/latest/extensions-and-integrations/hibernate-l2-cache
(config below).

I ran a test reading a 1M-element cache with a consumer counting the
elements. It's very slow: more than 5 minutes to run.

Session metrics say it is the L2C puts that take most of the time (5
minutes and 3 seconds of a 5:12 operation):

INFO  [2020-11-05 09:51:15,694] 
org.hibernate.engine.internal.StatisticalLoggingSessionEventListener: Session 
Metrics {
33350 nanoseconds spent acquiring 1 JDBC connections;
25370 nanoseconds spent releasing 1 JDBC connections;
571572 nanoseconds spent preparing 1 JDBC statements;
1153110307 nanoseconds spent executing 1 JDBC statements;
0 nanoseconds spent executing 0 JDBC batches;
303191158712 nanoseconds spent performing 100 L2C puts;
23593547 nanoseconds spent performing 1 L2C hits;
0 nanoseconds spent performing 0 L2C misses;
370656057 nanoseconds spent executing 1 flushes (flushing a total of 
101 entities and 2 collections);
4684 nanoseconds spent executing 1 partial-flushes (flushing a total of 0 
entities and 0 collections)
}

It seems long, even for 1M puts, but OK, let's say the L2C is initialized
now and it will be better next time? So I ran the query again, but it took
5+ minutes again...

INFO  [2020-11-05 09:58:02,538] 
org.hibernate.engine.internal.StatisticalLoggingSessionEventListener: Session 
Metrics {
28982 nanoseconds spent acquiring 1 JDBC connections;
25974 nanoseconds spent releasing 1 JDBC connections;
52468 nanoseconds spent preparing 1 JDBC statements;
1145821128 nanoseconds spent executing 1 JDBC statements;
0 nanoseconds spent executing 0 JDBC batches;
303763054228 nanoseconds spent performing 100 L2C puts;
1096985 nanoseconds spent performing 1 L2C hits;
0 nanoseconds spent performing 0 L2C misses;
317558122 nanoseconds spent executing 1 flushes (flushing a total of 
101 entities and 2 collections);
5500 nanoseconds spent executing 1 partial-flushes (flushing a total of 0 
entities and 0 collections)
}

Why did the L2 cache have to be filled again? Isn't its purpose to share
entries between sessions?

Actually, disabling it makes the test run in less than 6 seconds.

Why is the L2C working that way?

Regards,


I'm running 2.9.0 from the Debian package.

Hibernate properties :
hibernate.cache.use_second_level_cache: true
hibernate.generate_statistics: true
hibernate.cache.region.factory_class: 
org.apache.ignite.cache.hibernate.HibernateRegionFactory
org.apache.ignite.hibernate.ignite_instance_name: ClusterWA
org.apache.ignite.hibernate.default_access_type: READ_ONLY
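
For reference, a minimal sketch of how an entity is typically marked for
the Hibernate L2 cache (the Event mapping below is an assumption; the
original mapping is not shown in this message):

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cacheable
// Matches the READ_ONLY default_access_type configured above.
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
public class Event {
    @Id
    private Long id;
    // ... other fields ...
}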

Method code:
@GET
@Timed
@UnitOfWork
@Path("/events/speed")
public Response getAllEvents(@Auth AuthenticatedUser auth) {
    AtomicLong id = new AtomicLong();
    StopWatch watch = new StopWatch();
    watch.start();
    evtDao.findAll().forEach(new Consumer<Event>() {

        @Override
        public void accept(Event t) {
            long cur = id.incrementAndGet();
            if (cur % 65536 == 0)
                logger.debug("got element#{}", cur);
        }
    });
    watch.stop();
    return Response.ok().header("X-Count",
        Long.toString(id.longValue())).entity(new Time(watch)).build();
}

Event cache config: