Handling Of Partition loss

2019-09-16 Thread Akash Shinde
Hi,
I am trying to recover lost data in case of partition loss.
In my Ignite configuration native persistence is *off*.
I have registered an event listener for the EVT_CACHE_REBALANCE_PART_DATA_LOST event.
This listener gets the list of lost partitions using the cache.lostPartitions()
method.
The issue is that the listener is called once per partition. So if 100
partitions are lost due to a single node termination, the listener is called
100 times, and only the last of those calls sees the complete list of lost
partitions.
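
For reference, a minimal sketch of how such a listener can be registered (the
cache name is a placeholder, and the event type must be enabled via
IgniteConfiguration#setIncludeEventTypes):

import java.util.Collection;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.events.CacheRebalancingEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class LostPartitionListener {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, String> cache = ignite.cache("ASSET_GROUP_CACHE");

        // Local listener: invoked once per lost partition on this node.
        IgnitePredicate<CacheRebalancingEvent> lsnr = evt -> {
            Collection<Integer> lost = cache.lostPartitions();
            System.out.println("Partition " + evt.partition()
                + " lost, all lost so far: " + lost);
            return true; // keep listening
        };

        ignite.events().localListen(lsnr, EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST);
    }
}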

*Let's take a scenario:*
Started two server nodes, Node A and Node B. Started a cache in
partitioned mode with the number of backups set to 0 in order to make it
easy to simulate partition loss.
Started an event listener on both nodes listening to the
EVT_CACHE_REBALANCE_PART_DATA_LOST event.

Number of partitions on node A = 500
Number of partitions on node B = 524

Now stop node B. After the termination of node B, the listener running on
node A is called multiple times, once per partition.
I have printed logs in the listener:

primary partition size after loss:1024
*Lost partion Nos.1*
IgniteThread [compositeRwLockIdx=1, stripe=-1, plc=-1,
name=exchange-worker-#42%springDataNode%]::*[0]*
Event Detail:CacheRebalancingEvent [cacheName=ASSET_GROUP_CACHE, part=0,
discoNode=TcpDiscoveryNode [id=1bb17828-3556-499f-a4e6-98cfdc1d11fb,
addrs=[0:0:0:0:0:0:0:1, 10.113.14.98, 127.0.0.1], sockAddrs=[],
discPort=47501, order=2, intOrder=2, lastExchangeTime=1568357181089,
loc=false, ver=2.6.0#20180710-sha1:669feacc, isClient=false],
discoEvtType=12, discoTs=1568357376683, discoEvtName=NODE_FAILED,
nodeId8=499400ac, msg=Cache rebalancing event.,
type=CACHE_REBALANCE_PART_DATA_LOST, tstamp=1568357376714]
primary partition size after loss:1024
*Lost partion Nos.2*
IgniteThread [compositeRwLockIdx=1, stripe=-1, plc=-1,
name=exchange-worker-#42%springDataNode%]::*[0, 1]*
Event Detail:CacheRebalancingEvent [cacheName=ASSET_GROUP_CACHE, part=1,
discoNode=TcpDiscoveryNode [id=1bb17828-3556-499f-a4e6-98cfdc1d11fb,
addrs=[0:0:0:0:0:0:0:1, 10.113.14.98, 127.0.0.1], sockAddrs=[],
discPort=47501, order=2, intOrder=2, lastExchangeTime=1568357181089,
loc=false, ver=2.6.0#20180710-sha1:669feacc, isClient=false],
discoEvtType=12, discoTs=1568357376683, discoEvtName=NODE_FAILED,
nodeId8=499400ac, msg=Cache rebalancing event.,
type=CACHE_REBALANCE_PART_DATA_LOST, tstamp=1568357376726]
primary partition size after loss:1024
*Lost partion Nos.3*
IgniteThread [compositeRwLockIdx=1, stripe=-1, plc=-1,
name=exchange-worker-#42%springDataNode%]::*[0, 1, 2]*
Event Detail:CacheRebalancingEvent [cacheName=ASSET_GROUP_CACHE, part=2,
discoNode=TcpDiscoveryNode [id=1bb17828-3556-499f-a4e6-98cfdc1d11fb,
addrs=[0:0:0:0:0:0:0:1, 10.113.14.98, 127.0.0.1], sockAddrs=[],
discPort=47501, order=2, intOrder=2, lastExchangeTime=1568357181089,
loc=false, ver=2.6.0#20180710-sha1:669feacc, isClient=false],
discoEvtType=12, discoTs=1568357376683, discoEvtName=NODE_FAILED,
nodeId8=499400ac, msg=Cache rebalancing event.,
type=CACHE_REBALANCE_PART_DATA_LOST, tstamp=1568357376726]
primary partition size after loss:1024
*Lost partion Nos.4*
IgniteThread [compositeRwLockIdx=1, stripe=-1, plc=-1,
name=exchange-worker-#42%springDataNode%]::*[0, 1, 2, 4]*
Event Detail:CacheRebalancingEvent [cacheName=ASSET_GROUP_CACHE, part=4,
discoNode=TcpDiscoveryNode [id=1bb17828-3556-499f-a4e6-98cfdc1d11fb,
addrs=[0:0:0:0:0:0:0:1, 10.113.14.98, 127.0.0.1], sockAddrs=[],
discPort=47501, order=2, intOrder=2, lastExchangeTime=1568357181089,
loc=false, ver=2.6.0#20180710-sha1:669feacc, isClient=false],
discoEvtType=12, discoTs=1568357376683, discoEvtName=NODE_FAILED,
nodeId8=499400ac, msg=Cache rebalancing event.,
type=CACHE_REBALANCE_PART_DATA_LOST, tstamp=1568357376736]
primary partition size after loss:1024
*Lost partion Nos.5*
*.*
*.*
*.*
*.*
IgniteThread [compositeRwLockIdx=1, stripe=-1, plc=-1,
name=exchange-worker-#42%springDataNode%]::[0, 1, 2, 4, 5, 6, 7, 11, 13,
17, 22, 26, 28, 29, 30, 33, 34, 37, 38, 41, 43, 45, 47, 48, 49, 50, 55, 58,
61, 62, 64, 65, 68, 70, 71, 75, 77, 79, 81, 82, 85, 87, 88, 89, 90, 93,
100, 101, 102, 104, 110, 112, 114, 116, 121, 123, 125, 126, 132, 133, 135,
137, 138, 139, 140, 144, 145, 146, 147, 149, 150, 151, 154, 156, 157, 158,
163, 164, 165, 169, 170, 172, 173, 176, 178, 180, 182, 183, 184, 185, 195,
196, 198, 199, 203, 204, 212, 213, 215, 217, 219, 220, 222, 223, 224, 226,
227, 230, 233, 234, 236, 237, 240, 242, 245, 248, 250, 251, 253, 255, 257,
258, 263, 265, 266, 267, 269, 270, 272, 273, 275, 276, 277, 278, 281, 282,
283, 287, 288, 292, 293, 295, 296, 297, 298, 300, 301, 302, 305, 308, 309,
310, 311, 313, 314, 315, 318, 319, 320, 322, 323, 324, 326, 327, 328, 329,
330, 331, 332, 333, 336, 340, 342, 344, 347, 348, 349, 351, 352, 353, 354,
355, 357, 362, 364, 369, 370, 371, 373, 374, 375, 376, 380, 382, 383, 387,
389, 394, 395, 396, 397, 398, 401, 402, 403, 407, 408, 409, 410, 411, 412,
413, 416, 417, 421, 424, 425, 427, 430, 431, 433, 435, 437, 438, 439, 440,
441, 442,

Re: Ignite query performance with lots of joins

2019-09-16 Thread Denis Magda
The Ignite SQL engine should not be seen as a competitor (in a single-node
scenario) to good old RDBMSs like Postgres or MySQL. Those databases have
been developed for decades and are optimized for single-machine
deployments, while our efforts (the Ignite community's) went into distributed
optimizations for applications that have to scale out and utilize RAM in a
distributed fashion. That's why we give that recommendation, to avoid any
misunderstanding. It probably needs to be rewritten a bit for more clarity.

Btw, if you are around Silicon Valley, stop by my session at PostgresConf
this week:
https://postgresconf.org/conferences/SV2019/program/proposals/postgresql-with-in-memory-computing-faster-transactions-and-analytics-d6bb1225-4721-46ce-beda-e44f5e7c333e

As for your specific case, I think the primary bottleneck is the number of
JOINs. It makes sense to rewrite this query first and then scale out for
bigger benefits.

Ivan Pavluknin, Stan Lukyanov, Pavel Vinokurov, could you folks please
check out this thread and this repo and suggest any optimizations?
https://github.com/spoutnik-be/h2-ignite-perf.git


-
Denis


On Mon, Sep 16, 2019 at 6:08 AM spoutnik_be  wrote:

> About the quote on Stackoverflow:
> "Ignite/GridGain is optimized for multi-nodes deployments with RAM as a
> primary storage. Don’t try to compare a single-node GridGain cluster to a
> relational database that was optimized for such single-node configurations.
> You should deploy a multi-node GridGain cluster with the whole copy of data
> in RAM."
>
> Not sure how to interpret the above statement. The support for SQL is an
> attractive feature of Ignite/Gridgain, but if it doesn't perform on a
> single
> node with little data I don't see how it will perform on a multi-node
> cluster.
>
> What would be then your recommendation? Should we implement a SQL converter
> to translate queries into something else Ignite could run faster?
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


RE: Data region LRU offheap algo not working

2019-09-16 Thread Alexandr Shapkin
Hello!

As far as I can tell, eviction is working correctly here.

>2019-09-09 11:04:03.557 WARN  [sys-stripe-5-#6%TemenosGrid%] 
>IgniteCacheDatabaseSharedManager - Page-based evictions started. Consider 
>increasing 'maxSize' on Data Region configuration: 1G_Region

This message says that the eviction process has started and will free pages
according to the algorithm.
It means that when Ignite requests a new page and there are no available
ones, Ignite will apply eviction first and then reuse those pages for the new
values.
So you will not see any significant memory changes during eviction.
I.e. Ignite will show that the data region is full, but still allow you to
write new values.

There are no special log messages about evicted pages, only the
first one, when eviction comes into play.
I think you may try looking at the getEvictionRate() metric [1] for details.
You should also be able to see it in the Visor GUI.
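
For example, a minimal sketch of reading that metric programmatically (the
region name is a placeholder, and metrics must be enabled on the data region
via DataRegionConfiguration#setMetricsEnabled(true)):

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class EvictionRatePrinter {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Metrics for a single data region, looked up by name.
        DataRegionMetrics m = ignite.dataRegionMetrics("1G_Region");

        if (m != null) {
            System.out.println("Allocation rate: " + m.getAllocationRate() + " pages/sec");
            System.out.println("Eviction rate:   " + m.getEvictionRate() + " pages/sec");
        }
    }
}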

[1] - 
https://apacheignite.readme.io/docs/memory-metrics#section-data-region-metrics

From: rick_tem
Sent: Tuesday, September 10, 2019 10:12 AM
To: user@ignite.apache.org
Subject: Data region LRU offheap algo not working

Hi,

I am trying to find out why the RANDOM_LRU algorithm doesn't seem to
work with the following config. Logs are attached as well. After the log
line below appears,

2019-09-09 11:04:03.557 WARN  [sys-stripe-5-#6%TemenosGrid%]
IgniteCacheDatabaseSharedManager - Page-based evictions started. Consider
increasing 'maxSize' on Data Region configuration: 1G_Region

memory steadily decreases over the next few minutes. What information in
the log will help me determine how many pages are freed, etc.?

Thanks,
Rick

dateRepo1.out
dataRepo2.out
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Running a C++ Thick Node compute func from a Java Thick Node

2019-09-16 Thread Alexandr Shapkin
Json,

Could you share an example of why you need this feature?

In general, platform-specific nodes can execute only native code,
i.e. C++ -> C++, C# -> C#, etc.

But since a thick client is essentially a wrapper around a Java one, it is
quite easy to call Java code on a remote node.


From: Igor Sapego
Sent: Monday, September 16, 2019 6:14 PM
To: user
Subject: Re: Running a C++ Thick Node compute func from a Java Thick Node

Hello,

Such function is not supported for now. You can raise a ticket
if it's something you'd like to have in Ignite.


Best Regards,
Igor


On Mon, Sep 16, 2019 at 6:00 PM codie  wrote:
Hello,
I have two nodes running with a TCP Discovery Spi, one is a C++ Thick 
node and one is a Java Thick node. The C++ node has the examples compute 
func "CountWords" registered. How can I trigger this function from the 
Java thick nodes? Does "RegisterComputeFunc" not inform other nodes, 
that the compute function can be found on this specific c++ node?

Thanks,
Json



RE: Unsubscribe

2019-09-16 Thread Alexandr Shapkin
Hello,

Please send an email to user-unsubscr...@ignite.apache.org and
dev-unsubscr...@ignite.apache.org.

More details https://ignite.apache.org/community/resources.html#mail-lists

From: Deepa Kolwalkar
Sent: Monday, September 16, 2019 12:37 PM
To: user@ignite.apache.org; d...@ignite.apache.org
Subject: Unsubscribe




Re: Get all cache entries

2019-09-16 Thread Taras Ledkov

Hi,


Please take a look at IgniteCache#query.
You can use a ScanQuery; see the example [1].

[1]. 
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheQueryExample.java#L121
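
A minimal sketch along those lines (cache name and key/value types are
placeholders):

import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class ScanAllEntries {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

        // A ScanQuery with no filter returns every entry; the cursor streams
        // results page by page instead of loading them all into memory at once.
        try (QueryCursor<Cache.Entry<Integer, String>> cursor =
                 cache.query(new ScanQuery<Integer, String>())) {
            for (Cache.Entry<Integer, String> entry : cursor)
                System.out.println(entry.getKey() + " -> " + entry.getValue());
        }
    }
}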



On 16.09.2019 18:04, Кузин Никита (Nikita Kuzin) wrote:

Hello!

What is the preferred way to get all elements from a cache? (Something
like IgniteStreamer, but in the other direction.)


Thank you

_
Best regards, Nikita Kuzin
Lead software developer

* Интернейшнл АйТи Дистрибьюшн*

e-mail: nku...@iitdgroup.ru
tel.: 84995021375 ext. 320
mobile: 79260948887
115114, Moscow, Derbenevskaya St., 20-27


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Running a C++ Thick Node compute func from a Java Thick Node

2019-09-16 Thread Igor Sapego
Hello,

Such function is not supported for now. You can raise a ticket
if it's something you'd like to have in Ignite.

Best Regards,
Igor


On Mon, Sep 16, 2019 at 6:00 PM codie  wrote:

> Hello,
> I have two nodes running with a TCP Discovery Spi, one is a C++ Thick
> node and one is a Java Thick node. The C++ node has the examples compute
> func "CountWords" registered. How can I trigger this function from the
> Java thick nodes? Does "RegisterComputeFunc" not inform other nodes,
> that the compute function can be found on this specific c++ node?
>
> Thanks,
> Json
>
>


Get all cache entries

2019-09-16 Thread Nikita Kuzin
Hello!

What is the preferred way to get all elements from a cache? (Something like
IgniteStreamer, but in the other direction.)

Thank you

_
Best regards, Nikita Kuzin
Lead software developer

 Интернейшнл АйТи Дистрибьюшн

e-mail: nku...@iitdgroup.ru
tel.: 84995021375 ext. 320
mobile: 79260948887
115114, Moscow, Derbenevskaya St., 20-27


Running a C++ Thick Node compute func from a Java Thick Node

2019-09-16 Thread codie

Hello,
I have two nodes running with a TCP Discovery Spi, one is a C++ Thick 
node and one is a Java Thick node. The C++ node has the examples compute 
func "CountWords" registered. How can I trigger this function from the 
Java thick nodes? Does "RegisterComputeFunc" not inform other nodes, 
that the compute function can be found on this specific c++ node?


Thanks,
Json



Re: Ignite query performance with lots of joins

2019-09-16 Thread spoutnik_be
About the quote on Stackoverflow:
"Ignite/GridGain is optimized for multi-nodes deployments with RAM as a
primary storage. Don’t try to compare a single-node GridGain cluster to a
relational database that was optimized for such single-node configurations.
You should deploy a multi-node GridGain cluster with the whole copy of data
in RAM."

Not sure how to interpret the above statement. The support for SQL is an
attractive feature of Ignite/Gridgain, but if it doesn't perform on a single
node with little data I don't see how it will perform on a multi-node
cluster.

What would be then your recommendation? Should we implement a SQL converter
to translate queries into something else Ignite could run faster?






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: [cpp] Setup a near cache on client and server nodes

2019-09-16 Thread Igor Sapego
Denis,

No ideas here. I think it is not possible right now from pure C++.
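
If one goes with the Java helper approach mentioned in the quoted thread
below, a minimal sketch might look like this (cache name and eviction size
are placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class NearCacheStarter {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Near cache with an LRU eviction policy holding up to 10,000 entries.
        NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();
        nearCfg.setNearEvictionPolicy(new LruEvictionPolicy<>(10_000));

        CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");

        // Creates the cache cluster-wide and a near cache on this node.
        ignite.getOrCreateCache(cacheCfg, nearCfg);
    }
}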

Best Regards,
Igor


On Sat, Sep 14, 2019 at 1:02 AM Denis Magda  wrote:

> Igor,
>
> Any idea how to start a cache dynamically from C++ thick client passing
> near cache settings?
>
> Don't see how we can do it now unless you start a special Java app that
> starts the caches with required settings and dies.
>
> -
> Denis
>
>
> On Fri, Sep 13, 2019 at 2:34 PM Oleg Popov  wrote:
>
>> As I wrote before - I create caches dynamically and cannot declare cache
>> configuration in client.xml because cache doesn't exist yet.
>>
>> --
>> *From: *"Denis Magda" 
>> *To: *"user" 
>> *Sent: *Friday, September 13, 2019 10:50:16 PM
>> *Subject: *Re: [cpp] Setup a near cache on client and server nodes
>>
>> Oleg,
>> You need to add the near cache settings to Ignite client configuration
>> explicitly. Please try out the code snippets from this documentation page:
>> https://apacheignite.readme.io/docs/near-caches
>>
>> -
>> Denis
>>
>>
>> On Fri, Sep 13, 2019 at 12:36 PM Oleg Popov  wrote:
>>
>>> I use thick client. I don't have any records with caches configurations
>>> in my client XML file (I create caches dynamically through REST requests
>>> and caches templates).
>>>
>>> I don't know where I should place a near cache configuration on a
>>> client. Should I explicitly declare a cache configuration and put a near
>>> cache configuration into it  ?
>>>
>>> Could you share a working settings (data node, client node) of a near
>>> cache ?
>>>
>>> --
>>> *From: *"Denis Magda" 
>>> *To: *"user" 
>>> *Sent: *Friday, September 13, 2019 8:46:06 PM
>>> *Subject: *Re: [cpp] Setup a near cache on client and server nodes
>>>
>>> Hello Oleg,
>>> Just to confirm, do you use C++ thin or thick (regular) client? If you
>>> have inserted this property into the configuration on the client side then
>>> it should work. Something might have failed on the visor end.
>>>
>>> Btw, do you see any performance difference after turning on/off the near
>>> cache?
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Thu, Sep 12, 2019 at 12:30 AM Oleg Popov  wrote:
>>>
 Hello.

 Configuration:

 1. N1 and N2 - data nodes (in different k8s clusters).
 2. C1 - client node (outside of k8s clusters; C++ client node).
 3. All caches are replicated. Caches are created through REST/cache
 templates.

 Need:

 1. C1 has to have a near cache for better performance.

 Question:

 1. How to enable and use a near cache on client and data nodes ?
 2. Is there any support NearConfiguration in C++ ?

 I have already tried to add:

 [near-cache XML snippet stripped by the mail archive; it declared an
 eviction policy bean of class
 org.apache.ignite.cache.eviction.lru.LruEvictionPolicy]

 to a cache template, but ignitevisor shows that near cache is disabled
 ("off" state).


 С уважением, Попов О.В. / Best regards, Popov V Oleg

>>>
>>


RE: Authentication

2019-09-16 Thread Kurt Semba
Hi Andrei,

good to know – thank you.

So we need to distinguish between auth for

  1.  thin clients like JDBC clients and
  2.  thick clients (a Java client that wants to join the cluster, as server
or client)


I will look at GridSecurityProcessor for item 2 but in the meantime: I saw the 
CREATE command to create new SQL users on a freshly started cluster. How would 
you execute that using Java code? Would the app need to start the cluster, then 
use the Ignite JDBC driver to connect to the (PUBLIC) schema of that cluster, 
then run the CREATE SQL command and then exit?
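
Something along those lines is what I have in mind, as a minimal sketch
(host, user name and password are placeholders; the default superuser is
ignite/ignite):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateSqlUser {
    public static void main(String[] args) throws Exception {
        // Connect with the default superuser over the thin JDBC driver.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:ignite:thin://127.0.0.1:10800", "ignite", "ignite");
             Statement stmt = conn.createStatement()) {
            // Create an additional SQL user for thin connections.
            stmt.executeUpdate("CREATE USER appuser WITH PASSWORD 'secret'");
        }
    }
}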

Kurt

From: Andrei Aleksandrov 
Sent: Monday, September 16, 2019 12:13 PM
To: user@ignite.apache.org
Subject: Re: Authentication


Hi,

I guess Ignite has a documentation gap here. Out of the box, advanced security
works only with thin connections like the Web Console, ODBC/JDBC, etc.

To get cluster node authentication you should add a GridSecurityProcessor
implementation:

https://apacheignite.readme.io/docs/advanced-security#section-enable-authentication

I created a ticket for the documentation:

https://issues.apache.org/jira/browse/IGNITE-12170

BR,
Andrei
On 9/16/2019 10:43 AM, Kurt Semba wrote:
Hi all,

I used the web-console to auto-generate some code and then extended the 
ServerNodeCodeStartup.java class according to the documentation to enable 
authentication (which requires enabling persistence) like this:

public static void main(String[] args) throws Exception {
IgniteConfiguration cfg = 
ServerConfigurationFactory.createConfiguration();

// Ignite persistence configuration.
DataStorageConfiguration storageCfg = new DataStorageConfiguration();

// Enabling the persistence.

storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

// Applying settings.
cfg.setDataStorageConfiguration(storageCfg);

// Enable authentication
cfg.setAuthenticationEnabled(true);

Ignite ignite = Ignition.start(cfg);

// Activate the cluster.
// This is required only if the cluster is still inactive.
ignite.cluster().active(true);

// Get all server nodes that are already up and running.
Collection nodes = ignite.cluster().forServers().nodes();

// Set the baseline topology that is represented by these nodes.
ignite.cluster().setBaselineTopology(nodes);
}


But when I run this, the output shows “authentication=off” and I can also 
connect a client without providing any user+pass…

[…]
[08:57:13] Security status [authentication=off, tls/ssl=off]
[…]
[08:57:16] Ignite node started OK (id=1f668071, instance name=ImportedCluster6)
[08:57:16] Topology snapshot [ver=1, locNode=1f668071, servers=1, clients=0, 
state=INACTIVE, CPUs=4, offheap=2.3GB, heap=2.6GB]
[08:57:16]   ^-- Baseline [id=0, size=1, online=1, offline=0]
[08:57:16]   ^-- All baseline nodes are online, will start auto-activation
[08:57:16] Ignite node stopped in the middle of checkpoint. Will restore memory 
state and finish checkpoint on node start.
[08:57:16] Both Ignite native persistence and CacheStore are configured for 
cache 'NsdevicesCache'. This configuration does not guarantee strict 
consistency between CacheStore and Ignite data storage upon restarts. Consult 
documentation for more details.

Any idea what I’m doing wrong?

I will also look into enabling TLS but wanted to start with user+pass auth.

Thanks
Kurt


Re: Authentication

2019-09-16 Thread Andrei Aleksandrov

Hi,

I guess Ignite has a documentation gap here. Out of the box, advanced
security works only with thin connections like the Web Console,
ODBC/JDBC, etc.


To get cluster node authentication you should add a GridSecurityProcessor
implementation:


https://apacheignite.readme.io/docs/advanced-security#section-enable-authentication

I created a ticket for the documentation:

https://issues.apache.org/jira/browse/IGNITE-12170

BR,
Andrei

On 9/16/2019 10:43 AM, Kurt Semba wrote:


Hi all,

I used the web-console to auto-generate some code and then extended 
the ServerNodeCodeStartup.java class according to the documentation to 
enable authentication (which requires enabling persistence) like this:


public static void main(String[] args) throws Exception {
    IgniteConfiguration cfg = ServerConfigurationFactory.createConfiguration();

    // Ignite persistence configuration.
    DataStorageConfiguration storageCfg = new DataStorageConfiguration();

    // Enabling the persistence.
    storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

    // Applying settings.
    cfg.setDataStorageConfiguration(storageCfg);

    // Enable authentication
    cfg.setAuthenticationEnabled(true);

    Ignite ignite = Ignition.start(cfg);

    // Activate the cluster.
    // This is required only if the cluster is still inactive.
    ignite.cluster().active(true);

    // Get all server nodes that are already up and running.
    Collection nodes = ignite.cluster().forServers().nodes();

    // Set the baseline topology that is represented by these nodes.
    ignite.cluster().setBaselineTopology(nodes);
}

But when I run this, the output shows “authentication=off” and I can
also connect a client without providing any user+pass…

[…]
[08:57:13] Security status [authentication=off, tls/ssl=off]
[…]
[08:57:16] Ignite node started OK (id=1f668071, instance name=ImportedCluster6)
[08:57:16] Topology snapshot [ver=1, locNode=1f668071, servers=1, clients=0, state=INACTIVE, CPUs=4, offheap=2.3GB, heap=2.6GB]
[08:57:16]   ^-- Baseline [id=0, size=1, online=1, offline=0]
[08:57:16]   ^-- All baseline nodes are online, will start auto-activation
[08:57:16] Ignite node stopped in the middle of checkpoint. Will restore memory state and finish checkpoint on node start.
[08:57:16] Both Ignite native persistence and CacheStore are configured for cache 'NsdevicesCache'. This configuration does not guarantee strict consistency between CacheStore and Ignite data storage upon restarts. Consult documentation for more details.


Any idea what I’m doing wrong?

I will also look into enabling TLS but wanted to start with user+pass 
auth.


Thanks

Kurt



Unsubscribe

2019-09-16 Thread Deepa Kolwalkar

=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you




Re: .net thin client taking time to establish connection to server node

2019-09-16 Thread Pavel Tupitsyn
Hi, try increasing ClientConnectorConfiguration.ThreadPoolSize on server
nodes
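
For example, a minimal sketch via the Java configuration API (the pool size
is a placeholder; the .NET IgniteConfiguration exposes the equivalent
ClientConnectorConfiguration.ThreadPoolSize property):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.ClientConnectorConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ServerWithLargerConnectorPool {
    public static void main(String[] args) {
        // Thread pool that services thin-client connections on the server side.
        ClientConnectorConfiguration connectorCfg = new ClientConnectorConfiguration()
            .setThreadPoolSize(32);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setClientConnectorConfiguration(connectorCfg);

        Ignition.start(cfg);
    }
}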

On Sun, Sep 15, 2019 at 11:10 AM siva  wrote:

> Hi,
>
> We are using the *.NET thin client* and *one server (.NET)*. For a small
> number of connections it establishes the connection to the server in
> milliseconds, but as the number of concurrent connections increases,
> establishing a connection between the thin client and the server takes
> seconds, sometimes up to 1 minute. So is there any tuning to apply, or do
> we need to add more server nodes?
>
>
>
>
>
>
> Thanks
> siva
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Cache expiry policy not deleting records from disk(native persistence)

2019-09-16 Thread Artem Budnikov

Hi Denis,

That's on the expiry policies page: 
https://apacheignite.readme.io/docs/expiry-policies
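
For reference, a minimal sketch of configuring an expiry policy
programmatically, per that page (cache name and TTL are placeholders):

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class ExpiryPolicyExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("DIMENSIONS");

        // Entries expire 10 minutes after they are created.
        cfg.setExpiryPolicyFactory(
            CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 10)));

        // Expired entries are removed by a background thread.
        cfg.setEagerTtl(true);

        ignite.getOrCreateCache(cfg);
    }
}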


On 13.09.2019 19:46, Denis Magda wrote:
Artem, thanks, could you please share a reference to the updated page? 
Can't find anything here:

https://apacheignite.readme.io/docs/evictions

Shiva, I've restarted the discussion on the dev list, to get to the 
bottom of this gap and how it can be addressed:

http://apache-ignite-developers.2346864.n4.nabble.com/How-to-free-up-space-on-disc-after-removing-entries-from-IgniteCache-with-enabled-PDS-td39839.html

-
Denis


On Fri, Sep 13, 2019 at 7:05 AM Artem Budnikov wrote:


Hi Denis,

I updated the page about eviction policies. Freeing up space
on disk was not implemented for reasons explained in the dev-list
thread. I'll update the page once more if/when a solution is
implemented.

Artem

On 13.09.2019 00:34, Denis Magda wrote:

Shiva,

Hopefully, someone from the dev community will pick this ticket
up soon and solve the task. In the meantime, Artem, would you
mind documenting this limitation referring to ticket 10862?

-
Denis


On Tue, Sep 10, 2019 at 12:50 AM Shiva Kumar wrote:

I have filed a bug
https://issues.apache.org/jira/browse/IGNITE-12152 but this
is the same as https://issues.apache.org/jira/browse/IGNITE-10862
Any idea on the timeline of these tickets?
In the documentation
https://apacheignite.readme.io/v2.7/docs/expiry-policies
it says that when native persistence is enabled "*expired entries
are removed from both memory and disk tiers*", but on disk
it just marks the pages as unwanted, and the disk space
used by these unwanted pages will be reused to store new pages;
the unwanted pages themselves are not removed from disk, so
the disk space they occupy is not released.

here is the developer's discussion link

http://apache-ignite-developers.2346864.n4.nabble.com/How-to-free-up-space-on-disc-after-removing-entries-from-IgniteCache-with-enabled-PDS-td39839.html


On Mon, Sep 9, 2019 at 11:53 PM Shiva Kumar wrote:

Hi
I have deployed Ignite on Kubernetes and configured two
separate persistent volumes for WAL and persistence.
The issue I am facing is the same as
https://issues.apache.org/jira/browse/IGNITE-10862

Thanks
Shiva

On Mon, 9 Sep, 2019, 10:47 PM Andrei Aleksandrov wrote:

Hello,

I guess that generated WAL will take this disk space.
Please read about WAL here:

https://apacheignite.readme.io/docs/write-ahead-log

Please provide the size of every folder under
/opt/ignite/persistence.

BR,
Andrei

On 9/6/2019 9:45 PM, Shiva Kumar wrote:

Hi all,
I have set cache expiry policy like this:

[cache template XML stripped by the mail archive; it configured the expiry
policy on the cache template]

I am batch-inserting records into one of the tables created with the above
cache template. Over about 10 minutes I ingested ~1.5GB of data, and after
10 minutes the record count started decreasing (expiring) when I monitored
it from sqlline.

0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;
COUNT(ID)
248896
1 row selected (0.86 seconds)
0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;
COUNT(ID)
222174
1 row selected (0.313 seconds)
0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;
COUNT(

Authentication

2019-09-16 Thread Kurt Semba
Hi all,

I used the web-console to auto-generate some code and then extended the 
ServerNodeCodeStartup.java class according to the documentation to enable 
authentication (which requires enabling persistence) like this:

public static void main(String[] args) throws Exception {
IgniteConfiguration cfg = 
ServerConfigurationFactory.createConfiguration();

// Ignite persistence configuration.
DataStorageConfiguration storageCfg = new DataStorageConfiguration();

// Enabling the persistence.

storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

// Applying settings.
cfg.setDataStorageConfiguration(storageCfg);

// Enable authentication
cfg.setAuthenticationEnabled(true);

Ignite ignite = Ignition.start(cfg);

// Activate the cluster.
// This is required only if the cluster is still inactive.
ignite.cluster().active(true);

// Get all server nodes that are already up and running.
Collection nodes = ignite.cluster().forServers().nodes();

// Set the baseline topology that is represented by these nodes.
ignite.cluster().setBaselineTopology(nodes);
}


But when I run this, the output shows “authentication=off” and I can also 
connect a client without providing any user+pass…

[…]
[08:57:13] Security status [authentication=off, tls/ssl=off]
[…]
[08:57:16] Ignite node started OK (id=1f668071, instance name=ImportedCluster6)
[08:57:16] Topology snapshot [ver=1, locNode=1f668071, servers=1, clients=0, 
state=INACTIVE, CPUs=4, offheap=2.3GB, heap=2.6GB]
[08:57:16]   ^-- Baseline [id=0, size=1, online=1, offline=0]
[08:57:16]   ^-- All baseline nodes are online, will start auto-activation
[08:57:16] Ignite node stopped in the middle of checkpoint. Will restore memory 
state and finish checkpoint on node start.
[08:57:16] Both Ignite native persistence and CacheStore are configured for 
cache 'NsdevicesCache'. This configuration does not guarantee strict 
consistency between CacheStore and Ignite data storage upon restarts. Consult 
documentation for more details.

Any idea what I’m doing wrong?

I will also look into enabling TLS but wanted to start with user+pass auth.

Thanks
Kurt