[jira] [Created] (IGNITE-6968) Move similar Cache configurations in matrices and models to one Java or XML config

2017-11-20 Thread Aleksey Zinoviev (JIRA)
Aleksey Zinoviev created IGNITE-6968:


 Summary: Move similar Cache configurations in matrices and models 
to one Java or XML config
 Key: IGNITE-6968
 URL: https://issues.apache.org/jira/browse/IGNITE-6968
 Project: Ignite
  Issue Type: Improvement
  Components: ml
Reporter: Aleksey Zinoviev


There are a lot of copy-pasted cache configurations in the matrix and vector 
classes, in the newCache() method which returns a configured cache for different 
data structures.

For example
* SparseDistributedMatrixStorage
* BlockVectorStorage
* BlockMatrixStorage
* SplitCache
* FeatureCache
* ProjectionCache
* SparseDistributedVectorStorage
and others

Also, all cache usage strategies should be documented better (with a description 
of when to choose one parameter value over another).




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Ignite ML dense distributed matrices

2017-11-20 Thread Alexey Zinoviev
Yury, I support one matrix with different strategies if it has one API.
Also, distributed/local can be a strategy too.

And yet, Artem, Yury, we could of course think about a
DenseBlockDistributedMatrix.

Or about Matrix(boolean isDistributed, boolean isBlock, boolean isSparse)
and implement all 8 cases
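The flag-based variant could be sketched as a selector that maps the three boolean axes onto the 2x2x2 = 8 storage cases. All names below are illustrative, not existing Ignite ML classes.

```java
// Illustrative sketch of the "one matrix, many strategies" idea.
// None of these names are real Ignite ML classes.
enum Distribution { LOCAL, DISTRIBUTED }
enum Layout { FLAT, BLOCK }
enum Density { DENSE, SPARSE }

final class MatrixStorageSelector {
    private MatrixStorageSelector() {}

    // Maps the three boolean axes to one of the 8 storage cases.
    static String storageFor(boolean distributed, boolean block, boolean sparse) {
        return (distributed ? Distribution.DISTRIBUTED : Distribution.LOCAL)
            + "_" + (block ? Layout.BLOCK : Layout.FLAT)
            + "_" + (sparse ? Density.SPARSE : Density.DENSE);
    }
}
```

A constructor such as `Matrix(isDistributed, isBlock, isSparse)` would dispatch through a selector like this to pick a storage implementation.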

2017-11-20 18:24 GMT+03:00 Yury Babak :

> Artem,
>
> I think it's a good idea. We could implement a dense matrix as a separate
> matrix, but what do you think about a common distributed matrix with multiple
> possible storage strategies?
>
> Regards,
> Yury
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


Re: Stop Ignite opening 47500 and 47100

2017-11-20 Thread Dmitry Pavlov
Hi, please see examples
in 
org.apache.ignite.internal.processors.cache.persistence.wal.reader.StandaloneNoopCommunicationSpi
and StandaloneNoopDiscoverySpi.

Also, please make sure the classes are annotated with @IgniteSpiNoop.

This annotation helps Ignite internals identify that the implementation
should be treated as a stub.

Sincerely,
Dmitriy Pavlov

пн, 20 нояб. 2017 г. в 11:10, karthik :

> I have been trying to implement my own discovery SPI and communication SPI,
> but I am unable to achieve it without errors. I just need Ignite Cache.
> It would be helpful if you could provide the code, or at least mention which
> classes I need to change and where.
> Thanks in advance.
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


[jira] [Created] (IGNITE-6967) PME deadlock on reassigning service deployment

2017-11-20 Thread Alexandr Kuramshin (JIRA)
Alexandr Kuramshin created IGNITE-6967:
--

 Summary: PME deadlock on reassigning service deployment
 Key: IGNITE-6967
 URL: https://issues.apache.org/jira/browse/IGNITE-6967
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 2.3
Reporter: Alexandr Kuramshin


When a topology change occurs during a service deployment, the discovery event 
listener calls {{GridServiceProcessor.reassign()}}, causing it to acquire a lock 
on the utility cache (where the GridServiceAssignments are stored), which 
prevents PME from completing.

Stack traces:

{noformat}
Thread [name="test-runner-#186%service.IgniteServiceDynamicCachesSelfTest%", id=232, state=WAITING, blockCnt=0, waitCnt=8]
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
    at o.a.i.i.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
    at o.a.i.i.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
    at o.a.i.i.IgniteKernal.createCache(IgniteKernal.java:2841)
    at o.a.i.i.processors.service.IgniteServiceDynamicCachesSelfTest.testDeployCalledBeforeCacheStart(IgniteServiceDynamicCachesSelfTest.java:140)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at junit.framework.TestCase.runTest(TestCase.java:176)
    at o.a.i.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2000)
    at o.a.i.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:132)
    at o.a.i.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1915)
    at java.lang.Thread.run(Thread.java:748)

Thread [name="srvc-deploy-#38%service.IgniteServiceDynamicCachesSelfTest0%", id=56, state=WAITING, blockCnt=5, waitCnt=9]
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
    at o.a.i.i.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
    at o.a.i.i.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
    at o.a.i.i.processors.cache.GridCacheContext.awaitStarted(GridCacheContext.java:443)
    at o.a.i.i.processors.affinity.GridAffinityProcessor.affinityCache(GridAffinityProcessor.java:373)
    at o.a.i.i.processors.affinity.GridAffinityProcessor.keysToNodes(GridAffinityProcessor.java:347)
    at o.a.i.i.processors.affinity.GridAffinityProcessor.mapKeyToNode(GridAffinityProcessor.java:259)
    at o.a.i.i.processors.service.GridServiceProcessor.reassign(GridServiceProcessor.java:1163)
    at o.a.i.i.processors.service.GridServiceProcessor.access$2400(GridServiceProcessor.java:123)
    at o.a.i.i.processors.service.GridServiceProcessor$TopologyListener$1.run0(GridServiceProcessor.java:1763)
    at o.a.i.i.processors.service.GridServiceProcessor$DepRunnable.run(GridServiceProcessor.java:1976)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)

Locked synchronizers:
    java.util.concurrent.ThreadPoolExecutor$Worker@27f723
{noformat}

Problematic code:
{noformat}
org.apache.ignite.internal.processors.service.GridServiceProcessor#reassign

try (GridNearTxLocal tx = cache.txStartEx(PESSIMISTIC, REPEATABLE_READ)) {
    GridServiceAssignmentsKey key = new GridServiceAssignmentsKey(cfg.getName());

    GridServiceAssignments oldAssigns = (GridServiceAssignments)cache.get(key);

    Map cnts = new HashMap<>();

    if (affKey != null) {
        ClusterNode n = ctx.affinity().mapKeyToNode(cacheName, affKey, topVer);

        // WAIT HERE UNTIL PME FINISHED (INFINITELY)
{noformat}





Re: Suggestion to improve deadlock detection

2017-11-20 Thread Vladimir Ozerov
It doesn’t need all txes. Instead, other nodes will send info about
suspicious txes to it from time to time.

вт, 21 нояб. 2017 г. в 8:04, Dmitriy Setrakyan :

> How does it know about all the Txs?
>
> ⁣D.​
>
> On Nov 20, 2017, 8:53 PM, at 8:53 PM, Vladimir Ozerov <
> voze...@gridgain.com> wrote:
> >Dima,
> >
> >What is wrong with the coordinator approach? All it does is analyze a small
> >number of TXes which wait for locks for too long.
> >
> >вт, 21 нояб. 2017 г. в 1:16, Dmitriy Setrakyan :
> >
> >> Vladimir,
> >>
> >> I am not sure I like it, mainly due to some coordinator node doing
> >some
> >> periodic checks. For the deadlock detection to work effectively, it
> >has to
> >> be done locally on every node. This may require that every tx request
> >will
> >> carry information about up to N previous keys it accessed, but the
> >> detection will happen locally on the destination node.
> >>
> >> What do you think?
> >>
> >> D.
> >>
> >> On Mon, Nov 20, 2017 at 11:50 AM, Vladimir Ozerov
> >
> >> wrote:
> >>
> >> > Igniters,
> >> >
> >> > We are currently working on transactional SQL, and distributed deadlocks
> >> > are a serious problem for us. It looks like the current deadlock detection
> >> > mechanism has several deficiencies:
> >> > 1) It transfers keys! No go for SQL, as we may have millions of keys.
> >> > 2) By default we wait for a minute. Way too much IMO.
> >> >
> >> > What if we change it as follows:
> >> > 1) Collect XIDs of all preceding transactions while obtaining a lock
> >> > within the current transaction object. This way we will always have the
> >> > list of TXes we wait for.
> >> > 2) Define a TX deadlock coordinator node.
> >> > 3) Periodically (e.g. once per second), iterate over active transactions
> >> > and detect ones waiting for a lock for too long (e.g. >2-3 sec). Timeouts
> >> > could be adaptive depending on the workload and the false-positive alarm
> >> > rate.
> >> > 4) Send info about those long-running guys to the coordinator in the form
> >> > Map[XID -> List]
> >> > 5) Rebuild the global wait-for graph on the coordinator and search for
> >> > deadlocks.
> >> > 6) Choose the victim and send the problematic wait-for graph to it.
> >> > 7) The victim collects the necessary info (e.g. keys, SQL statements,
> >> > thread IDs, cache IDs, etc.) and throws an exception.
> >> >
> >> > Advantages:
> >> > 1) We ignore short transactions. So if there are tons of short TXes on a
> >> > typical OLTP workload, we will never analyze most of them.
> >> > 2) Only a minimal set of data is sent between nodes, so we can exchange
> >> > data often without losing performance.
> >> >
> >> > Thoughts?
> >> >
> >> > Vladimir.
> >> >
> >>
>
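Steps 4-5 of the proposal above (rebuilding the wait-for graph from the per-node Map[XID -> List] reports and searching it for cycles) can be sketched as a plain depth-first search. This is an illustrative standalone sketch, not Ignite code; XIDs are modeled as strings.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of the coordinator-side check: a cycle in the wait-for graph
// (tx -> list of txes it waits for) means a deadlock.
final class WaitForGraph {
    private WaitForGraph() {}

    // Returns true if the wait-for graph contains a cycle.
    static boolean hasDeadlock(Map<String, List<String>> waitFor) {
        Set<String> done = new HashSet<>();
        for (String xid : waitFor.keySet())
            if (dfs(xid, waitFor, new HashSet<>(), done))
                return true;
        return false;
    }

    private static boolean dfs(String xid, Map<String, List<String>> g,
                               Set<String> path, Set<String> done) {
        if (path.contains(xid)) return true;   // Back edge => cycle => deadlock.
        if (!done.add(xid)) return false;      // Already fully explored.
        path.add(xid);
        for (String next : g.getOrDefault(xid, Collections.<String>emptyList()))
            if (dfs(next, g, path, done)) return true;
        path.remove(xid);
        return false;
    }
}
```

Once a cycle is found, the coordinator would pick a victim from the cycle (step 6) and send it the problematic subgraph.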


Re: Suggestion to improve deadlock detection

2017-11-20 Thread Dmitriy Setrakyan
How does it know about all the Txs?

⁣D.​

On Nov 20, 2017, 8:53 PM, at 8:53 PM, Vladimir Ozerov  
wrote:
>Dima,
>
>What is wrong with the coordinator approach? All it does is analyze a small
>number of TXes which wait for locks for too long.
>
>вт, 21 нояб. 2017 г. в 1:16, Dmitriy Setrakyan :
>
>> Vladimir,
>>
>> I am not sure I like it, mainly due to some coordinator node doing
>some
>> periodic checks. For the deadlock detection to work effectively, it
>has to
>> be done locally on every node. This may require that every tx request
>will
>> carry information about up to N previous keys it accessed, but the
>> detection will happen locally on the destination node.
>>
>> What do you think?
>>
>> D.
>>
>> On Mon, Nov 20, 2017 at 11:50 AM, Vladimir Ozerov
>
>> wrote:
>>
>> > Igniters,
>> >
>> > We are currently working on transactional SQL, and distributed deadlocks
>> > are a serious problem for us. It looks like the current deadlock detection
>> > mechanism has several deficiencies:
>> > 1) It transfers keys! No go for SQL, as we may have millions of keys.
>> > 2) By default we wait for a minute. Way too much IMO.
>> >
>> > What if we change it as follows:
>> > 1) Collect XIDs of all preceding transactions while obtaining a lock
>> > within the current transaction object. This way we will always have the
>> > list of TXes we wait for.
>> > 2) Define a TX deadlock coordinator node.
>> > 3) Periodically (e.g. once per second), iterate over active transactions
>> > and detect ones waiting for a lock for too long (e.g. >2-3 sec). Timeouts
>> > could be adaptive depending on the workload and the false-positive alarm
>> > rate.
>> > 4) Send info about those long-running guys to the coordinator in the form
>> > Map[XID -> List]
>> > 5) Rebuild the global wait-for graph on the coordinator and search for
>> > deadlocks.
>> > 6) Choose the victim and send the problematic wait-for graph to it.
>> > 7) The victim collects the necessary info (e.g. keys, SQL statements,
>> > thread IDs, cache IDs, etc.) and throws an exception.
>> >
>> > Advantages:
>> > 1) We ignore short transactions. So if there are tons of short TXes on a
>> > typical OLTP workload, we will never analyze most of them.
>> > 2) Only a minimal set of data is sent between nodes, so we can exchange
>> > data often without losing performance.
>> >
>> > Thoughts?
>> >
>> > Vladimir.
>> >
>>


Re: Suggestion to improve deadlock detection

2017-11-20 Thread Vladimir Ozerov
Dima,

What is wrong with the coordinator approach? All it does is analyze a small
number of TXes which wait for locks for too long.

вт, 21 нояб. 2017 г. в 1:16, Dmitriy Setrakyan :

> Vladimir,
>
> I am not sure I like it, mainly due to some coordinator node doing some
> periodic checks. For the deadlock detection to work effectively, it has to
> be done locally on every node. This may require that every tx request will
> carry information about up to N previous keys it accessed, but the
> detection will happen locally on the destination node.
>
> What do you think?
>
> D.
>
> On Mon, Nov 20, 2017 at 11:50 AM, Vladimir Ozerov 
> wrote:
>
> > Igniters,
> >
> > We are currently working on transactional SQL, and distributed deadlocks
> > are a serious problem for us. It looks like the current deadlock detection
> > mechanism has several deficiencies:
> > 1) It transfers keys! No go for SQL, as we may have millions of keys.
> > 2) By default we wait for a minute. Way too much IMO.
> >
> > What if we change it as follows:
> > 1) Collect XIDs of all preceding transactions while obtaining a lock
> > within the current transaction object. This way we will always have the
> > list of TXes we wait for.
> > 2) Define a TX deadlock coordinator node.
> > 3) Periodically (e.g. once per second), iterate over active transactions
> > and detect ones waiting for a lock for too long (e.g. >2-3 sec). Timeouts
> > could be adaptive depending on the workload and the false-positive alarm
> > rate.
> > 4) Send info about those long-running guys to the coordinator in the form
> > Map[XID -> List]
> > 5) Rebuild the global wait-for graph on the coordinator and search for
> > deadlocks.
> > 6) Choose the victim and send the problematic wait-for graph to it.
> > 7) The victim collects the necessary info (e.g. keys, SQL statements,
> > thread IDs, cache IDs, etc.) and throws an exception.
> >
> > Advantages:
> > 1) We ignore short transactions. So if there are tons of short TXes on a
> > typical OLTP workload, we will never analyze most of them.
> > 2) Only a minimal set of data is sent between nodes, so we can exchange
> > data often without losing performance.
> >
> > Thoughts?
> >
> > Vladimir.
> >
>


Fwd: getAverageGetTime/getAveragePutTime APIs of CacheMetrics always return 0

2017-11-20 Thread Denis Magda
Ignite dev community,

Bringing this weird bug to your attention. Could you confirm it's not a "feature" 
of ours? I've put it on the IEP-6 (metrics) list.

—
Denis

> Begin forwarded message:
> 
> From: Denis Magda 
> Subject: Re: getAverageGetTime/getAveragePutTime APIs of CacheMetrics always 
> return 0
> Date: November 20, 2017 at 3:46:27 PM PST
> To: u...@ignite.apache.org
> Reply-To: u...@ignite.apache.org
> 
> Eventually I could reproduce your issue:
> https://issues.apache.org/jira/browse/IGNITE-6966 
> 
> 
> It will be fixed as a part of this endeavor:
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-6%3A+Metrics+improvements
>  
> 
> 
> Thanks for your patience and support in reproducing the bug.
> 
> —
> Denis
> 
> 
>> On Nov 20, 2017, at 4:23 AM, headstar > > wrote:
>> 
>> Thanks for the example! Works fine when running IgniteMetricsExample#main
>> with the provided conf.
>> 
>> One difference to "my" configuration was that I was running the node
>> accessing the cache in client mode. 
>> 
>> If I start a node with the configuration provided in your example and then
>> run IgniteMetricsExample#main in client mode the statistics are 0. 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/ 
>> 
> 



[jira] [Created] (IGNITE-6966) Average time metrics are not calculated for client driven operations

2017-11-20 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-6966:
---

 Summary: Average time metrics are not calculated for client driven 
operations
 Key: IGNITE-6966
 URL: https://issues.apache.org/jira/browse/IGNITE-6966
 Project: Ignite
  Issue Type: Bug
Reporter: Denis Magda
Priority: Critical
 Fix For: 2.4


Cache operations executed from the client side are not accounted for in the 
average time metrics. Use this reproducer [1], performing the following:
* Start a server node [2] that will report 
{{getAveragePutTime}}/{{getAverageGetTime}} metrics in a loop.
* Start a client node [3] that will report the same metrics and do cache 
updates/reads.

Both nodes show {{0}} for those metrics.

[1] https://github.com/dmagda/IgniteMetricsExampe/
[2] 
https://github.com/dmagda/IgniteMetricsExampe/blob/master/src/main/java/IgniteMetricsExample.java
[3] 
https://github.com/dmagda/IgniteMetricsExampe/blob/master/src/main/java/IgniteClientMetricsExample.java





RE: IGNITE-6745. Status

2017-11-20 Thread Cergey
My username is cossack5. Please grant me contributor permissions.
As for the Java 7 version: until the moment it is discontinued, all the code 
should be Java 7-compatible?

-Original Message-
From: Denis Magda [mailto:dma...@apache.org] 
Sent: Tuesday, November 21, 2017 3:15 AM
To: dev@ignite.apache.org
Subject: Re: IGNITE-6745. Status

Cergey,

What's your JIRA account? You need to be among the Ignite contributors in JIRA to 
assign tickets to yourself.

As for Java 7, yes, we had that discussion many times. Hopefully it will be 
discontinued next year.

However, as for Java 8, the community is willing to support it by the end of the 
year.

—
Denis

> On Nov 20, 2017, at 1:45 PM, Cergey  wrote:
> 
> Hi,
> I can't assign the ticket to myself - seems I have no rights.
> Also, I see we still support java 7. Maybe it's time to cease it (especially 
> when we have java 9 to worry about) ?
> 
> -Original Message-
> From: Anton Vinogradov [mailto:avinogra...@gridgain.com]
> Sent: Monday, November 20, 2017 2:01 PM
> To: dev@ignite.apache.org
> Subject: Re: IGNITE-6745. Status
> 
> Cergey,
> 
> Please assign https://issues.apache.org/jira/browse/IGNITE-6745 to yourself 
> and change status to Patch Available.
> Also, before asking review, please check that TeamCity status is ok, 
> see 
> https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute#H
> owtoContribute-SubmittingforReview
> for details.
> 
> 
> On Sat, Nov 18, 2017 at 12:25 AM, Denis Magda  wrote:
> 
>> Igniters,
>> 
>> Who is going to take a lead of Java 9 support and can do thorough 
>> review of all the related changes? Here is a set of the tickets and 
>> Cergey solved one of them:
>> https://issues.apache.org/jira/browse/IGNITE-6728
>> 
>> —
>> Denis
>> 
>>> On Nov 16, 2017, at 3:12 PM, Cergey  wrote:
>>> 
>>> Hi, igniters
>>> 
>>> 
>>> 
>>> Why has no one commented on the patch and pull request
>>> (https://github.com/apache/ignite/pull/2970)? What should I do?
>>> 
>>> 
>>> 
>>> Regards,
>>> 
>>> Cergey Chaulin
>>> 
>>> 
>>> 
>> 
>> 
> 




Re: IGNITE-6745. Status

2017-11-20 Thread Denis Magda
Cergey,

What's your JIRA account? You need to be among the Ignite contributors in JIRA to 
assign tickets to yourself.

As for Java 7, yes, we had that discussion many times. Hopefully it will be 
discontinued next year.

However, as for Java 8, the community is willing to support it by the end of the 
year.

—
Denis

> On Nov 20, 2017, at 1:45 PM, Cergey  wrote:
> 
> Hi, 
> I can't assign the ticket to myself - seems I have no rights.
> Also, I see we still support java 7. Maybe it's time to cease it (especially 
> when we have java 9 to worry about) ?
> 
> -Original Message-
> From: Anton Vinogradov [mailto:avinogra...@gridgain.com] 
> Sent: Monday, November 20, 2017 2:01 PM
> To: dev@ignite.apache.org
> Subject: Re: IGNITE-6745. Status
> 
> Cergey,
> 
> Please assign https://issues.apache.org/jira/browse/IGNITE-6745 to yourself 
> and change status to Patch Available.
> Also, before asking review, please check that TeamCity status is ok, see 
> https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute#HowtoContribute-SubmittingforReview
> for details.
> 
> 
> On Sat, Nov 18, 2017 at 12:25 AM, Denis Magda  wrote:
> 
>> Igniters,
>> 
>> Who is going to take a lead of Java 9 support and can do thorough 
>> review of all the related changes? Here is a set of the tickets and 
>> Cergey solved one of them:
>> https://issues.apache.org/jira/browse/IGNITE-6728
>> 
>> —
>> Denis
>> 
>>> On Nov 16, 2017, at 3:12 PM, Cergey  wrote:
>>> 
>>> Hi, igniters
>>> 
>>> 
>>> 
>>> Why has no one commented on the patch and pull request
>>> (https://github.com/apache/ignite/pull/2970)? What should I do?
>>> 
>>> 
>>> 
>>> Regards,
>>> 
>>> Cergey Chaulin
>>> 
>>> 
>>> 
>> 
>> 
> 



Re: Data eviction/expiration from Ignite persistence

2017-11-20 Thread Denis Magda
Dmitriy,

That's about TTL and eviction support for Ignite persistence. Presently, if you 
set an expiration or eviction policy for a cache, it is applied only to the data 
stored in memory. The policy never affects the persistence layer.

—
Denis

> On Nov 20, 2017, at 9:29 AM, Dmitry Pavlov  wrote:
> 
> Hi Denis,
> 
> Is this need covered by PDS + TTL?
> 
> For the very first TTL test, I found some delay after applying TTL with the
> repository enabled: https://issues.apache.org/jira/browse/IGNITE-6964
> 
> And I'm wondering if the user's needs are covered by
> https://apacheignite.readme.io/docs/expiry-policies plus
> https://apacheignite.readme.io/docs/distributed-persistent-store
> 
> Sincerely,
> Dmitriy Pavlov
> 
> сб, 18 нояб. 2017 г. в 12:12, Dmitry Pavlov :
> 
>> Hi Denis,
>> 
>> What is the difference of required by users functionality with TTL cache
>> expiration?
>> 
>> By some posts I can suppose TTL cache is compatible with native
>> persistence.
>> 
>> Sincerely,
>> Dmitriy Pavlov
>> 
>> сб, 18 нояб. 2017 г. в 0:41, Denis Magda :
>> 
>>> Igniters,
>>> 
>>> I’ve been talking to many Ignite users here and there who are already on
>>> Ignite persistence or consider to turn it on. The majority of them are more
>>> than satisfied with its current state and provided capabilities. That is
>>> really good news for us.
>>> 
>>> However, I tend to come across people who ask about eviction/expiration
>>> policies for the persistence itself. I've had around 6 conversations about
>>> the topic this month alone.
>>> 
>>> Usually the requirement is connected with a streaming use case: an
>>> application streams a lot of data (IoT, metrics, etc.) to the cluster, but
>>> the data becomes stale after some period of time (a day, a couple of days,
>>> etc.). The user doesn't want to waste disk space and simply needs to purge
>>> the data from there.
>>> 
>>> My suggestion here is to create a timer task that will remove the stale
>>> data from the cluster. However, since the demand is growing, it's probably
>>> a good time to discuss the feasibility of this feature.
>>> 
>>> Alex G, as the main architect of the persistence, could you share your
>>> thoughts on this? What will it cost to us to support eviction/expiration
>>> for the persistence?
>>> 
>>> —
>>> Denis
>> 
>> 
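The timer-task workaround suggested in the thread above can be sketched as a periodic purge of entries older than a TTL. This is a standalone illustration under assumptions: the plain map stands in for an IgniteCache of write timestamps, and in a real deployment the purge would run on a schedule (e.g. via ScheduledExecutorService).

```java
import java.util.Iterator;
import java.util.Map;

// Sketch of a TTL purge task. The map (key -> last-write time in millis)
// is a stand-in for per-entry metadata; not Ignite API.
final class TtlPurger {
    private TtlPurger() {}

    // Removes entries whose write timestamp is older than ttlMillis;
    // returns the number of purged entries.
    static int purge(Map<String, Long> writeTimes, long ttlMillis, long nowMillis) {
        int purged = 0;
        for (Iterator<Map.Entry<String, Long>> it = writeTimes.entrySet().iterator(); it.hasNext(); ) {
            if (nowMillis - it.next().getValue() > ttlMillis) {
                it.remove();   // Stale entry: drop it to reclaim space.
                purged++;
            }
        }
        return purged;
    }
}
```

Scheduling this once per hour (or per day, matching the staleness window) would approximate the requested behavior until native persistence-level expiration exists.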



Re: Facility to detect long STW pauses and other system response degradations

2017-11-20 Thread Denis Magda
My 2 cents.

1. I'm totally for a separate native process that will handle the monitoring of an 
Ignite process. The watchdog process can simply start a JVM tool like jstat and 
parse its GC logs: https://dzone.com/articles/how-monitor-java-garbage 


2. As for the STW handling, I would make the possible reaction more generic. 
Let's define a policy (enumeration) that determines how to deal with an unstable 
node. The actions might be as follows: kill the node, restart the node, or 
trigger a custom script using Runtime.exec or other methods.

What do you think? Specifically on point 2.

—
Denis

> On Nov 20, 2017, at 6:47 AM, Anton Vinogradov  
> wrote:
> 
> Yakov,
> 
> Issue is https://issues.apache.org/jira/browse/IGNITE-6171
> 
> We split issue to
> #1 STW duration metrics
> #2 External monitoring allows to stop node during STW
> 
>> Testing GC pause with java thread is
>> a bit strange and can give info only after GC pause finishes.
> 
> That's ok since it's #1
> 
> On Mon, Nov 20, 2017 at 5:45 PM, Dmitriy_Sorokin 
> wrote:
> 
>> I have tested the solution with a Java thread, and the GC logs contained the
>> same pause values that the Java thread detected.
>> 
>> 
>> My log (contains pauses > 100ms):
>> [2017-11-20 17:33:28,822][WARN ][Thread-1][root] Possible too long STW
>> pause: 507 milliseconds.
>> [2017-11-20 17:33:34,522][WARN ][Thread-1][root] Possible too long STW
>> pause: 5595 milliseconds.
>> [2017-11-20 17:33:37,896][WARN ][Thread-1][root] Possible too long STW
>> pause: 3262 milliseconds.
>> [2017-11-20 17:33:39,714][WARN ][Thread-1][root] Possible too long STW
>> pause: 1737 milliseconds.
>> 
>> GC log:
>> gridgain@dell-5580-92zc8h2:~$ cat
>> ./dev/ignite-logs/gc-2017-11-20_17-33-27.log | grep Total
>> 2017-11-20T17:33:27.608+0300: 0,116: Total time for which application
>> threads were stopped: 0,845 seconds, Stopping threads took: 0,246
>> seconds
>> 2017-11-20T17:33:27.667+0300: 0,175: Total time for which application
>> threads were stopped: 0,0001072 seconds, Stopping threads took: 0,252
>> seconds
>> 2017-11-20T17:33:28.822+0300: 1,330: Total time for which application
>> threads were stopped: 0,5001082 seconds, Stopping threads took: 0,178
>> seconds// GOT!
>> 2017-11-20T17:33:34.521+0300: 7,030: Total time for which application
>> threads were stopped: 5,5856603 seconds, Stopping threads took: 0,229
>> seconds// GOT!
>> 2017-11-20T17:33:37.896+0300: 10,405: Total time for which application
>> threads were stopped: 3,2595700 seconds, Stopping threads took: 0,223
>> seconds// GOT!
>> 2017-11-20T17:33:39.714+0300: 12,222: Total time for which application
>> threads were stopped: 1,7337123 seconds, Stopping threads took: 0,121
>> seconds// GOT!
>> 
>> 
>> 
>> 
>> --
>> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>> 
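The Java-thread measurement shown in the log above, combined with the reaction policy proposed earlier in the thread, could be sketched as follows. Everything here is an illustrative assumption (enum values, threshold, method names), not Ignite API: a watchdog thread repeatedly sleeps for a short interval, and any large excess in actually elapsed wall-clock time approximates a stop-the-world pause, since the sleeping thread is frozen too.

```java
// Illustrative policy enum per the proposal: how to react to an unstable node.
enum UnstableNodePolicy { KILL_NODE, RESTART_NODE, RUN_CUSTOM_SCRIPT }

final class StwWatchdog {
    private StwWatchdog() {}

    // The watchdog thread sleeps for `intervalMs`; the excess of the actually
    // elapsed time over the interval approximates an STW pause length.
    static long pauseMillis(long intervalMs, long actualElapsedMs) {
        return Math.max(0, actualElapsedMs - intervalMs);
    }

    // Decide whether the configured policy should fire for a measured pause.
    static boolean shouldReact(long pauseMs, long thresholdMs) {
        return pauseMs > thresholdMs;
    }
}
```

Note this only reports a pause after it finishes, which matches the thread's point that the in-process thread covers metric #1 (STW duration), while stopping a node *during* a pause (#2) needs the external watchdog process.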



Re: Ignite Logger & logging file config output

2017-11-20 Thread Denis Magda
Good point. Could you create a ticket and probably contribute this improvement?

—
Denis

> On Nov 20, 2017, at 3:12 AM, Alexey Popov  wrote:
> 
> Hi Igniters,
> 
> Could you please advise why Ignite does not indicate 
> 1) the logger type it uses
> 2) the logger configuration file (name) it applies
> during startup?
> 
> Can we add such output to IgniteLogger implementations?
> 
> Thanks,
> Alexey
> 
> 
> 
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/



Re: SSL for ODBC connection

2017-11-20 Thread Denis Magda
This configuration approach looks clearer to me. +1 for it.

—
Denis

> On Nov 20, 2017, at 12:42 AM, Igor Sapego  wrote:
> 
> Ok, then how about the following set of options:
> 
> ssl_enabled=[true|false]
> ssl_key_file=
> ssl_cert_file=
> 
> 
> Best Regards,
> Igor
> 
> On Tue, Nov 14, 2017 at 5:21 PM, Vladimir Ozerov 
> wrote:
> 
>> I think it would be enough to have a single switch for now.
>> 
>> On Tue, Nov 7, 2017 at 10:04 PM, Denis Magda  wrote:
>> 
>>> Igor,
>>> 
>>> Thanks for the clarification. Please file a ticket if nobody else shares
>> a
>>> feedback soon.
>>> 
>>> —
>>> Denis
>>> 
 On Nov 7, 2017, at 1:23 AM, Igor Sapego  wrote:
 
 Hi Denis,
 
> Could you explain the difference between “allow, prefer and require”
 modes?
allow - Client will first try connecting without SSL, and then fall back to
SSL if it is not allowed to connect without SSL;
prefer - Client will first try connecting using SSL, and then fall back to
non-SSL if SSL is not supported by the server;
require - Client will only connect using SSL and return an error if it fails
to successfully do so.
 
> BTW, do we really need to have the “disable” one? Guess that having
 ssl_mode set to “disable” will have the same effect as not setting the
 ssl_mode at all.
This is a matter of the default value of the ssl_mode option. The way you
propose it means that you still have the "disable" option, it is just not
explicit.
 
 Best Regards,
 Igor
 
 On Fri, Nov 3, 2017 at 10:35 PM, Denis Magda 
>> wrote:
 
> Hi Igor,
> 
> Could you explain the difference between “allow, prefer and require”
>>> modes?
> 
> BTW, do we really need to have the “disable” one? Guess that having
> ssl_mode set to “disable” will have the same effect as not setting the
> ssl_mode at all.
> 
> —
> Denis
> 
>> On Nov 3, 2017, at 9:04 AM, Igor Sapego  wrote:
>> 
>> Hi, Igniters,
>> 
>> I'm going to start working on the SSL support for the ODBC
>> connection and I need to hear your opinion.
>> 
>> For the client side I'm going to use OpenSSL library [1], which is
>> the de-facto standard for C/C++ applications. Unfortunately, its
>> license is not fully compatible with the Apache License, so it is going
>> to require users to install OpenSSL themselves.
>> 
>> For the driver I'm going to add following options to connection
>> string:
>> ssl_mode - Determines whether or with what priority a SSL
>>  connection will be negotiated with the server. Options
>>  here are disable, allow, prefer, require.
>> ssl_key_file - Path to the location for the secret key used for the
>>  client certificate.
>> ssl_cert_file - Path to the file of the client SSL certificate.
>> 
>> If the ssl_mode is not set to "disable" then ODBC driver will
>> attempt to find and load OpenSSL library before establishing
>> connection.
>> 
>> For the server side there is already SslContextFactory in the
>> IgniteConfiguration, which is used by all components to determine
>> if the SSL enabled and to figure out connection parameters, so
>> I think it's a good idea to just re-use it for the
> ClientListenerProcessor.
>> 
>> What do you guys think?
>> 
>> [1] - https://www.openssl.org
>> 
>> Best Regards,
>> Igor
> 
> 
>>> 
>>> 
>> 
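The ssl_mode semantics discussed in this thread boil down to the order in which the client attempts SSL vs. plain connections. A standalone sketch of that ordering (names illustrative, not the actual ODBC driver code, which would be C++):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch of the ssl_mode semantics described in the thread: for each mode,
// the sequence of connection attempts the client would make.
final class SslModeOrder {
    private SslModeOrder() {}

    static List<String> attempts(String mode) {
        switch (mode) {
            case "disable": return Collections.singletonList("plain");
            case "allow":   return Arrays.asList("plain", "ssl"); // fall back to SSL
            case "prefer":  return Arrays.asList("ssl", "plain"); // fall back to plain
            case "require": return Collections.singletonList("ssl");
            default: throw new IllegalArgumentException("Unknown ssl_mode: " + mode);
        }
    }
}
```

Under this view, the simplified ssl_enabled=[true|false] switch agreed on above corresponds to keeping only the "disable" and "require" rows.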



Invitation: RSVP now for Dec. 13 Bay Area In-Memory Computing Meetup! @ Wed Dec 13, 2017 6pm - 8pm (CST) (dev@ignite.apache.org)

2017-11-20 Thread tom . diederich
BEGIN:VCALENDAR
PRODID:-//Google Inc//Google Calendar 70.9054//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:REQUEST
BEGIN:VEVENT
DTSTART:20171214T00Z
DTEND:20171214T02Z
DTSTAMP:20171120T225457Z
ORGANIZER;CN=GridGain Meetups (includes guest talks):mailto:gridgain.com_ci
 4oqrn7b9ia4drtrurq004...@group.calendar.google.com
UID:3dkgvqsi9lg5otqe9pc3mo2...@google.com
ATTENDEE;CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=
 TRUE;CN=dev@ignite.apache.org;X-NUM-GUESTS=0:mailto:dev@ignite.apache.org
CREATED:20171120T225454Z
DESCRIPTION:We have a great meetup planned for the evening of Dec. 13! In-m
 emory computing experts from Hazelcast and GridGain Systems will be speakin
 g. And\, of course\, we’ll have great food\, drinks – and cool raffle prize
 s\, too! \n\nPlease RSVP here so we can order the appropriate food and beve
 rages. \nhttp://bit.ly/2AgZdpb\n\nSpeakers: \nValentin (Val) Kulichenko\, G
 ridGain Systems \nFuad Malikov\, Hazelcast \n\nTalk one: (Hazelcast) Java S
 E 8 Stream API is a modern and functional API for processing Java Collectio
 ns. Streams can do parallel processing by utilizing multi-core architecture
 \, without writing a single line of multithreaded code. Hazelcast JET is a 
 distributed\, high-performance stream processing DAG engine\, which provide
 s distributed Java 8 Stream API implementation. This session will highlight
  this implementation of Stream API for big-data processing across many mach
 ines from the comfort of your Java Application. \n\nWith an explanation of 
 internals of the implementation\, I will give an introduction to the genera
 l design behind stream processing using DAG (directed acyclic graph) engine
 s and how an actor-based implementation can provide in-memory performance w
 hile still leveraging industry-wide known frameworks as Java Streams API.\n
 \n\nTalk two: (GridGain Systems) It’s well known that distributed systems r
 ely very much on horizontal scalability. The more machines in your cluster 
 - the better performance of your application\, right? Well\, not always. Wh
 ile a database can provide rich capabilities to achieve lightning fast perf
 ormance\, it’s an engineer's responsibility to use these capabilities prope
 rly as there are a lot of ways to mess things up.\n\nDuring this meetup\, V
 alentin Kulichenko\, GridGain System’s Lead Architect\, will talk about cha
 llenges and pitfalls one may face when architecting and developing a distri
 buted system. Valentin will show how to take advantage of the affinity coll
 ocation concept that is one of the most powerful and usually undervalued te
 chnique provided by distributed systems. He will take Apache Ignite as a da
 tabase for his experiments covering these moments in particular:\n\nWhat is
  data affinity and why is it important for distributed systems? What is aff
 inity colocation and how does it help to improve performance? How does affi
 nity colocation affects execution of distributed computations and distribut
 ed SQL queries? And more…\n\nAfter this talk\, you will have better underst
 anding about how distributed systems work under the hood\, and will be able
  to better design your applications based on them.\n\n-::~:~::~:~:~:~:~:~:~
 :~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~::~:~::-\nPlease 
 do not edit this section of the description.\n\nView your event at https://
 www.google.com/calendar/event?action=VIEW=M2RrZ3Zxc2k5bGc1b3RxZTlwYzNtb
 zJxdm8gZGV2QGlnbml0ZS5hcGFjaGUub3Jn=NjUjZ3JpZGdhaW4uY29tX2NpNG9xcm43Yjl
 pYTRkcnRydXJxMDA0aWhzQGdyb3VwLmNhbGVuZGFyLmdvb2dsZS5jb20zODQ1NjUwZDEyNDNiYT
 FlZDkzMDQwNzQ1MWYyZWUxYWRiYjZmMDRl=America/Chicago=en.\n-::~:~::~:~:
 ~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~::~:~::-
LAST-MODIFIED:20171120T225457Z
LOCATION:1172 Castro St\, Mountain View\, CA 94040\, USA
SEQUENCE:0
STATUS:CONFIRMED
SUMMARY:RSVP now for Dec. 13 Bay Area In-Memory Computing Meetup!
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR




Re: Ignite Enhancement Proposal #7 (Internal problems detection)

2017-11-20 Thread Denis Magda
If an Ignite operation hangs for some reason due to an internal problem or buggy 
application code, it needs to eventually *time out*.

Take the atomic operations case brought to our attention by Val recently:
http://apache-ignite-developers.2346864.n4.nabble.com/Timeouts-in-atomic-cache-td19839.html

An application must not freeze waiting for human intervention if an 
atomic update fails internally.

Going even further, I would let all possible operations time out:
- Ignite compute computations.
- Ignite services calls.
- Atomic/transactional cache updates.
- SQL queries.

I'm not sure this is covered by any of the tickets from IEP-7. Any 
thoughts/suggestions before one is created?

—
Denis
 
> On Nov 20, 2017, at 8:56 AM, Anton Vinogradov  
> wrote:
> 
> Dmitry,
> 
> There are two cases:
> 1) STW duration is long -> notify monitoring via a JMX metric.
> 
> 2) STW duration exceeds N seconds -> no need to wait for anything.
> We already know that the node will be segmented or that a pause bigger than
> N seconds will affect cluster performance.
> The better option is to kill the node ASAP to protect the cluster. Some
> customers have huge timeouts, and such a node can kill the whole cluster if
> it is not killed by a watchdog.
> 
> On Mon, Nov 20, 2017 at 7:23 PM, Dmitry Pavlov 
> wrote:
> 
>> Hi Anton,
>> 
>>> - GC STW duration exceed maximum possible length (node should be stopped
>> before
>> STW finished)
>> 
>> Are you sure we should kill a node in case of a long STW? Can we produce
>> warnings in logs and monitoring tools and wait a little bit longer for the
>> node to become alive if we detect an STW? In this case we can notify the
>> coordinator or another node that 'the current node is in STW, please wait
>> longer than 3 heartbeat timeouts'.
>> 
>> Isn't it probable that such pauses will occur infrequently?
>> 
>> Sincerely,
>> Dmitriy Pavlov
>> 
>> Mon, Nov 20, 2017 at 18:53, Anton Vinogradov :
>> 
>>> Igniters,
>>> 
>>> Internal problems may, and unfortunately do, cause unexpected cluster
>>> behavior.
>>> We should define the behavior for the case when any internal problem
>>> happens.
>>> 
>>> Well-known internal problems can be split into:
>>> 1) OOM or any other reason causing a node crash
>>> 
>>> 2) Situations requiring graceful node shutdown with custom notification
>>> - IgniteOutOfMemoryException
>>> - Persistence errors
>>> - ExchangeWorker exits with error
>>> 
>>> 3) Performance issues that should be covered by metrics
>>> - GC STW duration
>>> - Timed out tasks and jobs
>>> - TX deadlock
>>> - Hanged Tx (waits for some service)
>>> - Java deadlocks
>>> 
>>> I created a special issue [1] to make sure all these metrics will be
>>> presented in WebConsole or VisorConsole (which is preferred?)
>>> 
>>> 4) Situations requiring an external monitoring implementation
>>> - GC STW duration exceeds the maximum possible length (the node should be
>>> stopped before the STW finishes)
>>> 
>>> All these problems were reported by different people at different times,
>>> so we should reanalyze each of them and possibly find better ways to
>>> solve them than those described in the issues.
>>> 
>>> P.s. IEP-7 [2] already contains 9 issues, feel free to mention something
>>> else :)
>>> 
>>> [1] https://issues.apache.org/jira/browse/IGNITE-6961
>>> [2]
>>> 
>>> https://cwiki.apache.org/confluence/display/IGNITE/IEP-
>> 7%3A+Ignite+internal+problems+detection
>>> 
>> 



Re: SQL warning for partitioned caches with setLocal

2017-11-20 Thread Dmitriy Setrakyan
Sounds like a good idea. Vladimir, would be nice to hear your thoughts.

D.

On Mon, Nov 20, 2017 at 7:45 AM, luqmanahmad  wrote:

> Hi there,
>
> Working with SQL queries with setLocal(true) with partitioned cache, it is
> very easy for someone to run SQL queries without affinityRun or
> affinityCall
> computations which are the preferred ways of running queries on partition
> cache, as described in [1].
>
> Now what I was thinking whenever a SQL is about to execute against
> partitioned caches it should check for a check whether the call for this
> SQL
> is made through an affinityRun or affinityCall function. If the call to SQL
> is not part of affinityRun or affinityCall then by default it should log a
> WARNING message or throw an exception which should be configurable in
> CacheConfiguration. The advantage would be it won't break others code
> instantly and allow them some time to fix it.
>
> This can be achieved when the affinityCall or affinityRun method is called
> we can set something specifically for SQL queries in the context which can
> be read before executing the queries. If the SQL processor cannot find the
> value in the given context for partitioned caches we can either log the
> warning or throw an exception based on the cache configuration.
>
> Let me know if it makes sense?
>
> Thanks,
> Luqman
>
> [1].  https://apacheignite-sql.readme.io/docs/local-queries
> 
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


Re: Suggestion to improve deadlock detection

2017-11-20 Thread Dmitriy Setrakyan
Vladimir,

I am not sure I like it, mainly due to some coordinator node doing some
periodic checks. For the deadlock detection to work effectively, it has to
be done locally on every node. This may require that every tx request will
carry information about up to N previous keys it accessed, but the
detection will happen locally on the destination node.

What do you think?

D.

On Mon, Nov 20, 2017 at 11:50 AM, Vladimir Ozerov 
wrote:

> Igniters,
>
> We are currently working on transactional SQL, and distributed deadlocks are
> a serious problem for us. It looks like the current deadlock detection
> mechanism has several deficiencies:
> 1) It transfers keys! A no-go for SQL, as we may have millions of keys.
> 2) By default we wait for a minute. Way too much IMO.
>
> What if we change it as follows:
> 1) Collect XIDs of all preceding transactions while obtaining lock within
> current transaction object. This way we will always have the list of TXes
> we wait for.
> 2) Define TX deadlock coordinator node
> 3) Periodically (e.g. once per second), iterate over active transactions
> and detect ones waiting for a lock for too long (e.g. >2-3 sec). Timeouts
> could be adaptive depending on the workload and the false-positive alarm rate.
> 4) Send info about those long-running guys to coordinator in a form Map[XID
> -> List]
> 5) Rebuild global wait-for graph on coordinator and search for deadlocks
> 6) Choose the victim and send problematic wait-for graph to it
> 7) Victim collects necessary info (e.g. keys, SQL statements, thread IDs,
> cache IDs, etc.) and throws an exception.
>
> Advantages:
> 1) We ignore short transactions. So if there are tons of short TXes on a
> typical OLTP workload, we will never track many of them.
> 2) Only a minimal set of data is sent between nodes, so we can exchange data
> often without losing performance.
>
> Thoughts?
>
> Vladimir.
>


RE: IGNITE-6745. Status

2017-11-20 Thread Cergey
Hi, 
I can't assign the ticket to myself - it seems I have no rights.
Also, I see we still support Java 7. Maybe it's time to drop it (especially 
now that we have Java 9 to worry about)?

-Original Message-
From: Anton Vinogradov [mailto:avinogra...@gridgain.com] 
Sent: Monday, November 20, 2017 2:01 PM
To: dev@ignite.apache.org
Subject: Re: IGNITE-6745. Status

Cergey,

Please assign https://issues.apache.org/jira/browse/IGNITE-6745 to yourself and 
change status to Patch Available.
Also, before asking review, please check that TeamCity status is ok, see 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute#HowtoContribute-SubmittingforReview
for details.


On Sat, Nov 18, 2017 at 12:25 AM, Denis Magda  wrote:

> Igniters,
>
> Who is going to take a lead of Java 9 support and can do thorough 
> review of all the related changes? Here is a set of the tickets and 
> Cergey solved one of them:
> https://issues.apache.org/jira/browse/IGNITE-6728
>
> —
> Denis
>
> > On Nov 16, 2017, at 3:12 PM, Cergey  wrote:
> >
> > Hi, igniters
> >
> >
> >
> > Why no one commented on the patch and pull request
> > (https://github.com/apache/ignite/pull/2970) ?  What should I do ?
> >
> >
> >
> > Regards,
> >
> > Cergey Chaulin
> >
> >
> >
>
>



[jira] [Created] (IGNITE-6965) affinityCall() with key mapping may not be successful with AlwaysFailoverSpi when node left

2017-11-20 Thread Alexandr Kuramshin (JIRA)
Alexandr Kuramshin created IGNITE-6965:
--

 Summary: affinityCall() with key mapping may not be successful 
with AlwaysFailoverSpi when node left
 Key: IGNITE-6965
 URL: https://issues.apache.org/jira/browse/IGNITE-6965
 Project: Ignite
  Issue Type: Bug
  Components: cache, compute
Affects Versions: 2.3
Reporter: Alexandr Kuramshin


When doing {{affinityCall(cacheName, key, callable)}} there is a race between 
the affinity node leaving and then stopping, and {{AlwaysFailoverSpi}} reaching 
its max attempts.

Suppose the following sequence (more probable when {{grid2.order}} >> 
{{grid1.order}}):

1. {{grid1.affinityCall(cacheName, key, callable)}}
2. {{grid1}}: {{key}} mapped to the primary partition on {{grid2}}
3. {{grid2.stop()}}
4. {{grid1}} receives {{NODE_LEFT}} and updates {{discoCache}}
5. {{grid1}} execution {{callable}} failed with 'Failed to send job request 
because remote node left grid (if fail-over is enabled, will attempt fail-over 
to another node'
6. {{grid1}}: {{AlwaysFailoverSpi}} max attempts reached.
7. {{grid1.affinityCall}} failed with 'Job failover failed because number of 
maximum failover attempts for affinity call is exceeded'
8. {{grid2}} receives the verified node-left message and then stops.

The patched {{CacheAffinityCallSelfTest}} reproduces the problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Suggestion to improve deadlock detection

2017-11-20 Thread Vladimir Ozerov
Igniters,

We are currently working on transactional SQL, and distributed deadlocks are
a serious problem for us. It looks like the current deadlock detection
mechanism has several deficiencies:
1) It transfers keys! A no-go for SQL, as we may have millions of keys.
2) By default we wait for a minute. Way too much IMO.

What if we change it as follows:
1) Collect XIDs of all preceding transactions while obtaining lock within
current transaction object. This way we will always have the list of TXes
we wait for.
2) Define TX deadlock coordinator node
3) Periodically (e.g. once per second), iterate over active transactions
and detect ones waiting for a lock for too long (e.g. >2-3 sec). Timeouts
could be adaptive depending on the workload and the false-positive alarm rate.
4) Send info about those long-running guys to coordinator in a form Map[XID
-> List]
5) Rebuild global wait-for graph on coordinator and search for deadlocks
6) Choose the victim and send problematic wait-for graph to it
7) Victim collects necessary info (e.g. keys, SQL statements, thread IDs,
cache IDs, etc.) and throws an exception.

Advantages:
1) We ignore short transactions. So if there are tons of short TXes on a
typical OLTP workload, we will never track many of them.
2) Only a minimal set of data is sent between nodes, so we can exchange data
often without losing performance.

Thoughts?

Vladimir.
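Steps 4-6 above, collecting Map[XID -> List] reports and searching the rebuilt wait-for graph for deadlocks on the coordinator, can be sketched with plain-Java cycle detection. The String XID type, the report format, and the demo transactions are illustrative assumptions, not Ignite's actual types:

```java
import java.util.*;

public class WaitForGraph {
    // Edges: transaction XID -> XIDs it waits for (as reported by nodes).
    private final Map<String, List<String>> waitsFor = new HashMap<>();

    public void addReport(String xid, List<String> blockers) {
        waitsFor.computeIfAbsent(xid, k -> new ArrayList<>()).addAll(blockers);
    }

    /** Returns a path containing a deadlock cycle, or an empty list if none. */
    public List<String> findCycle() {
        Set<String> visited = new HashSet<>();
        Deque<String> path = new ArrayDeque<>();
        Set<String> onPath = new HashSet<>();
        for (String start : waitsFor.keySet())
            if (dfs(start, visited, path, onPath))
                return new ArrayList<>(path);
        return Collections.emptyList();
    }

    private boolean dfs(String xid, Set<String> visited, Deque<String> path, Set<String> onPath) {
        if (onPath.contains(xid)) return true;   // back edge -> cycle found
        if (!visited.add(xid)) return false;     // already fully explored
        path.addLast(xid); onPath.add(xid);
        for (String next : waitsFor.getOrDefault(xid, Collections.emptyList()))
            if (dfs(next, visited, path, onPath)) return true;
        path.removeLast(); onPath.remove(xid);
        return false;
    }

    public static void main(String[] args) {
        WaitForGraph g = new WaitForGraph();
        g.addReport("tx1", List.of("tx2"));
        g.addReport("tx2", List.of("tx3"));
        g.addReport("tx3", List.of("tx1")); // closes the cycle
        System.out.println(g.findCycle().isEmpty() ? "no deadlock" : "deadlock");
    }
}
```

Since only XIDs travel over the wire, the coordinator's work stays cheap even with millions of keys; picking the victim from the returned path is a separate policy decision.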


[GitHub] ignite pull request #3072: IGNITE-6963: Made PhysicalMemoryPages equal to To...

2017-11-20 Thread andrey-kuznetsov
GitHub user andrey-kuznetsov opened a pull request:

https://github.com/apache/ignite/pull/3072

IGNITE-6963: Made PhysicalMemoryPages equal to TotalAllocatedPages when PDS 
is off



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/andrey-kuznetsov/ignite ignite-6963

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3072.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3072


commit e5d412392c2d7d3e8310f2a7772db8fc104efac0
Author: Andrey Kuznetsov 
Date:   2017-11-20T17:35:14Z

IGNITE-6963: Made PhysicalMemoryPages equal to TotalAllocatedPages when PDS 
is off.




---


Re: Data eviction/expiration from Ignite persistence

2017-11-20 Thread Dmitry Pavlov
Hi Denis,

Is this need covered by PDS + TTL?

For the very first TTL test, I found some delay after applying TTL with the
repository enabled: https://issues.apache.org/jira/browse/IGNITE-6964

And I'm wondering if the user's needs are covered by
https://apacheignite.readme.io/docs/expiry-policies plus
https://apacheignite.readme.io/docs/distributed-persistent-store

Sincerely,
Dmitriy Pavlov

Sat, Nov 18, 2017 at 12:12, Dmitry Pavlov :

> Hi Denis,
>
> What is the difference between the functionality users require and TTL
> cache expiration?
>
> From some posts I suppose the TTL cache is compatible with native
> persistence.
>
> Sincerely,
> Dmitriy Pavlov
>
> Sat, Nov 18, 2017 at 0:41, Denis Magda :
>
>> Igniters,
>>
>> I've been talking to many Ignite users here and there who are already on
>> Ignite persistence or are considering turning it on. The majority of them
>> are more than satisfied with its current state and capabilities. That is
>> really good news for us.
>>
>> However, I keep coming across people who ask about eviction/expiration
>> policies for the persistence itself. I've had around 6 conversations about
>> the topic this month alone.
>>
>> Usually the requirement is connected with a streaming use case, when an
>> application streams a lot of data (IoT, metrics, etc.) to the cluster but
>> the data becomes stale after some period of time (a day, a couple of days,
>> etc.). The user doesn't want to waste disk space and needs to simply purge
>> the data from there.
>>
>> My suggestion here is to create a timer task that will remove the stale
>> data from the cluster. However, since the demand is growing, it's probably
>> a good time to discuss the feasibility of this feature.
>>
>> Alex G, as the main architect of the persistence, could you share your
>> thoughts on this? What will it cost us to support eviction/expiration
>> for the persistence?
>>
>> —
>> Denis
>
>
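As a reference point for the TTL-based alternative discussed in this thread, here is a minimal configuration sketch using the standard JCache expiry policies; the cache name and the one-day duration are illustrative assumptions:

```java
import java.util.concurrent.TimeUnit;

import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<String, byte[]> ccfg = new CacheConfiguration<>("streamedData");

// Entries expire one day after creation; with eager TTL the cleanup
// worker removes expired entries proactively instead of on access.
ccfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.DAYS, 1)));
ccfg.setEagerTtl(true);
```

Whether expired entries are also purged from the persistence files (rather than just from memory) is exactly the open question of this thread.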


[GitHub] ignite pull request #3068: IGNITE-6876: ODBC: Added support for SQL_ATTR_CON...

2017-11-20 Thread isapego
Github user isapego closed the pull request at:

https://github.com/apache/ignite/pull/3068


---


[jira] [Created] (IGNITE-6964) Ignite restart with PDS enabled may cause delays in TTL cleanup worker

2017-11-20 Thread Dmitriy Pavlov (JIRA)
Dmitriy Pavlov created IGNITE-6964:
--

 Summary: Ignite restart with PDS enabled may cause delays in TTL 
cleanup worker
 Key: IGNITE-6964
 URL: https://issues.apache.org/jira/browse/IGNITE-6964
 Project: Ignite
  Issue Type: Bug
  Components: persistence
Reporter: Dmitriy Pavlov
Assignee: Dmitriy Pavlov
 Fix For: 2.4


If Ignite was restarted and not all TTL entries were evicted, a simple restart 
does not cause the entries to be deleted, even if the test waits for some time.

At the same time, if some entries were touched by get(), TTL eviction starts to 
work as expected.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Ignite Enhancement Proposal #7 (Internal problems detection)

2017-11-20 Thread Anton Vinogradov
Dmitry,

There are two cases:
1) STW duration is long -> notify monitoring via a JMX metric.

2) STW duration exceeds N seconds -> no need to wait for anything.
We already know that the node will be segmented or that a pause bigger than
N seconds will affect cluster performance.
The better option is to kill the node ASAP to protect the cluster. Some
customers have huge timeouts, and such a node can kill the whole cluster if
it is not killed by a watchdog.

On Mon, Nov 20, 2017 at 7:23 PM, Dmitry Pavlov 
wrote:

> Hi Anton,
>
> > - GC STW duration exceed maximum possible length (node should be stopped
> before
> STW finished)
>
> Are you sure we should kill a node in case of a long STW? Can we produce
> warnings in logs and monitoring tools and wait a little bit longer for the
> node to become alive if we detect an STW? In this case we can notify the
> coordinator or another node that 'the current node is in STW, please wait
> longer than 3 heartbeat timeouts'.
>
> Isn't it probable that such pauses will occur infrequently?
>
> Sincerely,
> Dmitriy Pavlov
>
> Mon, Nov 20, 2017 at 18:53, Anton Vinogradov :
>
> > Igniters,
> >
> > Internal problems may, and unfortunately do, cause unexpected cluster
> > behavior.
> > We should define the behavior for the case when any internal problem
> > happens.
> >
> > Well-known internal problems can be split into:
> > 1) OOM or any other reason causing a node crash
> >
> > 2) Situations requiring graceful node shutdown with custom notification
> > - IgniteOutOfMemoryException
> > - Persistence errors
> > - ExchangeWorker exits with error
> >
> > 3) Performance issues that should be covered by metrics
> > - GC STW duration
> > - Timed out tasks and jobs
> > - TX deadlock
> > - Hanged Tx (waits for some service)
> > - Java deadlocks
> >
> > I created a special issue [1] to make sure all these metrics will be
> > presented in WebConsole or VisorConsole (which is preferred?)
> >
> > 4) Situations requiring an external monitoring implementation
> > - GC STW duration exceeds the maximum possible length (the node should be
> > stopped before the STW finishes)
> >
> > All these problems were reported by different people at different times,
> > so we should reanalyze each of them and possibly find better ways to
> > solve them than those described in the issues.
> >
> > P.s. IEP-7 [2] already contains 9 issues, feel free to mention something
> > else :)
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-6961
> > [2]
> >
> > https://cwiki.apache.org/confluence/display/IGNITE/IEP-
> 7%3A+Ignite+internal+problems+detection
> >
>


Re: Ignite Enhancement Proposal #7 (Internal problems detection)

2017-11-20 Thread Dmitry Pavlov
Hi Anton,

> - GC STW duration exceed maximum possible length (node should be stopped 
> before
STW finished)

Are you sure we should kill a node in case of a long STW? Can we produce
warnings in logs and monitoring tools and wait a little bit longer for the
node to become alive if we detect an STW? In this case we can notify the
coordinator or another node that 'the current node is in STW, please wait
longer than 3 heartbeat timeouts'.

Isn't it probable that such pauses will occur infrequently?

Sincerely,
Dmitriy Pavlov

Mon, Nov 20, 2017 at 18:53, Anton Vinogradov :

> Igniters,
>
> Internal problems may, and unfortunately do, cause unexpected cluster
> behavior.
> We should define the behavior for the case when any internal problem
> happens.
>
> Well-known internal problems can be split into:
> 1) OOM or any other reason causing a node crash
>
> 2) Situations requiring graceful node shutdown with custom notification
> - IgniteOutOfMemoryException
> - Persistence errors
> - ExchangeWorker exits with error
>
> 3) Performance issues that should be covered by metrics
> - GC STW duration
> - Timed out tasks and jobs
> - TX deadlock
> - Hanged Tx (waits for some service)
> - Java deadlocks
>
> I created a special issue [1] to make sure all these metrics will be
> presented in WebConsole or VisorConsole (which is preferred?)
>
> 4) Situations requiring an external monitoring implementation
> - GC STW duration exceeds the maximum possible length (the node should be
> stopped before the STW finishes)
>
> All these problems were reported by different people at different times,
> so we should reanalyze each of them and possibly find better ways to
> solve them than those described in the issues.
>
> P.s. IEP-7 [2] already contains 9 issues, feel free to mention something
> else :)
>
> [1] https://issues.apache.org/jira/browse/IGNITE-6961
> [2]
>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-7%3A+Ignite+internal+problems+detection
>


Ignite Enhancement Proposal #7 (Internal problems detection)

2017-11-20 Thread Anton Vinogradov
Igniters,

Internal problems may, and unfortunately do, cause unexpected cluster
behavior.
We should define the behavior for the case when any internal problem
happens.

Well-known internal problems can be split into:
1) OOM or any other reason causing a node crash

2) Situations requiring graceful node shutdown with custom notification
- IgniteOutOfMemoryException
- Persistence errors
- ExchangeWorker exits with error

3) Performance issues that should be covered by metrics
- GC STW duration
- Timed out tasks and jobs
- TX deadlock
- Hanged Tx (waits for some service)
- Java deadlocks

I created a special issue [1] to make sure all these metrics will be
presented in WebConsole or VisorConsole (which is preferred?)

4) Situations requiring an external monitoring implementation
- GC STW duration exceeds the maximum possible length (the node should be
stopped before the STW finishes)

All these problems were reported by different people at different times,
so we should reanalyze each of them and possibly find better ways to
solve them than those described in the issues.

P.s. IEP-7 [2] already contains 9 issues, feel free to mention something
else :)

[1] https://issues.apache.org/jira/browse/IGNITE-6961
[2]
https://cwiki.apache.org/confluence/display/IGNITE/IEP-7%3A+Ignite+internal+problems+detection


[GitHub] ignite pull request #3029: enforceJoinOrder flag for Thick JDBC Driver to 1....

2017-11-20 Thread alamar
Github user alamar closed the pull request at:

https://github.com/apache/ignite/pull/3029


---


[GitHub] ignite pull request #3028: Support enforceJoinOrder flag for Thick JDBC Driv...

2017-11-20 Thread alamar
Github user alamar closed the pull request at:

https://github.com/apache/ignite/pull/3028


---


[jira] [Created] (IGNITE-6963) TotalAllocatedPages metric does not match PhysicalMemoryPages when persistence is disabled

2017-11-20 Thread Andrey Kuznetsov (JIRA)
Andrey Kuznetsov created IGNITE-6963:


 Summary: TotalAllocatedPages metric does not match 
PhysicalMemoryPages when persistence is disabled
 Key: IGNITE-6963
 URL: https://issues.apache.org/jira/browse/IGNITE-6963
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 2.3
Reporter: Andrey Kuznetsov


As the javadoc for DataRegionMetrics#getPhysicalMemoryPages() states:

{noformat}
When persistence is disabled, this metric is equal to getTotalAllocatedPages()
{noformat}

and this seems to be a sane requirement.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


SQL warning for partitioned caches with setLocal

2017-11-20 Thread luqmanahmad
Hi there,

When working with SQL queries with setLocal(true) on a partitioned cache, it
is very easy for someone to run SQL queries outside of affinityRun or
affinityCall computations, which are the preferred ways of running local
queries on a partitioned cache, as described in [1].

What I was thinking is that whenever a local SQL query is about to execute
against a partitioned cache, it should check whether the call was made from
within an affinityRun or affinityCall function. If it was not, then by
default it should log a WARNING message or throw an exception, which should
be configurable in CacheConfiguration. The advantage is that it won't break
other people's code instantly and allows them some time to fix it.

This can be achieved as follows: when the affinityCall or affinityRun method
is called, we set something specific to SQL queries in the context, which can
be read before executing the query. If the SQL processor cannot find the
value in the given context for a partitioned cache, we either log the
warning or throw an exception based on the cache configuration.

Let me know if this makes sense.

Thanks,
Luqman

[1].  https://apacheignite-sql.readme.io/docs/local-queries
  



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
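For reference, the preferred pattern described above, wrapping a local SQL query in an affinity computation so it runs on the node owning the key, can be sketched as follows. This requires a running cluster; the cache/table name "Person" and the query are illustrative assumptions:

```java
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.lang.IgniteRunnable;
import org.apache.ignite.resources.IgniteInstanceResource;

// Runs a local SQL query on the node that owns the given affinity key,
// so the query sees exactly the partitions co-located with that key.
public class LocalQueryJob implements IgniteRunnable {
    @IgniteInstanceResource
    private Ignite ignite;

    @Override public void run() {
        IgniteCache<Integer, Object> cache = ignite.cache("Person");
        SqlFieldsQuery qry = new SqlFieldsQuery("select count(*) from Person")
            .setLocal(true); // restrict execution to this node's partitions
        List<List<?>> res = cache.query(qry).getAll();
        System.out.println("Local count: " + res.get(0).get(0));
    }
}

// Usage (on the client side), with `key` being the affinity key of interest:
// ignite.compute().affinityRun("Person", key, new LocalQueryJob());
```

The proposed check would fire precisely when a setLocal(true) query like this is executed outside such a wrapper.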


[GitHub] ignite pull request #3062: Backport IGNITE-6818, fix test

2017-11-20 Thread alamar
Github user alamar closed the pull request at:

https://github.com/apache/ignite/pull/3062


---


[GitHub] ignite pull request #3026: 5195 DataStreamer can fails if non-data node ente...

2017-11-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3026


---


Re: Ignite ML dense distributed matrices

2017-11-20 Thread Yury Babak
Artem,

I think it's a good idea. We could implement the dense matrix as a separate
matrix, but what do you think about a common distributed matrix with multiple
possible storage strategies? 

Regards,
Yury



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


[GitHub] ignite pull request #3071: ignite-2.1.7-p2

2017-11-20 Thread ascherbakoff
GitHub user ascherbakoff opened a pull request:

https://github.com/apache/ignite/pull/3071

ignite-2.1.7-p2

Ignite-2.1.7-p2

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-2.1.7-p2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3071.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3071


commit 43e4ff2c0ecd1ef30d18cf1fbc9052f5ba703d05
Author: sboikov 
Date:   2017-07-18T14:52:51Z

Fixed test IgniteClusterActivateDeactivateTestWithPersistence.

(cherry picked from commit 54585ab)

commit d596b7806db3f002f83da5a02bc882d03dae3dfd
Author: Ilya Lantukh 
Date:   2017-08-23T15:23:06Z

Updated classnames.properties.

commit 3e08cd401d598a34832e72afc5e6c94a3a9ab081
Author: sboikov 
Date:   2017-08-23T15:29:52Z

ignite-6174 Temporary changed test until issue not fixed

(cherry picked from commit 4fe8f76)

commit 44e0b4cd62142dce8cf39f826449b9a04e22e1cf
Author: Alexey Kuznetsov 
Date:   2017-08-24T07:57:36Z

IGNITE-6136 Fixed version for demo.
(cherry picked from commit e1bf8d7)

commit 8d1838b03d6c1e5f86dfbb7f41c59895775e20c1
Author: Dmitry Pavlov 
Date:   2017-07-27T11:51:25Z

Adjusted memory policy to prevent OOM.

commit a3ec54b16bce1a569fbefba17188ccb4702b82a4
Author: sboikov 
Date:   2017-08-24T11:09:12Z

ignite-6124 DataStreamerImpl: do not wait for exchange future inside cache 
gateway.

(cherry picked from commit 3ab523c)

commit 30e6d019a21f4a045a50d7d95a04507e3b646e69
Author: sboikov 
Date:   2017-08-24T11:10:34Z

Merge remote-tracking branch 'community/ignite-2.1.4' into ignite-2.1.4

commit 41f574a7372ffc04b69809298798f24fb34c161f
Author: Dmitriy Govorukhin 
Date:   2017-08-24T12:58:27Z

Fixed test.

commit 943736b36d67381157fc2807cd7af4b03d44fef3
Author: nikolay_tikhonov 
Date:   2017-08-24T15:58:16Z

Revert "IGNITE-5947 Fixed "ClassCastException when two-dimensional array is 
fetched from cache".
* Due to this changes break compatibility with .NET;
* This fix doesn't cover all cases.

Signed-off-by: nikolay_tikhonov 

commit c2e836b5b9b183404f4507c64c13ab5c05653d24
Author: EdShangGG 
Date:   2017-08-24T16:15:24Z

ignite-6175 JVM Crash in Ignite Binary Objects Simple Mapper Basic suite

Signed-off-by: Andrey Gura 

commit b2b596b4f59bcf7a1b7397a6fd681a0ae47092db
Author: Andrey Novikov 
Date:   2017-08-25T03:48:15Z

IGNITE-5200 Web Console: Don't cache generated chunks in production.
(cherry picked from commit e1eb1b9)

commit 9399610d2dd4b67b1da6475ce2141787fb8dbb0e
Author: Ilya Lantukh 
Date:   2017-08-25T10:12:32Z

ignite-6180: restoring marshaller mappings on node start is implemented

commit 5bda4090f1580ea7b6557c8716e57a12572c322f
Author: Ivan Rakov 
Date:   2017-08-24T15:18:31Z

IGNITE-6178 Make CheckpointWriteOrder.SEQUENTIAL and checkpointingThreads=4 
default in persistent store confguration

commit 316312d2ae9015228e67f959e492b2c5c4a9366d
Author: Dmitry Pavlov 
Date:   2017-07-27T11:51:25Z

ignite-5682 Added stale version check for GridDhtPartFullMessage not 
related to exchange.

(cherry picked from commit eb9d06d)

commit 6f7011aa9c69280c76e03335ce4851a38cfc334e
Author: sboikov 
Date:   2017-08-25T13:59:02Z

Added test for rebalance after restart.

commit d31c43c1465ec33f9a1be81dedb958296ecc5068
Author: sboikov 
Date:   2017-08-25T14:50:01Z

Increment GridDhtPartitionMap update sequence when assign new state on 
coordinator.

commit 85fd8ce91f1e5827600aa32645552039e5a2298a
Author: Ilya Lantukh 
Date:   2017-08-25T18:23:19Z

Fixed update sequence.

commit 8c249b77533c95a4bef3d19ca583feb992322325
Author: Eduard Shangareev 
Date:   2017-08-26T14:01:46Z

GG-12609: Fixed OOM at initiator during LIST

commit a857c5ba5a24f41b5ebfaef0a15fde2906a7e0fd
Author: Ilya Lantukh 
Date:   2017-08-26T14:21:44Z

Fixed update sequence.

commit fc55ade9b5a586fb39701b4e8b7ce2105bff2fd0
Author: Sergey Chugunov 
Date:   2017-08-26T18:44:13Z

IGNITE-6124: fix for lastVer field on GridDhtPartitionsSingleMessage message

commit a01837b5028fc8739e16658d85ffe64aad01afdb
Author: Eduard Shangareev 
Date:   2017-08-27T14:30:21Z

GG-12682 Restart cluster during snapshot RESTORE fails


[GitHub] ignite pull request #3070: Ignite wal freelist disable

2017-11-20 Thread tledkov-gridgain
Github user tledkov-gridgain closed the pull request at:

https://github.com/apache/ignite/pull/3070


---


Re: Facility to detect long STW pauses and other system response degradations

2017-11-20 Thread Anton Vinogradov
Yakov,

Issue is https://issues.apache.org/jira/browse/IGNITE-6171

We split the issue into:
#1 STW duration metrics
#2 External monitoring that allows stopping a node during STW

> Testing GC pauses with a Java thread is
> a bit strange and can give info only after the GC pause finishes.

That's ok since it's #1

On Mon, Nov 20, 2017 at 5:45 PM, Dmitriy_Sorokin 
wrote:

> I have tested the solution with a Java thread, and the GC logs contained
> the same thread-stopping pause values as those detected by the Java thread.
>
>
> My log (contains pauses > 100ms):
> [2017-11-20 17:33:28,822][WARN ][Thread-1][root] Possible too long STW
> pause: 507 milliseconds.
> [2017-11-20 17:33:34,522][WARN ][Thread-1][root] Possible too long STW
> pause: 5595 milliseconds.
> [2017-11-20 17:33:37,896][WARN ][Thread-1][root] Possible too long STW
> pause: 3262 milliseconds.
> [2017-11-20 17:33:39,714][WARN ][Thread-1][root] Possible too long STW
> pause: 1737 milliseconds.
>
> GC log:
> gridgain@dell-5580-92zc8h2:~$ cat
> ./dev/ignite-logs/gc-2017-11-20_17-33-27.log | grep Total
> 2017-11-20T17:33:27.608+0300: 0,116: Total time for which application
> threads were stopped: 0,845 seconds, Stopping threads took: 0,246
> seconds
> 2017-11-20T17:33:27.667+0300: 0,175: Total time for which application
> threads were stopped: 0,0001072 seconds, Stopping threads took: 0,252
> seconds
> 2017-11-20T17:33:28.822+0300: 1,330: Total time for which application
> threads were stopped: 0,5001082 seconds, Stopping threads took: 0,178
> seconds// GOT!
> 2017-11-20T17:33:34.521+0300: 7,030: Total time for which application
> threads were stopped: 5,5856603 seconds, Stopping threads took: 0,229
> seconds// GOT!
> 2017-11-20T17:33:37.896+0300: 10,405: Total time for which application
> threads were stopped: 3,2595700 seconds, Stopping threads took: 0,223
> seconds// GOT!
> 2017-11-20T17:33:39.714+0300: 12,222: Total time for which application
> threads were stopped: 1,7337123 seconds, Stopping threads took: 0,121
> seconds// GOT!
>
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


Re: Facility to detect long STW pauses and other system response degradations

2017-11-20 Thread Dmitriy_Sorokin
I have tested the solution with a Java thread, and the GC logs contained the
same pause values for thread stopping as those detected by the Java thread.


My log (contains pauses > 100ms):
[2017-11-20 17:33:28,822][WARN ][Thread-1][root] Possible too long STW
pause: 507 milliseconds.
[2017-11-20 17:33:34,522][WARN ][Thread-1][root] Possible too long STW
pause: 5595 milliseconds.
[2017-11-20 17:33:37,896][WARN ][Thread-1][root] Possible too long STW
pause: 3262 milliseconds.
[2017-11-20 17:33:39,714][WARN ][Thread-1][root] Possible too long STW
pause: 1737 milliseconds.

GC log:
gridgain@dell-5580-92zc8h2:~$ cat
./dev/ignite-logs/gc-2017-11-20_17-33-27.log | grep Total
2017-11-20T17:33:27.608+0300: 0,116: Total time for which application
threads were stopped: 0,845 seconds, Stopping threads took: 0,246
seconds
2017-11-20T17:33:27.667+0300: 0,175: Total time for which application
threads were stopped: 0,0001072 seconds, Stopping threads took: 0,252
seconds
2017-11-20T17:33:28.822+0300: 1,330: Total time for which application
threads were stopped: 0,5001082 seconds, Stopping threads took: 0,178
seconds// GOT!
2017-11-20T17:33:34.521+0300: 7,030: Total time for which application
threads were stopped: 5,5856603 seconds, Stopping threads took: 0,229
seconds// GOT!
2017-11-20T17:33:37.896+0300: 10,405: Total time for which application
threads were stopped: 3,2595700 seconds, Stopping threads took: 0,223
seconds// GOT!
2017-11-20T17:33:39.714+0300: 12,222: Total time for which application
threads were stopped: 1,7337123 seconds, Stopping threads took: 0,121
seconds// GOT!




--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
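
The cross-check against the GC log above can be automated. The sketch below extracts the "Total time for which application threads were stopped" values from GC log lines, handling the comma decimal separator seen in these logs; the class name is illustrative, not Ignite code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: extract stop-the-world pause durations from a GC log so they can
// be compared with the pauses reported by the watchdog thread. Handles both
// ',' and '.' as the decimal separator, since the log above was produced
// under a locale that prints "5,5856603 seconds".
public class GcPauseLog {
    private static final Pattern STOPPED = Pattern.compile(
        "Total time for which application threads were stopped: ([0-9]+[.,][0-9]+) seconds");

    /** Returns stop-the-world pauses found in the given log lines, in milliseconds. */
    public static List<Long> pausesMillis(List<String> logLines) {
        List<Long> res = new ArrayList<>();

        for (String line : logLines) {
            Matcher m = STOPPED.matcher(line);

            if (m.find())
                res.add(Math.round(Double.parseDouble(m.group(1).replace(',', '.')) * 1000));
        }

        return res;
    }
}
```

Filtering the result for values above the watchdog threshold (100 ms here) should reproduce the "GOT!" lines marked in the log above.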


[jira] [Created] (IGNITE-6962) Reduce ExchangeHistory memory consumption

2017-11-20 Thread Alexander Belyak (JIRA)
Alexander Belyak created IGNITE-6962:


 Summary: Reduce ExchangeHistory memory consumption
 Key: IGNITE-6962
 URL: https://issues.apache.org/jira/browse/IGNITE-6962
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Alexander Belyak


GridDhtPartitionExchangeManager$ExchangeFutureSet stores the huge 
GridDhtPartitionsFullMessage, which holds an IgniteDhtPartitionCountersMap2 for 
each cache group with two long[partCount] arrays (see 
CachePartitionFullCountersMap: long[] initialUpdCntrs; long[] updCntrs;). On a 
big grid (100+ nodes) with a large number of cache groups and partitions this 
history consumes a lot of memory.

Re: Facility to detect long STW pauses and other system response degradations

2017-11-20 Thread Yakov Zhdanov
Guys, how about having 2 native threads - one calling some Java method,
another one monitoring that the first one is active and is not stuck on a
safepoint (which points to a GC pause)? Testing a GC pause with a Java thread
is a bit strange and can give info only after the GC pause finishes. Native
threads can give info immediately. This was suggested by V. Ozerov some
time ago. Vova, I believe we already have a ticket. Can you please provide
a link?


--Yakov


Re: Facility to detect long STW pauses and other system response degradations

2017-11-20 Thread Dmitry Pavlov
Yes, we need some timestamp from Java code. But I think a JVM thread could
update the TS with delays not related to GC, and that will have the same
effect as IgniteUtils#currentTimeMillis().

Does this new test compare the Java timestamp differences with the GC
logs?

пн, 20 нояб. 2017 г. в 16:39, Anton Vinogradov :

> Dmitriy,
>
> > Sleeping Java Thread IMO is not an option, because a thread can stay in
> > TIMED_WAITING longer than the timeout.
>
> That's the only idea we have, and, according to tests, it works!
>
> > Did I understand correctly that the native stream is proposed? And our
> goal
> > now is to select best framework for this?
>
> That's one of possible cases.
> We can replace native thread by another JVM, this should solve
> compatibility issues.
>
>
> On Mon, Nov 20, 2017 at 4:24 PM, Dmitry Pavlov 
> wrote:
>
> > Sleeping Java Thread IMO is not an option, because a thread can stay in
> > TIMED_WAITING longer than the timeout.
> >
> > Did I understand correctly that a native thread is proposed? And our goal
> > now is to select the best framework for this?
> >
> > Can we limit this opportunity to several popular OSes (Windows, Linux) and
> > not implement this feature for all operating systems?
> >
> >
> > пн, 20 нояб. 2017 г. в 14:55, Anton Vinogradov  >:
> >
> > > Igniters,
> > >
> > > Since no one rejected proposal, let's start from part one.
> > >
> > > > I propose to add a special thread that will record current time
> every N
> > > > milliseconds and check the difference with the latest recorded value.
> > > > The maximum and total pause values for a certain period can be
> > published
> > > in
> > > > the special metrics available through JMX.
> > >
> > > On Fri, Nov 17, 2017 at 4:08 PM, Dmitriy_Sorokin <
> > > sbt.sorokin@gmail.com>
> > > wrote:
> > >
> > > > Hi, Igniters!
> > > >
> > > > This discussion thread related to
> > > > https://issues.apache.org/jira/browse/IGNITE-6171.
> > > >
> > > > Currently there are no JVM performance monitoring tools in AI, for
> > > example
> > > > the impact of GC (eg STW) on the operation of the node. I think we
> > should
> > > > add this functionality.
> > > >
> > > > 1) It is useful to know that STW duration increased or any other
> > > situations
> > > > leads to similar consequences.
> > > > This will allow system administrators to solve issues prior they
> become
> > > > problems.
> > > >
> > > > I propose to add a special thread that will record current time
> every N
> > > > milliseconds and check the difference with the latest recorded value.
> > > > The maximum and total pause values for a certain period can be
> > published
> > > in
> > > > the special metrics available through JMX.
> > > >
> > > > 2) If the pause reaches a critical value, we need to stop the node,
> > > without
> > > > waiting for end of the pause.
> > > >
> > > > The thread (from the first part of the proposed solution) is able to
> > > > estimate the pause duration, but only after its completion.
> > > > So, we need an external thread (in another JVM or native) that is
> able
> > to
> > > > recognize that the pause duration has passed the critical mark.
> > > >
> > > > We can estimate (STW or similar) pause duration by
> > > >  a) reading value updated by the first thread, somehow (eg via JMX,
> > shmem
> > > > or
> > > > shared file)
> > > >  or
> > > >  b) by using JVM diagnostic tools. Does anybody know crossplatform
> > > > solutions?
> > > >
> > > > Feel free to suggest ideas or tips, especially about second part of
> > > > proposal.
> > > >
> > > > Thoughts?
> > > >
> > > >
> > > >
> > > > --
> > > > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
> > > >
> > >
> >
>


[jira] [Created] (IGNITE-6961) Internal Problems should be covered by Monitoring tool

2017-11-20 Thread Anton Vinogradov (JIRA)
Anton Vinogradov created IGNITE-6961:


 Summary: Internal Problems should be covered by Monitoring tool
 Key: IGNITE-6961
 URL: https://issues.apache.org/jira/browse/IGNITE-6961
 Project: Ignite
  Issue Type: New Feature
Reporter: Anton Vinogradov
Assignee: Alexey Kuznetsov


- Monitoring tool should provide a UI which allows viewing and managing:
- active transactions (including long-running ones);
- locks acquired;
- tasks performed;
- deadlocks detected.

- Monitoring tool should alert on and keep a history of:
- Java-level deadlocks;
- GC or STW pauses;
- thread pool starvation events.

Best candidates (as the monitoring tool) are WebConsole and VisorConsole.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] ignite pull request #3070: Ignite wal freelist disable

2017-11-20 Thread tledkov-gridgain
GitHub user tledkov-gridgain opened a pull request:

https://github.com/apache/ignite/pull/3070

Ignite wal freelist disable



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite 
ignite-wal-freelist-disable

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3070.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3070


commit dcc6561dd4f09e499b865421c69dd4060aef89a7
Author: tledkov-gridgain 
Date:   2017-11-14T12:39:47Z

print WAL

commit 5b67fb00868d152449e6188573535509b9a6880d
Author: tledkov-gridgain 
Date:   2017-11-15T09:13:48Z

FreeList hack

commit f57e3a514720017596d2b9d8efafdabe04b429da
Author: tledkov-gridgain 
Date:   2017-11-15T12:32:27Z

Merge branch '_master' into ignite-updwal-analyze

commit cc316f67496cf59b2704abbfe27f0d28c403c58c
Author: tledkov-gridgain 
Date:   2017-11-15T15:14:39Z

save the progress

commit ec13189a706df124b6f5d43ea524556b157f79c8
Author: tledkov-gridgain 
Date:   2017-11-16T09:16:32Z

Merge branch 'ignite-wal-freelist-base' into ignite-wal-freelist-disable

commit ec09c49213558ff02de0cb982575b93680c9b3df
Author: tledkov-gridgain 
Date:   2017-11-16T09:32:41Z

disable freelist WAL

commit 895d406abb90834b2f27915e2d0da7357ec8e17a
Author: tledkov-gridgain 
Date:   2017-11-16T11:19:55Z

add put & invoke update benchmark




---


[GitHub] ignite pull request #3069: Ignite 6872 1

2017-11-20 Thread oignatenko
GitHub user oignatenko opened a pull request:

https://github.com/apache/ignite/pull/3069

Ignite 6872 1



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6872-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3069.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3069


commit cf6ba40c97840a28ee14f49f09f79fe3d9d5a66f
Author: Oleg Ignatenko 
Date:   2017-11-20T11:53:46Z

IGNITE-6872 Linear regression should implement Model API
- regression examples moved to more appropriate package
-- verified with diffs overview, clean build and execution of unit tests

commit a8dadee71e0ca2a00387cfa2dba9c656de75137f
Author: Oleg Ignatenko 
Date:   2017-11-20T11:57:18Z

IGNITE-6872 Linear regression should implement Model API
- added missing package-info
-- verified with diffs overview

commit cc8edfbd0ca7e20655f45dc82d576f2d21627497
Author: Oleg Ignatenko 
Date:   2017-11-20T14:12:03Z

IGNITE-6872 Linear regression should implement Model API
- implementation, tests and examples
- accommodated changes done to OLS per IGNITE-5846
- fixed some issues with coding style and javadoc
-- verified with diffs overview, clean build, execution of unit tests and 
examples




---


Ignite ML dense distributed matrices

2017-11-20 Thread ArtemM
Hi all, currently we do not have dense distributed matrices in the ML module,
but I think we should add them. Obviously, not every big ML dataset has a
sparse structure. Of course, we can put it into a sparse distributed matrix,
but this will incur a performance hit. Any ideas/objections about it?



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
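
One way to avoid a separate class per dense/sparse/block/distributed combination is a single Matrix facade over pluggable storage strategies, as suggested in the replies. The sketch below is purely illustrative; the names (MatrixStorage, DenseStorage, SparseStorage) do not match actual Ignite ML classes.

```java
import java.util.HashMap;
import java.util.Map;

// One Matrix API, multiple storage strategies. A distributed or block
// storage would implement the same interface, so dense vs. sparse (and
// local vs. distributed) becomes a storage choice, not a separate class.
interface MatrixStorage {
    double get(int row, int col);
    void set(int row, int col, double val);
}

// Dense strategy: a flat backing array, O(1) access, rows * cols memory.
class DenseStorage implements MatrixStorage {
    private final double[] data;
    private final int cols;

    DenseStorage(int rows, int cols) {
        this.data = new double[rows * cols];
        this.cols = cols;
    }

    @Override public double get(int row, int col) { return data[row * cols + col]; }
    @Override public void set(int row, int col, double val) { data[row * cols + col] = val; }
}

// Sparse strategy: only non-zero cells are stored, keyed by (row, col).
class SparseStorage implements MatrixStorage {
    private final Map<Long, Double> data = new HashMap<>();

    @Override public double get(int row, int col) {
        return data.getOrDefault(((long)row << 32) | col, 0.0);
    }

    @Override public void set(int row, int col, double val) {
        data.put(((long)row << 32) | col, val);
    }
}

public class Matrix {
    private final MatrixStorage storage;

    public Matrix(MatrixStorage storage) { this.storage = storage; }

    public double get(int row, int col) { return storage.get(row, col); }
    public void set(int row, int col, double val) { storage.set(row, col, val); }
}
```

With this shape, "Matrix(boolean isDistributed, boolean isBlock, boolean isSparse)" collapses into choosing one of the storage implementations at construction time.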


Re: Facility to detect long STW pauses and other system response degradations

2017-11-20 Thread Anton Vinogradov
Dmitriy,

> Sleeping Java Thread IMO is not an option, because a thread can stay in
> TIMED_WAITING longer than the timeout.

That's the only idea we have, and, according to tests, it works!

> Did I understand correctly that the native stream is proposed? And our
goal
> now is to select best framework for this?

That's one of possible cases.
We can replace native thread by another JVM, this should solve
compatibility issues.


On Mon, Nov 20, 2017 at 4:24 PM, Dmitry Pavlov 
wrote:

> Sleeping Java Thread IMO is not an option, because a thread can stay in
> TIMED_WAITING longer than the timeout.
>
> Did I understand correctly that a native thread is proposed? And our goal
> now is to select the best framework for this?
>
> Can we limit this opportunity to several popular OSes (Windows, Linux) and
> not implement this feature for all operating systems?
>
>
> пн, 20 нояб. 2017 г. в 14:55, Anton Vinogradov :
>
> > Igniters,
> >
> > Since no one rejected proposal, let's start from part one.
> >
> > > I propose to add a special thread that will record current time every N
> > > milliseconds and check the difference with the latest recorded value.
> > > The maximum and total pause values for a certain period can be
> published
> > in
> > > the special metrics available through JMX.
> >
> > On Fri, Nov 17, 2017 at 4:08 PM, Dmitriy_Sorokin <
> > sbt.sorokin@gmail.com>
> > wrote:
> >
> > > Hi, Igniters!
> > >
> > > This discussion thread related to
> > > https://issues.apache.org/jira/browse/IGNITE-6171.
> > >
> > > Currently there are no JVM performance monitoring tools in AI, for
> > example
> > > the impact of GC (eg STW) on the operation of the node. I think we
> should
> > > add this functionality.
> > >
> > > 1) It is useful to know that STW duration increased or any other
> > situations
> > > leads to similar consequences.
> > > This will allow system administrators to solve issues prior they become
> > > problems.
> > >
> > > I propose to add a special thread that will record current time every N
> > > milliseconds and check the difference with the latest recorded value.
> > > The maximum and total pause values for a certain period can be
> published
> > in
> > > the special metrics available through JMX.
> > >
> > > 2) If the pause reaches a critical value, we need to stop the node,
> > without
> > > waiting for end of the pause.
> > >
> > > The thread (from the first part of the proposed solution) is able to
> > > estimate the pause duration, but only after its completion.
> > > So, we need an external thread (in another JVM or native) that is able
> to
> > > recognize that the pause duration has passed the critical mark.
> > >
> > > We can estimate (STW or similar) pause duration by
> > >  a) reading value updated by the first thread, somehow (eg via JMX,
> shmem
> > > or
> > > shared file)
> > >  or
> > >  b) by using JVM diagnostic tools. Does anybody know crossplatform
> > > solutions?
> > >
> > > Feel free to suggest ideas or tips, especially about second part of
> > > proposal.
> > >
> > > Thoughts?
> > >
> > >
> > >
> > > --
> > > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
> > >
> >
>


Re: Facility to detect long STW pauses and other system response degradations

2017-11-20 Thread Dmitry Pavlov
Sleeping Java Thread IMO is not an option, because a thread can stay in
TIMED_WAITING longer than the timeout.

Did I understand correctly that a native thread is proposed? And our goal
now is to select the best framework for this?

Can we limit this opportunity to several popular OSes (Windows, Linux) and
not implement this feature for all operating systems?


пн, 20 нояб. 2017 г. в 14:55, Anton Vinogradov :

> Igniters,
>
> Since no one rejected proposal, let's start from part one.
>
> > I propose to add a special thread that will record current time every N
> > milliseconds and check the difference with the latest recorded value.
> > The maximum and total pause values for a certain period can be published
> in
> > the special metrics available through JMX.
>
> On Fri, Nov 17, 2017 at 4:08 PM, Dmitriy_Sorokin <
> sbt.sorokin@gmail.com>
> wrote:
>
> > Hi, Igniters!
> >
> > This discussion thread related to
> > https://issues.apache.org/jira/browse/IGNITE-6171.
> >
> > Currently there are no JVM performance monitoring tools in AI, for
> example
> > the impact of GC (eg STW) on the operation of the node. I think we should
> > add this functionality.
> >
> > 1) It is useful to know that STW duration increased or any other
> situations
> > leads to similar consequences.
> > This will allow system administrators to solve issues prior they become
> > problems.
> >
> > I propose to add a special thread that will record current time every N
> > milliseconds and check the difference with the latest recorded value.
> > The maximum and total pause values for a certain period can be published
> in
> > the special metrics available through JMX.
> >
> > 2) If the pause reaches a critical value, we need to stop the node,
> without
> > waiting for end of the pause.
> >
> > The thread (from the first part of the proposed solution) is able to
> > estimate the pause duration, but only after its completion.
> > So, we need an external thread (in another JVM or native) that is able to
> > recognize that the pause duration has passed the critical mark.
> >
> > We can estimate (STW or similar) pause duration by
> >  a) reading value updated by the first thread, somehow (eg via JMX, shmem
> > or
> > shared file)
> >  or
> >  b) by using JVM diagnostic tools. Does anybody know crossplatform
> > solutions?
> >
> > Feel free to suggest ideas or tips, especially about second part of
> > proposal.
> >
> > Thoughts?
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
> >
>


[jira] [Created] (IGNITE-6959) Split a ttl index tree by partition

2017-11-20 Thread Sergey Puchnin (JIRA)
Sergey Puchnin created IGNITE-6959:
--

 Summary: Split a ttl index tree by partition  
 Key: IGNITE-6959
 URL: https://issues.apache.org/jira/browse/IGNITE-6959
 Project: Ignite
  Issue Type: Task
  Components: persistence
Reporter: Sergey Puchnin
Assignee: Sergey Chugunov


h2. Use Case
* User starts up cluster of N nodes and fills it with some data.
* User splits the cluster into two halves and modifies data in each half 
independently.
* User tries to join two halves back into one - irresolvable conflicts in data 
for the same key may happen.

h2. BaselineTopology Versioning
Each BLT contains enough information to check its compatibility with other BLT. 
If BLT of joining node is not compatible with BLT grid is working on at the 
moment, node is not allowed to join the grid and must fail with proper 
exception.
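
The ticket does not spell out how compatibility is encoded. One plausible scheme, shown purely as an illustration (none of these names exist in Ignite), is to keep a history of branch identifiers in each BaselineTopology and treat two topologies as compatible iff one history is a prefix of the other, i.e. the halves never diverged and modified data independently:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative compatibility check for split-brain protection: each
// topology records the branch IDs it has gone through; a diverging entry
// means the two halves evolved independently and must not be rejoined.
public class BaselineHistory {
    private final List<Long> branchIds = new ArrayList<>();

    /** Record a new topology change (e.g. activation after a split). */
    public void advance(long branchId) { branchIds.add(branchId); }

    public boolean compatibleWith(BaselineHistory other) {
        int n = Math.min(branchIds.size(), other.branchIds.size());

        for (int i = 0; i < n; i++)
            if (!branchIds.get(i).equals(other.branchIds.get(i)))
                return false; // histories diverged at step i

        return true; // one history is a prefix of the other
    }
}
```

Under this scheme a joining node whose history diverged from the grid's would be rejected with the "proper exception" the ticket requires.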






[GitHub] ignite pull request #3068: IGNITE-6876: ODBC: Added support for SQL_ATTR_CON...

2017-11-20 Thread isapego
GitHub user isapego opened a pull request:

https://github.com/apache/ignite/pull/3068

IGNITE-6876: ODBC: Added support for SQL_ATTR_CONNECTION_TIMEOUT



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6876

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3068.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3068


commit a503033eeb13a24f335a7393fce801b26025d26f
Author: Igor Sapego 
Date:   2017-11-17T16:32:28Z

IGNITE-6876: Implemented for Windows

commit 33517f2e4b4186de9239ff153c691895ddbe965e
Author: Igor Sapego 
Date:   2017-11-17T17:33:33Z

IGNITE-6876: Fixed

commit a674f9bc4acb9de956b39bc08d67bd2d440b
Author: Igor Sapego 
Date:   2017-11-20T09:18:39Z

IGNITE-6876: Added timeout for queries

commit 1ff14fbec7b6757ff1ce3229dace6059ed563645
Author: Igor Sapego 
Date:   2017-11-20T09:44:32Z

IGNITE-6876: Added test

commit 28baac25e7ade92f5b0074cb74e7ce2671e0d233
Author: Igor Sapego 
Date:   2017-11-20T10:25:04Z

IGNITE-6876: Added tests

commit 8d70c4da9e711a4477e6d3f2679deed4e13325da
Author: Igor Sapego 
Date:   2017-11-20T11:34:39Z

IGNITE-6876: Linux part implemented

commit d56a0986da23e33162fc9d88d6ed4c37e0c7937d
Author: Igor Sapego 
Date:   2017-11-20T11:50:41Z

IGNITE-6876: Linux fixes




---


Re: Facility to detect long STW pauses and other system response degradations

2017-11-20 Thread Anton Vinogradov
Igniters,

Since no one rejected proposal, let's start from part one.

> I propose to add a special thread that will record current time every N
> milliseconds and check the difference with the latest recorded value.
> The maximum and total pause values for a certain period can be published
in
> the special metrics available through JMX.

On Fri, Nov 17, 2017 at 4:08 PM, Dmitriy_Sorokin 
wrote:

> Hi, Igniters!
>
> This discussion thread related to
> https://issues.apache.org/jira/browse/IGNITE-6171.
>
> Currently there are no JVM performance monitoring tools in AI, for example
> the impact of GC (eg STW) on the operation of the node. I think we should
> add this functionality.
>
> 1) It is useful to know that the STW duration increased, or that any other
> situation leads to similar consequences.
> This will allow system administrators to solve issues before they become
> problems.
>
> I propose to add a special thread that will record current time every N
> milliseconds and check the difference with the latest recorded value.
> The maximum and total pause values for a certain period can be published in
> the special metrics available through JMX.
>
> 2) If the pause reaches a critical value, we need to stop the node, without
> waiting for end of the pause.
>
> The thread (from the first part of the proposed solution) is able to
> estimate the pause duration, but only after its completion.
> So, we need an external thread (in another JVM or native) that is able to
> recognize that the pause duration has passed the critical mark.
>
> We can estimate (STW or similar) pause duration by
>  a) reading value updated by the first thread, somehow (eg via JMX, shmem
> or
> shared file)
>  or
>  b) by using JVM diagnostic tools. Does anybody know crossplatform
> solutions?
>
> Feel free to suggest ideas or tips, especially about second part of
> proposal.
>
> Thoughts?
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>
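
For reference, part one of the proposal above (a thread that records the current time every N milliseconds and checks the difference with the latest recorded value) can be sketched roughly as follows. All class names and thresholds are illustrative, not existing Ignite code.

```java
// Sketch of the proposed STW watchdog: a daemon thread samples the clock
// every INTERVAL_MS; any gap well beyond the interval is a possible
// stop-the-world pause. The max value could be exported via JMX.
public class StwWatchdog implements Runnable {
    static final long INTERVAL_MS = 10;    // sampling period
    static final long THRESHOLD_MS = 100;  // report pauses longer than this

    private volatile long maxPauseMs;      // max pause seen so far

    /** Extra delay beyond the expected sampling interval, in milliseconds. */
    static long extraPauseMillis(long prevTs, long nowTs, long intervalMs) {
        return Math.max(0, nowTs - prevTs - intervalMs);
    }

    @Override public void run() {
        long prev = System.currentTimeMillis();

        while (!Thread.currentThread().isInterrupted()) {
            try {
                Thread.sleep(INTERVAL_MS);
            }
            catch (InterruptedException e) {
                return;
            }

            long now = System.currentTimeMillis();
            long pause = extraPauseMillis(prev, now, INTERVAL_MS);

            if (pause >= THRESHOLD_MS) {
                maxPauseMs = Math.max(maxPauseMs, pause);
                System.out.println("Possible too long STW pause: " + pause + " milliseconds.");
            }

            prev = now;
        }
    }

    public long maxPauseMillis() { return maxPauseMs; }

    public static void main(String[] args) throws Exception {
        Thread t = new Thread(new StwWatchdog(), "stw-watchdog");
        t.setDaemon(true);
        t.start();
        Thread.sleep(200); // let the watchdog take a few samples
        t.interrupt();
    }
}
```

Note the limitation raised elsewhere in the thread: the sampling thread itself can stay in TIMED_WAITING longer than the interval for reasons unrelated to GC, which is why the log message above says "Possible too long STW pause" rather than asserting one.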


Ignite Logger & logging file config output

2017-11-20 Thread Alexey Popov
Hi Igniters,

Could you please advise why Ignite does not indicate 
1) the logger type it uses
2) the logger configuration file (name) it applies
during startup?

Can we add such output to IgniteLogger implementations?

Thanks,
Alexey



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


Re: IGNITE-6745. Status

2017-11-20 Thread Anton Vinogradov
Cergey,

Please assign https://issues.apache.org/jira/browse/IGNITE-6745 to yourself
and change status to Patch Available.
Also, before asking review, please check that TeamCity status is ok, see
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute#HowtoContribute-SubmittingforReview
for details.


On Sat, Nov 18, 2017 at 12:25 AM, Denis Magda  wrote:

> Igniters,
>
> Who is going to take a lead of Java 9 support and can do thorough review
> of all the related changes? Here is a set of the tickets and Cergey solved
> one of them:
> https://issues.apache.org/jira/browse/IGNITE-6728
>
> —
> Denis
>
> > On Nov 16, 2017, at 3:12 PM, Cergey  wrote:
> >
> > Hi, igniters
> >
> >
> >
> > Why no one commented on the patch and pull request
> > (https://github.com/apache/ignite/pull/2970) ?  What should I do ?
> >
> >
> >
> > Regards,
> >
> > Cergey Chaulin
> >
> >
> >
>
>


[jira] [Created] (IGNITE-6958) Reduce FilePageStore allocation on start

2017-11-20 Thread Alexander Belyak (JIRA)
Alexander Belyak created IGNITE-6958:


 Summary: Reduce FilePageStore allocation on start
 Key: IGNITE-6958
 URL: https://issues.apache.org/jira/browse/IGNITE-6958
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 2.1
Reporter: Alexander Belyak


On cache start, Ignite creates a FilePageStore for every partition in the 
CacheGroup, even for partitions that are never assigned to the particular node. 
See the FilePageStoreManager.initForCache method.





[GitHub] ignite pull request #3067: IGNITE-6955 Update com.google.code.simple-spring-...

2017-11-20 Thread apopovgg
GitHub user apopovgg opened a pull request:

https://github.com/apache/ignite/pull/3067

IGNITE-6955 Update com.google.code.simple-spring-memcached:spymemcach…

…ed to 2.8.4

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6955

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3067.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3067


commit 0b9b26af54ba1b7671ae5dc1cbbfd337817b7b51
Author: apopov 
Date:   2017-11-20T09:14:06Z

IGNITE-6955 Update com.google.code.simple-spring-memcached:spymemcached to 
2.8.4




---


[jira] [Created] (IGNITE-6957) Reduce excessive int boxing when accessing cache by ID

2017-11-20 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-6957:


 Summary: Reduce excessive int boxing when accessing cache by ID
 Key: IGNITE-6957
 URL: https://issues.apache.org/jira/browse/IGNITE-6957
 Project: Ignite
  Issue Type: Task
  Components: cache
Affects Versions: 2.3
Reporter: Alexey Goncharuk
 Fix For: 2.4


We have a number of places that lead to a large number of Integer allocations 
when there are many caches and partitions. See the attached image.
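
The allocations come from autoboxing int cache IDs into Integer map keys: by default Integer.valueOf only reuses cached instances in [-128, 127], while cache IDs are typically large hash values, so each lookup allocates. A primitive-keyed map avoids this; the IntMap below is a minimal illustrative sketch (not Ignite code) of that idea.

```java
// Minimal open-addressing int -> V map with linear probing: lookups take a
// primitive int key, so no Integer is allocated on the hot path. Sketch
// only: no resizing, so capacity must stay above the number of entries.
public class IntMap<V> {
    private final int[] keys;
    private final Object[] vals;   // null value marks an empty slot
    private final int mask;

    /** @param capacityPow2 table size; must be a power of two. */
    public IntMap(int capacityPow2) {
        keys = new int[capacityPow2];
        vals = new Object[capacityPow2];
        mask = capacityPow2 - 1;
    }

    public void put(int key, V val) {
        int i = key & mask;

        while (vals[i] != null && keys[i] != key)
            i = (i + 1) & mask; // linear probing

        keys[i] = key;
        vals[i] = val;
    }

    @SuppressWarnings("unchecked")
    public V get(int key) {
        int i = key & mask;

        while (vals[i] != null) {
            if (keys[i] == key)
                return (V)vals[i];

            i = (i + 1) & mask;
        }

        return null; // hit an empty slot: key absent
    }
}
```

Compare with HashMap<Integer, V>.get(cacheId), which boxes cacheId on every call once the ID leaves the small autobox cache range.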





[jira] [Created] (IGNITE-6956) Reduce excessive iterator allocations during GridDhtPartitionMap instantiation

2017-11-20 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-6956:


 Summary: Reduce excessive iterator allocations during 
GridDhtPartitionMap instantiation
 Key: IGNITE-6956
 URL: https://issues.apache.org/jira/browse/IGNITE-6956
 Project: Ignite
  Issue Type: Task
Affects Versions: 2.1
Reporter: Alexey Goncharuk
 Fix For: 2.4


We allocate an iterator on every map.values() call; see the attached allocation 
profile.
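
A common fix is to snapshot the map contents into plain arrays once at construction, so the hot path iterates with an index instead of allocating an Iterator per map.values() call. The sketch below is illustrative only, not the actual GridDhtPartitionMap code.

```java
import java.util.Map;

// Sketch: copy (partition -> state) entries into parallel arrays once, then
// scan them allocation-free. One iterator is allocated at construction
// instead of one per read on the hot path.
public class PartitionStates {
    private final int[] parts;
    private final byte[] states;

    public PartitionStates(Map<Integer, Byte> map) {
        parts = new int[map.size()];
        states = new byte[map.size()];

        int i = 0;

        // Single iteration at construction time.
        for (Map.Entry<Integer, Byte> e : map.entrySet()) {
            parts[i] = e.getKey();
            states[i] = e.getValue();
            i++;
        }
    }

    /** Allocation-free scan over all partition states. */
    public int countInState(byte state) {
        int cnt = 0;

        for (int i = 0; i < states.length; i++)
            if (states[i] == state)
                cnt++;

        return cnt;
    }
}
```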






[jira] [Created] (IGNITE-6955) Update com.google.code.simple-spring-memcached:spymemcached to 2.8.4

2017-11-20 Thread Alexey Popov (JIRA)
Alexey Popov created IGNITE-6955:


 Summary: Update 
com.google.code.simple-spring-memcached:spymemcached to 2.8.4
 Key: IGNITE-6955
 URL: https://issues.apache.org/jira/browse/IGNITE-6955
 Project: Ignite
  Issue Type: Improvement
  Components: examples
Affects Versions: 2.3
Reporter: Alexey Popov
Assignee: Alexey Popov
Priority: Minor


Please update com.google.code.simple-spring-memcached:spymemcached to version 
2.8.4.
This version does not have "netty" dependencies.





[jira] [Created] (IGNITE-6954) Baseline should throw appropriate exception in case of --baseline version OLD_VERSION

2017-11-20 Thread Alexey Kuznetsov (JIRA)
Alexey Kuznetsov created IGNITE-6954:


 Summary: Baseline should throw appropriate exception in case of 
--baseline version OLD_VERSION
 Key: IGNITE-6954
 URL: https://issues.apache.org/jira/browse/IGNITE-6954
 Project: Ignite
  Issue Type: Bug
Reporter: Alexey Kuznetsov
Assignee: Sergey Chugunov
 Fix For: 2.4


Steps to reproduce:
# Start node.
# Activate it (it will create baseline).
# Start one more node and add it to baseline.
# Execute: control.sh --baseline version 1

This should throw an appropriate exception. 





Re: SSL for ODBC connection

2017-11-20 Thread Igor Sapego
Ok, then how about the following set of options:

ssl_enabled=[true|false]
ssl_key_file=
ssl_cert_file=


Best Regards,
Igor

On Tue, Nov 14, 2017 at 5:21 PM, Vladimir Ozerov 
wrote:

> I think it would be enough to have a single switch for now.
>
> On Tue, Nov 7, 2017 at 10:04 PM, Denis Magda  wrote:
>
> > Igor,
> >
> > Thanks for the clarification. Please file a ticket if nobody else shares
> a
> > feedback soon.
> >
> > —
> > Denis
> >
> > > On Nov 7, 2017, at 1:23 AM, Igor Sapego  wrote:
> > >
> > > Hi Denis,
> > >
> > >> Could you explain the difference between “allow, prefer and require”
> > > modes?
> > > allow - Client will first try connecting without SSL, and then fallback
> > to
> > > SSL if it is not allowed to connect without SSL;
> > > prefer - Client will first try connecting using SSL, and then fallback
> to
> > > non-SSL if SSL is not supported by the server;
> > > require - Client will only connect using SSL and return error if failed
> > > to
> > > successfully do so.
> > >
> > >> BTW, do we really need to have the “disable” one? Guess that having
> > > ssl_mode set to “disable” will have the same effect as not setting the
> > > ssl_mode at all.
> > > This is the matter of the default value of the ssl_mode option. The way
> > you
> > > propose it means that you still has "disable" option, it is just is not
> > > explicit.
> > >
> > > Best Regards,
> > > Igor
> > >
> > > On Fri, Nov 3, 2017 at 10:35 PM, Denis Magda 
> wrote:
> > >
> > >> Hi Igor,
> > >>
> > >> Could you explain the difference between “allow, prefer and require”
> > modes?
> > >>
> > >> BTW, do we really need to have the “disable” one? Guess that having
> > >> ssl_mode set to “disable” will have the same effect as not setting the
> > >> ssl_mode at all.
> > >>
> > >> —
> > >> Denis
> > >>
> > >>> On Nov 3, 2017, at 9:04 AM, Igor Sapego  wrote:
> > >>>
> > >>> Hi, Igniters,
> > >>>
> > >>> I'm going to start working on the SSL support for the ODBC
> > >>> connection and I need to hear your opinion.
> > >>>
> > >>> For the client side I'm going to use OpenSSL library [1], which is
> > >>> standard de-facto for C/C++ applications. Unfortunately its
> > >>> licence is not fully compatible with Apache Licence, so its going
> > >>> to require from users to install OpenSSL themselves.
> > >>>
> > >>> For the driver I'm going to add following options to connection
> > >>> string:
> > >>> ssl_mode - Determines whether or with what priority a SSL
> > >>>   connection will be negotiated with the server. Options
> > >>>   here are disable, allow, prefer, require.
> > >>> ssl_key_file - Path to the location for the secret key used for the
> > >>>   client certificate.
> > >>> ssl_cert_file - Path to the file of the client SSL certificate.
> > >>>
> > >>> If the ssl_mode is not set to "disable" then ODBC driver will
> > >>> attempt to find and load OpenSSL library before establishing
> > >>> connection.
> > >>>
> > >>> For the server side there is already SslContextFactory in the
> > >>> IgniteConfiguration, which is used by all components to determine
> > >>> if the SSL enabled and to figure out connection parameters, so
> > >>> I think it's a good idea to just re-use it for the
> > >> ClientListenerProcessor.
> > >>>
> > >>> What do you guys think?
> > >>>
> > >>> [1] - https://www.openssl.org
> > >>>
> > >>> Best Regards,
> > >>> Igor
> > >>
> > >>
> >
> >
>
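
The ssl_mode values discussed above reduce to an order of connection attempts with a fallback rule. The driver itself is C++, but the semantics can be captured in a short Java sketch; the mode names come from the thread, while the method and constants are illustrative.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Illustrative mapping from ssl_mode to the order of connection attempts:
//   disable -> plain only
//   allow   -> plain first, fall back to SSL if plain is rejected
//   prefer  -> SSL first, fall back to plain if SSL is unsupported
//   require -> SSL only; fail otherwise
public class SslMode {
    public static List<String> attemptOrder(String sslMode) {
        switch (sslMode) {
            case "disable": return Collections.singletonList("PLAIN");
            case "allow":   return Arrays.asList("PLAIN", "SSL");
            case "prefer":  return Arrays.asList("SSL", "PLAIN");
            case "require": return Collections.singletonList("SSL");
            default: throw new IllegalArgumentException("Unknown ssl_mode: " + sslMode);
        }
    }
}
```

A boolean ssl_enabled switch, as proposed at the top of this message, keeps only the "disable" and "require" rows of this table.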


[GitHub] ignite pull request #2974: Ignite 2.1.7.b1

2017-11-20 Thread kdudkov
Github user kdudkov closed the pull request at:

https://github.com/apache/ignite/pull/2974


---


Re: Stop Ignite opening 47500 and 47100

2017-11-20 Thread karthik
I have been trying to implement my own discovery SPI and communication SPI,
but I am unable to achieve it without errors. I just need the Ignite cache.
It would be helpful if you could provide the code, or at least mention which
classes I need to change and where.
Thanks in advance.



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


[GitHub] ignite pull request #2774: IGNITE-6416

2017-11-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/2774


---