Re: Ignite ML withKeepBinary cache

2019-01-09 Thread Alexey Zinoviev
Oscar, great use case. ML-related developers (me included) will be happy to
help you with it.

Could you please answer three questions?

1) What kinds of types (Java types or SQL types) can the column
properties contain?
2) Do all data series (observations) have the same schema, with the same
number of columns and the same types in them?
3) Could you provide an obfuscated data example in CSV or another easily
readable format to make our experiments more efficient?

Also, if you have any issues related to algorithm usage (what to use, how
to calibrate features, etc.), write to the user list with me in copy.
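
For context, a minimal sketch of the BinaryObject access pattern that a feature
extractor could build on; the cache name "observations" and the field names are
placeholders, not Oscar's actual schema:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.binary.BinaryObject;

    public class BinaryFeatureRead {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                // Keep values in binary form; no value classes are needed on this node.
                IgniteCache<Integer, BinaryObject> cache =
                    ignite.getOrCreateCache("observations").withKeepBinary();

                BinaryObject row = cache.get(1);
                if (row != null) {
                    // Field names below are placeholders for the real column properties.
                    double feature0 = row.<Double>field("col0");
                    double label = row.<Double>field("label");
                    System.out.println(feature0 + " -> " + label);
                }
            }
        }
    }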

Alex

Thu, Jan 3, 2019 at 22:59, otorreno :

> Denis,
>
> That's great news! I will wait till your ML expert is back from holidays to
> work with him on a clean solution.
>
> Regarding the blog post, sure, writing about how to use Ignite ML with
> BinaryObject caches could be interesting and useful, IMHO.
>
> Thanks,
> Oscar
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


Re: How to free up space on disc after removing entries from IgniteCache with enabled PDS?

2019-01-09 Thread Павлухин Иван
Vyacheslav,

Have you investigated how other vendors (Oracle, Postgres) tackle this problem?

I have one wild idea. Could the problem be solved by stopping the node
that needs to be defragmented, clearing its persistence files and
restarting it? After rebalancing, the node will receive all of its data
back without fragmentation. I see a big downside -- sending data
across the network. But perhaps we can play with affinity and start a
new node on the same host which will receive the same data; after that
the old node can be stopped. It looks more like a workaround, but
perhaps it can be turned into a workable solution.

Wed, Jan 9, 2019 at 10:49, Vyacheslav Daradur :
>
> Yes, it's about Page Memory defragmentation.
>
> Pages in partition files are stored sequentially; possibly it makes
> sense to defragment pages first to avoid inter-page gaps, since we use
> page offsets to manage them.
>
> I filed an issue [1]; I hope we will be able to find resources to
> solve it before the 2.8 release.
>
> [1] https://issues.apache.org/jira/browse/IGNITE-10862
>
> On Sat, Dec 29, 2018 at 10:47 AM Павлухин Иван  wrote:
> >
> > I suppose it is about Ignite Page Memory pages defragmentation.
> >
> > We can get 100 allocated pages, each of which becomes only e.g. 50%
> > filled after removing some entries. But they will still occupy the space
> > of 100 pages on the hard drive.
> >
> > пт, 28 дек. 2018 г. в 20:45, Denis Magda :
> > >
> > > Shouldn't the OS take care of defragmentation? What we need to do is to give a
> > > way to remove stale data and "release" the allocated space somehow through
> > > the tools, MBeans or API methods.
> > >
> > > --
> > > Denis
> > >
> > >
> > > On Fri, Dec 28, 2018 at 6:24 AM Vladimir Ozerov 
> > > wrote:
> > >
> > > > Hi Vyacheslav,
> > > >
> > > > AFAIK this is not implemented. Shrinking/defragmentation is an important
> > > > optimization, not only because it releases free space, but also because it
> > > > decreases the total number of pages. But it is not very easy to implement,
> > > > as you have to reshuffle both data entries and index entries, maintaining
> > > > consistency for concurrent reads and updates at the same time. Alternatively,
> > > > we can think of offline defragmentation. It would be easier to implement
> > > > and faster, but concurrent operations would be prohibited.
> > > >
> > > > On Fri, Dec 28, 2018 at 4:08 PM Vyacheslav Daradur 
> > > > wrote:
> > > >
> > > > > Igniters, we have faced the following problem on one of our
> > > > > deployments.
> > > > >
> > > > > Let's imagine that we have used an IgniteCache with PDS enabled over
> > > > > time:
> > > > > - the occupied disk space grew along with the amount of data,
> > > > > e.g. to 100 GB;
> > > > > - then we removed stale data, e.g. 50 GB, which had become useless
> > > > > for us;
> > > > > - disk usage stopped growing with new data, but the space was not
> > > > > released and still took 100 GB instead of the expected 50 GB.
> > > > >
> > > > > Another use case:
> > > > > - a user extracts data from an IgniteCache to store it in a separate
> > > > > IgniteCache or another store;
> > > > > - the disk is still occupied and the user is not able to store the data
> > > > > in the other cache on the same cluster because of the disk limitation.
> > > > >
> > > > > How can we help the user free up the disk space, if the amount of
> > > > > data in the IgniteCache has been reduced many times over and will not
> > > > > increase in the near future?
> > > > >
> > > > > AFAIK, we have a mechanism for reusing memory pages that allows us to
> > > > > reuse pages which were allocated for since-removed data to store new
> > > > > data.
> > > > > Is there any chance to shrink the data and free up space on disk (with
> > > > > defragmentation if possible)?
> > > > >
> > > > > --
> > > > > Best Regards, Vyacheslav D.
> > > > >
> > > >
> >
> >
> >
> > --
> > Best regards,
> > Ivan Pavlukhin
>
>
>
> --
> Best Regards, Vyacheslav D.



-- 
Best regards,
Ivan Pavlukhin


Re: Apache Ignite Codebase has been migrated to GitBox

2019-01-09 Thread Anton Vinogradov
>> Committers can set up account matching between GitHub and Apache. After
>> this setup, you can merge pull requests from GitHub Web UI

That's awesome!

On Mon, Dec 31, 2018 at 1:49 AM Dmitriy Pavlov  wrote:

> Dear Ignite Developers,
>
> I would like to repeat the announcement from the other thread with more details:
>
> Apache Ignite *New* Git Remotes for repositories hosted on git-wip-us:
> https://gitbox.apache.org/repos/asf/ignite.git
> https://gitbox.apache.org/repos/asf/ignite-release.git
>
> Mirrors are the same:
> https://github.com/apache/ignite/
> https://github.com/apache/ignite-release/
>
> Ignite-doc has been deleted as it has no commits. The PMC can create new
> additional repositories at https://selfserve.apache.org/
>
> Committers can set up account matching between GitHub and Apache. After
> this setup, you can merge pull requests from the GitHub Web UI and commit files
> directly to the master branch via GitHub. You can run this setup here:
> https://gitbox.apache.org/setup/
>
> Sincerely,
> Dmitriy Pavlov
>


[jira] [Created] (IGNITE-10863) Fix incorrect links to git repositories.

2019-01-09 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10863:
-

 Summary: Fix incorrect links to git repositories.
 Key: IGNITE-10863
 URL: https://issues.apache.org/jira/browse/IGNITE-10863
 Project: Ignite
  Issue Type: Task
  Components: general
Reporter: Andrew Mashenkov
 Fix For: 2.8


We have to refresh the outdated links to the project git repository:

https://cwiki.apache.org/confluence/display/IGNITE/
https://ignite.apache.org/community/resources.html#git



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSSION] Relocation of Apache git repositories from git-wip-us.apache.org to GitBox

2019-01-09 Thread Andrey Mashenkov
Hi Dmitry,

Looks like the repository has been transferred to GitBox,
but for now we still have incorrect links on the wiki and the Ignite web site.

I've created a ticket [1]. Can someone fix this?

https://issues.apache.org/jira/browse/IGNITE-10863

On Sun, Dec 30, 2018 at 1:28 AM Dmitriy Pavlov  wrote:

> Hi Igniters,
>
> Thanks to Peter and Alexey for sharing their opinion.
>
> I've filed https://issues.apache.org/jira/browse/INFRA-17516 to migrate.
>
> Sincerely,
> Dmitriy Pavlov
>
> Thu, Dec 13, 2018 at 21:47, Peter Ivanov :
>
> > There will be no problem with TC, as we use GitHub as the main VCS root,
> > using ASF only for releases (which I will reconfigure during the release
> > build refactoring and optimisations for AI 2.8).
> >
> >
> > > On 11 Dec 2018, at 19:25, Alexey Goncharuk  >
> > wrote:
> > >
> > > Given that there is no option to stay on the old repository and
> > > mass-migration is scheduled for Feb 7th, I think it is better to prepare
> > > and move the repository before that date.
> > >
> > > As far as I understand, from a developer's standpoint we only need to
> > > change the git remote entry. Petr, Sergey, can you assess what changes
> > > need to be done in order to support this migration for TC and the release
> > > procedure?
> > >
> > > вт, 11 дек. 2018 г. в 19:20, Dmitriy Pavlov   > dpav...@apache.org>>:
> > > Hi All,
> > >
> > > The Apache Ignite project still has repositories on git-wip-us for its
> > > code, as well as for releases.
> > >
> > > Does anyone have a problem with moving over the Ignite git-wip-us
> > > repository to gitbox voluntarily? This means integrated access and easy
> > PRs
> > > (write access to the GitHub repo).
> > >
> > > We need to document support for the decision from a mailing list post,
> so
> > > here it is.
> > >
> > > Sincerely,
> > > Dmitriy Pavlov
> > >
> > > сб, 8 дек. 2018 г. в 00:16, Dmitriy Pavlov   > dpav...@apache.org>>:
> > >
> > > > Hi Igniters,
> > > >
> > > > What do you think about the voluntary migration of project
> repositories
> > > > from git-wip to GitBox? (See details in the forwarded email.)
> > > >
> > > > Redirection of emails originated by PR actions to notifications@
> > seems to
> > > > be almost finished, so email flood issue seems to be already solved
> > for dev@
> > > > .
> > > >
> > > > It is up to us to decide if we would like to migrate sooner. Later
> all
> > > > repositories will be migrated anyway.
> > > >
> > > > Affected repositories
> > > > - Ignite Release
> > > > - Ignite
> > > >
> > > > Please share your opinion.
> > > >
> > > > Sincerely,
> > > > Dmitriy Pavlov
> > > >
> > > > -- Forwarded message -
> > > > From: Daniel Gruno  humbed...@apache.org
> > >>
> > > > Date: пт, 7 дек. 2018 г. в 19:52
> > > > Subject: [NOTICE] Mandatory relocation of Apache git repositories on
> > > > git-wip-us.apache.org 
> > > > To: us...@infra.apache.org  <
> > us...@infra.apache.org >
> > > >
> > > >
> > > > [IF YOUR PROJECT DOES NOT HAVE GIT REPOSITORIES ON GIT-WIP-US PLEASE
> > > >   DISREGARD THIS EMAIL; IT WAS MASS-MAILED TO ALL APACHE PROJECTS]
> > > >
> > > > Hello Apache projects,
> > > >
> > > > I am writing to you because you may have git repositories on the
> > > > git-wip-us server, which is slated to be decommissioned in the coming
> > > > months. All repositories will be moved to the new gitbox service
> which
> > > > includes direct write access on github as well as the standard ASF
> > > > commit access via gitbox.apache.org .
> > > >
> > > > ## Why this move? ##
> > > > The move comes as a result of retiring the git-wip service, as the
> > > > hardware it runs on is longing for retirement. In lieu of this, we
> > > > have decided to consolidate the two services (git-wip and gitbox), to
> > > > ease the management of our repository systems and future-proof the
> > > > underlying hardware. The move is fully automated, and ideally,
> nothing
> > > > will change in your workflow other than added features and access to
> > > > GitHub.
> > > >
> > > > ## Timeframe for relocation ##
> > > > Initially, we are asking that projects voluntarily request to move
> > > > their repositories to gitbox, hence this email. The voluntary
> > > > timeframe is between now and January 9th 2019, during which projects
> > > > are free to either move over to gitbox or stay put on git-wip. After
> > > > this phase, we will be requiring the remaining projects to move
> within
> > > > one month, after which we will move the remaining projects over.
> > > >
> > > > To have your project moved in this initial phase, you will need:
> > > >
> > > > - Consensus in the project (documented via the mailing list)
> > > > - File a JIRA ticket with INFRA to voluntarily move your project
> repos
> > > >over to gitbox (as stated, this is highly automated and will take
> > > >between a minute and an hour, depending on the size and number of
> 

Re: After upgrading 2.7 getting Unexpected error occurred during unmarshalling

2019-01-09 Thread Akash Shinde
Added dev@ignite.apache.org.

Should I log a Jira issue for this?

Thanks,
Akash



On Tue, Jan 8, 2019 at 6:16 PM Akash Shinde  wrote:

> Hi,
>
> No both nodes, client and server are running on Ignite 2.7 version. I am
> starting both server and client from Intellij IDE.
>
> Version printed in Server node log:
> Ignite ver. 2.7.0#20181201-sha1:256ae4012cb143b4855b598b740a6f3499ead4db
>
> Version in client node log:
> Ignite ver. 2.7.0#20181201-sha1:256ae4012cb143b4855b598b740a6f3499ead4db
>
> Thanks,
> Akash
>
> On Tue, Jan 8, 2019 at 5:18 PM Mikael  wrote:
>
>> Hi!
>>
>> Any chance you might have one node running 2.6 or something like that?
>>
>> It looks like it gets a different object that does not match the one
>> expected in 2.7.
>>
>> Mikael
>> On 2019-01-08 at 12:21, Akash Shinde wrote:
>>
>> Before submitting the affinity task, Ignite first gets the cached affinity
>> function (AffinityInfo) by submitting the cluster-wide task "AffinityJob".
>> While retrieving the output of this AffinityJob, Ignite deserializes it,
>> and I am getting an exception during that deserialization.
>> In the TcpDiscoveryNode.readExternal() method, while deserializing the
>> CacheMetrics object from the input stream on the 14th iteration, I get the
>> following exception. The complete stack trace is given in this mail chain.
>>
>> Caused by: java.io.IOException: Unexpected error occurred during
>> unmarshalling of an instance of the class:
>> org.apache.ignite.internal.processors.cache.CacheMetricsSnapshot.
>>
>> This works fine on Ignite 2.6 but causes a problem on 2.7.
>>
>> Is this a bug or am I doing something wrong?
>>
>> Can someone please help?
>>
>> On Mon, Jan 7, 2019 at 9:41 PM Akash Shinde 
>> wrote:
>>
>>> Hi,
>>>
>>> When executing affinity.partition(key), I am getting the following exception
>>> on Ignite 2.7.
>>>
>>> Stacktrace:
>>>
>>> 2019-01-07 21:23:03,093 6699878 [mgmt-#67%springDataNode%] ERROR
>>> o.a.i.i.p.task.GridTaskWorker - Error deserializing job response:
>>> GridJobExecuteResponse [nodeId=c0c832cb-33b0-4139-b11d-5cafab2fd046,
>>> sesId=4778e982861-31445139-523d-4d44-b071-9ca1eb2d73df,
>>> jobId=5778e982861-31445139-523d-4d44-b071-9ca1eb2d73df, gridEx=null,
>>> isCancelled=false, retry=null]
>>> org.apache.ignite.IgniteCheckedException: Failed to unmarshal object
>>> with optimized marshaller
>>>  at
>>> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10146)
>>>  at
>>> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:831)
>>>  at
>>> org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1081)
>>>  at
>>> org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1316)
>>>  at
>>> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
>>>  at
>>> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)
>>>  at
>>> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
>>>  at
>>> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093)
>>>  at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>  at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>  at java.lang.Thread.run(Thread.java:748)
>>> Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to
>>> unmarshal object with optimized marshaller
>>>  at
>>> org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1765)
>>>  at
>>> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1964)
>>>  at
>>> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
>>>  at
>>> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:313)
>>>  at
>>> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:102)
>>>  at
>>> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
>>>  at
>>> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10140)
>>>  ... 10 common frames omitted
>>> Caused by: org.apache.ignite.IgniteCheckedException: Failed to
>>> deserialize object with given class loader:
>>> [clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2, err=Failed to
>>> deserialize object
>>> [typeName=org.apache.ignite.internal.util.lang.GridTuple3]]
>>>  at
>>> org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller.unmarshal0(OptimizedMarshaller.java:237)
>>>  at
>>> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
>>>  at
>>> org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(Bina

Re: Apache Ignite Codebase has been migrated to GitBox

2019-01-09 Thread Anton Vinogradov
It seems the next step is to integrate the TC Bot with GitHub.
It would be nice to have the ability to ask the bot to check a PR right
from the PR.

On Wed, Jan 9, 2019 at 12:04 PM Anton Vinogradov  wrote:

> >> Committers can set up account matching between GitHub and Apache. After
> >> this setup, you can merge pull requests from GitHub Web UI
>
> That's awesome!
>
> On Mon, Dec 31, 2018 at 1:49 AM Dmitriy Pavlov  wrote:
>
>> Dear Ignite Developers,
>>
>> I would like to repeat announce from other thread with more details:
>>
>> Apache Ignite *New* Git Remotes for repositories hosted on git-wip-us:
>> https://gitbox.apache.org/repos/asf/ignite.git
>> https://gitbox.apache.org/repos/asf/ignite-release.git
>>
>> Mirrors are the same:
>> https://github.com/apache/ignite/
>> https://github.com/apache/ignite-release/
>>
>> Ignite-doc has been deleted as it has no commits. PMC can create new
>> additional repositories https://selfserve.apache.org/
>>
>> Committers can set up account matching between GitHub and Apache. After
>> this setup, you can merge pull requests from GitHub Web UI and commit
>> files
>> directly to master branch via GitHub. You can run this setup here
>> https://gitbox.apache.org/setup/
>>
>> Sincerely,
>> Dmitriy Pavlov
>>
>


[jira] [Created] (IGNITE-10864) JDK11: fix check jdk version at the ignite.bat

2019-01-09 Thread Taras Ledkov (JIRA)
Taras Ledkov created IGNITE-10864:
-

 Summary: JDK11: fix check jdk version at the ignite.bat
 Key: IGNITE-10864
 URL: https://issues.apache.org/jira/browse/IGNITE-10864
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.7
Reporter: Taras Ledkov
Assignee: Taras Ledkov
 Fix For: 2.8


The JDK version check in ignite.bat is invalid.
*Root cause:* a trailing space in the parsed JDK version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Fwd: The ASF needs your help! "The Apache Way To Me..."

2019-01-09 Thread Dmitriy Pavlov
Hi Igniters,

Please find a free minute and fill in the survey:

https://s.apache.org/oxTr   DEADLINE: 15 February 2019

Sincerely,
Dmitriy Pavlov

-- Forwarded message -
From: Sally Khudairi 
Date: Wed, Jan 9, 2019 at 09:13
Subject: Fwd: The ASF needs your help! "The Apache Way To Me..."
To: ASF Marketing & Publicity 


Hello Apache PMCs (in blind copy) -- I hope you are all well.

For those of you who missed the email to committers@, we'd love your
cooperation with this project.

Your help is much appreciated!

Warm thanks,
Sally

- - -
Vice President Marketing & Publicity
Vice President Sponsor Relations
The Apache Software Foundation

Tel +1 617 921 8656 <(617)%20921-8656> | s...@apache.org

- Original message -
From: Sally Khudairi 
To: committ...@apache.org
Subject: The ASF needs your help! "The Apache Way To Me..."
Date: Wed, 09 Jan 2019 00:02:08 -0500

Hello fellow ASF Committers -- the ASF needs your help.

We'll be celebrating our 20th Anniversary at the end of March!

As you know, the key to our success of stewarding 300+ Apache projects is
community-led development "The Apache Way".

And as Apache Committers, you are entrusted to help educate and propagate
The Apache Way across your projects and their communities. We would love to
hear your thoughts and experiences on The Apache Way by taking the short
survey below.

https://s.apache.org/oxTr  DEADLINE: 15 February 2019

Please feel free to share this questionnaire with others as broadly as
possible. We'll also be announcing to the PMCs and promoting across our
social media channels.

Keep an eye out for other Anniversary details to be shared in the coming
months.

Thank you in advance for your help with this. As always, feel free to let
me know if you have any questions.

Warmest regards,
Sally

- - -
Vice President Marketing & Publicity
Vice President Sponsor Relations
The Apache Software Foundation

Tel +1 617 921 8656 <(617)%20921-8656> | s...@apache.org


Re: After upgrading 2.7 getting Unexpected error occurred during unmarshalling

2019-01-09 Thread Ilya Kasnacheev
Hello!

Do you have a reproducer project to reliably confirm this issue?

Regards,
-- 
Ilya Kasnacheev


Wed, Jan 9, 2019 at 12:39, Akash Shinde :

> Added  dev@ignite.apache.org.
>
> Should I log Jira for this issue?
>
> Thanks,
> Akash
>
>
>
> On Tue, Jan 8, 2019 at 6:16 PM Akash Shinde  wrote:
>
> > Hi,
> >
> > No both nodes, client and server are running on Ignite 2.7 version. I am
> > starting both server and client from Intellij IDE.
> >
> > Version printed in Server node log:
> > Ignite ver. 2.7.0#20181201-sha1:256ae4012cb143b4855b598b740a6f3499ead4db
> >
> > Version in client node log:
> > Ignite ver. 2.7.0#20181201-sha1:256ae4012cb143b4855b598b740a6f3499ead4db
> >
> > Thanks,
> > Akash
> >
> > On Tue, Jan 8, 2019 at 5:18 PM Mikael  wrote:
> >
> >> Hi!
> >>
> >> Any chance you might have one node running 2.6 or something like that ?
> >>
> >> It looks like it get a different object that does not match the one
> >> expected in 2.7
> >>
> >> Mikael
> >> Den 2019-01-08 kl. 12:21, skrev Akash Shinde:
> >>
> >> Before submitting the affinity task ignite first gets the affinity
> cached
> >> function (AffinityInfo) by submitting the cluster wide task
> "AffinityJob".
> >> But while in the process of retrieving the output of this AffinityJob,
> >> ignite deserializes this output. I am getting exception while
> deserailizing
> >> this output.
> >> In TcpDiscoveryNode.readExternal() method while deserailizing the
> >> CacheMetrics object from input stream on 14th iteration I am getting
> >> following exception. Complete stack trace is given in this mail chain.
> >>
> >> Caused by: java.io.IOException: Unexpected error occurred during
> >> unmarshalling of an instance of the class:
> >> org.apache.ignite.internal.processors.cache.CacheMetricsSnapshot.
> >>
> >> This is working fine on Ignite 2.6 version but giving problem on 2.7.
> >>
> >> Is this a bug or am I doing something wrong?
> >>
> >> Can someone please help?
> >>
> >> On Mon, Jan 7, 2019 at 9:41 PM Akash Shinde 
> >> wrote:
> >>
> >>> Hi,
> >>>
> >>> When execute affinity.partition(key), I am getting following exception
> >>> on Ignite  2.7.
> >>>
> >>> Stacktrace:
> >>>
> >>> 2019-01-07 21:23:03,093 6699878 [mgmt-#67%springDataNode%] ERROR
> >>> o.a.i.i.p.task.GridTaskWorker - Error deserializing job response:
> >>> GridJobExecuteResponse [nodeId=c0c832cb-33b0-4139-b11d-5cafab2fd046,
> >>> sesId=4778e982861-31445139-523d-4d44-b071-9ca1eb2d73df,
> >>> jobId=5778e982861-31445139-523d-4d44-b071-9ca1eb2d73df, gridEx=null,
> >>> isCancelled=false, retry=null]
> >>> org.apache.ignite.IgniteCheckedException: Failed to unmarshal object
> >>> with optimized marshaller
> >>>  at
> >>>
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10146)
> >>>  at
> >>>
> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:831)
> >>>  at
> >>>
> org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1081)
> >>>  at
> >>>
> org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1316)
> >>>  at
> >>>
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
> >>>  at
> >>>
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)
> >>>  at
> >>>
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
> >>>  at
> >>>
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093)
> >>>  at
> >>>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> >>>  at
> >>>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> >>>  at java.lang.Thread.run(Thread.java:748)
> >>> Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to
> >>> unmarshal object with optimized marshaller
> >>>  at
> >>>
> org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1765)
> >>>  at
> >>>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1964)
> >>>  at
> >>>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
> >>>  at
> >>>
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:313)
> >>>  at
> >>>
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:102)
> >>>  at
> >>>
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
> >>>  at
> >>>
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10140)
> >>>  ... 10 common frames omitted
> >>> Caused by: org.apache.ignite.IgniteCheckedException: Failed to
> >>> deserialize object with given class loader:
> >>> [clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2, err=Failed 

[jira] [Created] (IGNITE-10865) [ML] [Umbrella] Integration with Spark ML

2019-01-09 Thread Aleksey Zinoviev (JIRA)
Aleksey Zinoviev created IGNITE-10865:
-

 Summary: [ML] [Umbrella] Integration with Spark ML
 Key: IGNITE-10865
 URL: https://issues.apache.org/jira/browse/IGNITE-10865
 Project: Ignite
  Issue Type: New Feature
  Components: ml
Affects Versions: 2.8
Reporter: Aleksey Zinoviev
Assignee: Aleksey Zinoviev
 Fix For: 2.8


Investigate how to load ML models from Spark



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSSION] Relocation of Apache git repositories from git-wip-us.apache.org to GitBox

2019-01-09 Thread Dmitriy Pavlov
Done for the wiki; we still need to update the site.

Wed, Jan 9, 2019 at 12:28, Andrey Mashenkov :

> Hi Dmitry,
>
> Looks like repository has been transferred to GitBox,
> but for now we still have incorrect links on wiki and ignite web site.
>
> I've created a ticket [1].  Can someone fix this?
>
> https://issues.apache.org/jira/browse/IGNITE-10863
>
> On Sun, Dec 30, 2018 at 1:28 AM Dmitriy Pavlov  wrote:
>
>> Hi Igniters,
>>
>> Thanks to Peter and Alexey for sharing their opinion.
>>
>> I've filed https://issues.apache.org/jira/browse/INFRA-17516 to migrate.
>>
>> Sincerely,
>> Dmitriy Pavlov
>>
>> чт, 13 дек. 2018 г. в 21:47, Peter Ivanov :
>>
>> > There will be no problem with TC, as we use GitHub as main VCS root,
>> using
>> > ASF only for release (which I will reconfigure during release builds
>> > refactoring and optimisations for AI 2.8).
>> >
>> >
>> > > On 11 Dec 2018, at 19:25, Alexey Goncharuk <
>> alexey.goncha...@gmail.com>
>> > wrote:
>> > >
>> > > Given that there is no option to stay on the old repository and
>> > mass-migration is scheduled for Feb, 7th, I think it is better to
>> prepare
>> > and move the repository before the date.
>> > >
>> > > As far as I understand, from developers standpoint we only need to
>> > change the git remote entry. Petr, Sergey, can you asses what changes
>> are
>> > need to be done in order to support this migration for TC and release
>> > procedure?
>> > >
>> > > вт, 11 дек. 2018 г. в 19:20, Dmitriy Pavlov > > > dpav...@apache.org>>:
>> > > Hi All,
>> > >
>> > > The Apache Ignite still has a repository on git-wip-us for its code,
>> as
>> > > well, for release.
>> > >
>> > > Does anyone have a problem with moving over the Ignite git-wip-us
>> > > repository to gitbox voluntarily? This means integrated access and
>> easy
>> > PRs
>> > > (write access to the GitHub repo).
>> > >
>> > > We need to document support for the decision from a mailing list
>> post, so
>> > > here it is.
>> > >
>> > > Sincerely,
>> > > Dmitriy Pavlov
>> > >
>> > > сб, 8 дек. 2018 г. в 00:16, Dmitriy Pavlov > > > dpav...@apache.org>>:
>> > >
>> > > > Hi Igniters,
>> > > >
>> > > > What do you think about the voluntary migration of project
>> repositories
>> > > > from git-wip to GitBox? (See details in the forwarded email.)
>> > > >
>> > > > Redirection of emails originated by PR actions to notifications@
>> > seems to
>> > > > be almost finished, so email flood issue seems to be already solved
>> > for dev@
>> > > > .
>> > > >
>> > > > It is up to us to decide if we would like to migrate sooner. Later
>> all
>> > > > repositories will be migrated anyway.
>> > > >
>> > > > Affected repositories
>> > > > - Ignite Release
>> > > > - Ignite
>> > > >
>> > > > Please share your opinion.
>> > > >
>> > > > Sincerely,
>> > > > Dmitriy Pavlov
>> > > >
>> > > > -- Forwarded message -
>> > > > From: Daniel Gruno > humbed...@apache.org
>> > >>
>> > > > Date: пт, 7 дек. 2018 г. в 19:52
>> > > > Subject: [NOTICE] Mandatory relocation of Apache git repositories on
>> > > > git-wip-us.apache.org 
>> > > > To: us...@infra.apache.org  <
>> > us...@infra.apache.org >
>> > > >
>> > > >
>> > > > [IF YOUR PROJECT DOES NOT HAVE GIT REPOSITORIES ON GIT-WIP-US PLEASE
>> > > >   DISREGARD THIS EMAIL; IT WAS MASS-MAILED TO ALL APACHE PROJECTS]
>> > > >
>> > > > Hello Apache projects,
>> > > >
>> > > > I am writing to you because you may have git repositories on the
>> > > > git-wip-us server, which is slated to be decommissioned in the
>> coming
>> > > > months. All repositories will be moved to the new gitbox service
>> which
>> > > > includes direct write access on github as well as the standard ASF
>> > > > commit access via gitbox.apache.org .
>> > > >
>> > > > ## Why this move? ##
>> > > > The move comes as a result of retiring the git-wip service, as the
>> > > > hardware it runs on is longing for retirement. In lieu of this, we
>> > > > have decided to consolidate the two services (git-wip and gitbox),
>> to
>> > > > ease the management of our repository systems and future-proof the
>> > > > underlying hardware. The move is fully automated, and ideally,
>> nothing
>> > > > will change in your workflow other than added features and access to
>> > > > GitHub.
>> > > >
>> > > > ## Timeframe for relocation ##
>> > > > Initially, we are asking that projects voluntarily request to move
>> > > > their repositories to gitbox, hence this email. The voluntary
>> > > > timeframe is between now and January 9th 2019, during which projects
>> > > > are free to either move over to gitbox or stay put on git-wip. After
>> > > > this phase, we will be requiring the remaining projects to move
>> within
>> > > > one month, after which we will move the remaining projects over.
>> > > >
>> > > > To have your project moved in this initial phase, you will need:
>> > > >
>> > > > - Consensus in the pr

Re: How to free up space on disc after removing entries from IgniteCache with enabled PDS?

2019-01-09 Thread Dmitriy Pavlov
In the TC Bot, I used to create a second cache named CacheV2 and
migrate the needed data from CacheV1 to V2.

After CacheV1.destroy(), the files are removed and the disk space is freed.
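
For illustration, a minimal sketch of this copy-and-destroy workaround; the
cache names and key/value types are placeholders:

    import javax.cache.Cache;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.IgniteDataStreamer;
    import org.apache.ignite.Ignition;

    public class CopyAndDestroy {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                IgniteCache<Object, Object> oldCache = ignite.cache("CacheV1");
                ignite.getOrCreateCache("CacheV2");

                // Stream the remaining entries into the new cache.
                try (IgniteDataStreamer<Object, Object> streamer = ignite.dataStreamer("CacheV2")) {
                    for (Cache.Entry<Object, Object> e : oldCache)
                        streamer.addData(e.getKey(), e.getValue());
                }

                // Destroying the old cache removes its partition files and frees the disk space.
                oldCache.destroy();
            }
        }
    }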

Wed, Jan 9, 2019 at 12:04, Павлухин Иван :

> Vyacheslav,
>
> Have you investigated how other vendors (Oracle, Postgres) tackle this
> problem?
>
> I have one wild idea. Could the problem be solved by stopping a node
> which need to be defragmented, clearing persistence files and
> restarting the node? After rebalance the node will receive all data
> back without fragmentation. I see a big downside -- sending data
> across the network. But perhaps we can play with affinity and start
> new node on the same host which will receive the same data, after that
> old node can be stopped. It looks more as kind of workaround but
> perhaps it can be turned into workable solution.
>
> Wed, Jan 9, 2019 at 10:49, Vyacheslav Daradur :
> >
> > Yes, it's about Page Memory defragmentation.
> >
> > Pages in partitions files are stored sequentially, possible, it makes
> > sense to defragment pages first to avoid interpages gaps since we use
> > pages offset to manage them.
> >
> > I filled an issue [1], I hope we will be able to find resources to
> > solve the issue before 2.8 release.
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-10862
> >
> > On Sat, Dec 29, 2018 at 10:47 AM Павлухин Иван 
> wrote:
> > >
> > > I suppose it is about Ignite Page Memory pages defragmentation.
> > >
> > > We can get 100 allocated pages each of which becomes only e.g. 50%
> > > filled after removal some entries. But they will occupy a space for
> > > 100 pages on a hard drive.
> > >
> > > пт, 28 дек. 2018 г. в 20:45, Denis Magda :
> > > >
> > > > Shouldn't the OS care of defragmentation? What we need to do is to
> give a
> > > > way to remove stale data and "release" the allocated space somehow
> through
> > > > the tools, MBeans or API methods.
> > > >
> > > > --
> > > > Denis
> > > >
> > > >
> > > > On Fri, Dec 28, 2018 at 6:24 AM Vladimir Ozerov <
> voze...@gridgain.com>
> > > > wrote:
> > > >
> > > > > Hi Vyacheslav,
> > > > >
> > > > > AFAIK this is not implemented. Shrinking/defragmentation is
> important
> > > > > optimization. Not only because it releases free space, but also
> because it
> > > > > decreases total number of pages. But is it not very easy to
> implement, as
> > > > > you have to both reshuffle data entries and index entries,
> maintaining
> > > > > consistency for concurrent reads and updates at the same time. Or
> > > > > alternatively we can think of offline defragmentation. It will be
> easier to
> > > > > implement and faster, but concurrent operations will be prohibited.
> > > > >
> > > > > On Fri, Dec 28, 2018 at 4:08 PM Vyacheslav Daradur <
> daradu...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Igniters, we have faced with the following problem on one of our
> > > > > > deployments.
> > > > > >
> > > > > > Let's imagine that we have used IgniteCache with enabled PDS
> during the
> > > > > > time:
> > > > > > - hardware disc space has been occupied during growing up of an
> amount
> > > > > > of data, e.g. 100Gb;
> > > > > > - then, we removed non-actual data, e.g 50Gb, which became
> useless for
> > > > > us;
> > > > > > - disc space stopped growing up with new data, but it was not
> > > > > > released, and still took 100Gb, instead of expected 50Gb;
> > > > > >
> > > > > > Another use case:
> > > > > > - a user extracts data from IgniteCache to store it in separate
> > > > > > IgniteCache or another store;
> > > > > > - disc still is occupied and the user is not able to store data
> in the
> > > > > > different cache at the same cluster because of disc limitation;
> > > > > >
> > > > > > How can we help the user to free up the disc space, if an amount
> of
> > > > > > data in IgniteCache has been reduced many times and will not be
> > > > > > increased in the nearest future?
> > > > > >
> > > > > > AFAIK, we have mechanics of reusing memory pages, that allows us
> to
> > > > > > use pages which have been allocated and stored removed data for
> > > > > > storing new data.
> > > > > > Are there any chances to shrink data and free up space on disc
> (with
> > > > > > defragmentation if possible)?
> > > > > >
> > > > > > --
> > > > > > Best Regards, Vyacheslav D.
> > > > > >
> > > > >
> > >
> > >
> > >
> > > --
> > > Best regards,
> > > Ivan Pavlukhin
> >
> >
> >
> > --
> > Best Regards, Vyacheslav D.
>
>
>
> --
> Best regards,
> Ivan Pavlukhin
>


[jira] [Created] (IGNITE-10866) [ML] Add an example of LogRegression model loading

2019-01-09 Thread Aleksey Zinoviev (JIRA)
Aleksey Zinoviev created IGNITE-10866:
-

 Summary: [ML] Add an example of LogRegression model loading
 Key: IGNITE-10866
 URL: https://issues.apache.org/jira/browse/IGNITE-10866
 Project: Ignite
  Issue Type: Sub-task
Reporter: Aleksey Zinoviev
Assignee: Aleksey Zinoviev


Load the LogReg model from Spark, saved via Spark ML Writable to a Parquet file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10867) Document work-around JDK upgrade against command-line parsing bug on Windows

2019-01-09 Thread Ilya Kasnacheev (JIRA)
Ilya Kasnacheev created IGNITE-10867:


 Summary: Document work-around JDK upgrade against command-line 
parsing bug on Windows
 Key: IGNITE-10867
 URL: https://issues.apache.org/jira/browse/IGNITE-10867
 Project: Ignite
  Issue Type: Task
  Components: documentation
Affects Versions: 2.5
Reporter: Ilya Kasnacheev
Assignee: Denis Magda






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Some problem with execution ignite.bat in master

2019-01-09 Thread Ilya Kasnacheev
Hello!

I have resolved the ticket and linked the JDK bug. Please update the docs.

I have created a Docs issue
https://issues.apache.org/jira/browse/IGNITE-10867

Regards,
-- 
Ilya Kasnacheev


Mon, Dec 31, 2018 at 20:45, Denis Magda :

> Ivan, thanks!
>
> Ilya, could you confirm that this is your issue and update the Ignite ticket
> accordingly (just close it with a note)? I would update the ignite.bat docs
> and the DEV notes of Ignite to bring this to attention.
>
> --
> Denis
>
> On Sun, Dec 30, 2018 at 5:49 AM Павлухин Иван  wrote:
>
> > Denis, Ilya,
> >
> > It looks like that issue was addressed in [1]. I can see that it was
> > fixed for Java 9.
> >
> > [1] https://bugs.openjdk.java.net/browse/JDK-8132379
> >
> > сб, 29 дек. 2018 г. в 19:19, Denis Magda :
> > >
> > > Could you open a ticket for OpenJDK community and link it to our JIRA?
> If
> > > it’s a known issue the community will explain how to tackle.
> > >
> > > Denis
> > >
> > > On Saturday, December 29, 2018, Ilya Kasnacheev <
> > ilya.kasnach...@gmail.com>
> > > wrote:
> > >
> > > > Hello!
> > > >
> > > > I don't see how we can fix a bug in JVM.
> > > >
> > > > Regards,
> > > > --
> > > > Ilya Kasnacheev
> > > >
> > > >
> > > > сб, 29 дек. 2018 г. в 17:31, Павлухин Иван :
> > > >
> > > > > Ilya,
> > > > >
> > > > > Indeed the issue looks weird. Why do you think that we should not
> > fix it?
> > > > >
> > > > > сб, 29 дек. 2018 г. в 14:12, Ilya Kasnacheev <
> > ilya.kasnach...@gmail.com
> > > > >:
> > > > > >
> > > > > > Hello!
> > > > > >
> > > > > > I have no idea since the issue is un-googlable, but the
> reproducer
> > is
> > > > > > trivial. The problem here is not with the script but with java
> > refusing
> > > > > to
> > > > > > run Main class.
> > > > > >
> > > > > > Regards,
> > > > > > --
> > > > > > Ilya Kasnacheev
> > > > > >
> > > > > >
> > > > > > пт, 28 дек. 2018 г. в 19:50, Denis Magda :
> > > > > >
> > > > > > > Ilya,
> > > > > > >
> > > > > > > Have you confirmed that the issue with -J is common for all the
> > > > scripts
> > > > > > > calling a Java app? Is there any JDK ticket for version prior
> to
> > 11?
> > > > > > >
> > > > > > > --
> > > > > > > Denis
> > > > > > >
> > > > > > > On Fri, Dec 28, 2018 at 12:30 AM Павлухин Иван <
> > vololo...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > AFAIK, "-J" is also a common way used by developers of the
> java
> > > > > > > > platform for passing JVM system properties from command line
> > tools
> > > > > > > > which start JVM inside. E.g. keytool follows such approach
> [1].
> > > > > > > >
> > > > > > > > [1]
> > > > > > > >
> > > > >
> > https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html
> > > > > > > > (section Common Options).
> > > > > > > >
> > > > > > > > пт, 28 дек. 2018 г. в 11:13, Alexey Kuznetsov <
> > > > akuznet...@apache.org
> > > > > >:
> > > > > > > > >
> > > > > > > > > Denis,
> > > > > > > > >
> > > > > > > > > AFAIK, "-J" is a way to pass JVM options via
> ignite.(bat|sh)
> > > > > > > > > For example: ignite.bat
> > -J-DIGNITE_OVERRIDE_MCAST_GRP=228.1.2.10
> > > > -v
> > > > > > > > >
> > > > > > > > > On Fri, Dec 28, 2018 at 1:22 AM Denis Magda <
> > dma...@apache.org>
> > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Ilya,
> > > > > > > > > >
> > > > > > > > > > Why do we use -J in general? Is it used to pass a
> specific
> > type
> > > > > of
> > > > > > > > > > parameters? I know -D or - X but can't recall what -J is
> > > > > supposed to
> > > > > > > be
> > > > > > > > > > used for.
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > Denis
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > > Alexey Kuznetsov
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > Best regards,
> > > > > > > > Ivan Pavlukhin
> > > > > > > >
> > > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Best regards,
> > > > > Ivan Pavlukhin
> > > > >
> > > >
> > >
> > >
> > > --
> > > Denis Magda,
> > > GridGain Director of Product Management
> >
> >
> >
> > --
> > Best regards,
> > Ivan Pavlukhin
> >
>


Re: Query history statistics API

2019-01-09 Thread Юрий
Hi,

I have a question related to the subject. Do you think we should also track
EXPLAIN queries? I see reasons both to skip them and to include them in the history:

pros:

We will have the full picture and see all queries.

cons:

Such queries can be considered investigation/debug/service queries and can
push real queries out of the history.


What is your opinion?


Fri, Dec 21, 2018 at 19:23, Юрий :

> Vladimir, thanks for your expert opinion.
>
> I have some thoughts about point 5.
> I tried to find out how this works in Oracle and PG:
>
> *PG*: keeps 1000 statements by default (configurable) and discards the
> least-executed statements. Updating statistics is an asynchronous
> process and the statistics may lag.
>
> *Oracle*: uses the shared pool for historical data and can evict the records
> with the oldest last-execution time when there is not enough free space in
> the shared pool, which holds more than just historical statistics. So it also
> seems to be a separate asynchronous process (there is little information about it).
>
>
> Unfortunately I could not find information about heavy workloads and how they
> are handled by these databases. However, we can see that both vendors use
> asynchronous statistics processing.
>
>
> I see a few options for handling a very high workload.
>
> The first group of options uses an asynchronous model with a separate thread
> that takes elements for updating the stats from a queue:
> 1) Block on a bounded queue and wait until there is enough capacity
> to put the new element.
>
> + We have complete, up-to-date statistics.
> - The end of query execution can be blocked.
>
> 2) Discard the statistics for the finished query if the queue is full.
>
> + Very fast for the current query.
> - We lose part of the statistics.
>
> 3) Fully clear the statistics queue.
>
> + Fast, and frees space for further elements.
> - We lose a large number of statistics elements.
>
>
> The second group of options uses the current approach for queryMetrics: some
> additional capacity for the CHM with history, plus periodic cleanup of the
> map. If even the additional space is not enough, we can:
> 1) Discard the statistics for the finished query.
> 2) Fully clear the CHM and discard all gathered information.
>
> The first group of options should potentially work faster, since we can update
> the history map in a single thread without contention, and putting to the
> queue should be faster.
>
>
> What do you think? Which of the options do you prefer, or maybe you can
> suggest another way to handle a potentially huge workload?
>
> Also, one initial question is still not clear to me: what is the right
> place for the new API?
>
>
> пт, 21 дек. 2018 г. в 13:05, Vladimir Ozerov :
>
>> Hi,
>>
>> I'd propose the following approach:
>> 1) Enable history by default, because otherwise users will have to restart
>> the node to enable it, or we will have to implement dynamic history enabling,
>> which is a complex thing. The default value should be relatively small yet
>> large enough to accommodate typical workloads, e.g. 1000 entries. This should
>> not put any serious pressure on GC.
>> 2) Split queries by: schema, query, local flag
>> 3) Track only growing values: execution count, error count, minimum
>> duration, maximum duration
>> 4) Implement the ability to clear the history - JMX, SQL command, whatever
>> (maybe this is a different ticket)
>> 5) History cleanup might be implemented similarly to the current approach:
>> store everything in a CHM and periodically check its size. If it is too big,
>> evict the oldest entries. But this should be done with care - under some
>> workloads new queries will be generated very quickly. In this case we
>> should either fall back to synchronous eviction, or not log history at all.
>>
>> Thoughts?
>>
>> Vladimir.
>> -
>>
>> On Fri, Dec 21, 2018 at 11:22 AM Юрий 
>> wrote:
>>
>> > Alexey,
>> >
>> > Yes, such a property to configure the history size will be added. I think
>> > the default value should be 0, so the history is not gathered at all by
>> > default, and it can be switched on by the property when required.
>> >
>> > Currently I plan to use the same way of evicting old data as for
>> > queryMetrics: a scheduled task will evict old data by the oldest start
>> > time of the query.
>> >
>> > Statistics will be gathered only for initial client queries, so internal
>> > queries will not be included. For equal queries we will have one record
>> > in the history with merged statistics.
>> >
>> > All of the above points are just my proposal. Please reply if you think
>> > anything should be implemented another way.
>> >
>> >
>> >
>> >
>> >
>> > чт, 20 дек. 2018 г. в 18:23, Alexey Kuznetsov :
>> >
>> > > Yuriy,
>> > >
>> > > I have several questions:
>> > >
>> > > Are we going to add some properties to cluster configuration for
>> history
>> > > size?
>> > >
>> > > And what will be default history size?
>> > >
>> > > Will the same queries count as the same item of historical data?
>> > >
>> > > How will we evict old data that does not fit into the history?
>> > >
>> > > Will we somehow count "reduce" queries? Or only final 

[jira] [Created] (IGNITE-10868) WAL segments are removed from WAL archive after each checkpoint

2019-01-09 Thread Artem Budnikov (JIRA)
Artem Budnikov created IGNITE-10868:
---

 Summary: WAL segments are removed from WAL archive after each 
checkpoint
 Key: IGNITE-10868
 URL: https://issues.apache.org/jira/browse/IGNITE-10868
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.7
Reporter: Artem Budnikov
Assignee: Alexey Goncharuk


The WAL archive is cleaned up after each checkpoint. This does not cause issues 
for end users; however, an empty WAL archive won't help in case of historical 
rebalancing. But our documentation says that it does. Please investigate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10869) [ML] Add MultiClass classification metrics

2019-01-09 Thread Aleksey Zinoviev (JIRA)
Aleksey Zinoviev created IGNITE-10869:
-

 Summary: [ML] Add MultiClass classification metrics
 Key: IGNITE-10869
 URL: https://issues.apache.org/jira/browse/IGNITE-10869
 Project: Ignite
  Issue Type: Sub-task
  Components: ml
Affects Versions: 2.8
Reporter: Aleksey Zinoviev
Assignee: Aleksey Zinoviev
 Fix For: 2.8


Add the ability to calculate multiple metrics (as binary metrics) for multiclass
classification.

It can be merged with the OneVsRest approach.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10870) [ML] Add an example for KNN/LogReg and multi-class task full Iris dataset

2019-01-09 Thread Aleksey Zinoviev (JIRA)
Aleksey Zinoviev created IGNITE-10870:
-

 Summary: [ML] Add an example for KNN/LogReg and multi-class task 
full Iris dataset
 Key: IGNITE-10870
 URL: https://issues.apache.org/jira/browse/IGNITE-10870
 Project: Ignite
  Issue Type: Sub-task
  Components: ml
Affects Versions: 2.8
Reporter: Aleksey Zinoviev
Assignee: Aleksey Zinoviev
 Fix For: 2.8


Add one or two examples for KNN/LogReg and the Iris dataset with 3 classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10871) MVCC: Fix SPI exception handling in mvcc mode.

2019-01-09 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10871:
-

 Summary: MVCC: Fix SPI exception handling in mvcc mode.
 Key: IGNITE-10871
 URL: https://issues.apache.org/jira/browse/IGNITE-10871
 Project: Ignite
  Issue Type: Task
  Components: mvcc
Reporter: Andrew Mashenkov


The following tests fail due to incorrect IgniteSpiException handling.

GridCacheReplicatedTxExceptionSelfTest
GridCacheColocatedTxExceptionSelfTest

 

TransactionHeuristicException is expected, but CacheException is actually thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Set 'TcpDiscoveryVmIpFinder' as default IP finder for tests instead of 'TcpDiscoveryMulticastIpFinder'

2019-01-09 Thread Dmitrii Ryabov
Hi,

Anton, can you review the ticket for examples [1]?

[1] https://issues.apache.org/jira/browse/IGNITE-6826
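
As a side note, a minimal sketch of configuring a static IP finder for a test
node, using the dedicated port range suggested later in this thread
(127.0.0.1:48500..48509); the concrete values are assumptions, not project
defaults:

    import java.util.Collections;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

    public class StaticIpFinderConfig {
        public static IgniteConfiguration config(String instanceName) {
            TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
            // Only local addresses in a fixed port range, so tests do not use multicast.
            ipFinder.setAddresses(Collections.singletonList("127.0.0.1:48500..48509"));

            TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
            discoSpi.setIpFinder(ipFinder);
            // Bind discovery to the same range so multi-JVM nodes can find each other.
            discoSpi.setLocalPort(48500);
            discoSpi.setLocalPortRange(10);

            return new IgniteConfiguration()
                .setIgniteInstanceName(instanceName)
                .setDiscoverySpi(discoSpi);
        }
    }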

Mon, Dec 24, 2018, 15:23 Anton Vinogradov a...@apache.org:

> Folks,
>
> The important thing here is that 99% of the "static final" IP finders were
> removed (~800 of them).
> Non-static ones were mostly kept as is, since their removal may cause a test
> failure (~130 of them).
>
> In case someone is interested in continuing the cleanup, please feel free
> to create an issue and provide a PR; I'm ready to review it.
>
> On Mon, Dec 24, 2018 at 3:04 PM Vyacheslav Daradur 
> wrote:
>
> > The second step [2], "removing related boilerplate in tests", has been
> > done. It is expected that a bit of memory will be released on TC test
> > agents, memory which could not be cleaned by GC because of the static
> > final fields.
> >
> > Guys, please, do not generate a new one )
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-10715
> >
> > On Tue, Dec 18, 2018 at 6:29 PM Anton Vinogradov  wrote:
> > >
> > > Folks,
> > >
> > > Now I see 5-10% speedup at all TC suites right after the merge.
> > > Great fix!
> > >
> > >
> > > On Mon, Dec 17, 2018 at 2:39 PM Vyacheslav Daradur <
> daradu...@gmail.com>
> > > wrote:
> > >
> > > > Andrey Mashenkov, at first sight, I have not seen any problems with
> > > > the .NET platform.
> > > >
> > > > I believe we need to carefully configure ports in the platform examples;
> > > > additional actions should not be required.
> > > >
> > > > On Mon, Dec 17, 2018 at 2:35 PM Vyacheslav Daradur <
> > daradu...@gmail.com>
> > > > wrote:
> > > > >
> > > > > The task [1] is done.
> > > > >
> > > > > 'TcpDiscoveryVmIpFinder' is default IP finder in tests now.
> > > > >
> > > > > The IP finder is initialized on per tests class level to avoid
> hidden
> > > > > affecting among tests, it means the test methods in the common test
> > > > > class will use the same IP finder.
> > > > >
> > > > > You don't need to set up 'TcpDiscoveryVmIpFinder' in your tests
> > > > > explicitly anymore, also task [2] has been filled to remove related
> > > > > boilerplate if nobody minds.
> > > > >
> > > > > [1] https://issues.apache.org/jira/browse/IGNITE-10555
> > > > > [2] https://issues.apache.org/jira/browse/IGNITE-10715
> > > > >
> > > > >
> > > > > On Wed, Dec 5, 2018 at 7:17 PM Dmitriy Pavlov 
> > > > wrote:
> > > > > >
> > > > > > ++1
> > > > > >
> > > > > > ср, 5 дек. 2018 г. в 18:38, Denis Mekhanikov <
> > dmekhani...@gmail.com>:
> > > > > >
> > > > > > > Andrey,
> > > > > > >
> > > > > > > Multi-JVM tests may also use a static IP finder, but it should
> > use
> > > > some
> > > > > > > specific port range instead of being shared.
> > > > > > > Something like 127.0.0.1:48500..48509 would do.
> > > > > > >
> > > > > > > Denis
> > > > > > >
> > > > > > > ср, 5 дек. 2018 г. в 18:34, Vyacheslav Daradur <
> > daradu...@gmail.com
> > > > >:
> > > > > > >
> > > > > > > > I filled a task [1].
> > > > > > > >
> > > > > > > > >> Slava, do you think Platforms tests can be fixed as well
> or
> > one
> > > > more
> > > > > > > > ticket
> > > > > > > > should be created?
> > > > > > > >
> > > > > > > > I'll try to fix them within one ticket, it should be
> > investigated
> > > > a bit
> > > > > > > > deeper.
> > > > > > > >
> > > > > > > > I'll inform about the task's progress in this thread later.
> > > > > > > >
> > > > > > > > Thanks!
> > > > > > > >
> > > > > > > > [1] https://issues.apache.org/jira/browse/IGNITE-10555
> > > > > > > > On Wed, Dec 5, 2018 at 6:28 PM Andrey Mashenkov
> > > > > > > >  wrote:
> > > > > > > > >
> > > > > > > > > Slava,
> > > > > > > > > +1 for your proposal.
> > > > > > > > > Is there any ticket for this?
> > > > > > > > >
> > > > > > > > > Denis,
> > > > > > > > > I've just read in nabble thread you suggest to allow
> > multicast
> > > > finder
> > > > > > > for
> > > > > > > > > multiJVM tests
> > > > > > > > > and I'd think we shouldn't use multicast in test at all
> > (excepts
> > > > > > > > multicast
> > > > > > > > > Ip finder self tests of course),
> > > > > > > > > but e.g. add an assertion to force user to create ipfinder
> > > > properly.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Also, we have a ticket for similar issue in 'examples'
> > module.
> > > > > > > > > Seems, there are some issues with Platforms module
> > integration.
> > > > > > > > > Slava, do you think Platforms tests can be fixed as well or
> > one
> > > > more
> > > > > > > > ticket
> > > > > > > > > should be created?
> > > > > > > > >
> > > > > > > > > [1] https://issues.apache.org/jira/browse/IGNITE-6826
> > > > > > > > >
> > > > > > > > > On Wed, Dec 5, 2018 at 5:55 PM Denis Mekhanikov <
> > > > dmekhani...@gmail.com
> > > > > > > >
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Slava,
> > > > > > > > > >
> > > > > > > > > > These are exactly my thoughts, so I fully support you
> here.
> > > > > > > > > > I already wrote about it:
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > >
> > > > > > >
> > > 

Re: Query history statistics API

2019-01-09 Thread Alexey Kuznetsov
Yuriy,

I think, for the sake of code simplicity, we can track all queries;
otherwise we would need to parse the SQL text and detect that the current
query is an EXPLAIN one.
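
For illustration only (this is not Ignite's actual implementation), a rough
sketch of merging statistics for equal queries keyed by (schema, SQL text,
local flag), roughly as discussed earlier in this thread; all class and member
names here are made up:

    import java.util.Objects;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.atomic.AtomicLong;

    final class QueryHistoryKey {
        final String schema, sql;
        final boolean local;

        QueryHistoryKey(String schema, String sql, boolean local) {
            this.schema = schema; this.sql = sql; this.local = local;
        }

        @Override public boolean equals(Object o) {
            if (!(o instanceof QueryHistoryKey)) return false;
            QueryHistoryKey k = (QueryHistoryKey)o;
            return local == k.local && schema.equals(k.schema) && sql.equals(k.sql);
        }

        @Override public int hashCode() { return Objects.hash(schema, sql, local); }
    }

    final class QueryHistoryEntry {
        final AtomicLong executions = new AtomicLong();
        final AtomicLong failures = new AtomicLong();
        final AtomicLong minDurationMs = new AtomicLong(Long.MAX_VALUE);
        final AtomicLong maxDurationMs = new AtomicLong(Long.MIN_VALUE);

        void update(long durationMs, boolean failed) {
            executions.incrementAndGet();
            if (failed) failures.incrementAndGet();
            minDurationMs.accumulateAndGet(durationMs, Math::min);
            maxDurationMs.accumulateAndGet(durationMs, Math::max);
        }
    }

    final class QueryHistory {
        private final ConcurrentMap<QueryHistoryKey, QueryHistoryEntry> hist = new ConcurrentHashMap<>();

        // Called when a query finishes; equal queries are merged into one record.
        void onQueryCompleted(String schema, String sql, boolean local, long durationMs, boolean failed) {
            hist.computeIfAbsent(new QueryHistoryKey(schema, sql, local), k -> new QueryHistoryEntry())
                .update(durationMs, failed);
        }
    }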

-- 
Alexey Kuznetsov


[jira] [Created] (IGNITE-10872) Queue Draining on Graceful Shutdowns

2019-01-09 Thread Gabriel Jimenez (JIRA)
Gabriel Jimenez created IGNITE-10872:


 Summary: Queue Draining on Graceful Shutdowns
 Key: IGNITE-10872
 URL: https://issues.apache.org/jira/browse/IGNITE-10872
 Project: Ignite
  Issue Type: Improvement
  Components: general
Reporter: Gabriel Jimenez


Our *issue to solve* was to have the grid complete any waiting and currently
executing tasks on any termination where the signal was not KILL. With the current
behavior our tenants were losing data to corruption or incomplete requests due
to job/task interruption on shutdown.

Part of the functionality we were looking for is already exposed, in the sense
that the kill/stop functions within org.apache.ignite.Ignition already have the
'cancel' flag parameter. However, this flag is hardcoded as 'true' in
org.apache.ignite.internal.IgnitionEx's default shutdown hook (meaning
jobs/tasks will be interrupted). For now we have changed IgnitionEx's
implementation to depend on an Ignite system property, and set the default
behavior to drain on termination with any signal other than KILL.

Additional questions:

Was there an existing solution/approach to our *issue to solve* that did not
involve changing the codebase?

Is there another preferred solution for our issue to solve?
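
For illustration, a minimal sketch of draining on shutdown using only the public
API: the IGNITE_NO_SHUTDOWN_HOOK system property disables the built-in hook, and
cancel = false asks the node to wait for running jobs. This is a sketch of the
intended behavior, not the change described above:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;

    public class GracefulStop {
        public static void main(String[] args) {
            // Disable Ignite's built-in shutdown hook, which stops with cancel = true.
            System.setProperty("IGNITE_NO_SHUTDOWN_HOOK", "true");

            Ignite ignite = Ignition.start();

            // Register our own hook: cancel = false waits for running jobs/tasks
            // to complete instead of interrupting them.
            Runtime.getRuntime().addShutdownHook(new Thread(() ->
                Ignition.stop(ignite.name(), false)));
        }
    }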



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10873) CorruptedTreeException during simultaneous cache put operations

2019-01-09 Thread Pavel Vinokurov (JIRA)
Pavel Vinokurov created IGNITE-10873:


 Summary: CorruptedTreeException during simultaneous cache put 
operations
 Key: IGNITE-10873
 URL: https://issues.apache.org/jira/browse/IGNITE-10873
 Project: Ignite
  Issue Type: Bug
  Components: cache, persistence
Affects Versions: 2.7
Reporter: Pavel Vinokurov


[2019-01-09 20:47:04,376][ERROR][pool-9-thread-9][GridDhtAtomicCache] 
 Unexpected exception during cache update
org.h2.message.DbException: General error: "class 
org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
 Runtime failure on row: Row@780acfb4[ key: model.TbclsrInputKey 
[idHash=1823856383, hash=275143246, clsbInputRef=GTEST, firstInputFlag=254], 
val: model.TbclsrInput [idHash=708235920, hash=-19147671, clsbMatchRef=null, 
origBic=null, desStlmtMbrBic=null, cpBic=null, cpDesSmBic=null, 
desSmManuAuth=0, origRef=null, relatedRef=null, commonRef=null, 
clsbTransRef=null, lastAmdSendRef=null, branchId=null, inputType=null, 
formatType=0, sourceType=0, sourceId=0, operType=null, fwdBookFlag=0, 
possDupFlag=0, sameDayFlag=0, pendingFlag=0, rescOrigSmFlag=0, 
rescCpCpsmFlag=0, stlmtEligFlag=0, authTms=null, ntfId=0, inputStatus=0, 
lastActionTms=null, ofacStatus=0, ofacTms=null, prevInputStatus=0, 
prevTms=null, cpOfacStatus=0, sentDt=null, valueDt=null, tradeDt=null, 
origSuspFlag=0, origSmSuspFlag=0, cpSuspFlag=0, cpSmSuspFlag=0, currSuspFlag=0, 
tpIndicatorFlag=0, tpBic=null, tpReference=null, tpFreeText=null, 
tpFurtherRef=null, tpCustIntRef=null, tpMbrField1=null, tpMbrField2=null, 
exchRate=0.0, currIdBuy=0, volBuy=0.0, currIdSell=0, volSell=0.0, 
inputVersionId=0, versionId=null, grpQueueOrderNo=0, queueOrderNo=0, 
originalGroupId=0, groupStatus=0, usi=null, prevUsi=null, origLei=null, 
cpLei=null, fundLei=null, reportJuris=null, execVenue=null, execTms=null, 
execTmsUtcoff=null, mappingRule=null, reportJuris2=null, usi2=null, 
prevUsi2=null, reportJuris3=null, usi3=null, prevUsi3=null], ver: 
GridCacheVersion [topVer=158536014, order=1547056011256, nodeOrder=1] ][ GTEST, 
null, 254, null, null, null, null, 0, null, null, null, null, null, null, null, 
0, 0, 0, null, 0, 0, 0, 0, 0, 0, 0, null, 0, 0, null, 0, null, 0, null, 0, 
null, null, null, 0, 0, 0, 0, 0, 0, null, null, null, null, null, null, null, 
0.0, 0, 0.0, 0, 0.0, 0, null, 0, 0, 0, 0, null, null, null, null, null, null, 
null, null, null, null, null, null, null, null, null, null ]" [5-197]
at org.h2.message.DbException.get(DbException.java:168)
at org.h2.message.DbException.convert(DbException.java:307)
at org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.putx(H2TreeIndex.java:302)
at org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.addToIndex(GridH2Table.java:546)
at org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:479)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.store(IgniteH2Indexing.java:768)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:1905)
at org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:404)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishUpdate(IgniteCacheOffheapManagerImpl.java:2633)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1646)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1621)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.invoke(GridCacheOffheapManager.java:1935)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:428)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2295)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2494)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:1951)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1780)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1668)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
at

Re: Continuous queries and duplicates

2019-01-09 Thread Vladimir Ozerov
Hi,

MVCC caches have the same ordering guarantees as non-MVCC caches, i.e. two
subsequent updates on a single key will be delivered in the proper order.
There are no guarantees beyond that. The order of updates from two subsequent
transactions affecting the same partition may be preserved by the current
implementation (though I am not sure), but even if it is, I am not aware that
this was ever our design goal. Most likely, it is an implementation artifact
which may change in the future. Cache experts are needed to clarify this.

As far as MVCC is concerned, data anomalies are still possible in the current
implementation, because we didn't rework initial query handling in the first
iteration: technically it is not as simple as we thought. Once a snapshot is
obtained, a query over that snapshot will return a data set consistent at
some point in time. But the problem is that there is a time frame between
snapshot acquisition and listener installation (or vice versa), which leads
to either duplicates or lost entries. Some multi-step listener installation
will be required here. We haven't designed it yet.

Vladimir.



On Mon, Dec 24, 2018 at 10:06 PM Denis Magda  wrote:

> >
> > In my case, values are immutable - I never change them, I just add new
> > entry for newer versions. Does it mean that I won't have any duplicates
> > between the initial query and listener entries when using continuous
> > queries on caches supporting MVCC?
>
>
> I'm afraid there still might be a race. Val, Vladimir, other Ignite
> experts, please confirm.
>
> After reading the related thread (
> >
> >
> http://apache-ignite-developers.2346864.n4.nabble.com/Continuous-queries-and-MVCC-td33972.html
> > )
> > I'm now concerned about the ordering. My case assumes that there are
> groups
> > of entries which belong to a business aggregate object and I would like
> to
> > make sure that if I commit two records in two serial transactions then I
> > have notifications in the same order. Those entries will have different
> > keys so based on what you said ("we'd better to leave things as is and
> > guarantee only per-key ordering"), it would seem that the order is not
> > guaranteed. But do you think it would be possible to guarantee order when
> > those entries share the same affinity key and they belong to the same
> > partition?
>
>
> The order should be the same for key-value transactions. Vladimir, could
> you clear out MVCC based behavior?
>
> --
> Denis
>
> On Mon, Dec 17, 2018 at 9:55 AM Piotr Romański 
> wrote:
>
> > Hi all, sorry for answering so late.
> >
> > I would like to use SqlQuery because I can leverage indexes there.
> >
> > As it was already mentioned earlier, the partition update counter is
> > exposed through CacheQueryEntryEvent. Initially, I thought that the
> > partition update counter is something what's persisted together with the
> > data but I'm guessing now that this is only a part of the notification
> > mechanism.
> >
> > I imagined that I would be able to implement my own deduplication by
> having
> > 3 stages on the client side: 1. Keep processing initial query results,
> > store their keys in memory, 2. When initial query is over, then process
> > listener entries but before that check if they have been already
> delivered
> > in the first stage, 3. When we are sure that we are already processing
> > notifications for commits executed after initial query was done, then we
> > can process listener entries without any additional checks (so our key
> set
> > from stage 1 can be removed from memory). The problem is that I have no
> way
> > to say that I can move from stage 2 to 3. Another problem is that we need
> > to stash listener entries while still processing initial query results
> > causing excessive memory pressure on our client.
> >
> > In my case, values are immutable - I never change them, I just add new
> > entry for newer versions. Does it mean that I won't have any duplicates
> > between the initial query and listener entries when using continuous
> > queries on caches supporting MVCC?
> >
> > After reading the related thread (
> >
> >
> http://apache-ignite-developers.2346864.n4.nabble.com/Continuous-queries-and-MVCC-td33972.html
> > )
> > I'm now concerned about the ordering. My case assumes that there are
> groups
> > of entries which belong to a business aggregate object and I would like
> to
> > make sure that if I commit two records in two serial transactions then I
> > have notifications in the same order. Those entries will have different
> > keys so based on what you said ("we'd better to leave things as is and
> > guarantee only per-key ordering"), it would seem that the order is not
> > guaranteed. But do you think it would be possible to guarantee order when
> > those entries share the same affinity key and they belong to the same
> > partition?
> >
> > Piotr
> >
> > Fri, Dec 14, 2018, 19:31 Denis Magda  wrote:
> >
> > > Vladimir,
> > >
> > > Thanks for referring to the MVCC and Continuous Queries discussion, I
> 

[jira] [Created] (IGNITE-10874) JDBC thin driver metadata misses caches with queryEntities and names containing underscores

2019-01-09 Thread Alexey Kukushkin (JIRA)
Alexey Kukushkin created IGNITE-10874:
-

 Summary: JDBC thin driver metadata misses caches with 
queryEntities and names containing underscores
 Key: IGNITE-10874
 URL: https://issues.apache.org/jira/browse/IGNITE-10874
 Project: Ignite
  Issue Type: Improvement
  Components: jdbc
Affects Versions: 2.7
Reporter: Alexey Kukushkin


+*Steps to reproduce*+

1) Build and run this app:

 
{code:java}
import java.util.AbstractMap;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.stream.Collectors;
import java.util.stream.Stream;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class App {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start(
            new IgniteConfiguration()
                .setDiscoverySpi(
                    new TcpDiscoverySpi()
                        .setIpFinder(new TcpDiscoveryVmIpFinder()
                            .setAddresses(Collections.singleton("127.0.0.1:47500")))
                )
        )) {
            final String NAME = "V_MODELS_SHORT";

            ignite.getOrCreateCache(
                new CacheConfiguration<>()
                    .setName(NAME)
                    .setCacheMode(CacheMode.PARTITIONED)
                    .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
                    .setQueryEntities(Collections.singleton(
                        new QueryEntity()
                            .setKeyType(Integer.class.getTypeName())
                            .setValueType(NAME)
                            .setKeyFieldName("VMS_ID")
                            .setFields(
                                Stream.of(
                                    new AbstractMap.SimpleEntry<>("VMS_ID", Integer.class.getTypeName()),
                                    new AbstractMap.SimpleEntry<>("VMS_HASVERSION", Boolean.class.getTypeName())
                                ).collect(Collectors.toMap(
                                    AbstractMap.SimpleEntry::getKey,
                                    AbstractMap.SimpleEntry::getValue,
                                    (u, v) -> {
                                        throw new IllegalStateException(String.format("Duplicate key %s", u));
                                    },
                                    LinkedHashMap::new
                                ))
                            )
                    ))
            );

            System.in.read();
        }
    }
}
{code}
2) While the app is running use a JDBC database management tool like DBeaver to 
open a JDBC thin driver connection to Ignite.

3) Check table "V_MODELS_SHORT" under schema "V_MODELS_SHORT"

+*Expected*+

3) Table "V_MODELS_SHORT" is visible under  schema "V_MODELS_SHORT"

+*Actual*+

 

3) There is no table "V_MODELS_SHORT" under  schema "V_MODELS_SHORT"

+*Notes*+

You can still run queries on table V_MODELS_SHORT in DBeaver. For example, 
these queries work fine:
{code:java}
INSERT INTO V_MODELS_SHORT.V_MODELS_SHORT VALUES (1, 'false');

SELECT * FROM V_MODELS_SHORT.V_MODELS_SHORT;
{code}
 





[jira] [Created] (IGNITE-10875) Web Console: Update tooltip

2019-01-09 Thread Alexey Kuznetsov (JIRA)
Alexey Kuznetsov created IGNITE-10875:
-

 Summary: Web Console: Update tooltip
 Key: IGNITE-10875
 URL: https://issues.apache.org/jira/browse/IGNITE-10875
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexey Kuznetsov


Old: This setting is needed to hide and show tooltips with hints.

New: Use this setting to hide or show tooltips with hints.





[jira] [Created] (IGNITE-10876) finishExchangeOnCoordinator parallelization

2019-01-09 Thread Pavel Voronkin (JIRA)
Pavel Voronkin created IGNITE-10876:
---

 Summary: finishExchangeOnCoordinator parallelization
 Key: IGNITE-10876
 URL: https://issues.apache.org/jira/browse/IGNITE-10876
 Project: Ignite
  Issue Type: Improvement
Reporter: Pavel Voronkin








[jira] [Created] (IGNITE-10877) GridAffinityAssignment.initPrimaryBackupMaps memory pressure

2019-01-09 Thread Pavel Voronkin (JIRA)
Pavel Voronkin created IGNITE-10877:
---

 Summary: GridAffinityAssignment.initPrimaryBackupMaps memory 
pressure
 Key: IGNITE-10877
 URL: https://issues.apache.org/jira/browse/IGNITE-10877
 Project: Ignite
  Issue Type: Improvement
Reporter: Pavel Voronkin





