Re: [DISCUSSION] Common approach to print sensitive information

2021-04-05 Thread Evgenii Zhuravlev
Ivan,

I've been talking with users from different industries, and some of them
(definitely not all) consider the schema itself sensitive information. As a
framework that can be used by very different types of users, we should cover
this use case too. The solution suggested by Ilya sounds very reasonable here
(a sketch is included below): not all users have such strict restrictions,
but some of them do.

Best Regards,
Evgenii
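
For illustration, here is a minimal sketch of how such a three-level switch
could look. SensitiveDataLevel, SensitiveLog and the masking rules are
assumptions made for this sketch only - they are not the existing Ignite API
or GridToStringBuilder behavior:

{code:java}
// Illustrative sketch only: a hypothetical three-level switch for sensitive data in logs.
public final class SensitiveLog {
    public enum SensitiveDataLevel {
        PLAIN,      // "print all": values, field and table names
        STRUCTURE,  // "print structure but not content": names and types only, values masked
        NONE        // "print none": neither structure nor values
    }

    // Assumed to be configured once, e.g. from a system property.
    private static volatile SensitiveDataLevel level = SensitiveDataLevel.STRUCTURE;

    /** Renders one field for logging according to the configured level. */
    public static String field(String name, Object value) {
        switch (level) {
            case PLAIN:     return name + '=' + value;
            case STRUCTURE: return name + "=?";   // keep the name, hide the value
            default:        return "?";           // hide both name and value
        }
    }
}
{code}

With STRUCTURE as the default, the developer warning Ilya suggests could tell
users how to switch the level back to PLAIN in development environments.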

Mon, Apr 5, 2021 at 04:07, Ivan Daschinsky :

> Ilya, great idea, but I suppose that the third option is a little bit paranoid.
> Nevertheless, let it be - it's quite simple to implement.
>
> Mon, Apr 5, 2021 at 14:04, Ilya Kasnacheev :
>
> > Hello!
> >
> > I have two ideas here:
> >
> > - Let's have more than a single level of sensitive information
> withholding.
> > Currently it is on/off, but I think we may need three levels: "print
> all",
> > "print structure but not content", "print none".
> > By structure I mean table/column/field names and data types. So we can
> > print SQL statements in their EXPLAIN form to log, but do not print any
> > query arguments or values, substituting them with '?'. We can also print
> > class and field names in various places.
> > - If we have a default different from "print all", we should add a
> > developer warning about it, such as
> > [WARN ] Sensitive information will not be written to log by default.
> > Consider *doing things* to enable developer mode.
> >
> > Regards,
> > --
> > Ilya Kasnacheev
> >
> >
> > Mon, Apr 5, 2021 at 13:45, Taras Ledkov :
> >
> > > Hi,
> > >
> > > I work on ticket IGNITE-14441 [1] to hide sensitive information at the
> > > log messages produced by SQL.
> > > There are negative comments for the patch.
> > >
> > > I guess we have to produce a common view of how to work with sensitive
> > > information and make rules that define what counts as sensitive.
> > >
> > > Look at the usage of GridToStringBuilder#includeSensitive: class names
> > > and field names are now considered sensitive.
> > > My train of thought is this: an SQL query and a query plan contain table
> > > names (similar to class names) and field names.
> > > So the query and the plan are completely sensitive.
> > >
> > > Let's define what sensitive info is for Ignite and work with it
> > > consistently.
> > >
> > > Someone proposed introducing one more Ignite property just for printing
> > > SQL sensitive info.
> > > I think this leads to complication.
> > >
> > > Introducing sensitivity levels makes sense, but all similar information
> > > must be handled with the same rules.
> > >
> > > Igniters, WDYT?
> > >
> > > [1]. https://issues.apache.org/jira/browse/IGNITE-14441
> > >
> > > --
> > > Taras Ledkov
> > > Mail-To: tled...@gridgain.com
> > >
> > >
> >
>
>
> --
> Sincerely yours, Ivan Daschinskiy
>


Spark 3 support plans

2021-04-01 Thread Evgenii Zhuravlev
Hi Nikolay,

As the main Spark integration maintainer, do you have any plans to upgrade the
Spark dependencies to version 3+? I see that some users are asking for it:
https://issues.apache.org/jira/browse/IGNITE-13181.

Thanks,
Evgenii


Re: [DISCUSSION] IEP-59: CDC - Capture Data Change

2020-10-14 Thread Evgenii Zhuravlev
Hi,

>On the segment archiving, utility iterates it using existing WALIterator
>Wait and respond to some specific events or data changes.
It seems like this solution will have an unpredictable delay between a data
change and the corresponding event being handled.

Why can't we just implement a Debezium connector for Ignite, for example?
https://debezium.io/documentation/reference/1.3/index.html. It is a pretty
popular product that uses Kafka underneath.

Evgenii


Wed, Oct 14, 2020 at 05:00, Pavel Kovalenko :

> This tool can also be used to store snapshots in an external warehouse.
>
>
> Wed, Oct 14, 2020 at 14:57, Pavel Kovalenko :
>
> > Hi Nikolay,
> >
> > The idea is good. But what do you think about integrating these ideas into
> > the WAL-G project?
> > https://github.com/wal-g/wal-g
> > It's a well-known tool that is already used to stream WAL for PostgreSQL,
> > MySQL, and MongoDB.
> > The advantages are integration with S3, GCP, Azure out of the box,
> > encryption, and compression.
> >
> >
> > Wed, Oct 14, 2020 at 14:21, Nikolay Izhikov :
> >
> >> Hello, Igniters.
> >>
> >> I want to start a discussion of the new feature [1]
> >>
> >> CDC - capture data change. The feature allows the consumer to receive
> >> online notifications about data record changes.
> >>
> >> It can be used in the following scenarios:
> >> * Export data into some warehouse, full-text search, or
> >> distributed log system.
> >> * Online statistics and analytics.
> >> * Wait and respond to some specific events or data changes.
> >>
> >> I propose to implement a new IgniteCDC application as follows:
> >> * Runs on the server node host.
> >> * Watches for the appearance of WAL archive segments.
> >> * Iterates them using the existing WALIterator and notifies the
> >> consumer of each record from the segment.
> >>
> >> IgniteCDC features:
> >> * Independence from the server node process (JVM) - issues and
> >> failures of the consumer will not lead to server node instability.
> >> * Notification guarantees and failover - i.e. CDC tracks and saves
> >> the pointer to the last consumed record and continues notification from
> >> this pointer in case of a restart.
> >> * Resilience for the consumer - it's not an issue when a
> >> consumer temporarily consumes slower than the data appears.
> >>
> >> WDYT?
> >>
> >> [1]
> >>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-59+CDC+-+Capture+Data+Change
> >
> >
>
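
As an aside, here is a rough sketch of the pointer-tracking idea from the
IEP-59 proposal above. WalRecord, CdcConsumer and the pointer-file layout are
hypothetical stand-ins, not the IEP-59 or Ignite API:

{code:java}
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Iterator;

public class CdcPointerSketch {
    /** Hypothetical minimal view of a WAL record; not the actual API. */
    interface WalRecord { long index(); }

    /** Hypothetical consumer callback; not the actual API. */
    interface CdcConsumer { void onRecord(WalRecord rec); }

    private final Path pointerFile;

    public CdcPointerSketch(Path pointerFile) { this.pointerFile = pointerFile; }

    /** Replays records after the last saved pointer and advances the pointer after each one. */
    public void feed(Iterator<WalRecord> segment, CdcConsumer consumer) throws Exception {
        long lastConsumed = Files.exists(pointerFile)
            ? Long.parseLong(new String(Files.readAllBytes(pointerFile), StandardCharsets.UTF_8).trim())
            : -1L;

        while (segment.hasNext()) {
            WalRecord rec = segment.next();

            if (rec.index() <= lastConsumed)
                continue; // already delivered before a restart

            consumer.onRecord(rec);

            // Persist the pointer so a restarted CDC process resumes from here.
            Files.write(pointerFile, Long.toString(rec.index()).getBytes(StandardCharsets.UTF_8));
        }
    }
}
{code}

Persisting the pointer after every delivered record is what gives the
notification/failover guarantee mentioned in the proposal: a restarted CDC
process resumes where it stopped instead of replaying the whole archive.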


Re: JVM_OPTS in control.sh and ignite.sh

2020-09-24 Thread Evgenii Zhuravlev
Ilya,

You can get absolutely the same behaviour when you set JVM_OPTS even
without Docker.

Evgenii

Thu, Sep 24, 2020 at 05:44, Ilya Kasnacheev :

> Hello!
>
> If the issue is with docker only, then maybe we should get rid of JVM_OPTS
> with docker entirely? E.g. pass them as parameters.
>
> I'm not sold on this change yet, it breaks backward compatibility for
> marginal benefit.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, Sep 24, 2020 at 15:35, Данилов Семён :
>
> > Hello, Igniters!
> >
> > I recently discovered that control.sh and ignite.sh both use the JVM_OPTS
> > environment variable. This can lead to various issues (especially in
> > Docker), such as:
> > * The control utility will have the same xms/xmx parameters as the node.
> > * The control utility won't launch because the JMX port is already in use
> > (as it is set in JVM_OPTS and already occupied by Ignite).
> > And so on.
> >
> > I suggest using a different environment variable in control.sh
> > (CONTROL_JVM_OPTS, for example).
> >
> > Here is the JIRA issue —
> > https://issues.apache.org/jira/browse/IGNITE-13479
> > And a pull request — https://github.com/apache/ignite/pull/8275/
> >
> > Regards, Semyon.
> >
>


[jira] [Created] (IGNITE-13391) Ignite-hibernate doesn't recreate cache proxies after full reconnect to the cluster

2020-08-28 Thread Evgenii Zhuravlev (Jira)
Evgenii Zhuravlev created IGNITE-13391:
--

 Summary: Ignite-hibernate doesn't recreate cache proxies after 
full reconnect to the cluster
 Key: IGNITE-13391
 URL: https://issues.apache.org/jira/browse/IGNITE-13391
 Project: Ignite
  Issue Type: Bug
  Components: hibernate
Affects Versions: 2.7.6
Reporter: Evgenii Zhuravlev


If there is only one server node in the cluster and the user restarts it, the 
client node reconnects to a completely new cluster and, in order to continue using 
the ignite-hibernate integration, it has to recreate all the cache proxies. 
Otherwise, it leads to the issue described in this thread:
http://apache-ignite-users.70518.x6.nabble.com/Hibernate-2nd-Level-query-cache-with-Ignite-td33816.html

We should add cache proxy reinitialization on reconnect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSSION] Add autocompletion for commands in control.sh

2020-06-17 Thread Evgenii Zhuravlev
Hi,

+1 for both moving control.sh to a separate module and adding
autocompletion.

Will the control.sh API remain the same?

Evgenii

Fri, Jun 5, 2020 at 01:59, ткаленко кирилл :

> Folks have created a ticket [1].
>
> 1 - https://issues.apache.org/jira/browse/IGNITE-13120
>
> 02.06.2020, 16:48, "ткаленко кирилл" :
> > Maxim I suggested moving control.sh in a separate module, are we talking
> about the same thing?
> >
> > 02.06.2020, 16:15, "Maxim Muzafarov" :
> >>  Folks,
> >>
> >>  +1
> >>
> >>  However, AFAIK control.sh is part of the ignite-core module with
> >>  zero dependencies on external resources.
> >>  Would it be better to go the `control.sh` extensions way?
> >>
> >>  By the way, according to README.md [1], picocli is already being used by
> >>  Apache Ignite, right? :-)
> >>
> >>>   Picocli is used in the Apache Hadoop Ozone/HDDS command line tools,
> the Apache Hive benchmark CLI, has ** Apache [Ignite TensorFlow] **, and
> Apache Sling.
> >>
> >>  [1] https://github.com/remkop/picocli/blame/master/README.md#L199
> >>
> >>  On Tue, 2 Jun 2020 at 16:09, Ivan Daschinsky 
> wrote:
> >>>   +1 But this is not only a usability improvement, but also a huge code
> >>>   improvement. With picocli, developers can add a custom command without
> >>>   writing a lot of boilerplate and error-prone code for the trivial task
> >>>   of parsing CLI arguments. Cleaner code and fewer bugs also matter.
> >>>
> >>>   Tue, Jun 2, 2020 at 16:02, Sergey Antonov <
> antonovserge...@gmail.com>:
> >>>
> >>>   > It would be a great usability improvement!
> >>>   >
> >>>   > +1 From me.
> >>>   >
> >>>   > Tue, Jun 2, 2020 at 15:54, Zhenya Stanilovsky
>  >>>   > >:
> >>>   >
> >>>   > >
> >>>   > >
> >>>   > > good catch ! it`s a little bit pain for now to working with it.
> >>>   > >
> >>>   > >
> >>>   > > >Hi, Igniters!
> >>>   > > >
> >>>   > > >At the moment, to work with control.sh we need to know exactly
> >>>   > > >what the name of the command and its options are, so the user can
> >>>   > > >often make mistakes when using it. So I think it would be useful to
> >>>   > > >make control.sh more user-friendly by adding autocompletion, as in
> >>>   > > >modern command-line utilities.
> >>>   > > >
> >>>   > > >For this purpose, I suggest using framework [1] and, to do this,
> >>>   > > >taking out control.sh together with its associated classes into a
> >>>   > > >separate module such as "modules/control-utility".
> >>>   > > >
> >>>   > > >Comments, suggestions?
> >>>   > > >
> >>>   > > >[1] - https://picocli.info/
> >>>   > >
> >>>   > >
> >>>   > >
> >>>   > >
> >>>   >
> >>>   >
> >>>   >
> >>>   > --
> >>>   > BR, Sergey Antonov
> >>>   >
> >>>
> >>>   --
> >>>   Sincerely yours, Ivan Daschinskiy
>
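
For reference, here is a minimal picocli command of the kind discussed above.
The BaselineCommand class and its options are invented for this sketch; only
the picocli annotations and the CommandLine.execute() call are the library's
actual API:

{code:java}
import java.util.concurrent.Callable;
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

// Hypothetical command declaration; shows how picocli replaces hand-written argument parsing.
@Command(name = "baseline", description = "Prints or changes the baseline topology.")
public class BaselineCommand implements Callable<Integer> {
    @Option(names = "--host", description = "Node host.", defaultValue = "127.0.0.1")
    private String host;

    @Option(names = "--port", description = "Node port.", defaultValue = "11211")
    private int port;

    @Override public Integer call() {
        // Real command logic would go here; the sketch just echoes the parsed options.
        System.out.printf("Connecting to %s:%d...%n", host, port);
        return 0;
    }

    public static void main(String[] args) {
        // picocli parses the arguments, prints usage/help on errors and returns an exit code.
        System.exit(new CommandLine(new BaselineCommand()).execute(args));
    }
}
{code}

As far as I know, picocli also ships an AutoComplete tool that can generate
bash/zsh completion scripts for such commands, which is what the
autocompletion proposal would build on.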


Re: Unable to deploy Ignite Web Console in Kubernetes

2020-04-23 Thread Evgenii Zhuravlev
Hi,

Usually it means that the backend hasn't fully started yet. Have you checked
the logs?

Evgenii

Thu, Apr 23, 2020 at 07:27, Lovell Mathews :

> Hi,
>
> I am trying to deploy Apache Ignite Web Console in Google Kubernetes
> Engine. I have been following the instructions in the GridGain developer
> forum for the Kubernetes installation. I am able to successfully bring up the
> frontend and backend servers. However, when trying to sign up it gives me a
> 404 error. Any suggestions?
>
> Cheers,
> Lovell
>


Re: Out of memory with eviction failure on persisted cache

2020-04-08 Thread Evgenii Zhuravlev
Raymond,

I've seen this behaviour before; it occurs during massive data loading into a
cluster with a small data region. It's not reproducible with normally sized
data regions, which I think is the reason why this issue hasn't been fixed yet.

Best Regards,
Evgenii
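
For reference, a minimal sketch of a persistent data region configured
programmatically; the region name is taken from the error message quoted
below, and the 512 MiB size is only an illustration, not a recommendation
from this thread:

{code:java}
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DataRegionExample {
    public static void main(String[] args) {
        // Persistent region sized well above the 128 MiB used in the failing setup below.
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("TAGFileBufferQueue")
            .setPersistenceEnabled(true)
            .setMaxSize(512L * 1024 * 1024);

        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDataRegionConfigurations(region);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storage);

        Ignition.start(cfg);
    }
}
{code}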

Wed, Apr 8, 2020 at 04:23, Raymond Wilson :

> Evgenii,
>
> Have you had a chance to look into the reproducer?
>
> Thanks,
> Raymond.
>
> On Fri, Mar 6, 2020 at 2:51 PM Raymond Wilson 
> wrote:
>
>> Evgenii,
>>
>> I have created a reproducer that triggers the error with the buffer size
>> set to 64Mb. The program.cs/csproj and log for the run that triggered the
>> error are attached.
>>
>> Thanks,
>> Raymond.
>>
>>
>>
>> On Fri, Mar 6, 2020 at 1:08 PM Raymond Wilson 
>> wrote:
>>
>>> The reproducer is my development system, which is hard to share.
>>>
>>> I have increased the size of the buffer to 256Mb, and it copes with the
>>> example data load, though I have not tried larger data sets.
>>>
>>> From an analytical perspective, is this an error that is possible or
>>> expected to occur when using a cache with a persistent data region defined?
>>>
>>> I'll see if I can make a small reproducer.
>>>
>>> On Fri, Mar 6, 2020 at 11:34 AM Evgenii Zhuravlev <
>>> e.zhuravlev...@gmail.com> wrote:
>>>
>>>> Hi Raymond,
>>>>
>>>> I tried to reproduce it, but without success. Can you share the
>>>> reproducer?
>>>>
>>>> Also, have you tried to load much more data with 256mb data region? I
>>>> think it should work without issues.
>>>>
>>>> Thanks,
>>>> Evgenii
>>>>
> >>>>> Wed, Mar 4, 2020 at 16:14, Raymond Wilson >>> >:
>>>>
>>>>> Hi Evgenii,
>>>>>
>>>>> I am individually Put()ing the elements using PutIfAbsent(). Each
>>>>> element can range 2kb-35Kb in size.
>>>>>
>>>>> Actually, the process that writes the data does not write the data
>>>>> directly to the cache, it uses a compute function to send the payload to
>>>>> the process that is doing the reading. The compute function applies
>>>>> validation logic and uses PutIfAbsent() to write the data into the cache.
>>>>>
>>>>> Sorry for the confusion.
>>>>>
>>>>> Raymond.
>>>>>
>>>>>
>>>>> On Thu, Mar 5, 2020 at 1:09 PM Evgenii Zhuravlev <
>>>>> e.zhuravlev...@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> How are you loading the data? Do you use putAll or DataStreamer?
>>>>>>
>>>>>> Evgenii
>>>>>>
> >>>>>> Wed, Mar 4, 2020 at 15:37, Raymond Wilson <
>>>>>> raymond_wil...@trimble.com>:
>>>>>>
>>>>>>> To add some further detail:
>>>>>>>
>>>>>>> There are two processes interacting with the cache. One process is
>>>>>>> writing
>>>>>>> data into the cache, while the second process is extracting data
>>>>>>> from the
>>>>>>> cache using a continuous query. The process that is the reader of
>>>>>>> the data
>>>>>>> is throwing the exception.
>>>>>>>
>>>>>>> Increasing the cache size further to 256 Mb resolves the problem for
>>>>>>> this
>>>>>>> data set, however we have data sets more than 100 times this size
>>>>>>> which we
>>>>>>> will be processing.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Raymond.
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Mar 5, 2020 at 12:10 PM Raymond Wilson <
>>>>>>> raymond_wil...@trimble.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> > I've been having a sporadic issue with the Ignite 2.7.5 JVM
>>>>>>> halting due to
>>>>>>> > out of memory error related to a cache with persistence enabled
>>>>>>> >
>>>>>>> > I just upgraded to the C#.Net, Ignite 2.7.6 client to pick up
>>>>>>> support for

Re: Apache Ignite 2.8.0 spring services configuration null fields

2020-03-23 Thread Evgenii Zhuravlev
Hi,

I tried to reproduce the behaviour you described, but everything works fine
for me. Please check if I missed something:
https://github.com/ezhuravl/ignite-code-examples/tree/master/src/main/java/examples/service/parameters
https://github.com/ezhuravl/ignite-code-examples/blob/master/src/main/resources/config/service-parameters-example.xml

Evgenii

Mon, Mar 23, 2020 at 13:54, myset :

> Hi,
> In the 2.8.0 Maven version of ignite-core, I encountered an issue with the
> Spring service configuration.
> All service property fields are null (e.g. testField1, testField2) at
> runtime in TestServiceImpl - init() or execute().
> In version 2.7.6 everything works well.
>
> Ex.
> ...
>
> <bean class="org.apache.ignite.services.ServiceConfiguration">
>     ...
>     <property name="service">
>         <bean class="...TestServiceImpl">
>             <property name="testField1" value="Field 1 value"/>
>             <property name="testField2" value="Field 2 value"/>
>         </bean>
>     </property>
>     ...
> </bean>
> ...
>
>
> Thank you.
>
>
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


[jira] [Created] (IGNITE-12809) Python client returns fields in wrong order since the 2 row when fields_count>10

2020-03-19 Thread Evgenii Zhuravlev (Jira)
Evgenii Zhuravlev created IGNITE-12809:
--

 Summary: Python client returns fields in wrong order since the 2 
row when fields_count>10
 Key: IGNITE-12809
 URL: https://issues.apache.org/jira/browse/IGNITE-12809
 Project: Ignite
  Issue Type: Bug
  Components: platforms
Affects Versions: 2.8
Reporter: Evgenii Zhuravlev
 Attachments: reproducer.py

Reproducer attached. 

If the result set is bigger than a page size (1 by default) and there are more 
than 10 fields in an object, then for all rows after the first one the column 
order will be wrong.

The reason for that is the sorting in api/sql.py: 
https://github.com/apache/ignite/blob/master/modules/platforms/python/pyignite/api/sql.py#L445
If you remove it, everything works fine. We need to make sure that this is 
the right solution for this issue.

Output from reproducer:

{code:java}
['CODE', 'NAME', 'CONTINENT', 'REGION', 'SURFACEAREA', 'INDEPYEAR', 
'POPULATION', 'LIFEEXPECTANCY', 'GNP', 'GNPOLD', 'LOCALNAME', 'GOVERNMENTFORM', 
'HEADOFSTATE', 'CAPITAL', 'CODE2']
['CHN', 'China', 'Asia', 'Eastern Asia', Decimal('9.5729E+6'), -1523, 
1277558000, Decimal('71.4'), Decimal('982268'), Decimal('917719'), 'Zhongquo', 
'PeoplesRepublic', 'Jiang Zemin', 1891, 'CN']
['IND', 'India', 'Bharat/India', 'Federal Republic', 'Kocheril Raman 
Narayanan', 1109, 'IN', 'Asia', 'Southern and Central Asia', 
Decimal('3287263'), 1947, 1013662000, Decimal('62.5'), Decimal('447114'), 
Decimal('430572')]
['USA', 'United States', 'United States', 'Federal Republic', 'George W. Bush', 
3813, 'US', 'North America', 'North America', Decimal('9.36352E+6'), 1776, 
278357000, Decimal('77.1'), Decimal('8.5107E+6'), Decimal('8.1109E+6')]

{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Out of memory with eviction failure on persisted cache

2020-03-05 Thread Evgenii Zhuravlev
Hi Raymond,

I tried to reproduce it, but without success. Can you share the reproducer?

Also, have you tried to load much more data with 256mb data region? I think
it should work without issues.

Thanks,
Evgenii

Wed, Mar 4, 2020 at 16:14, Raymond Wilson :

> Hi Evgenii,
>
> I am individually Put()ing the elements using PutIfAbsent(). Each element
> can range 2kb-35Kb in size.
>
> Actually, the process that writes the data does not write the data
> directly to the cache, it uses a compute function to send the payload to
> the process that is doing the reading. The compute function applies
> validation logic and uses PutIfAbsent() to write the data into the cache.
>
> Sorry for the confusion.
>
> Raymond.
>
>
> On Thu, Mar 5, 2020 at 1:09 PM Evgenii Zhuravlev 
> wrote:
>
>> Hi,
>>
>> How are you loading the data? Do you use putAll or DataStreamer?
>>
>> Evgenii
>>
> >> Wed, Mar 4, 2020 at 15:37, Raymond Wilson :
>>
>>> To add some further detail:
>>>
>>> There are two processes interacting with the cache. One process is
>>> writing
>>> data into the cache, while the second process is extracting data from the
>>> cache using a continuous query. The process that is the reader of the
>>> data
>>> is throwing the exception.
>>>
>>> Increasing the cache size further to 256 Mb resolves the problem for this
>>> data set, however we have data sets more than 100 times this size which
>>> we
>>> will be processing.
>>>
>>> Thanks,
>>> Raymond.
>>>
>>>
>>> On Thu, Mar 5, 2020 at 12:10 PM Raymond Wilson <
>>> raymond_wil...@trimble.com>
>>> wrote:
>>>
>>> > I've been having a sporadic issue with the Ignite 2.7.5 JVM halting
>>> due to
>>> > out of memory error related to a cache with persistence enabled
>>> >
>>> > I just upgraded to the C#.Net, Ignite 2.7.6 client to pick up support
>>> for
>>> > C# affinity functions and now have this issue appearing regularly while
>>> > adding around 400Mb of data into the cache which is configured to have
>>> > 128Mb of memory (this was 64Mb but I increased it to see if the failure
>>> > would resolve.
>>> >
>>> > The error I get is:
>>> >
>>> > 2020-03-05 11:58:57,568 [542] ERR [MutableCacheComputeServer] JVM will
>>> be
>>> > halted immediately due to the failure: [failureCtx=FailureContext
>>> > [type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException:
>>> > Failed to find a page for eviction [segmentCapacity=1700, loaded=676,
>>> > maxDirtyPages=507, dirtyPages=675, cpPages=0, pinnedInSegment=2,
>>> > failedToPrepare=675]
>>> > Out of memory in data region [name=TAGFileBufferQueue, initSize=128.0
>>> MiB,
>>> > maxSize=128.0 MiB, persistenceEnabled=true] Try the following:
>>> >   ^-- Increase maximum off-heap memory size
>>> > (DataRegionConfiguration.maxSize)
>>> >   ^-- Enable Ignite persistence
>>> > (DataRegionConfiguration.persistenceEnabled)
>>> >   ^-- Enable eviction or expiration policies]]
>>> >
>>> > I'm not running an eviction policy as I thought this was not required
>>> for
>>> > caches with persistence enabled.
>>> >
>>> > I'm surprised by this behaviour as I expected the persistence
>>> mechanism to
>>> > handle it. The error relating to failure to find a page for eviction
>>> > suggest the persistence mechanism has fallen behind. If this is the
>>> case,
>>> > this seems like an unfriendly failure mode.
>>> >
>>> > Thanks,
>>> > Raymond.
>>> >
>>> >
>>> >
>>>
>>


Re: Out of memory with eviction failure on persisted cache

2020-03-04 Thread Evgenii Zhuravlev
Hi,

How are you loading the data? Do you use putAll or DataStreamer?

Evgenii

Wed, Mar 4, 2020 at 15:37, Raymond Wilson :

> To add some further detail:
>
> There are two processes interacting with the cache. One process is writing
> data into the cache, while the second process is extracting data from the
> cache using a continuous query. The process that is the reader of the data
> is throwing the exception.
>
> Increasing the cache size further to 256 Mb resolves the problem for this
> data set, however we have data sets more than 100 times this size which we
> will be processing.
>
> Thanks,
> Raymond.
>
>
> On Thu, Mar 5, 2020 at 12:10 PM Raymond Wilson  >
> wrote:
>
> > I've been having a sporadic issue with the Ignite 2.7.5 JVM halting due
> to
> > out of memory error related to a cache with persistence enabled
> >
> > I just upgraded to the C#.Net, Ignite 2.7.6 client to pick up support for
> > C# affinity functions and now have this issue appearing regularly while
> > adding around 400Mb of data into the cache which is configured to have
> > 128Mb of memory (this was 64Mb but I increased it to see if the failure
> > would resolve.
> >
> > The error I get is:
> >
> > 2020-03-05 11:58:57,568 [542] ERR [MutableCacheComputeServer] JVM will be
> > halted immediately due to the failure: [failureCtx=FailureContext
> > [type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException:
> > Failed to find a page for eviction [segmentCapacity=1700, loaded=676,
> > maxDirtyPages=507, dirtyPages=675, cpPages=0, pinnedInSegment=2,
> > failedToPrepare=675]
> > Out of memory in data region [name=TAGFileBufferQueue, initSize=128.0
> MiB,
> > maxSize=128.0 MiB, persistenceEnabled=true] Try the following:
> >   ^-- Increase maximum off-heap memory size
> > (DataRegionConfiguration.maxSize)
> >   ^-- Enable Ignite persistence
> > (DataRegionConfiguration.persistenceEnabled)
> >   ^-- Enable eviction or expiration policies]]
> >
> > I'm not running an eviction policy as I thought this was not required for
> > caches with persistence enabled.
> >
> > I'm surprised by this behaviour as I expected the persistence mechanism
> to
> > handle it. The error relating to failure to find a page for eviction
> > suggest the persistence mechanism has fallen behind. If this is the case,
> > this seems like an unfriendly failure mode.
> >
> > Thanks,
> > Raymond.
> >
> >
> >
>


Re: IGNITE-12361 Migrate Flume module to ignite-extensions

2020-02-12 Thread Evgenii Zhuravlev
Hi Saikat,

I left a couple of comments in the PR:
https://github.com/apache/ignite-extensions/pull/4#pullrequestreview-357891629.
Please tell me what you think about them.

Best Regards,
Evgenii

Tue, Feb 11, 2020 at 17:15, Saikat Maitra :

> Hi,
>
> Can someone please help review the following PRs? I have
> received approval for the release process from Alexey and would need a code
> review approval for the following PRs.
>
> Jira https://issues.apache.org/jira/browse/IGNITE-12361
>
> PR https://github.com/apache/ignite-extensions/pull/4
>   https://github.com/apache/ignite/pull/7227
>
> Regards,
> Saikat
>
> On Tue, Feb 11, 2020 at 6:47 PM Saikat Maitra 
> wrote:
>
> > Hi Alexey,
> >
> >
> > I think we can release for spring boot autoconfigure module.
> >
> > Nikolay - Do you have tentative timeline when you are planning for
> release
> > of spring boot autoconfigure module.
> >
> >
> > After that we are planning to make release for flink ext.
> >
> >
> > Since, each module are independent so they will be released
> independently.
> >
> >
> > Regards,
> > Saikat
> >
> > On Mon, 10 Feb 2020 at 7:33 AM, Alexey Goncharuk <
> > alexey.goncha...@gmail.com> wrote:
> >
> >> Saikat,
> >>
> >> Yes, I think we can go ahead with the modules PRs as long as reviewers
> are
> >> ok with the changes. Given that there is an activity around the spring
> >> module, which modules do you think will get to the first release?
> >>
> >> > Sat, Feb 1, 2020 at 21:37, Saikat Maitra :
> >>
> >> > Hi Alexey,
> >> >
> >> > Please let me know if I can share more info on the release process. I
> >> have
> >> > updated the issue confluence page on discussed approach for Ignite
> >> > Extensions. Do you think the open PRs can be merged in Ignite
> Extensions
> >> > repo?
> >> >
> >> > Independent Integrations:
> >> >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-36%3A+Modularization#IEP-36:Modularization-IndependentIntegrations
> >> > Discussion Links:
> >> >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-36%3A+Modularization#IEP-36:Modularization-DiscussionLinks
> >> > Tickets:
> >> >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-36%3A+Modularization#IEP-36:Modularization-Tickets
> >> >
> >> > Regards,
> >> > Saikat
> >> >
> >> > On Sun, Jan 26, 2020 at 3:11 PM Saikat Maitra <
> saikat.mai...@gmail.com>
> >> > wrote:
> >> >
> >> > > Hi Alexey,
> >> > >
> >> > > As discussed I have updated the wiki with agreed solution.
> >> > >
> >> > > Independent Integrations:
> >> > >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-36%3A+Modularization#IEP-36:Modularization-IndependentIntegrations
> >> > >
> >> > > Discussion Links:
> >> > >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-36%3A+Modularization#IEP-36:Modularization-DiscussionLinks
> >> > >
> >> > > Tickets:
> >> > >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-36%3A+Modularization#IEP-36:Modularization-Tickets
> >> > >
> >> > > Please let me know if I can share more information.
> >> > >
> >> > > Regards,
> >> > > Saikat
> >> > >
> >> > >
> >> > > On Fri, Jan 17, 2020 at 9:16 PM Saikat Maitra <
> >> saikat.mai...@gmail.com>
> >> > > wrote:
> >> > >
> >> > >> Hello Alexey,
> >> > >>
> >> > >> Thank you for your email.
> >> > >>
> >> > >> 1. Yes, we discussed in dev list and agreed on creating a new
> >> repository
> >> > >> for hosting our Ignite integrations. Please find the discussion
> >> thread
> >> > >> below. I will update the wiki page as well and share updates.
> >> > >>
> >> > >>
> >> > >>
> >> >
> >>
> http://apache-ignite-developers.2346864.n4.nabble.com/DISCUSS-Proposal-for-Ignite-Extensions-as-a-separate-Bahir-module-or-Incubator-project-td44064.html
> >> > >>
> >> > >> 2. I was hoping to complete migration of the following modules
> >> before we
> >> > >> go ahead with release. I am tracking the jira story here
> >> > >> https://issues.apache.org/jira/browse/IGNITE-12355
> >> > >>
> >> > >>- Flink
> >> > >>- Twitter
> >> > >>- Storm
> >> > >>- ZeroMQ
> >> > >>- RocketMQ
> >> > >>- Flume
> >> > >>- MQTT
> >> > >>- Camel
> >> > >>- JMS
> >> > >>
> >> > >> 3. The module dependencies point to the latest snapshot of the
> >> > >> Ignite project, and if there are changes in the Ignite master branch,
> >> > >> the affected Ignite extension modules also need to be modified. We
> >> > >> will verify all the extensions for an upcoming release but release
> >> > >> only the ones that are impacted. We plan to avoid publishing any
> >> > >> extension unless there are changes. Here is the discussion thread on
> >> > >> the release process:
> >> > >>
> >> > >>
> >> > >>
> >> >
> >>
> http://apache-ignite-developers.2346864.n4.nabble.com/DISCUSS-dependencies-and-release-process-for-Ignite-Extensions-td44478.html
> >> > >>
> >> > >> 4. Sounds good, we can maintain a compatibility matrix to ensure

Re: Issue with replicated cache

2019-12-27 Thread Evgenii Zhuravlev
Hi Prasad,

Can you please share logs from all nodes, so I can check what was happening
with the cluster before the incident? It would be great to see the logs from
the moment the nodes started.

Thanks,
Evgenii

Thu, Dec 26, 2019 at 11:42, Denis Magda :

> Let me loop in the Ignite dev list, since I've not heard about such an
> issue. Personally, I don't see any misconfiguration in your Ignite config.
>
> -
> Denis
>
>
> On Thu, Dec 26, 2019 at 10:17 AM Prasad Bhalerao <
> prasadbhalerao1...@gmail.com> wrote:
>
> > I used cache.remove(key) method to delete an entry from cache.
> >
> > Basically, I was not getting consistent results on subsequent API
> > calls with the same input.
> >
> > So I used grid gain console to query the cache. I executed the SQL on
> > single node at a time.
> > While doing this I found data only on node n1. But same entry was not
> > present on nodes n2,n3,n4.
> >
> > Thanks,
> > Prasad
> >
> >
> >
> >
> > On Thu 26 Dec, 2019, 11:09 PM Denis Magda  >
> >> Hello Prasad,
> >>
> >> What APIs did you use to remove the entry from the cache and what method
> >> did you use to confirm that the entry still exists on some of the nodes?
> >>
> >> -
> >> Denis
> >>
> >>
> >> On Thu, Dec 26, 2019 at 8:54 AM Prasad Bhalerao <
> >> prasadbhalerao1...@gmail.com> wrote:
> >>
> >>> Hi,
> >>>
> >>> I am using ignite 2.6.0 version and the time out settings are as
> follows.
> >>>
> >>> IgniteConfiguration cfg = new IgniteConfiguration();
> >>> cfg.setFailureDetectionTimeout(12);
> >>> cfg.setNetworkTimeout(1);
> >>> cfg.setClientFailureDetectionTimeout(12);
> >>>
> >>> I have 4 server nodes (n1,n2,n3,n4) and 6 client nodes. I am using a
> >>> replicated cache and cache configuration is as shown below.
> >>> As you can see write-through is false, read through is true and write
> >>> synchronization mode is FULL_SYNC.
> >>>
> >>> I got an issue: a network entry was removed from the network cache, but
> >>> somehow it was removed from only 3 server nodes (n2,n3,n4). I was able to
> see
> >>> the network entry on node n1 consistently for a day(when it was
> removed).
> >>> So I checked the logs for any errors/warnings but I could not find any.
> >>> I did not see any segmentation issue in logs, looked like cluster was
> in
> >>> healthy state.
> >>> When I checked the cache after 2 days, I could not find that entry.
> >>> Cache was in a state as it was supposed to be.  Servers were  not
> stopped
> >>> and restarted during this whole time.
> >>>
> >>> Some how I am not able to reproduce this issue on dev env.
> >>>
> >>> Is there any way to investigate/debug this issue? Can someone please
> >>> advise?
> >>>
> >>> private CacheConfiguration networkCacheCfg() {
> >>>   CacheConfiguration networkCacheCfg = new CacheConfiguration<>(
> CacheName.NETWORK_CACHE.name());
> >>>   networkCacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
> >>>   networkCacheCfg.setWriteThrough(false);
> >>>   networkCacheCfg.setReadThrough(true);
> >>>   networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
> >>>
>  
> networkCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
> >>>   networkCacheCfg.setBackups(this.backupCount);
> >>>   networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
> >>>   Factory storeFactory =
> FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
> >>>   networkCacheCfg.setCacheStoreFactory(storeFactory);
> >>>   networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class,
> NetworkData.class);
> >>>   networkCacheCfg.setSqlIndexMaxInlineSize(65);
> >>>   RendezvousAffinityFunction affinityFunction = new
> RendezvousAffinityFunction();
> >>>   affinityFunction.setExcludeNeighbors(true);
> >>>   networkCacheCfg.setAffinity(affinityFunction);
> >>>   networkCacheCfg.setStatisticsEnabled(true);
> >>>
> >>>   return networkCacheCfg;
> >>> }
> >>>
> >>>
> >>>
> >>> Thanks,
> >>> PRasad
> >>>
> >>>
>


[jira] [Created] (IGNITE-12032) Server node prints exception when ODBC driver disconnects

2019-08-01 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-12032:
--

 Summary: Server node prints exception when ODBC driver disconnects
 Key: IGNITE-12032
 URL: https://issues.apache.org/jira/browse/IGNITE-12032
 Project: Ignite
  Issue Type: Bug
  Components: odbc
Affects Versions: 2.7.5
Reporter: Evgenii Zhuravlev


Whenever a process using the ODBC client finishes, this exception is printed in 
the node logs: 

{code:java}
[07:45:19,559][SEVERE][grid-nio-worker-client-listener-1-#30][ClientListenerProcessor]
Failed to process selector key [ses=GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker
[readBuf=java.nio.HeapByteBuffer[pos=0 lim=8192 cap=8192], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0,
bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-client-listener-1, igniteInstanceName=null,
finished=false, heartbeatTs=1564289118230, hashCode=1829856117, interrupted=false,
runner=grid-nio-worker-client-listener-1-#30]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null,
super=GridNioSessionImpl [locAddr=/0:0:0:0:0:0:0:1:10800, rmtAddr=/0:0:0:0:0:0:0:1:63697, createTime=1564289116225,
closeTime=0, bytesSent=1346, bytesRcvd=588, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1564289116235,
lastSndTime=1564289116235, lastRcvTime=1564289116235, readsPaused=false, filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter,
GridNioCodecFilter [parser=ClientListenerBufferedParser, directMode=false]], accepted=true, markedForClose=false]]]
java.io.IOException: An existing connection was forcibly closed by the remote host
    at sun.nio.ch.SocketDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:197)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1104)
    at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2389)
    at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2156)
    at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1797)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
    at java.lang.Thread.run(Thread.java:748)
{code}

It's absolutely normal behavior when an ODBC client disconnects from the node, so 
we shouldn't print an exception in the log. We should replace it with something 
like an INFO message about the ODBC client disconnection.

Thread from user list: 
http://apache-ignite-users.70518.x6.nabble.com/exceptions-in-Ignite-node-when-a-thin-client-process-ends-td28970.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (IGNITE-11847) Change note on the capacity planning page about memory usage

2019-05-14 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11847:
--

 Summary: Change note on the capacity planning page about memory 
usage
 Key: IGNITE-11847
 URL: https://issues.apache.org/jira/browse/IGNITE-11847
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


https://apacheignite.readme.io/docs/capacity-planning#calculating-memory-usage

It says that "Apache Ignite will typically add around 200 bytes overhead to 
each entry.", but that's no longer true; I think it was applicable only to the 1.x 
versions, where everything was stored on heap.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11832) Creating cache with EvictionPolicy and without onHeap cache kills the cluster

2019-05-03 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11832:
--

 Summary: Creating cache with EvictionPolicy and without onHeap 
cache kills the cluster
 Key: IGNITE-11832
 URL: https://issues.apache.org/jira/browse/IGNITE-11832
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


After creating such a cache, the cluster can be restored only by deleting the folder 
of the problem cache. We should add some kind of validation to avoid these situations 
(a possible reproducer sketch is below).
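
A possible reproducer sketch, assuming the problematic configuration is an
eviction policy set while the on-heap cache stays disabled (as the summary
describes):

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class EvictionWithoutOnHeapReproducer {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<Integer, byte[]> ccfg = new CacheConfiguration<>("evictionTest");
        ccfg.setOnheapCacheEnabled(false);                                  // on-heap cache disabled...
        ccfg.setEvictionPolicy(new LruEvictionPolicy<Integer, byte[]>(100)); // ...but an eviction policy is set

        // Per this ticket, starting such a cache is expected to break the cluster.
        ignite.getOrCreateCache(ccfg);
    }
}
{code}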



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11831) Eviction doesn't work properly for data region with big objects of different sizes

2019-05-03 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11831:
--

 Summary: Eviction doesn't work properly for data region with big 
objects of different sizes
 Key: IGNITE-11831
 URL: https://issues.apache.org/jira/browse/IGNITE-11831
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


Reproducer:
{code:java}
public class ExampleNodeStartup {
/**
 * Start up an empty node with example compute configuration.
 *
 * @param args Command line arguments, none required.
 * @throws IgniteException If failed.
 */
public static void main(String[] args) throws IgniteException {
    Ignite ignite = Ignition.start("examples/config/example-ignite.xml");

    // Puts 1-3 MiB byte[] values under random String keys.
    IgniteCache<String, byte[]> keywordCache = ignite.getOrCreateCache("keyword");

    for (int i = 0; i < 1000; i++) {
        int mega = new Random().nextInt(3) + 1;

        keywordCache.put(UUID.randomUUID().toString(), new byte[mega * 1024 * 1024]);

        System.out.println("current:" + i);
    }
}
}
{code}


data region configuration:
{code:java}












{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11830) Visor cmd shows up time in HH:MM:SS format

2019-05-02 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11830:
--

 Summary: Visor cmd shows up time in HH:MM:SS format
 Key: IGNITE-11830
 URL: https://issues.apache.org/jira/browse/IGNITE-11830
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev
 Attachments: visor_uptime.png

In Visor.scala it takes X.timeSpan2HMS(m.getUpTime), which leads to the 
behavior that the uptime starts counting from 00:00:00 again after a day of 
uptime (see the sketch below).
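
A small illustration (not the actual Visor code) of the wrap-around effect and
a day-aware alternative format:

{code:java}
public class UptimeFormatSketch {
    public static void main(String[] args) {
        long upTimeMs = 25L * 60 * 60 * 1000; // 25 hours of uptime
        long totalSec = upTimeMs / 1000;

        // HH:MM:SS only: whole days are dropped.
        String hms = String.format("%02d:%02d:%02d",
            (totalSec / 3600) % 24, (totalSec % 3600) / 60, totalSec % 60);
        System.out.println(hms); // 01:00:00 - the full day is lost

        // Day-aware format keeps the information.
        String withDays = String.format("%dd %02d:%02d:%02d",
            totalSec / 86400, (totalSec / 3600) % 24, (totalSec % 3600) / 60, totalSec % 60);
        System.out.println(withDays); // 1d 01:00:00
    }
}
{code}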



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11696) Create JMX metric for current PME execution time

2019-04-08 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11696:
--

 Summary: Create JMX metric for current PME execution time
 Key: IGNITE-11696
 URL: https://issues.apache.org/jira/browse/IGNITE-11696
 Project: Ignite
  Issue Type: Improvement
Reporter: Evgenii Zhuravlev


Currently, the PME process can't be monitored via JMX, only from the logs. It makes 
sense to expose the execution time of the current partition map exchange (a 
consumer-side sketch is below).
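
A sketch of how such a metric could be consumed once exposed. The MBean object
name and the CurrentPmeDuration attribute are hypothetical, since this ticket
is about adding them; only the standard JMX API calls are real:

{code:java}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PmeMetricReader {
    public static void main(String[] args) throws Exception {
        // Connect to a node's JMX endpoint (port is just an example).
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:49112/jmxrmi");
        MBeanServerConnection conn = JMXConnectorFactory.connect(url).getMBeanServerConnection();

        // Hypothetical MBean name and attribute - not an existing Ignite bean.
        ObjectName name = new ObjectName("org.apache:group=Cache,name=PartitionMapExchange");

        Long pmeDurationMs = (Long)conn.getAttribute(name, "CurrentPmeDuration");
        System.out.println("Current PME has been running for " + pmeDurationMs + " ms");
    }
}
{code}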



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11695) AverageGetTime metric doesn't work properly with ScanQuery predicate

2019-04-08 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11695:
--

 Summary: AverageGetTime metric doesn't work properly with 
ScanQuery predicate
 Key: IGNITE-11695
 URL: https://issues.apache.org/jira/browse/IGNITE-11695
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


In the *GridCacheQueryManager.advance* method, the *start* variable is set only once, 
at the beginning of the method, while metrics.onRead inside this method 
can be executed multiple times if the predicate returns false:

Reproducer:
{code:java}

public class ExampleNodeStartup {

private static int FILTER_COUNT = 10;
/**
 * Start up an empty node with example compute configuration.
 *
 * @param args Command line arguments, none required.
 * @throws IgniteException If failed.
 */
public static void main(String[] args) throws IgniteException {
    Ignite ignite = Ignition.start();

    IgniteCache cache = ignite.getOrCreateCache(
        new CacheConfiguration<>("test").setStatisticsEnabled(true));

    for (int i = 0; i < 10; i++)
        cache.put(i, i);

    long start = System.currentTimeMillis();

    Iterator it = cache.query(new ScanQuery().setFilter(new IgniteBiPredicate() {
        @Override public boolean apply(Object o, Object o2) {
            if ((int)o2 % FILTER_COUNT == 0)
                return true;

            return false;
        }
    })).iterator();

    while (it.hasNext())
        System.out.println("iterator value: " + it.next());

    System.out.println("Execution time: " + (System.currentTimeMillis() - start));

    System.out.println("GETS: " + cache.metrics().getCacheGets());

    System.out.println("GET times: " + cache.metrics().getAverageGetTime());
}
}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11610) Add note to the DROP TABLE doc that it can be used only for table created with DDL

2019-03-22 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11610:
--

 Summary: Add note to the DROP TABLE doc that it can be used only 
for table created with DDL
 Key: IGNITE-11610
 URL: https://issues.apache.org/jira/browse/IGNITE-11610
 Project: Ignite
  Issue Type: Bug
  Components: documentation
Reporter: Evgenii Zhuravlev
Assignee: Artem Budnikov






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11599) Thin client doesn't have proper retry

2019-03-21 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11599:
--

 Summary: Thin client doesn't have proper retry
 Key: IGNITE-11599
 URL: https://issues.apache.org/jira/browse/IGNITE-11599
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


If one of the nodes from the addresses list is not available, there 
is a chance that you will see "Ignite cluster is unavailable" even if the other 
nodes are running.


{code:java}
List<String> addrs = new ArrayList<>(2);
addrs.add("127.0.0.1:10800");
addrs.add("127.0.0.1:10801");
ClientConfiguration cfg = new ClientConfiguration().setAddresses(addrs.toArray(new String[]{}));
IgniteClient igniteClient = Ignition.startClient(cfg);
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11598) Add possibility to have different rebalance thread pool size for nodes in cluster

2019-03-21 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11598:
--

 Summary: Add possibility to have different rebalance thread pool 
size for nodes in cluster
 Key: IGNITE-11598
 URL: https://issues.apache.org/jira/browse/IGNITE-11598
 Project: Ignite
  Issue Type: Improvement
Reporter: Evgenii Zhuravlev


This would allow changing the rebalance thread pool size without downtime when 
rebalancing is slow.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11554) Create detailed documentation for peerClassLoading with places where and how it can be used

2019-03-15 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11554:
--

 Summary: Create detailed documentation for peerClassLoading with 
places where and how it can be used
 Key: IGNITE-11554
 URL: https://issues.apache.org/jira/browse/IGNITE-11554
 Project: Ignite
  Issue Type: Improvement
  Components: documentation
Reporter: Evgenii Zhuravlev


Right now, it's not clear for which classes and which APIs peerClassLoading 
works. Also, we should describe some example use cases for 
peerClassLoading.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11496) Long running SQL queries could be randomly canceled from WC

2019-03-06 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11496:
--

 Summary: Long running SQL queries could be randomly canceled from 
WC
 Key: IGNITE-11496
 URL: https://issues.apache.org/jira/browse/IGNITE-11496
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


I've tried to run some long-running queries from WC (more than a couple of 
minutes) and I've faced a behavior where the query was canceled without clicking 
on the cancel button.

I had different browser tabs open at that moment; maybe that could be the reason.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11495) document IGNITE_SQL_MERGE_TABLE_PREFETCH_SIZE parameter

2019-03-06 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11495:
--

 Summary: document IGNITE_SQL_MERGE_TABLE_PREFETCH_SIZE parameter
 Key: IGNITE-11495
 URL: https://issues.apache.org/jira/browse/IGNITE-11495
 Project: Ignite
  Issue Type: Improvement
  Components: documentation, sql
Affects Versions: 2.7
Reporter: Evgenii Zhuravlev
Assignee: Artem Budnikov






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11494) Change message log message in case of too small IGNITE_SQL_MERGE_TABLE_MAX_SIZE parameter

2019-03-06 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11494:
--

 Summary: Change message log message in case of too small 
IGNITE_SQL_MERGE_TABLE_MAX_SIZE parameter
 Key: IGNITE-11494
 URL: https://issues.apache.org/jira/browse/IGNITE-11494
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Evgenii Zhuravlev


Message "Fetched result set was too large." should be changed to some 
recommendations regarding IGNITE_SQL_MERGE_TABLE_MAX_SIZE property



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11487) Document IGNITE_SQL_MERGE_TABLE_MAX_SIZE property

2019-03-05 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11487:
--

 Summary: Document IGNITE_SQL_MERGE_TABLE_MAX_SIZE property
 Key: IGNITE-11487
 URL: https://issues.apache.org/jira/browse/IGNITE-11487
 Project: Ignite
  Issue Type: Improvement
  Components: documentation
Reporter: Evgenii Zhuravlev
Assignee: Prachi Garg






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11373) varchar_ignorecase doesn't work properly

2019-02-20 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11373:
--

 Summary: varchar_ignorecase doesn't work properly
 Key: IGNITE-11373
 URL: https://issues.apache.org/jira/browse/IGNITE-11373
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


Looks like a field of type VARCHAR_IGNORECASE can't be used for filtering 
values case-insensitively.


{code:java}
Ignite ignite = Ignition.start("examples/config/example-ignite.xml");

IgniteCache cache = ignite.getOrCreateCache("TEST");

cache.query(new SqlFieldsQuery("CREATE TABLE IF NOT EXISTS TEST\n" +
    "(\n" +
    "  TEST_ID NUMBER(15) NOT NULL,\n" +
    "  TEST_VALUE VARCHAR_IGNORECASE(100),\n" +
    "  PRIMARY KEY (TEST_ID)\n" +
    ") "));

System.out.println("INSERTED:" + ignite.cache("TEST").query(
    new SqlFieldsQuery("INSERT INTO TEST values (1,'aAa')")).getAll().size());

System.out.println("FOUND:" + ignite.cache("TEST").query(
    new SqlFieldsQuery("Select * from TEST where TEST_VALUE like '%aaa%'")).getAll().size());
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11219) CREATE TABLE with template doesn't work properly with data inserted from KV API

2019-02-05 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11219:
--

 Summary: CREATE TABLE with template doesn't work properly with 
data inserted from KV API
 Key: IGNITE-11219
 URL: https://issues.apache.org/jira/browse/IGNITE-11219
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


When you use a template for your table, it takes the affinityMapper field from 
this template, which was set for a different type without any 
keyConfigurations. This leads to problems with accessing data from SQL if 
the data was inserted using the key-value API.

Here is a code to reproduce the issue:
{code:java}
 Ignition.setClientMode(true);
Ignite ignite = Ignition.start("examples/config/example-ignite.xml");
ignite.cluster().active(true);
IgniteCache cache = ignite.getOrCreateCache("test");

cache.query(new SqlFieldsQuery("CREATE TABLE IF NOT EXISTS TEST\n" +
    "(\n" +
    "  TEST_ID NUMBER(15) NOT NULL,\n" +
    "  TEST_FIELD VARCHAR2(100),\n" +
    "  PRIMARY KEY (TEST_ID)\n" +
    ") with \"TEMPLATE=TEST_TEMPLATE,KEY_TYPE=TEST_KEY ,CACHE_NAME=TEST_CACHE , VALUE_TYPE=TEST_VALUE,ATOMICITY=TRANSACTIONAL\";").setSchema("PUBLIC"));


for (int i = 0; i < 100; i++) {
    BinaryObjectBuilder keyBuilder = ignite.binary().builder("TEST_KEY");

    keyBuilder.setField("TEST_ID", new BigDecimal(111l + i));

    BinaryObjectBuilder valueBuilder = ignite.binary().builder("TEST_VALUE");

    valueBuilder.setField("TEST_FIELD", "123123" + i);

    ignite.cache("TEST_CACHE").withKeepBinary().put(keyBuilder.build(), valueBuilder.build());
}

System.out.println("FOUND:" + ignite.cache("TEST_CACHE").query(new 
SqlFieldsQuery("Select * from TEST")).getAll().size());

System.out.println("FOUND:" + ignite.cache("TEST_CACHE").query(new 
SqlFieldsQuery("Select TEST_FIELD from TEST where TEST_ID = 
111")).getAll());

for (int i = 0; i < 100; i++)
System.out.println("FOUND:" + ignite.cache("TEST_CACHE").query(new 
SqlFieldsQuery("Select TEST_FIELD from TEST where TEST_ID  = " + 
(111l + i))).getAll());
{code}

Here is a test template:
{code:java}



{code}

Steps to reproduce:
1. Start a server node, for example, using ExampleNodeStartup.
2. Start a client node using the code provided above with the template in 
configuration.


Possible quickfix:

set affinityMapper to cfgTemplate in GridCacheProcessor.getConfigFromTemplate:
cfgTemplate.setAffinityMapper(null)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11165) Add note to the documentation that cache name will be used as folder name in case of using persistence

2019-01-31 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11165:
--

 Summary: Add note to the documentation that cache name will be 
used as folder name in case of using persistence
 Key: IGNITE-11165
 URL: https://issues.apache.org/jira/browse/IGNITE-11165
 Project: Ignite
  Issue Type: Improvement
  Components: documentation
Reporter: Evgenii Zhuravlev
Assignee: Artem Budnikov


We should add a note that it's not recommended to use symbols that are not 
allowed in file system names when persistence is used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11096) Webagent: flag --disable-demo doesn't work

2019-01-25 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11096:
--

 Summary: Webagent: flag --disable-demo doesn't work
 Key: IGNITE-11096
 URL: https://issues.apache.org/jira/browse/IGNITE-11096
 Project: Ignite
  Issue Type: Bug
  Components: wizards
Affects Versions: 2.7
 Environment: After enabling this flag it's still possible to start demo
Reporter: Evgenii Zhuravlev






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11052) Add documentation for "failed to wait for partition map exchange" message

2019-01-23 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11052:
--

 Summary: Add documentation for "failed to wait for partition map 
exchange" message
 Key: IGNITE-11052
 URL: https://issues.apache.org/jira/browse/IGNITE-11052
 Project: Ignite
  Issue Type: Improvement
  Components: documentation
Reporter: Evgenii Zhuravlev
Assignee: Denis Magda


We should describe why this message could be seen in logs and how to debug the 
real problem that caused this message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11015) AveragePutTime metrics doesn't work properly in case of remote puts

2019-01-21 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-11015:
--

 Summary: AveragePutTime metrics doesn't work properly in case of 
remote puts
 Key: IGNITE-11015
 URL: https://issues.apache.org/jira/browse/IGNITE-11015
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


It happens because the number of writes is collected on the server machine, 
while execution time is collected on the machine which invoked the operation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10838) Ignite wrap byte[] value with UserCacheObjectByteArrayImpl before saving it and copying the whole array

2018-12-27 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-10838:
--

 Summary: Ignite wrap byte[] value with 
UserCacheObjectByteArrayImpl before saving it and copying the whole array 
 Key: IGNITE-10838
 URL: https://issues.apache.org/jira/browse/IGNITE-10838
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


I assume it was created for the on-heap cache before 2.0, but it doesn't make sense 
for the off-heap cache since the array will be copied in any case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10734) Add documentation for the list of operations that should be retried in case of cluster topology changes

2018-12-18 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-10734:
--

 Summary: Add documentation for the list of operations that should 
be retried in case of cluster topology changes
 Key: IGNITE-10734
 URL: https://issues.apache.org/jira/browse/IGNITE-10734
 Project: Ignite
  Issue Type: Improvement
Reporter: Evgenii Zhuravlev
Assignee: Artem Budnikov


Some operations, like get or getAll, throw ClusterTopologyException if the 
primary node has left the topology, while other operations do not. So some 
operations should be retried from user code (see the sketch below), while 
others do it internally. We should prepare documentation listing these operations.
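
A hedged example of the caller-side retry that such documentation could
describe. Depending on the API used, the exception may arrive wrapped in a
CacheException; the sketch assumes the direct form mentioned above:

{code:java}
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cluster.ClusterTopologyException;

public class TopologyRetry {
    /** Retries a cache get a few times if the topology changes underneath it. */
    public static <K, V> V getWithRetry(IgniteCache<K, V> cache, K key, int attempts) {
        for (int i = 0; i < attempts - 1; i++) {
            try {
                return cache.get(key);
            }
            catch (ClusterTopologyException e) {
                // Primary node left; back off briefly and retry on the new topology.
                try {
                    Thread.sleep(200);
                }
                catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
            }
        }
        return cache.get(key); // last attempt: let the exception propagate to the caller
    }
}
{code}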



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10627) Support custom preferences like date format and other similar features

2018-12-10 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-10627:
--

 Summary: Support custom preferences like date format and other 
similar features
 Key: IGNITE-10627
 URL: https://issues.apache.org/jira/browse/IGNITE-10627
 Project: Ignite
  Issue Type: Improvement
Reporter: Evgenii Zhuravlev






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10626) Save authenticated Webconsole session for more than one page refresh

2018-12-10 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-10626:
--

 Summary: Save authenticated Webconsole session for more than one 
page refresh
 Key: IGNITE-10626
 URL: https://issues.apache.org/jira/browse/IGNITE-10626
 Project: Ignite
  Issue Type: Improvement
Reporter: Evgenii Zhuravlev


Currently, the user has to enter their login and password after each page refresh.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10414) IF NOT EXISTS in CREATE TABLE doesn't work

2018-11-26 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-10414:
--

 Summary: IF NOT EXISTS in CREATE TABLE doesn't work
 Key: IGNITE-10414
 URL: https://issues.apache.org/jira/browse/IGNITE-10414
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.4
Reporter: Evgenii Zhuravlev


Reproducer:
 
{code:java}
   Ignite ignite = Ignition.start();

ignite.getOrCreateCache("test").query(new SqlFieldsQuery("CREATE TABLE 
IF NOT EXISTS City(id LONG PRIMARY KEY,"
+ " name VARCHAR) WITH \"template=replicated\""));
ignite.getOrCreateCache("test").query(new SqlFieldsQuery("CREATE TABLE 
IF NOT EXISTS City(id LONG PRIMARY KEY,"
+ " name VARCHAR) WITH \"template=replicated\""));
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10398) CacheMetrics always return 0 for local cache

2018-11-23 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-10398:
--

 Summary: CacheMetrics always return 0 for local cache
 Key: IGNITE-10398
 URL: https://issues.apache.org/jira/browse/IGNITE-10398
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


However, it shows the right offHeapEntriesCnt.
Short code snippet:

{code:java}

IgniteConfiguration igniteConfig = new IgniteConfiguration();
CacheConfiguration cacheConfig = new CacheConfiguration("testCache");
cacheConfig.setStatisticsEnabled(true);
igniteConfig.setCacheConfiguration(cacheConfig);
cacheConfig.setCacheMode(CacheMode.LOCAL);

try (Ignite ignite = Ignition.start(igniteConfig)) {
IgniteCache cache = ignite.getOrCreateCache(cacheConfig.getName());
cache.put("key", "val");
cache.put("key2", "val2");
cache.remove("key2");

System.out.println(cache.localMetrics());
}
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9657) socket leak in TcpDiscoverySpi

2018-09-20 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-9657:
-

 Summary: socket leak in TcpDiscoverySpi
 Key: IGNITE-9657
 URL: https://issues.apache.org/jira/browse/IGNITE-9657
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.6
Reporter: Evgenii Zhuravlev
Assignee: Evgenii Zhuravlev
 Fix For: 2.7


When a host from the ipFinder can't be resolved, the socket is not closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9486) JobStealing doesn't work with affinityRun

2018-09-06 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-9486:
-

 Summary: JobStealing doesn't work with affinityRun
 Key: IGNITE-9486
 URL: https://issues.apache.org/jira/browse/IGNITE-9486
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.6
Reporter: Evgenii Zhuravlev


Job stealing moves the job to a node that doesn't have the needed partition, 
which leads to the exception:

{code:java}
[2018-09-06 18:03:47,545][ERROR][pub-#61][GridJobWorker] Failed to lock 
partitions [jobId=a29f86fa561-3e2c91e1-1f47-401c-80a2-ea2b452a3cb5, 
ses=GridJobSessionImpl [ses=GridTaskSessionImpl 
[taskName=o.a.i.examples.ExampleNodeStartup3$1, dep=GridDeployment 
[ts=1536246214786, depMode=SHARED, 
clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2, 
clsLdrId=606b86fa561-cffc6951-7eef-4a58-82e1-69511458d650, userVer=0, loc=true, 
sampleClsName=o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap,
 pendingUndeploy=false, undeployed=false, usage=1], 
taskClsName=o.a.i.examples.ExampleNodeStartup3$1, 
sesId=929f86fa561-3e2c91e1-1f47-401c-80a2-ea2b452a3cb5, 
startTime=1536246223300, endTime=9223372036854775807, 
taskNodeId=3e2c91e1-1f47-401c-80a2-ea2b452a3cb5, 
clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2, closed=false, cpSpi=null, 
failSpi=null, loadSpi=null, usage=1, fullSup=false, internal=false, 
topPred=null, subjId=3e2c91e1-1f47-401c-80a2-ea2b452a3cb5, mapFut=IgniteFuture 
[orig=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, 
hash=30355072]], execName=null], 
jobId=a29f86fa561-3e2c91e1-1f47-401c-80a2-ea2b452a3cb5]]
class org.apache.ignite.IgniteException: Failed partition reservation. 
Partition is not primary on the node. [partition=1, cacheName=test, 
nodeId=cffc6951-7eef-4a58-82e1-69511458d650, topology=AffinityTopologyVersion 
[topVer=3, minorTopVer=0]]
at 
org.apache.ignite.internal.processors.job.GridJobProcessor$PartitionsReservation.reserve(GridJobProcessor.java:1596)
at 
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:510)
at 
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}
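A sketch of the call pattern that hits this (the cache name matches the trace above, everything else is assumed):

{code:java}
// JobStealingCollisionSpi is configured on the cluster nodes.
int key = 1;

ignite.compute().affinityRun("test", key, () -> {
    // The job assumes the partition of 'key' is owned locally. After stealing,
    // it may land on a node that does not own that partition, and the
    // partition reservation fails as in the trace above.
});
{code}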




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: QueryDetailMetrics for cache-less SQL queries

2018-08-27 Thread Evgenii Zhuravlev
Vladimir,

Then it looks like we send wrong EVT_CACHE_QUERY_EXECUTED events too, since
we send them for the same cache as in your example.

Evgenii



пн, 27 авг. 2018 г. в 13:20, Vladimir Ozerov :

> Folks,
>
> Can we leave query details metrics API unchanged, and simply fix how we
> record them? Historically it worked as follows:
> 1) IgniteCache.query() is invoked
> 2) Query is executed, possibly on other caches!
> 3) Metric is incremented for cache from p.1
>
> I.e. our query detail metrics were broken since their first days. I propose
> to fix them as follows:
> 1) Any SQL query() method is invoked (see GridQueryProcessor)
> 2) We collect all cache IDs during parsing (this already happens)
> 3) Record event for all cache IDs through
> GridCacheQueryManager#collectMetrics
>
> This appears to be as the simplest and consistent solution to the problem.
>
> On Tue, Aug 21, 2018 at 1:09 PM Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com>
> wrote:
>
> > Hi Alex,
> >
> > I agree that we can't move all metrics to ignite.metrics() and SPI
> metrics
> > is a good example here. I propose to move at least dataRegionMetrics,
> > dataStorageMetrics to it, since the main API class is not a good place
> for
> > such things. If we will decide to choose option 1 from my first message,
> > then, I will also add QueryDetailMetrics for cacheless queries to this
> new
> > class.
> >
> > пн, 20 авг. 2018 г. в 23:28, Alex Plehanov :
> >
> > > Hi Evgeny,
> > >
> > > Do you propose to move into IgniteMetrics absolutely all Ignite metrics
> > or
> > > just dataRegionMetrics, dataStorageMetrics and queryDetailMetrics? I
> > think
> > > you can't move all metrics into one place. Pluggable components and
> > > different SPI implementations may have their own metric sets, and
> perhaps
> > > it's not such a good idea to try to fit them in one common fixed
> > interface.
> > >
> > > 2018-08-20 18:14 GMT+03:00 Evgenii Zhuravlev  >:
> > >
> > > > As for now, metrics are accumulated for cache only when the query is
> > run
> > > > directly over this cache, for example, using ignite.cache("Some
> > > > cache").sqlFieldsQuery("select ... from .."). When a query is started
> > > using
> > > > other APIs, it doesn't detect cache, to which this table belongs and
> > > > doesn't save any metrics.
> > > >
> > > >
> > > >
> > > > 2018-08-17 16:44 GMT+03:00 Vladimir Ozerov :
> > > >
> > > > > Query is not executed on specific cache. It is executed on many
> > caches.
> > > > >
> > > > > On Fri, Aug 17, 2018 at 6:10 AM Dmitriy Setrakyan <
> > > dsetrak...@apache.org
> > > > >
> > > > > wrote:
> > > > >
> > > > > > But internally the SQL query still runs on some cache, no? What
> > > happens
> > > > > to
> > > > > > the metrics accumulated on that cache?
> > > > > >
> > > > > > D.
> > > > > >
> > > > > > On Thu, Aug 16, 2018, 18:51 Alexey Kuznetsov <
> > akuznet...@apache.org>
> > > > > > wrote:
> > > > > >
> > > > > > > Dima,
> > > > > > >
> > > > > > > "cache-less" means that SQL executed directly on SQL engine.
> > > > > > >
> > > > > > > In previous version of Ignite we execute queries via cache:
> > > > > > >
> > > > > > > ignite.cache("Some cache").sqlFieldsQuery("select ... from ..")
> > > > > > >
> > > > > > > In current Ignite we can execute query directly without using
> > cache
> > > > as
> > > > > > > "gateway".
> > > > > > >
> > > > > > > And if we execute query directly, metrics not update.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Aug 17, 2018 at 4:21 AM Dmitriy Setrakyan <
> > > > > dsetrak...@apache.org
> > > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Evgeny, what is a "cache-less" SQL query?
> > > > > > > >

[jira] [Created] (IGNITE-9383) Add to the documentation that Ignite cluster requires that each nodes have direct connection to any nodes of grid.

2018-08-27 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-9383:
-

 Summary: Add to the documentation that Ignite cluster requires 
that each nodes have direct connection to any nodes of grid. 
 Key: IGNITE-9383
 URL: https://issues.apache.org/jira/browse/IGNITE-9383
 Project: Ignite
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.6
Reporter: Evgenii Zhuravlev






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: QueryDetailMetrics for cache-less SQL queries

2018-08-21 Thread Evgenii Zhuravlev
Hi Alex,

I agree that we can't move all metrics to ignite.metrics(), and SPI metrics
are a good example here. I propose to move at least dataRegionMetrics and
dataStorageMetrics to it, since the main API class is not a good place for
such things. If we decide to choose option 1 from my first message, I will
also add QueryDetailMetrics for cache-less queries to this new class.

пн, 20 авг. 2018 г. в 23:28, Alex Plehanov :

> Hi Evgeny,
>
> Do you propose to move into IgniteMetrics absolutely all Ignite metrics or
> just dataRegionMetrics, dataStorageMetrics and queryDetailMetrics? I think
> you can't move all metrics into one place. Pluggable components and
> different SPI implementations may have their own metric sets, and perhaps
> it's not such a good idea to try to fit them in one common fixed interface.
>
> 2018-08-20 18:14 GMT+03:00 Evgenii Zhuravlev :
>
> > As for now, metrics are accumulated for cache only when the query is run
> > directly over this cache, for example, using ignite.cache("Some
> > cache").sqlFieldsQuery("select ... from .."). When a query is started
> using
> > other APIs, it doesn't detect cache, to which this table belongs and
> > doesn't save any metrics.
> >
> >
> >
> > 2018-08-17 16:44 GMT+03:00 Vladimir Ozerov :
> >
> > > Query is not executed on specific cache. It is executed on many caches.
> > >
> > > On Fri, Aug 17, 2018 at 6:10 AM Dmitriy Setrakyan <
> dsetrak...@apache.org
> > >
> > > wrote:
> > >
> > > > But internally the SQL query still runs on some cache, no? What
> happens
> > > to
> > > > the metrics accumulated on that cache?
> > > >
> > > > D.
> > > >
> > > > On Thu, Aug 16, 2018, 18:51 Alexey Kuznetsov 
> > > > wrote:
> > > >
> > > > > Dima,
> > > > >
> > > > > "cache-less" means that SQL executed directly on SQL engine.
> > > > >
> > > > > In previous version of Ignite we execute queries via cache:
> > > > >
> > > > > ignite.cache("Some cache").sqlFieldsQuery("select ... from ..")
> > > > >
> > > > > In current Ignite we can execute query directly without using cache
> > as
> > > > > "gateway".
> > > > >
> > > > > And if we execute query directly, metrics not update.
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Fri, Aug 17, 2018 at 4:21 AM Dmitriy Setrakyan <
> > > dsetrak...@apache.org
> > > > >
> > > > > wrote:
> > > > >
> > > > > > Evgeny, what is a "cache-less" SQL query?
> > > > > >
> > > > > > D.
> > > > > >
> > > > > > On Thu, Aug 16, 2018 at 6:36 AM, Evgenii Zhuravlev <
> > > > > > e.zhuravlev...@gmail.com
> > > > > > > wrote:
> > > > > >
> > > > > > > Hi Igniters,
> > > > > > >
> > > > > > > I've started to work on adding QueryDetailMetrics for
> cache-less
> > > SQL
> > > > > > > queries(issue
> https://issues.apache.org/jira/browse/IGNITE-6677)
> > > and
> > > > > > found
> > > > > > > that it's required to change API. I don't think that adding
> > methods
> > > > > like
> > > > > > > queryDetailMetrics, resetQueryDetailMetrics, as in IgniteCache
> to
> > > > > Ignite
> > > > > > > class is a good idea. So, I see 2 possible solutions here:
> > > > > > >
> > > > > > > 1. Create IgniteMetrics(ignite.metrics()) and move metrics from
> > > > > > > Ignite(like dataRegionMetrics and dataStorageMetrics) and add a
> > new
> > > > > > > metric "queryDetailMetrics" to it. Of course, old methods will
> be
> > > > > > > deprecated.
> > > > > > >
> > > > > > > 2. Finally create Ignite.sql() API, which was already discussed
> > > here:
> > > > > > > http://apache-ignite-developers.2346864.n4.nabble.
> > > > > > > com/Rethink-native-SQL-API-in-Apache-Ignite-2-0-td14335.html
> > > > > > > and place "queryDetailMetrics" metric there. Here is the ticket
> > for
> > > > > this
> > > > > > > change: https://issues.apache.org/jira/browse/IGNITE-4701
> > > > > > >
> > > > > > > Personally, I think that the second solution looks better in
> this
> > > > case,
> > > > > > > however, moving dataRegionMetrics and dataStorageMetrics to
> > > > > > > ignite.matrics() is still a good idea - IMO, Ignite class is
> not
> > > the
> > > > > > right
> > > > > > > place for them - we shouldn't change our main API class so
> often.
> > > > > > >
> > > > > > > What do you think?
> > > > > > >
> > > > > > > Thank you,
> > > > > > > Evgenii
> > > > > > >
> > > > > >
> > > > > > --
> > > > > > Alexey Kuznetsov
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: QueryDetailMetrics for cache-less SQL queries

2018-08-20 Thread Evgenii Zhuravlev
As of now, metrics are accumulated for a cache only when the query is run
directly over this cache, for example, using
ignite.cache("Some cache").sqlFieldsQuery("select ... from .."). When a query
is started using other APIs, Ignite doesn't detect the cache to which the
table belongs and doesn't save any metrics.
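To illustrate the difference (a rough sketch; the table and cache names are made
up, the thin JDBC driver is just one example of a cache-less entry point, and
error handling is omitted):

// Query started through the cache API: detail metrics are recorded for "Person".
ignite.cache("Person").query(new SqlFieldsQuery("select name from Person")).getAll();

// Cache-less query through the thin JDBC driver: no cache is detected,
// so no detail metrics are recorded today.
try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/")) {
    conn.createStatement().executeQuery("select name from Person");
}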



2018-08-17 16:44 GMT+03:00 Vladimir Ozerov :

> Query is not executed on specific cache. It is executed on many caches.
>
> On Fri, Aug 17, 2018 at 6:10 AM Dmitriy Setrakyan 
> wrote:
>
> > But internally the SQL query still runs on some cache, no? What happens
> to
> > the metrics accumulated on that cache?
> >
> > D.
> >
> > On Thu, Aug 16, 2018, 18:51 Alexey Kuznetsov 
> > wrote:
> >
> > > Dima,
> > >
> > > "cache-less" means that SQL executed directly on SQL engine.
> > >
> > > In previous version of Ignite we execute queries via cache:
> > >
> > > ignite.cache("Some cache").sqlFieldsQuery("select ... from ..")
> > >
> > > In current Ignite we can execute query directly without using cache as
> > > "gateway".
> > >
> > > And if we execute query directly, metrics not update.
> > >
> > >
> > >
> > >
> > > On Fri, Aug 17, 2018 at 4:21 AM Dmitriy Setrakyan <
> dsetrak...@apache.org
> > >
> > > wrote:
> > >
> > > > Evgeny, what is a "cache-less" SQL query?
> > > >
> > > > D.
> > > >
> > > > On Thu, Aug 16, 2018 at 6:36 AM, Evgenii Zhuravlev <
> > > > e.zhuravlev...@gmail.com
> > > > > wrote:
> > > >
> > > > > Hi Igniters,
> > > > >
> > > > > I've started to work on adding QueryDetailMetrics for cache-less
> SQL
> > > > > queries(issue https://issues.apache.org/jira/browse/IGNITE-6677)
> and
> > > > found
> > > > > that it's required to change API. I don't think that adding methods
> > > like
> > > > > queryDetailMetrics, resetQueryDetailMetrics, as in IgniteCache to
> > > Ignite
> > > > > class is a good idea. So, I see 2 possible solutions here:
> > > > >
> > > > > 1. Create IgniteMetrics(ignite.metrics()) and move metrics from
> > > > > Ignite(like dataRegionMetrics and dataStorageMetrics) and add a new
> > > > > metric "queryDetailMetrics" to it. Of course, old methods will be
> > > > > deprecated.
> > > > >
> > > > > 2. Finally create Ignite.sql() API, which was already discussed
> here:
> > > > > http://apache-ignite-developers.2346864.n4.nabble.
> > > > > com/Rethink-native-SQL-API-in-Apache-Ignite-2-0-td14335.html
> > > > > and place "queryDetailMetrics" metric there. Here is the ticket for
> > > this
> > > > > change: https://issues.apache.org/jira/browse/IGNITE-4701
> > > > >
> > > > > Personally, I think that the second solution looks better in this
> > case,
> > > > > however, moving dataRegionMetrics and dataStorageMetrics to
> > > > > ignite.matrics() is still a good idea - IMO, Ignite class is not
> the
> > > > right
> > > > > place for them - we shouldn't change our main API class so often.
> > > > >
> > > > > What do you think?
> > > > >
> > > > > Thank you,
> > > > > Evgenii
> > > > >
> > > >
> > > > --
> > > > Alexey Kuznetsov
> > > >
> > > >
> > >
> >
>


QueryDetailMetrics for cache-less SQL queries

2018-08-16 Thread Evgenii Zhuravlev
Hi Igniters,

I've started to work on adding QueryDetailMetrics for cache-less SQL
queries (issue https://issues.apache.org/jira/browse/IGNITE-6677) and found
that it requires an API change. I don't think that adding methods like
queryDetailMetrics and resetQueryDetailMetrics, as in IgniteCache, to the
Ignite class is a good idea. So, I see 2 possible solutions here:

1. Create IgniteMetrics(ignite.metrics()) and move metrics from
Ignite(like dataRegionMetrics and dataStorageMetrics) and add a new
metric "queryDetailMetrics" to it. Of course, old methods will be
deprecated.

2. Finally create Ignite.sql() API, which was already discussed here:
http://apache-ignite-developers.2346864.n4.nabble.com/Rethink-native-SQL-API-in-Apache-Ignite-2-0-td14335.html
and place "queryDetailMetrics" metric there. Here is the ticket for this
change: https://issues.apache.org/jira/browse/IGNITE-4701

Personally, I think that the second solution looks better in this case,
however, moving dataRegionMetrics and dataStorageMetrics to
ignite.metrics() is still a good idea - IMO, the Ignite class is not the right
place for them - we shouldn't change our main API class so often.

What do you think?

Thank you,
Evgenii


[jira] [Created] (IGNITE-9199) Ignite doesn't save history for SQL queries after setting queryDetailMetricsSize

2018-08-06 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-9199:
-

 Summary: Ignite doesn't save history for SQL queries after setting 
queryDetailMetricsSize
 Key: IGNITE-9199
 URL: https://issues.apache.org/jira/browse/IGNITE-9199
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


Steps to reproduce:
1. Start a cluster with persistence and without queryDetailMetricsSize.
2. Restart the cluster with queryDetailMetricsSize configured.

After that, Ignite saves history for SCAN queries only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9153) Accessing cache from transaction on client node, where it was not accessed yet throws an exception

2018-08-01 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-9153:
-

 Summary: Accessing cache from transaction on client node, where it 
was not accessed yet throws an exception
 Key: IGNITE-9153
 URL: https://issues.apache.org/jira/browse/IGNITE-9153
 Project: Ignite
  Issue Type: Improvement
Reporter: Evgenii Zhuravlev
 Attachments: ClientCacheTransactionsTest.java

Exception message: Cannot start/stop cache within lock or transaction. 

Reproducer is attached: ClientCacheTransactionsTest.java
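The pattern is roughly this (a sketch; the attached reproducer is the authoritative version):

{code:java}
// On a client node, cache "newCache" has not been accessed yet.
try (Transaction tx = ignite.transactions().txStart()) {
    // The first access to the cache happens inside the transaction and fails
    // with "Cannot start/stop cache within lock or transaction."
    ignite.cache("newCache").put(1, "value");

    tx.commit();
}
{code}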



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9052) Add possibility to configure Ignite instance name for springData repository in IgniteRepositoryFactoryBean

2018-07-23 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-9052:
-

 Summary: Add possibility to configure Ignite instance name for 
springData repository in IgniteRepositoryFactoryBean
 Key: IGNITE-9052
 URL: https://issues.apache.org/jira/browse/IGNITE-9052
 Project: Ignite
  Issue Type: Improvement
Reporter: Evgenii Zhuravlev


This configuration can be used to access 2 different clusters from 2 
repositories in one spring context



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9019) Ignite prints redundant warnings on node start

2018-07-17 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-9019:
-

 Summary: Ignite prints redundant warnings on node start
 Key: IGNITE-9019
 URL: https://issues.apache.org/jira/browse/IGNITE-9019
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


It's scary to see so many warnings when you start a node with the default 
configuration:

[WARN ][main][TcpCommunicationSpi] Message queue limit is set to 0 which may 
lead to potential OOMEs when running cache operations in FULL_ASYNC or 
PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[WARN ][main][NoopCheckpointSpi] Checkpoints are disabled (to enable configure 
any GridCheckpointSpi implementation)
[WARN ][main][GridCollisionManager] Collision resolution is disabled (all jobs 
will be activated upon arrival).
[WARN ][main][IgniteKernal] Peer class loading is enabled (disable it in 
production for performance and deployment consistency reasons)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9016) Byte arrays are not working as cache keys

2018-07-17 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-9016:
-

 Summary: Byte arrays are not working as cache keys
 Key: IGNITE-9016
 URL: https://issues.apache.org/jira/browse/IGNITE-9016
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


It's not possible to retrieve previously inserted data with a byte[] key:

{code:java}
IgniteCache cache = ignite.getOrCreateCache("test");

byte[] a = "test".getBytes();
cache.put(a, a);

cache.get(a); //returns null
{code}
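Until this is fixed, a possible workaround (not from the ticket, just a sketch) is to wrap the array into a key class with content-based equals/hashCode:

{code:java}
class BytesKey {
    private final byte[] bytes;

    BytesKey(byte[] bytes) {
        this.bytes = bytes;
    }

    @Override public boolean equals(Object o) {
        return o instanceof BytesKey && java.util.Arrays.equals(bytes, ((BytesKey)o).bytes);
    }

    @Override public int hashCode() {
        return java.util.Arrays.hashCode(bytes);
    }
}

IgniteCache<BytesKey, byte[]> cache = ignite.getOrCreateCache("test");

byte[] a = "test".getBytes();
cache.put(new BytesKey(a), a);

cache.get(new BytesKey("test".getBytes())); // expected to return the value
{code}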




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Add cluster (de)activation events IGNITE-8376

2018-07-06 Thread Evgenii Zhuravlev
I've linked them as duplicates; however, one ticket suggests adding
Lifecycle events, while the other is about adding our simple events
from EventType.

Evgenii

2018-07-06 1:10 GMT+03:00 Dmitriy Setrakyan :

> On Thu, Jul 5, 2018 at 1:55 AM, Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com>
> wrote:
>
> > Guys,
> >
> > Do we really need events for activation/deactivation? We already have a
> > ticket for implementation lifecycle events for it:
> > https://issues.apache.org/jira/browse/IGNITE-5427, won't it be enough?
> >
>
> Hm... I think these two tickets are duplicates of one another, no?
>


Re: Add cluster (de)activation events IGNITE-8376

2018-07-05 Thread Evgenii Zhuravlev
Guys,

Do we really need events for activation/deactivation? We already have a
ticket for implementing lifecycle events for it:
https://issues.apache.org/jira/browse/IGNITE-5427, won't it be enough?

Evgenii

2018-07-03 16:06 GMT+03:00 Ken Cheng :

> Hi dsetrakyan,
>
> I checked the source again and found there is a customized Event sent out
> with below code
>
> org.apache.ignite.internal.processors.cluster.GridClusterSta
> teProcessor#changeGlobalState0
>
> I am not sure what you are referencing is this part? or still we need a
> dedicated event for cluster status changes?
>
> Thank you very much,
> kcmvp
>
> ==
> ChangeGlobalStateMessage msg = new ChangeGlobalStateMessage(start
> edFut.requestId,
> ctx.localNodeId(),
> storedCfgs,
> activate,
> blt,
> forceChangeBaselineTopology,
> System.currentTimeMillis());
>
> try {
> if (log.isInfoEnabled())
> U.log(log, "Sending " + prettyStr(activate) + " request
> with BaselineTopology " + blt);
>
> ctx.discovery().sendCustomEvent(msg);
> 
> Thanks,
> Ken Cheng
>
>
> On Tue, Jul 3, 2018 at 7:41 AM Dmitriy Setrakyan 
> wrote:
>
> > Do we really not have events in Ignite for cluster activation?
> >
> > Alexey Goncharuk, can you please comment?
> >
> > D.
> >
> > On Mon, Jul 2, 2018 at 1:34 AM, kcheng.mvp  wrote:
> >
> > > Dear igniters,I am going to pick up
> > > https://issues.apache.org/jira/browse/IGNITE-8376, and did some
> research
> > > based on the comments.based on my understanding, if we want to know
> this
> > > action is triggered by which way(rest, mbean, auto or visocmd)  then we
> > > need
> > > to change the core method's signature.I am not sure my understanding is
> > > correct or not. Can anybody help to clarify this? Thank you very
> much.By
> > > the
> > > way, I commented this in jira as well.Thanks youkcvmp
> > >
> > >
> > >
> > > --
> > > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
> >
>


[jira] [Created] (IGNITE-8934) LongJVMPauseDetector prints error on thread interruption when node stopping

2018-07-05 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-8934:
-

 Summary: LongJVMPauseDetector prints error on thread interruption 
when node stopping
 Key: IGNITE-8934
 URL: https://issues.apache.org/jira/browse/IGNITE-8934
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.5
Reporter: Evgenii Zhuravlev
Assignee: Evgenii Zhuravlev






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8899) IgniteJdbcDriver directly create JavaLogger in static context

2018-06-29 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-8899:
-

 Summary: IgniteJdbcDriver directly create JavaLogger in static 
context
 Key: IGNITE-8899
 URL: https://issues.apache.org/jira/browse/IGNITE-8899
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


This means it always prints an error in the logs if the JUL logging config file 
doesn't exist. I suggest using the same approach as in the thin driver:
replace
new JavaLogger()
with
Logger.getLogger(IgniteJdbcDriver.class.getName())
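In other words, roughly (the field name and modifiers are illustrative, not the actual ones):

{code:java}
// Before: always tries to load config/java.util.logging.properties and
// prints a SEVERE message if it is missing.
private static final IgniteLogger log = new JavaLogger();

// After: plain JUL logger, the same approach as in the thin driver.
private static final Logger log = Logger.getLogger(IgniteJdbcDriver.class.getName());
{code}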



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8810) Create failoverSafe for ReentrantLock

2018-06-15 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-8810:
-

 Summary: Create failoverSafe for ReentrantLock
 Key: IGNITE-8810
 URL: https://issues.apache.org/jira/browse/IGNITE-8810
 Project: Ignite
  Issue Type: Improvement
Reporter: Evgenii Zhuravlev


Currently, the reentrant lock has the "failoverSafe" flag, but there is no 
implementation behind it.
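For reference, the flag is passed at creation time (a sketch; the lock name is arbitrary):

{code:java}
// failoverSafe = true is accepted here, but currently has no effect.
IgniteLock lock = ignite.reentrantLock("myLock", /* failoverSafe */ true, /* fair */ false, /* create */ true);

lock.lock();

try {
    // critical section
}
finally {
    lock.unlock();
}
{code}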



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8433) Upgrade zookeeper version

2018-05-03 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-8433:
-

 Summary: Upgrade zookeeper version
 Key: IGNITE-8433
 URL: https://issues.apache.org/jira/browse/IGNITE-8433
 Project: Ignite
  Issue Type: Improvement
Reporter: Evgenii Zhuravlev


Apache Ignite 2.4.0 internally uses ZooKeeper 3.4.6, and this version of ZooKeeper 
uses Netty 3.7.0.Final. That version of Netty has a reported security vulnerability, 
CVE-2014-3488, which was addressed in later releases, and ZooKeeper upgraded Netty in 
later versions such as 3.4.10. But the latest version of Ignite, 2.4.0, is still 
using ZooKeeper 3.4.6.

 

http://apache-ignite-users.70518.x6.nabble.com/Ignite-latest-release-using-zookeeper-3-4-6-td21382.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8426) Some classes create JavaLogger directly, which leads to a SEVERE message in logs if the JUL config file is missing

2018-05-01 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-8426:
-

 Summary: Some classes create JavaLogger directly, which leads to a 
SEVERE message in logs if the JUL config file is missing
 Key: IGNITE-8426
 URL: https://issues.apache.org/jira/browse/IGNITE-8426
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


Here is the error message:

SEVERE: Failed to resolve default logging config file: 
config/java.util.logging.properties

For example, such problem code is in the LongJVMPauseDetector and 
IgniteJdbcDriver classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8230) SQL: CREATE TABLE doesn't take backups from template

2018-04-11 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-8230:
-

 Summary: SQL: CREATE TABLE doesn't take backups from template
 Key: IGNITE-8230
 URL: https://issues.apache.org/jira/browse/IGNITE-8230
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.4
Reporter: Evgenii Zhuravlev
 Fix For: 2.5






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-7928) Exception is not propagated to the C# client and the app hangs

2018-03-13 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-7928:
-

 Summary: Exception is not propagated to the C# client and the app 
hangs
 Key: IGNITE-7928
 URL: https://issues.apache.org/jira/browse/IGNITE-7928
 Project: Ignite
  Issue Type: Bug
  Components: platforms
Affects Versions: 2.3
Reporter: Evgenii Zhuravlev


An exception like https://issues.apache.org/jira/browse/IGNITE-1903 is not 
propagated to the C# client:
The issue happened during a JNI call, which is why .NET hung.
The marshaller is unable to unmarshal the CacheStoreFactory class (it is absent on 
the client node).
 
Stack trace:
 


{code:java}
class org.apache.ignite.IgniteException: Platform 
error:System.ArgumentNullException: Value cannot be null. Parameter name: key   
 at System.ThrowHelper.ThrowArgumentNullException(ExceptionArgument argument)   
 at System.Collections.Generic.Dictionary`2.FindEntry(TKey key)    at 
System.Collections.Generic.Dictionary`2.TryGetValue(TKey key, TValue& value)    
at Apache.Ignite.Core.Impl.Binary.Marshaller.GetDescriptor(Type type)    at 
Apache.Ignite.Core.Impl.Binary.BinaryReader.ReadFullObject[T](Int32 pos, Type 
typeOverride)    at 
Apache.Ignite.Core.Impl.Binary.BinaryReader.TryDeserialize[T](T& res, Type 
typeOverride)    at 
Apache.Ignite.Core.Impl.Binary.BinaryReader.Deserialize[T](Type typeOverride)   
 at Apache.Ignite.Core.Impl.Binary.BinaryReader.ReadBinaryObject[T](Boolean 
doDetach)    at 
Apache.Ignite.Core.Impl.Binary.BinaryReader.TryDeserialize[T](T& res, Type 
typeOverride)    at 
Apache.Ignite.Core.Impl.Binary.BinaryReader.Deserialize[T](Type typeOverride)   
 at Apache.Ignite.Core.Impl.Cache.Store.CacheStore.CreateInstance(Int64 memPtr, 
HandleRegistry registry)    at 
Apache.Ignite.Core.Impl.Unmanaged.UnmanagedCallbacks.CacheStoreCreate(Int64 
memPtr)    at 
Apache.Ignite.Core.Impl.Unmanaged.UnmanagedCallbacks.InLongOutLong(Void* 
target, Int32 type, Int64 val)         at 
org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.loggerLog(PlatformProcessorImpl.java:373)
         at 
org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutLong(PlatformProcessorImpl.java:423)
         at 
org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutLong(PlatformProcessorImpl.java:434)
         at 
org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutLong(PlatformTargetProxyImpl.java:67)
         at 
org.apache.ignite.internal.processors.platform.callback.PlatformCallbackUtils.inLongOutLong(Native
 Method)         at 
org.apache.ignite.internal.processors.platform.callback.PlatformCallbackGateway.cacheStoreCreate(PlatformCallbackGateway.java:65)
         at 
org.apache.ignite.internal.processors.platform.dotnet.PlatformDotNetCacheStore.initialize(PlatformDotNetCacheStore.java:403)
         at 
org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.registerStore0(PlatformProcessorImpl.java:650)
         at 
org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.registerStore(PlatformProcessorImpl.java:293)
         at 
org.apache.ignite.internal.processors.cache.store.CacheOsStoreManager.start0(CacheOsStoreManager.java:60)
         at 
org.apache.ignite.internal.processors.cache.GridCacheManagerAdapter.start(GridCacheManagerAdapter.java:50)
         at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCache(GridCacheProcessor.java:1097)
         at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1826)
         at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCacheStartRequests(CacheAffinitySharedManager.java:428)
         at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCachesChanges(CacheAffinitySharedManager.java:611)
         at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCustomExchangeTask(GridCacheProcessor.java:338)
         at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.processCustomTask(GridCachePartitionExchangeManager.java:2142)
         at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2231)
         at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)      
   at java.lang.Thread.run(Thread.java:748){code}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Ignite contributors page

2018-03-06 Thread Evgenii Zhuravlev
Hi Dmitriy, thank you for noticing this! Unfortunately, I'm not mentioned
in this list either.

Evgenii

2018-03-06 18:40 GMT+03:00 Andrey Kuznetsov :

> +1
>
> --
> Best regards,
>   Andrey Kuznetsov.
>


[jira] [Created] (IGNITE-7614) Documentation: How to access data from key-value that was inserted from SQL

2018-02-02 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-7614:
-

 Summary: Documentation: How to access data from key-value that was 
inserted from SQL
 Key: IGNITE-7614
 URL: https://issues.apache.org/jira/browse/IGNITE-7614
 Project: Ignite
  Issue Type: Improvement
  Components: documentation
Reporter: Evgenii Zhuravlev
Assignee: Denis Magda
 Fix For: 2.5






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-7555) Create documentation for using Ignite Persistence on Kubernetes

2018-01-29 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-7555:
-

 Summary: Create documentation for using Ignite Persistence on 
Kubernetes
 Key: IGNITE-7555
 URL: https://issues.apache.org/jira/browse/IGNITE-7555
 Project: Ignite
  Issue Type: Task
  Components: documentation
Reporter: Evgenii Zhuravlev
 Fix For: 2.5






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-7512) Variable updated should be checked for null before invocation of ctx.validateKeyAndValue(entry.key(), updated) in GridDhtAtomicCache.updateWithBatch

2018-01-24 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-7512:
-

 Summary: Variable updated should be checked for null before 
invocation of ctx.validateKeyAndValue(entry.key(), updated) in 
GridDhtAtomicCache.updateWithBatch
 Key: IGNITE-7512
 URL: https://issues.apache.org/jira/browse/IGNITE-7512
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


Otherwise it could lead to an NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-7361) HA: NPE in HadoopJobTracker.jobMetaCache

2018-01-09 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-7361:
-

 Summary: HA: NPE in HadoopJobTracker.jobMetaCache
 Key: IGNITE-7361
 URL: https://issues.apache.org/jira/browse/IGNITE-7361
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.hadoop.jobtracker.HadoopJobTracker.jobMetaCache(HadoopJobTracker.java:206)
 
at 
org.apache.ignite.internal.processors.hadoop.jobtracker.HadoopJobTracker.finishFuture(HadoopJobTracker.java:478)
 
at 
org.apache.ignite.internal.processors.hadoop.HadoopProcessor.finishFuture(HadoopProcessor.java:192)
 
at 
org.apache.ignite.internal.processors.hadoop.HadoopImpl.finishFuture(HadoopImpl.java:111)
 
at 
org.apache.ignite.internal.processors.hadoop.proto.HadoopProtocolJobStatusTask.run(HadoopProtocolJobStatusTask.java:59)
at 
org.apache.ignite.internal.processors.hadoop.proto.HadoopProtocolJobStatusTask.run(HadoopProtocolJobStatusTask.java:33)
 
at 
org.apache.ignite.internal.processors.hadoop.proto.HadoopProtocolTaskAdapter$Job.execute(HadoopProtocolTaskAdapter.java:101)
at 
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:566)
 
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6629)
 
at 
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:560)
 
at 
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
 
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at 
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1181)
at 
org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1908)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1562)
 
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1190)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
 
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1097)
 [
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) 
[?:1.7.0_151]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) 
[?:1.7.0_151]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7355) peerClassLoading doesn't work with DataStreamer Transformer

2018-01-05 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-7355:
-

 Summary: peerClassLoading doesn't work with DataStreamer Transformer
 Key: IGNITE-7355
 URL: https://issues.apache.org/jira/browse/IGNITE-7355
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.3
Reporter: Evgenii Zhuravlev


Example:
{code:java}
try (IgniteDataStreamer streamer = ignite.dataStreamer(CacheName.CACHE)) {
    streamer.receiver(StreamTransformer.from(new MyCacheEntryProcessor()));
    streamer.addData("key", "value");
}

private static class MyCacheEntryProcessor implements CacheEntryProcessor {
    @Override
    public Object process(MutableEntry mutableEntry, Object... objects) throws EntryProcessorException {
        return null;
    }
}
{code}

workaround: use streamer.deployClass(MyCacheEntryProcessor.class);
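With the workaround applied the snippet becomes (sketch):

{code:java}
try (IgniteDataStreamer streamer = ignite.dataStreamer(CacheName.CACHE)) {
    // Explicitly register the class for peer deployment before streaming.
    streamer.deployClass(MyCacheEntryProcessor.class);

    streamer.receiver(StreamTransformer.from(new MyCacheEntryProcessor()));
    streamer.addData("key", "value");
}
{code}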



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7349) Add possibility to serialise and deserialise IgniteAtomicSequence with custom AtomicConfiguration

2018-01-03 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-7349:
-

 Summary: Add possibility to serialise and deserialise 
IgniteAtomicSequence with custom AtomicConfiguration
 Key: IGNITE-7349
 URL: https://issues.apache.org/jira/browse/IGNITE-7349
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


Currently, if an atomic sequence has an AtomicConfiguration with a custom 
groupName, it leads to problems.
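E.g. (a sketch; the names are arbitrary):

{code:java}
AtomicConfiguration atomicCfg = new AtomicConfiguration();

atomicCfg.setGroupName("customGroup");

// Serialising/deserialising the handle of such a sequence (e.g. sending it
// to another node) currently runs into problems, because the custom
// AtomicConfiguration is not carried along.
IgniteAtomicSequence seq = ignite.atomicSequence("mySeq", atomicCfg, 0, true);
{code}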



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7289) ODBC: add possibility to reconnect to the cluster after node failing

2017-12-22 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-7289:
-

 Summary: ODBC: add possibility to reconnect to the cluster after 
node failing
 Key: IGNITE-7289
 URL: https://issues.apache.org/jira/browse/IGNITE-7289
 Project: Ignite
  Issue Type: Improvement
Reporter: Evgenii Zhuravlev
Assignee: Igor Sapego






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7180) ODBC: add possibility to configure more than one address in connection string

2017-12-13 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-7180:
-

 Summary: ODBC: add possibility to configure more than one address 
in connection string
 Key: IGNITE-7180
 URL: https://issues.apache.org/jira/browse/IGNITE-7180
 Project: Ignite
  Issue Type: Improvement
  Components: odbc
Affects Versions: 2.3
Reporter: Evgenii Zhuravlev
Assignee: Igor Sapego






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7114) C++ node can't start without java examples folder

2017-12-05 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-7114:
-

 Summary: C++ node can't start without java examples folder
 Key: IGNITE-7114
 URL: https://issues.apache.org/jira/browse/IGNITE-7114
 Project: Ignite
  Issue Type: Bug
  Components: platforms
Affects Versions: 2.1
Reporter: Evgenii Zhuravlev
Assignee: Igor Sapego
Priority: Critical
 Fix For: 2.4


Error message: 
ERROR: Java classpath is empty (did you set IGNITE_HOME environment variable?)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7088) Wrong implementation of DIRECT comparator for ordering cache start operations

2017-12-01 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-7088:
-

 Summary: Wrong implementation of DIRECT comparator for ordering 
cache start operations
 Key: IGNITE-7088
 URL: https://issues.apache.org/jira/browse/IGNITE-7088
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.3
Reporter: Evgenii Zhuravlev
Priority: Critical
 Fix For: 2.4



{code:java}
java.lang.IllegalArgumentException: Comparison method violates its general 
contract!
at java.util.TimSort.mergeHi(TimSort.java:899) ~[?:1.8.0_102]
at java.util.TimSort.mergeAt(TimSort.java:516) ~[?:1.8.0_102]
at java.util.TimSort.mergeForceCollapse(TimSort.java:457) ~[?:1.8.0_102]
at java.util.TimSort.sort(TimSort.java:254) ~[?:1.8.0_102]
at java.util.Arrays.sort(Arrays.java:1512) ~[?:1.8.0_102]
at java.util.ArrayList.sort(ArrayList.java:1454) ~[?:1.8.0_102]
at java.util.Collections.sort(Collections.java:175) ~[?:1.8.0_102]
at 
org.apache.ignite.internal.processors.cache.ClusterCachesInfo.orderedCaches(ClusterCachesInfo.java:1616)
 ~[ignite-core-2.1.7.jar:2.1.7]
at 
org.apache.ignite.internal.processors.cache.ClusterCachesInfo.cachesReceivedFromJoin(ClusterCachesInfo.java:839)
 ~[ignite-core-2.1.7.jar:2.1.7]
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.startReceivedCaches(GridCacheProcessor.java:1709)
 ~[ignite-core-2.1.7.jar:2.1.7]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:606)
 [ignite-core-2.1.7.jar:2.1.7]
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2278)
 [ignite-core-2.1.7.jar:2.1.7]
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
[ignite-core-2.1.7.jar:2.1.7]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_102]

{code}

When two non-user caches are compared using this comparator, the exception 
above is thrown.

As a workaround, the JVM system property 
-Djava.util.Arrays.useLegacyMergeSort=true can be used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6992) Ignite MR problem with accessing hdfs with enabled Kerberos

2017-11-22 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-6992:
-

 Summary: Ignite MR problem with accessing hdfs with enabled 
Kerberos
 Key: IGNITE-6992
 URL: https://issues.apache.org/jira/browse/IGNITE-6992
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.3, 2.1, 2.0
Reporter: Evgenii Zhuravlev


class org.apache.ignite.IgniteCheckedException: SIMPLE authentication is not 
enabled.  Available:[TOKEN, KERBEROS]
at 
org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2JobResourceManager.prepareJobEnvironment(HadoopV2JobResourceManager.java:169)
at 
org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Job.initialize(HadoopV2Job.java:328)
at 
org.apache.ignite.internal.processors.hadoop.jobtracker.HadoopJobTracker.job(HadoopJobTracker.java:1141)
at 
org.apache.ignite.internal.processors.hadoop.jobtracker.HadoopJobTracker.submit(HadoopJobTracker.java:318)
at 
org.apache.ignite.internal.processors.hadoop.HadoopProcessor.submit(HadoopProcessor.java:173)
at 
org.apache.ignite.internal.processors.hadoop.HadoopImpl.submit(HadoopImpl.java:69)
at 
org.apache.ignite.internal.processors.hadoop.proto.HadoopProtocolSubmitJobTask.run(HadoopProtocolSubmitJobTask.java:50)
at 
org.apache.ignite.internal.processors.hadoop.proto.HadoopProtocolSubmitJobTask.run(HadoopProtocolSubmitJobTask.java:33)
at 
org.apache.ignite.internal.processors.hadoop.proto.HadoopProtocolTaskAdapter$Job.execute(HadoopProtocolTaskAdapter.java:101)
at 
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:566)
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6629)
at 
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:560)
at 
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at 
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1181)
at 
org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1908)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1562)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1190)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1097)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.security.AccessControlException: SIMPLE 
authentication is not enabled.  Available:[TOKEN, KERBEROS]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2110)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
at 
org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2JobResourceManager.prepareJobEnvironment(HadoopV2JobResourceManager.java:136)
... 22 more
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 SIMPLE authentication is not enabled.  Available:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client.call(Client.java:1475)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy36.getFileInfo(Unknown Source)
at

[jira] [Created] (IGNITE-6755) Add possibility to create sql tables with DDL without defined PRIMARY KEY, by adding it implicitly

2017-10-25 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-6755:
-

 Summary: Add possibility to create sql tables with DDL without 
defined PRIMARY KEY, by adding it implicitly
 Key: IGNITE-6755
 URL: https://issues.apache.org/jira/browse/IGNITE-6755
 Project: Ignite
  Issue Type: Improvement
  Security Level: Public (Viewable by anyone)
  Components: sql
Reporter: Evgenii Zhuravlev






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6691) Run ServiceTopologyCallable from GridServiceProcessor.serviceTopology in management pool

2017-10-20 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-6691:
-

 Summary: Run ServiceTopologyCallable from 
GridServiceProcessor.serviceTopology in management pool
 Key: IGNITE-6691
 URL: https://issues.apache.org/jira/browse/IGNITE-6691
 Project: Ignite
  Issue Type: Bug
  Security Level: Public (Viewable by anyone)
Affects Versions: 2.1
Reporter: Evgenii Zhuravlev
 Fix For: 2.4


Currently this job runs in the public pool. This can lead to starvation when the 
service is invoked from a job that also runs in the public pool.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6591) JdbcThinErrorsSelfTest.testBatchUpdateException() should be removed, as batch updates already supported

2017-10-10 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-6591:
-

 Summary: JdbcThinErrorsSelfTest.testBatchUpdateException() should 
be removed, as batch updates already supported
 Key: IGNITE-6591
 URL: https://issues.apache.org/jira/browse/IGNITE-6591
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6533) Jdbc Client driver connection creation could hang if client node can't start in parallel

2017-09-29 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-6533:
-

 Summary: Jdbc Client driver connection creation could hang if 
client node can't start in parallel
 Key: IGNITE-6533
 URL: https://issues.apache.org/jira/browse/IGNITE-6533
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Evgenii Zhuravlev


1. Start client node with
2. At the same time, create a connection with the same configuration via the JDBC 
Client driver.
3. If the client node fails to start, the JDBC connection will hang.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6365) Wrong EventType and oldValue in RemoteFilter of CQ on not primary node due to reordering after EntryProcessor update

2017-09-13 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-6365:
-

 Summary: Wrong EventType and oldValue in RemoteFilter of CQ on not 
primary node due to reordering after EntryProcessor update
 Key: IGNITE-6365
 URL: https://issues.apache.org/jira/browse/IGNITE-6365
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev


Example of events:
IS PRIMARY NODE: true
EVENT:CacheContinuousQueryEvent [evtType=UPDATED, key=4, newVal=2061, 
oldVal=2060]
IS PRIMARY NODE: true
EVENT:CacheContinuousQueryEvent [evtType=UPDATED, key=4, newVal=2062, 
oldVal=2061]
IS PRIMARY NODE: false
EVENT:CacheContinuousQueryEvent [evtType=UPDATED, key=4, newVal=2062, 
oldVal=2060]
IS PRIMARY NODE: false
EVENT:CacheContinuousQueryEvent [evtType=CREATED, key=4, newVal=2061, 
oldVal=null]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6345) Wrong message about cluster activation

2017-09-11 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-6345:
-

 Summary: Wrong message about cluster activation
 Key: IGNITE-6345
 URL: https://issues.apache.org/jira/browse/IGNITE-6345
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Evgenii Zhuravlev
Priority: Minor
 Fix For: 2.3


The message of the exception thrown while performing an operation on an inactive 
cluster refers to the method Ignite.activate(true), which doesn't exist.

The method in the message should be changed to Ignite.active(true).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Fwd: Cassandra failing to ReadThrough using Cache.get(key) without preloading

2017-09-08 Thread Evgenii Zhuravlev
Hi Igor,

Could you check this message from the user list? I can't find any reason why
readThrough doesn't work with Cassandra here.

Thanks,
Evgenii

-- Forwarded message --
From: Kenan Dalley 
Date: 2017-08-31 17:14 GMT+03:00
Subject: Re: Cassandra failing to ReadThrough using Cache.get(key) without
preloading
To: u...@ignite.apache.org


Trying this again... OUTPUT Output From Not Preloading the Cache:

>>> Cassandra cache store example started.

>>> Cache retrieve example started.
>>> Read from C*.  Key: [TestResponseKey: {col1: 'text1', col2: '$03', col3: 
>>> 'FLAG', col4: '1491843013376'}], Value: [{}]
>>> Read from C*.  Key: [TestResponseKey: {col1: 'text1', col2: '$03', col3: 
>>> 'FLAG', col4: '1491843013376'}], Value: [null]

Cache size: 0

Output From Preloading the Cache:

>>> Cassandra cache store example started.

Loading cache...

Cache size: 16

Entries...
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'149688465'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1492599741108'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 144}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1491843013376'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1496746939945'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1492081339596'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1492173434330'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1496487738766'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1492599740168'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1492254138310'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1497016098855'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1492340538017'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1496930018886'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1495969325403'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1492430355581'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1496590566077'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'03C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}
Key: TestResponseKey: {col1: 'text1', col2: '$03', col3: 'FLAG', col4:
'1491999483231'}, Value: TestResponse: {col5: 0, col6: 'null', col7:
'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}

>>> Cache retrieve example started.
>>> Read from C*.  Key: [TestResponseKey: {col1: 'text1', col2: '$03', col3: 
>>> 'FLAG', col4: '1491843013376'}], Value: [{TestResponseKey: {col1: 'text1', 
>>> col2: '$03', col3: 'FLAG', col4: '1491843013376'}=TestResponse: {col5: 0, 
>>> col6: 'null', col7: 'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}}]
>>> Read from C*.  Key: [TestResponseKey: {col1: 'text1', col2: '$03', col3: 
>>> 'FLAG', col4: '1491843013376'}], Value: [TestResponse: {col5: 0, col6: 
>>> 'null', col7: 'C39A9EFB7E', col8: 1, col9: 'NA', col10: 60}]

Cache size: 16

CODE cassandra-ignite.xml






<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- The rest of the configuration was stripped by the mailing list archive. -->

</beans>

Re: DataStreamer usability

2017-09-03 Thread Evgenii Zhuravlev
>But why would anyone have different configurations for streaming into the
>same cache on the same node? In this case, we should print out a warning if
>the configuration of one streamer is different from another. Agree?

Well, I have not come across the need to use different data streamers on
one node in the use cases I've seen. But at the same time, I can imagine a
use case where different StreamReceivers are configured when uploading
data from different data sources.
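For example, something like this (just a sketch; MyNormalizingProcessor is a
made-up CacheEntryProcessor):

// Streamer for source A: default behaviour, plain load into the cache.
try (IgniteDataStreamer<Integer, String> sA = ignite.dataStreamer("myCache")) {
    sA.addData(1, "from-source-A");
}

// Streamer for source B: same cache, same node, but with its own receiver.
try (IgniteDataStreamer<Integer, String> sB = ignite.dataStreamer("myCache")) {
    sB.receiver(StreamTransformer.from(new MyNormalizingProcessor()));
    sB.addData(2, "from-source-B");
}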

Evgenii

2017-09-03 20:18 GMT+03:00 Dmitriy Setrakyan :

> On Sun, Sep 3, 2017 at 10:09 AM, Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com
> > wrote:
>
> > Dmitriy,
> >
> > >Disagree. This is an Ignite issue, not a user issue. We must detect this
> > >and either provide a user with a warning message or disallow 2 streamers
> > >with identical names.
> > >Makes sense?
> >
> > It's not a streamer name we apply as an argument, it's a cache name. So,
> > it's possible to use a different Data Streamers from different nodes for
> > the one cache or even different streamers for one cache on the one node.
> >
>
> But why would anyone have different configurations for streaming into the
> same cache on the same node? In this case, we should print out a warning if
> the configuration of one streamer is different from another. Agree?
>
>
> >
> > Evgenii
> >
> >
> > 2017-09-03 19:59 GMT+03:00 Dmitriy Setrakyan :
> >
> > > On Sun, Sep 3, 2017 at 9:36 AM, Evgenii Zhuravlev <
> > > e.zhuravlev...@gmail.com>
> > > wrote:
> > >
> > > > What about this SO question - I think it could be a good idea to add
> > some
> > > > additional validation between nodes configurations at node's start.
> > > > Because, if the root of the problem was found correctly, then it
> means
> > > that
> > > > nodes were started with inconsistent configurations. What do you
> think?
> > > >
> > >
> > > Agree. These are exactly the fixes we should start adding to Ignite.
> > >
> > >
> > > > https://stackoverflow.com/questions/45911616/
> > datastreamer-does-not-work-
> > > > well/4597234
> > > > - it doesn't relate to DataStreamer, looks like since version 2.1(or
> > > > earlier) we have problems with old versions of Java7. I've seen also
> at
> > > > least two same questions on the user list. I've created issue for
> that:
> > > > https://issues.apache.org/jira/browse/IGNITE-6248
> > >
> > >
> > > Great! I have scheduled this ticket for 2.2. The node must not start on
> > an
> > > incompatible Java version.
> > >
> > >
> > > >
> > > > https://stackoverflow.com/questions/45034975/apache-
> > > > ignite-data-streamer/45035436
> > > > - As he said at the end, I think he didn't properly understand how
> > > > DataStreamer should work
> > > >
> > >
> > > Disagree. This is an Ignite issue, not a user issue. We must detect
> this
> > > and either provide a user with a warning message or disallow 2
> streamers
> > > with identical names.
> > >
> > > Makes sense?
> > >
> > >
> > > >
> > > > http://apache-ignite-users.70518.x6.nabble.com/Failing-
> > > > DataStreamer-beacuse-of-minor-version-change-td12142.html
> > > > for this issue, we already have an improvement too:
> > > > https://issues.apache.org/jira/browse/IGNITE-5195
> > >
> > >
> > > Argh, what a horrible issue. We spit out an exception 2 pages long, and
> > > then telling the poor user to ignore it. This is just sad. I updated
> the
> > > ticket and added a link to the user thread.
> > >
> > >
> > > >
> > > >
> > > > Evgenii
> > > >
> > > > 2017-09-03 18:13 GMT+03:00 Dmitriy Setrakyan  >:
> > > >
> > > > > In that case we either provide bad configuration or bad error
> > messages.
> > > > > Could we provide a better error message for this SO issue?
> > > > >
> > > > > Evgenii, can I please ask you to provide links to all the data
> > streamer
> > > > > questions you looked at here? This way we may have a chance to spot
> > > some
> > > > > area for improvement.
> > > > >
> > > > > D.
> > > > >
> > > >

Re: DataStreamer usability

2017-09-03 Thread Evgenii Zhuravlev
Dmitriy,

>Disagree. This is an Ignite issue, not a user issue. We must detect this
>and either provide a user with a warning message or disallow 2 streamers
>with identical names.
>Makes sense?

It's not a streamer name we pass as an argument, it's a cache name. So it's
possible to use different data streamers from different nodes for one cache,
or even different streamers for one cache on the same node.

Evgenii


2017-09-03 19:59 GMT+03:00 Dmitriy Setrakyan :

> On Sun, Sep 3, 2017 at 9:36 AM, Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com>
> wrote:
>
> > What about this SO question - I think it could be a good idea to add some
> > additional validation between nodes configurations at node's start.
> > Because, if the root of the problem was found correctly, then it means
> that
> > nodes were started with inconsistent configurations. What do you think?
> >
>
> Agree. These are exactly the fixes we should start adding to Ignite.
>
>
> > https://stackoverflow.com/questions/45911616/datastreamer-does-not-work-
> > well/4597234
> > - it doesn't relate to DataStreamer, looks like since version 2.1(or
> > earlier) we have problems with old versions of Java7. I've seen also at
> > least two same questions on the user list. I've created issue for that:
> > https://issues.apache.org/jira/browse/IGNITE-6248
>
>
> Great! I have scheduled this ticket for 2.2. The node must not start on an
> incompatible Java version.
>
>
> >
> > https://stackoverflow.com/questions/45034975/apache-
> > ignite-data-streamer/45035436
> > - As he said at the end, I think he didn't properly understand how
> > DataStreamer should work
> >
>
> Disagree. This is an Ignite issue, not a user issue. We must detect this
> and either provide a user with a warning message or disallow 2 streamers
> with identical names.
>
> Makes sense?
>
>
> >
> > http://apache-ignite-users.70518.x6.nabble.com/Failing-
> > DataStreamer-beacuse-of-minor-version-change-td12142.html
> > for this issue, we already have an improvement too:
> > https://issues.apache.org/jira/browse/IGNITE-5195
>
>
> Argh, what a horrible issue. We spit out an exception 2 pages long, and
> then telling the poor user to ignore it. This is just sad. I updated the
> ticket and added a link to the user thread.
>
>
> >
> >
> > Evgenii
> >
> > 2017-09-03 18:13 GMT+03:00 Dmitriy Setrakyan :
> >
> > > In that case we either provide bad configuration or bad error messages.
> > > Could we provide a better error message for this SO issue?
> > >
> > > Evgenii, can I please ask you to provide links to all the data streamer
> > > questions you looked at here? This way we may have a chance to spot
> some
> > > area for improvement.
> > >
> > > D.
> > >
> > > On Sun, Sep 3, 2017 at 3:05 AM, Evgenii Zhuravlev <
> > > e.zhuravlev...@gmail.com>
> > > wrote:
> > >
> > > > Dmitriy,
> > > >
> > > > I've seen several questions on StackOverflow and on the user list,
> that
> > > > seems to be connected with Data Streamer at first sight, but after
> some
> > > > investigation, it was clear that they were not related to Data
> Streamer
> > > at
> > > > all. Usually, as it was in this question on SO, it was a wrong
> > > > configuration or conflicted configuration between nodes.
> > > >
> > > > Evgenii
> > > >
> > > > 2017-09-03 2:10 GMT+03:00 Dmitriy Setrakyan :
> > > >
> > > > > Igniters,
> > > > >
> > > > > I am noticing quite a few issues on the user list and Stack
> Overflow
> > > that
> > > > > have to do with Data Streamer usability. Here is just one example:
> > > > >
> > > > > https://stackoverflow.com/questions/46015833/
> > > > datastreamer-adddata-method-
> > > > > not-work-as-expected-in-cluster
> > > > >
> > > > > I think, as a community, we should try to analyze all the issues
> and
> > > > figure
> > > > > out why the users are struggling.
> > > > >
> > > > > Does anyone have a good sense for what is happening?
> > > > >
> > > > > D.
> > > > >
> > > >
> > >
> >
>


Re: DataStreamer usability

2017-09-03 Thread Evgenii Zhuravlev
What about this SO question - I think it could be a good idea to add some
additional validation between node configurations at node start, because if
the root of the problem was found correctly, it means the nodes were started
with inconsistent configurations. What do you think?

Sure, here are a few questions:

https://stackoverflow.com/questions/45911616/datastreamer-does-not-work-well/4597234
- it doesn't relate to DataStreamer; it looks like since version 2.1 (or
earlier) we have problems with old builds of Java 7. I've also seen at
least two similar questions on the user list. I've created an issue for that:
https://issues.apache.org/jira/browse/IGNITE-6248

https://stackoverflow.com/questions/45034975/apache-ignite-data-streamer/45035436
- As he said at the end, I think he didn't properly understand how
DataStreamer should work

http://apache-ignite-users.70518.x6.nabble.com/Failing-DataStreamer-beacuse-of-minor-version-change-td12142.html
-
for this issue, we already have an improvement too:
https://issues.apache.org/jira/browse/IGNITE-5195

Evgenii

2017-09-03 18:13 GMT+03:00 Dmitriy Setrakyan :

> In that case we either provide bad configuration or bad error messages.
> Could we provide a better error message for this SO issue?
>
> Evgenii, can I please ask you to provide links to all the data streamer
> questions you looked at here? This way we may have a chance to spot some
> area for improvement.
>
> D.
>
> On Sun, Sep 3, 2017 at 3:05 AM, Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com>
> wrote:
>
> > Dmitriy,
> >
> > I've seen several questions on StackOverflow and on the user list, that
> > seems to be connected with Data Streamer at first sight, but after some
> > investigation, it was clear that they were not related to Data Streamer
> at
> > all. Usually, as it was in this question on SO, it was a wrong
> > configuration or conflicted configuration between nodes.
> >
> > Evgenii
> >
> > 2017-09-03 2:10 GMT+03:00 Dmitriy Setrakyan :
> >
> > > Igniters,
> > >
> > > I am noticing quite a few issues on the user list and Stack Overflow
> that
> > > have to do with Data Streamer usability. Here is just one example:
> > >
> > > https://stackoverflow.com/questions/46015833/
> > datastreamer-adddata-method-
> > > not-work-as-expected-in-cluster
> > >
> > > I think, as a community, we should try to analyze all the issues and
> > figure
> > > out why the users are struggling.
> > >
> > > Does anyone have a good sense for what is happening?
> > >
> > > D.
> > >
> >
>


[jira] [Created] (IGNITE-6248) Check Java 7 builds for compatibility with Ignite and update documentation

2017-09-03 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-6248:
-

 Summary: Check Java 7 builds for compatibility with Ignite and 
update documentation
 Key: IGNITE-6248
 URL: https://issues.apache.org/jira/browse/IGNITE-6248
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Evgenii Zhuravlev
Priority: Minor


According to some threads on SO and the user list, some old Java 7 builds are not
compatible with Ignite since version 2.1 anymore:
http://apache-ignite-users.70518.x6.nabble.com/Caused-by-org-h2-jdbc-JdbcSQLException-General-error-quot-java-lang-IllegalMonitorStateException-Attt-td15684.html
https://stackoverflow.com/questions/45911616/datastreamer-does-not-work-well/45972341#45972341
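
A rough sketch of the kind of startup guard discussed in the thread (the
minimal supported Java 7 update below is an arbitrary assumption):

public class JavaVersionGuard {
    static void checkJavaVersion() {
        String ver = System.getProperty("java.version"); // e.g. "1.7.0_45"

        if (ver.startsWith("1.7.0_")) {
            String update = ver.substring("1.7.0_".length()).replaceAll("[^0-9].*", "");

            // Refuse to start on Java 7 builds older than the (assumed) supported update.
            if (!update.isEmpty() && Integer.parseInt(update) < 71)
                throw new IllegalStateException("This Java 7 build is too old for Ignite 2.1+: " + ver);
        }
    }
}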




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: DataStreamer usability

2017-09-03 Thread Evgenii Zhuravlev
Dmitriy,

I've seen several questions on StackOverflow and on the user list that seem
to be connected with the Data Streamer at first sight, but after some
investigation it was clear that they were not related to the Data Streamer at
all. Usually, as in this question on SO, the cause was a wrong configuration
or conflicting configurations between nodes.

Evgenii

2017-09-03 2:10 GMT+03:00 Dmitriy Setrakyan :

> Igniters,
>
> I am noticing quite a few issues on the user list and Stack Overflow that
> have to do with Data Streamer usability. Here is just one example:
>
> https://stackoverflow.com/questions/46015833/datastreamer-adddata-method-
> not-work-as-expected-in-cluster
>
> I think, as a community, we should try to analyze all the issues and figure
> out why the users are struggling.
>
> Does anyone have a good sense for what is happening?
>
> D.
>


[jira] [Created] (IGNITE-6088) Socket#shutdownOutput in ServerImpl leads to UnsupportedOperationException on SSLSocket

2017-08-16 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-6088:
-

 Summary: Socket#shutdownOutput in ServerImpl leads to
UnsupportedOperationException on SSLSocket
 Key: IGNITE-6088
 URL: https://issues.apache.org/jira/browse/IGNITE-6088
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Evgenii Zhuravlev
Assignee: Evgenii Zhuravlev
 Fix For: 2.2


UnsupportedOperationException: The method shutdownOutput() is not supported in 
SSLSocket 
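
A minimal illustration of the problem and an obvious guard (not necessarily
the fix that ended up in the patch):

import java.io.IOException;
import java.net.Socket;
import javax.net.ssl.SSLSocket;

public class SafeSocketShutdown {
    static void closeOutput(Socket sock) throws IOException {
        // SSLSocket throws UnsupportedOperationException from shutdownOutput(),
        // so the half-close has to be skipped for SSL connections.
        if (!(sock instanceof SSLSocket))
            sock.shutdownOutput();
    }
}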



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6053) IgniteCache.clear clears local caches with the same name on all server nodes

2017-08-14 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-6053:
-

 Summary: IgniteCache.clear clears local caches with the same name on
all server nodes
 Key: IGNITE-6053
 URL: https://issues.apache.org/jira/browse/IGNITE-6053
 Project: Ignite
  Issue Type: Bug
Affects Versions: 1.7
Reporter: Evgenii Zhuravlev
 Fix For: 2.2


Clear on a LOCAL cache should clear only the cache on the current node, not
the caches on all nodes that have local caches with the same name.
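
Sketch of the expected behaviour (cache name and value are made up):

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class LocalCacheClearSketch {
    static void demo(Ignite node) {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("localCache");
        cfg.setCacheMode(CacheMode.LOCAL);

        // Every node that executes this gets its own independent "localCache" instance.
        node.getOrCreateCache(cfg).put(1, "value");

        // Expected: clears only the instance on this node, not the caches with the
        // same name on other server nodes.
        node.cache("localCache").clear();
    }
}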



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Resurrect FairAffinityFunction

2017-08-09 Thread Evgenii Zhuravlev
Dmitriy,

Yes, you're right. Moreover, it looks like good practice to combine caches
that will be used for collocated JOINs into one group, since it reduces the
overall overhead.

I think it's not a problem to add this restriction at the SQL JOIN level if
we decide to use this solution.
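
A minimal sketch of the suggested approach (cache and group names are made
up): caches that must stay collocated for JOINs share one cache group and
therefore get identical affinity assignments.

import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class CacheGroupSketch {
    static IgniteConfiguration config() {
        CacheConfiguration<Integer, Object> persons = new CacheConfiguration<>("Person");
        persons.setGroupName("joinedCaches"); // same group => same underlying partitions

        CacheConfiguration<Integer, Object> orders = new CacheConfiguration<>("Order");
        orders.setGroupName("joinedCaches");

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCacheConfiguration(persons, orders);
        return cfg;
    }
}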

Evgenii




2017-08-09 17:07 GMT+03:00 Dmitriy Setrakyan :

> On Wed, Aug 9, 2017 at 6:28 AM, ezhuravl  wrote:
>
> > Folks,
> >
> > I've started working on a https://issues.apache.org/
> > jira/browse/IGNITE-5836
> > ticket and found that the recently added feature with cacheGroups doing
> > pretty much the same that was described in this issue. CacheGroup
> > guarantees
> > that all caches within a group have same assignments since they share a
> > single underlying 'physical' cache.
> >
>
> > I think we can return FairAffinityFunction and add information to its
> > Javadoc that all caches with same AffinityFunction and NodeFilter should
> be
> > combined in cache group to avoid a problem with inconsistent previous
> > assignments.
> >
> > What do you guys think?
> >
>
> Are you suggesting that we can only reuse the same FairAffinityFunction
> across the logical caches within the same group? This would mean that
> caches from the different groups cannot participate in JOINs or collocated
> compute.
>
> I think I like the idea, however, we need to make sure that we enforce this
> restriction, at least at the SQL JOIN level.
>
> Alexey G, Val, would be nice to hear your thoughts on this.
>
>
> >
> > Evgenii
> >
> >
> >
> > --
> > View this message in context: http://apache-ignite-
> > developers.2346864.n4.nabble.com/Resurrect-FairAffinityFunction-
> > tp19987p20669.html
> > Sent from the Apache Ignite Developers mailing list archive at
> Nabble.com.
> >
>


[jira] [Created] (IGNITE-5985) WebConsole: add generation of keyFields for queryEntity for multiple primary key

2017-08-08 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-5985:
-

 Summary: WebConsole: add generation of keyFields for queryEntity 
for multiple primary key
 Key: IGNITE-5985
 URL: https://issues.apache.org/jira/browse/IGNITE-5985
 Project: Ignite
  Issue Type: Improvement
Reporter: Evgenii Zhuravlev
Assignee: Alexey Kuznetsov
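
A rough illustration of the configuration this ticket asks Web Console to
generate for a composite primary key (type and column names are made up):

import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

public class CompositeKeyEntitySketch {
    static CacheConfiguration<Object, Object> cacheConfig() {
        QueryEntity entity = new QueryEntity("demo.PersonKey", "demo.Person");

        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("id", Integer.class.getName());
        fields.put("companyId", Integer.class.getName());
        fields.put("name", String.class.getName());
        entity.setFields(fields);

        // Columns that form the composite primary key and belong to the key class.
        entity.setKeyFields(new HashSet<>(Arrays.asList("id", "companyId")));

        CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("PersonCache");
        ccfg.setQueryEntities(Arrays.asList(entity));
        return ccfg;
    }
}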






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5922) Improve collisionSpi doc - ParallelJobsNumber should be less than PublicThreadPoolSize

2017-08-03 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-5922:
-

 Summary: Improve collisionSpi doc - ParallelJobsNumber should be 
less than PublicThreadPoolSize
 Key: IGNITE-5922
 URL: https://issues.apache.org/jira/browse/IGNITE-5922
 Project: Ignite
  Issue Type: Improvement
  Components: documentation
Reporter: Evgenii Zhuravlev
Priority: Minor
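
A small configuration sketch of the relationship the documentation should call
out (the numbers are arbitrary): parallelJobsNumber must not exceed the public
thread pool size.

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.collision.fifoqueue.FifoQueueCollisionSpi;

public class CollisionConfigSketch {
    static IgniteConfiguration config() {
        FifoQueueCollisionSpi colSpi = new FifoQueueCollisionSpi();
        colSpi.setParallelJobsNumber(4); // must be <= the public pool size below

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setPublicThreadPoolSize(8);
        cfg.setCollisionSpi(colSpi);
        return cfg;
    }
}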






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5867) Web console: Add parameter for adding primary key columns to generated POJO class

2017-07-28 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-5867:
-

 Summary: Web console: Add parameter for adding primary key columns 
to generated POJO class
 Key: IGNITE-5867
 URL: https://issues.apache.org/jira/browse/IGNITE-5867
 Project: Ignite
  Issue Type: Improvement
Reporter: Evgenii Zhuravlev






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5860) Client disconnects if server it is connected to goes unresponsive

2017-07-27 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-5860:
-

 Summary: Client disconnects if server it is connected to goes 
unresponsive 
 Key: IGNITE-5860
 URL: https://issues.apache.org/jira/browse/IGNITE-5860
 Project: Ignite
  Issue Type: Bug
Affects Versions: 1.7
Reporter: Evgenii Zhuravlev
 Fix For: 2.2


Scenario is the following:

# Start at least two server nodes.
# Start a client node.
# Detect the server node the client is connected to via the discovery SPI.
# Suspend that server (^Z in terminal or 'kill -SIGSTOP ').
# It's expected that the client will drop the connection to this server and
connect to another one.
# However, the client gets dropped from the topology and disconnects.

The client should reconnect to the cluster before the timeout and without an
EVT_CLIENT_NODE_RECONNECTED event.

In the joinTopology method of ClientImpl.Reconnector, all addresses are obtained
from discoverySpi, with the addresses of the suspended server at the top of this list.

Suggested solution:
Move addresses of the suspended server to the end of the list for the join.
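
A generic sketch of the suggested reordering (types simplified; this is not
the actual ClientImpl code):

import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class JoinAddressOrderSketch {
    /** Returns the join list with the suspended server's addresses moved to the end. */
    static List<InetSocketAddress> reorder(List<InetSocketAddress> all,
        Collection<InetSocketAddress> suspended) {
        List<InetSocketAddress> res = new ArrayList<>();

        for (InetSocketAddress addr : all)
            if (!suspended.contains(addr))
                res.add(addr);

        for (InetSocketAddress addr : all)
            if (suspended.contains(addr))
                res.add(addr);

        return res;
    }
}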






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5814) Service deploy fails with NPE if it implements ExecutorService interface

2017-07-24 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-5814:
-

 Summary: Service deploy fails with NPE if it implements 
ExecutorService interface
 Key: IGNITE-5814
 URL: https://issues.apache.org/jira/browse/IGNITE-5814
 Project: Ignite
  Issue Type: Bug
Affects Versions: 1.9
Reporter: Evgenii Zhuravlev






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5789) After client reconnects to server if server was restarted, client doesn't create caches defined in config file

2017-07-20 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-5789:
-

 Summary: After client reconnects to server if server was 
restarted, client doesn't create caches defined in config file
 Key: IGNITE-5789
 URL: https://issues.apache.org/jira/browse/IGNITE-5789
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.0
Reporter: Evgenii Zhuravlev






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5778) Update JobMetrics on each job add/start/finish methods and add possibility to turn out JobMetrics and

2017-07-18 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-5778:
-

 Summary: Update JobMetrics on each job add/start/finish methods 
and add possibility to turn out JobMetrics and 
 Key: IGNITE-5778
 URL: https://issues.apache.org/jira/browse/IGNITE-5778
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 1.9
Reporter: Evgenii Zhuravlev
 Fix For: 2.2






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5776) Add option to turn off filtering of reachable addresses in TcpCommunicationSpi

2017-07-18 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-5776:
-

 Summary: Add option to turn off filtering of reachable addresses in
TcpCommunicationSpi
 Key: IGNITE-5776
 URL: https://issues.apache.org/jira/browse/IGNITE-5776
 Project: Ignite
  Issue Type: Improvement
Reporter: Evgenii Zhuravlev
Assignee: Evgenii Zhuravlev
 Fix For: 2.2






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5775) Compute runs one job in MetricsUpdateFrequency per thread after all jobs were submitted (as onCollision is not invoked)

2017-07-18 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-5775:
-

 Summary: Compute runs one job in MetricsUpdateFrequency per thread
after all jobs were submitted (as onCollision is not invoked)
 Key: IGNITE-5775
 URL: https://issues.apache.org/jira/browse/IGNITE-5775
 Project: Ignite
  Issue Type: Bug
Reporter: Evgenii Zhuravlev






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5751) In TcpCommunicationSpi.createTcpClient U.filterReachable waits for all addresses to check if they are reachable

2017-07-13 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-5751:
-

 Summary: In TcpCommunicationSpi.createTcpClient U.filterReachable
waits for all addresses to check if they are reachable
 Key: IGNITE-5751
 URL: https://issues.apache.org/jira/browse/IGNITE-5751
 Project: Ignite
  Issue Type: Bug
Affects Versions: 1.9
Reporter: Evgenii Zhuravlev
Priority: Critical


In TcpCommunicationSpi.createTcpClient, replace U.filterReachable with filtering
for the first reachable address
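
A sketch of the idea (the timeout is arbitrary; the real code lives in
IgniteUtils): stop at the first address that answers instead of probing all of
them.

import java.io.IOException;
import java.net.InetAddress;
import java.util.List;

public class FirstReachableSketch {
    /** Returns the first address that answers within the timeout, or null if none does. */
    static InetAddress firstReachable(List<InetAddress> addrs, int timeoutMs) {
        for (InetAddress addr : addrs) {
            try {
                if (addr.isReachable(timeoutMs))
                    return addr;
            }
            catch (IOException ignored) {
                // Try the next address.
            }
        }
        return null;
    }
}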



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5738) Add support of batch requests for jdbc2

2017-07-12 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-5738:
-

 Summary: Add support of batch requests for jdbc2
 Key: IGNITE-5738
 URL: https://issues.apache.org/jira/browse/IGNITE-5738
 Project: Ignite
  Issue Type: Improvement
Reporter: Evgenii Zhuravlev
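
For context, batch support at the API level means the standard JDBC batch
calls work against the jdbc2 driver; a sketch follows (the connection URL,
config path and table are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class JdbcBatchSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL for the Ignite jdbc2 driver; adjust to the actual config file.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://file:///path/to/ignite-config.xml");
             PreparedStatement ps = conn.prepareStatement("INSERT INTO Person (id, name) VALUES (?, ?)")) {
            for (int i = 0; i < 100; i++) {
                ps.setInt(1, i);
                ps.setString(2, "name-" + i);
                ps.addBatch();
            }

            ps.executeBatch(); // the ticket is about supporting this call
        }
    }
}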






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

