Sorry for being obnoxious here, but still: mentioning ticket numbers
usually does not make much sense to someone who is outside the context
of popular/trendy tickets.
At the same time, contributors usually scan emails in the list by
subject. They decide what should go to the archive and
It seems reasonable to me. Also, I would like to propose adding the value of
Ignition.KILL_EXIT_CODE to the javadoc using the @value javadoc tag.
DevOps, admins, or anyone who administers Ignite may want to know its value
without going to the Java code.
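The proposed tag can be illustrated with a short sketch; the constant and the value 130 below are hypothetical placeholders, not Ignite's actual Ignition.KILL_EXIT_CODE:

```java
/** Hypothetical sketch of a constant documented with the {@code @value} tag. */
public final class ExitCodes {
    /**
     * Exit code used when a node is forcibly stopped.
     * The current value is {@value}, so readers of the generated
     * javadoc see the number without opening the source.
     */
    public static final int KILL_EXIT_CODE = 130; // placeholder value

    private ExitCodes() {
        // No instances: constants holder.
    }
}
```

The javadoc tool substitutes the literal constant value for {@value} in the generated HTML, so the docs stay in sync with the code automatically.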
Thu, May 21, 2020 at 09:39, Zhenya Stanilovsky :
>
> Thank you
Hi Ilya,
Thank you for reviewing the changes.
I have updated the PR.
Please review and share your feedback.
Regards,
Saikat
On Sat, May 16, 2020 at 7:30 PM Saikat Maitra
wrote:
> Hi,
>
> I have raised PRs for the following issue:
>
> IGNITE-12359 Migrate RocketMQ module to ignite-extensions
Thank you Denis, that would be great.
Regards,
Saikat
On Thu, May 21, 2020 at 11:02 AM Denis Magda wrote:
> Folks, thanks for sharing ideas.
>
> Let me find a designer who can turn the ideas into pictures. Then we can
> compare and adjust.
>
> -
> Denis
>
>
> On Sun, May 17, 2020 at 5:50 AM
Hello!
We still say we support Mono:
https://apacheignite-net.readme.io/docs/cross-platform-support
Do we deprecate building with Mono, or running with it as well? I think we
need to do something formal here, since we are removing functionality.
Regards,
--
Ilya Kasnacheev
Mon, May 25, 2020 at 18:21,
Roman Kondakov created IGNITE-13070:
---
Summary: SQL regressions detection framework
Key: IGNITE-13070
URL: https://issues.apache.org/jira/browse/IGNITE-13070
Project: Ignite
Issue Type:
I've found out that we have the `ignite-compatibility` module. It already
has nice tools for launching different Ignite versions simultaneously. We
can reuse these tools for SQL regression detection.
I'm going to place the SQL regression detection framework in this
module and add a separate TeamCity
Ilya, good point.
I'll remove build-mono.sh, it does not make sense anyway.
On Mon, May 25, 2020 at 11:03 AM Ilya Kasnacheev
wrote:
> Hello!
>
> Well, while we ship build-mono.sh I'll try to run it every time :) I'm not
> running tests with it, just smoke testing to see if anything happens at
OK, anyway I will finish my patch; 2 unit tests are still not working. I
assume anyway that if we wanted to apply zstd compression, we would reuse
the existing page compression algorithm in memory, not only for persistence.
That would probably be simpler and more straightforward anyway.
Will try
Nikolay Izhikov created IGNITE-13069:
Summary: Rewrite creation of IgniteInClosure and IgniteOutClosure
as lambda expression
Key: IGNITE-13069
URL: https://issues.apache.org/jira/browse/IGNITE-13069
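The proposed rewrite can be sketched as follows; the interfaces below only mirror the shape of Ignite's IgniteInClosure/IgniteOutClosure so the example is self-contained without an Ignite dependency:

```java
import java.io.Serializable;

public class ClosureLambdaDemo {
    /** Mirrors the shape of Ignite's IgniteInClosure: one argument, no result. */
    interface InClosure<E> extends Serializable { void apply(E e); }

    /** Mirrors the shape of Ignite's IgniteOutClosure: no argument, one result. */
    interface OutClosure<T> extends Serializable { T apply(); }

    static int produce() {
        // Before: verbose anonymous inner class.
        OutClosure<Integer> oldStyle = new OutClosure<Integer>() {
            @Override public Integer apply() { return 42; }
        };

        // After: the equivalent lambda expression.
        OutClosure<Integer> rewritten = () -> 42;

        return oldStyle.apply() + rewritten.apply();
    }

    public static void main(String[] args) {
        InClosure<String> print = s -> System.out.println(s);
        print.apply("sum = " + produce()); // prints "sum = 84"
    }
}
```

Since both interfaces have a single abstract method, every anonymous-class instantiation can be replaced by a lambda with identical semantics.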
Hello Igniters.
If the type of a binary field does not match the query entity field type,
we are still able to insert such an entry into the cache, but we can't query it.
In the following example we have a query entity with the UUID field
"name", but we insert a String field "name" using a binary object.
IgniteCache
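Without spinning up an Ignite node here, the failure mode can be sketched with a plain-map analogue (all names are hypothetical): the untyped put succeeds, while the typed read that a query would perform fails.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class TypeMismatchDemo {
    public static void main(String[] args) {
        // The "schema" says field "name" is a UUID, but the stored payload
        // carries a String; nothing validates this at insert time.
        Map<String, Object> entry = new HashMap<>();
        entry.put("name", "not-a-uuid"); // insert succeeds

        try {
            // A query reads the field with its declared type and blows up.
            UUID name = (UUID) entry.get("name");
            System.out.println(name);
        } catch (ClassCastException e) {
            System.out.println("query failed: " + e);
        }
    }
}
```

The same asymmetry is what the email describes: the put path never consults the query entity's declared types, so the mismatch only surfaces at query time.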
Compression of the whole binary inside Ignite.
>Sorry, I do not actually get what you are opposing: the compression of the
>binary, or the null compaction, or both?
>And can you elaborate on why you are opposing it?
>
>
>
>--
>Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
Maxim,
> Assertions must be disabled in production :-)
Since production deployments are out of our control, we should think
about ones with assertions enabled as well (and they do occur in
practice).
> A contributor who makes a new patch must be able to see failed tests
> immediately, he must
Sorry, I do not actually get what you are opposing: the compression of the
binary, or the null compaction, or both?
And can you elaborate on why you are opposing it?
Ivan,
> 2. Assertions might be disabled in production.
Assertions must be disabled in production :-)
Here is the other side of the same coin.
A contributor who makes a new patch must be able to see failed tests
immediately; he must not check all the logs for logged assertions.
The assertion is an
I'm currently against this approach: everyone can compress a Binary Object
beforehand for further use, no additional code is needed here. This
discussion is only about the currently suboptimal null storing, and it
looks like we can improve it without a performance penalty.
>Monday, May 25, 2020, 13:42
Hello!
That would be nice! My preferred compression method is zstd (it also has
dictionary generation built in).
Regards,
--
Ilya Kasnacheev
Mon, May 25, 2020 at 13:25, Hostettler, Steve <
steve.hostett...@wolterskluwer.com>:
> I like the idea, especially because it also would apply across
I like the idea, especially because it also would apply across the board.
So you propose to build the binary object and to apply dictionary based
compression on top.
I could quickly generate a bunch of binary objects from the tests and apply
java compress/deflate with a dictionary based on the
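A quick stdlib sketch of that experiment, using java.util.zip with a preset dictionary; the dictionary bytes and the sample payload below are made up for illustration, not derived from real Ignite binary objects:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class DictCompressDemo {
    /** Compresses with a preset dictionary, then inflates back. */
    static byte[] roundTrip(byte[] dict, byte[] input) throws DataFormatException {
        // Compress using the shared dictionary of common field names.
        Deflater def = new Deflater();
        def.setDictionary(dict);
        def.setInput(input);
        def.finish();
        byte[] buf = new byte[1024];
        int len = def.deflate(buf);
        def.end();

        // Decompress: the inflater first signals that it needs the dictionary.
        Inflater inf = new Inflater();
        inf.setInput(buf, 0, len);
        byte[] out = new byte[1024];
        int n = inf.inflate(out);
        if (n == 0 && inf.needsDictionary()) {
            inf.setDictionary(dict);
            n = inf.inflate(out);
        }
        inf.end();
        return java.util.Arrays.copyOf(out, n);
    }

    public static void main(String[] args) throws DataFormatException {
        // Made-up dictionary: field names shared by many binary objects.
        byte[] dict = "nameUUIDvaluefield".getBytes(StandardCharsets.UTF_8);
        byte[] input = "name=abc;value=42;field=UUID".getBytes(StandardCharsets.UTF_8);
        byte[] back = roundTrip(dict, input);
        System.out.println(new String(back, StandardCharsets.UTF_8));
    }
}
```

The dictionary pays off exactly in the case discussed here: many small objects sharing the same field names, where each object alone is too short for deflate to find repetition.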
Hello!
My take is the following: if conserving memory is needed at all, then we
had better invest in compression (such as dictionary-based row compression)
rather than implementing varint, compact nulls, etc.
Dictionary-based compression can easily tackle varints and null patterns
while also
I went for a simpler approach (only with the null mask) and yes, the gain is
high for smaller objects but low otherwise. I gain between 5-20% on my
objects. But to me it is the stepping stone to easily implementing other
optimisations like varint and schemaless without using raw. Trying to solve
the latest
Nikolay, Alexei,
thanks for your suggestions.
Offline re-encryption does not seem so simple: we need to read/replace
the existing encryption keys on all nodes (therefore, we should be
able to read/write the metastore/WAL and exchange data between the
baseline nodes). Re-encryption in maintenance
Hello!
I can't help but wonder how large a benefit it will give. I have
checked the ticket description; it looks like the proposed scheme is
elaborate and the benefit for non-extreme binary objects is rather tiny.
WDYT?
Regards,
--
Ilya Kasnacheev
Mon, May 18, 2020 at 22:54,
Maxim,
I looked into the code and checked how we set up uncaught exception
handlers for internal executors. Indeed, it does not look good (OOME leads
to failure handling, other exceptions are silently ignored). Logging
sounds good.
I have doubts about failure handling (e.g. terminating a node). I
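A minimal sketch of the "log instead of silently ignore" option for worker threads; this uses a plain java.lang.Thread rather than Ignite's internal executors, and the handler body is only an illustration:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class UncaughtLogging {
    public static final List<String> LOG = new CopyOnWriteArrayList<>();

    static void runFailingWorker() throws InterruptedException {
        Thread worker = new Thread(() -> {
            throw new IllegalStateException("boom");
        });
        // Log the escaped exception instead of silently dropping it.
        worker.setUncaughtExceptionHandler((t, e) ->
            LOG.add("uncaught in " + t.getName() + ": " + e.getMessage()));
        worker.start();
        worker.join();
    }

    public static void main(String[] args) throws InterruptedException {
        runFailingWorker();
        System.out.println(LOG);
    }
}
```

For executor pools the same effect is achieved with a ThreadFactory that installs such a handler on every thread it creates, so no exception escapes a worker unnoticed.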
Mon, May 25, 2020 at 12:00, Nikolay Izhikov :
> > This will take us to re-encryption using full rebalancing
>
> Rebalance will require 2x the effort for re-encryption
>
> 1. Read and send data from supplier node.
> 2. Reencrypt and write data on demander node.
>
> Instead of
>
> 1. Read,
Taras Ledkov created IGNITE-13068:
-
Summary: SQL: use cache.localSize instead of index scan to
calculate row count statistic
Key: IGNITE-13068
URL: https://issues.apache.org/jira/browse/IGNITE-13068
> This will take us to re-encryption using full rebalancing
Rebalance will require 2x the effort for re-encryption
1. Read and send data from supplier node.
2. Reencrypt and write data on demander node.
Instead of
1. Read, reencrypt and write data on «demander» node.
> May 25, 2020, at
For me, the one big disadvantage of offline re-encryption is the
possibility of running out of WAL history.
If re-encryption takes a long time, we will get full rebalancing with
partition eviction.
This will take us to re-encryption using full rebalancing, as proposed
by me earlier.
Mon, 25
> And definitely this approach is much simpler to implement
I agree.
If we allow taking nodes offline for re-encryption, then we can implement a
fully offline procedure:
1. Stop the node.
2. Execute some control.sh command that re-encrypts all data without
starting the node.
3. Start the node.
Pavel,
> Can you explain why such a restriction is necessary?
Re-encryption should have minimal impact on the cluster.
> Most likely having a currently re-encrypting node serving only demand
> requests will have the least performance impact on the grid.
The current design assumes that re-encryption will be started
And definitely this approach is much simpler to implement because all
corner cases are handled by the rebalancing code.
Mon, May 25, 2020 at 11:16, Alexei Scherbakov :
> I mean: serving supply requests.
>
> Mon, May 25, 2020 at 11:15, Alexei Scherbakov <
> alexey.scherbak...@gmail.com>:
>
>>
I mean: serving supply requests.
Mon, May 25, 2020 at 11:15, Alexei Scherbakov :
> Nikolay,
>
> Can you explain why such a restriction is necessary?
> Most likely having a currently re-encrypting node serving only demand
> requests will have the least performance impact on the grid.
>
> Mon, May 25, 2020
Nikolay,
Can you explain why such a restriction is necessary?
Most likely having a currently re-encrypting node serving only demand
requests will have the least performance impact on the grid.
Mon, May 25, 2020 at 11:08, Nikolay Izhikov :
> Hello, Alexei.
>
> I think we want to implement this feature
Hello,
Branch with ducktape created -
https://github.com/apache/ignite/tree/ignite-ducktape
Any who are willing to contribute to PoC are welcome.
> May 21, 2020, at 22:33, Nikolay Izhikov wrote:
>
> Hello, Denis.
>
> There is no rush with these improvements.
> We can wait for Maxim
Hello, Alexei.
I think we want to implement this feature without node restarts.
In the ideal scenario, all nodes will stay alive and respond to user
requests.
> May 24, 2020, at 15:24, Alexei Scherbakov
> wrote:
>
> Pavel Pereslegin,
>
> I see another opportunity.
> We can use
Hello!
Well, while we ship build-mono.sh I'll try to run it every time :) I'm not
running tests with it, just smoke testing to see if anything happens at all.
I admit it looks like Mono no longer has an ambition to be the stand-alone
dotnet runtime for Linux.
Regards,
--
Ilya Kasnacheev
Fri,