[kdenlive] [Bug 485356] External proxy preset, error when setting multiple profiles

2024-05-31 Thread Ron
https://bugs.kde.org/show_bug.cgi?id=485356

--- Comment #1 from Ron  ---
Hi,

Just a followup to this now that the external proxy editing dialog has been
enabled in 24.05.0.

The bug I noted here is still present in the 24.05.0 appimage.  I can see it
manifest if I just open that dialog and then flick between the various preset
options.

If you select the GoPro or Insta preset (or any preset with multiple profiles),
switch to a different preset, and then flick back to the GoPro or Insta one
without closing the dialog, you'll see that each time you return to the
multi-profile preset its options get shuffled around and appear in the wrong
fields.

Cheers,
Ron

-- 
You are receiving this mail because:
You are watching all bug changes.

(flink) branch master updated (ce0b61f376b -> 2c35e48addf)

2024-05-29 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from ce0b61f376b [FLINK-35351][checkpoint] Clean up and unify code for the 
custom partitioner test case
 add bc14d551e04 [FLINK-35195][test/test-filesystem] test-filesystem 
support partition.fields option
 add 2c35e48addf [FLINK-35348][table] Introduce refresh materialized table 
rest api

No new revisions were added by this update.

Summary of changes:
 .../file/table/FileSystemTableFactory.java |   2 +-
 .../flink/table/gateway/api/SqlGatewayService.java |  28 ++
 .../gateway/api/utils/MockedSqlGatewayService.java |  14 +
 .../table/gateway/rest/SqlGatewayRestEndpoint.java |  15 +
 .../RefreshMaterializedTableHandler.java   |  95 
 .../RefreshMaterializedTableHeaders.java   |  96 
 .../MaterializedTableIdentifierPathParameter.java  |  46 ++
 .../RefreshMaterializedTableParameters.java|  56 +++
 .../RefreshMaterializedTableRequestBody.java   |  99 
 .../RefreshMaterializedTableResponseBody.java  |  43 ++
 .../gateway/service/SqlGatewayServiceImpl.java |  31 ++
 .../MaterializedTableManager.java  | 127 -
 .../service/operation/OperationExecutor.java   |  24 +
 .../AbstractMaterializedTableStatementITCase.java  | 339 +
 ...GatewayRestEndpointMaterializedTableITCase.java | 187 +++
 .../service/MaterializedTableStatementITCase.java  | 535 +++--
 .../MaterializedTableManagerTest.java  |  77 ++-
 .../resources/sql_gateway_rest_api_v3.snapshot |  57 +++
 .../api/config/MaterializedTableConfigOptions.java |   2 +
 .../file/testutils/TestFileSystemTableFactory.java |  16 +
 .../testutils/TestFileSystemTableFactoryTest.java  |   3 +
 21 files changed, 1602 insertions(+), 290 deletions(-)
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/RefreshMaterializedTableHandler.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/RefreshMaterializedTableHeaders.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/MaterializedTableIdentifierPathParameter.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/RefreshMaterializedTableParameters.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/RefreshMaterializedTableRequestBody.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/RefreshMaterializedTableResponseBody.java
 create mode 100644 
flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/AbstractMaterializedTableStatementITCase.java
 create mode 100644 
flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/rest/SqlGatewayRestEndpointMaterializedTableITCase.java



[TICTOC]Re: Enterprise Profile: Support for Non standard TCs

2024-05-28 Thread Ron Cohen
Hi Doug,

This draft intends to be a standard track RFC.

Can an enterprise-profile compliant TC modify the source IP address of event 
messages?

Is an enterprise-profile compliant time-transmitter (i.e., a Master or Boundary
clock) required to support configuration of clock-id to IP-address mappings?

Thanks
Ron

From: Doug Arnold 
Sent: Tuesday, May 28, 2024 4:52 PM
To: Ron Cohen ; tictoc@ietf.org
Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs

Prioritize security for external emails: Confirm sender and content safety 
before clicking links or opening attachments

Hello Ron,

The enterprise profile draft does not state that TCs MUST modify the source 
addresses of PTP event messages. Nor does it state that TCs MUST NOT modify the 
source addresses.  It is merely pointing out that, in the field, a PTP instance 
can receive PTP event messages with either the source address of the parent 
clock or the source address of a TC in the communication path.  I think that 
this is critically important information for implementors of PTP capable 
devices and should remain in the draft.

I personally prefer TC implementations that do not modify the source address, 
as that is more helpful for people deploying and maintaining PTP networks.  
However, some TC vendors have told me that they don't do that because they 
believe that it violates the standards of the transport network (IP and/or 
Ethernet).  From a layer model architecture point of view, they have a point:

PTP

UDP

IP

Ethernet

Any packet payload sent up to the PTP layer, modified, sent back down the stack 
and retransmitted would be a new packet and a new frame.

Regards,
Doug


From: Ron Cohen <r...@marvell.com>
Sent: Sunday, May 26, 2024 7:44 AM
To: Doug Arnold <doug.arn...@meinberg-usa.com>; tictoc@ietf.org
Subject: RE: Enterprise Profile: Support for Non standard TCs


Hi Doug,



Thanks for the reference. This note was added in the 2019 version, and I 
believe requires further discussion/clarifications, but I would like to keep 
the focus on the UDP/IP encapsulation, which is the one required by the 
Enterprise profile.



"All messages, including PTP messages, shall be transmitted and received in 
conformance with the standards governing the transport, network, and physical 
layers of the communication paths used."



An IEEE-1588 compliant TC supporting UDP/IP encapsulation must either modify 
the source-IP address of event messages or must not modify the address. Annex E 
of 1588-2019 is the normative specification of this encapsulation.

If an E2E TC changes the source IPv4 address of an event message, it must 
re-calculate the IPv4 header checksum as well. This is an important 
consideration in HW implementations. Update of the IPv4 header checksum is not 
mentioned in Annex E (or anywhere else in the spec). My point is that it is not 
specified in Annex-E because a TC must not modify the IP header fields 
protected by the IPv4 header checksum.



AFAIK, the IEEE-1588-2019 standard does not specify the need for Clock-ID to 
delay-resp mapping to support UDP/IP encapsulation either, for the same reason; 
it is not required for standard E2E TC implementations.



If we are not in agreement what is the mandatory behavior of Annex-E TC with 
regards to source IP address, I suggest to first ratify it with other members 
of the WG / with other established TC vendors before moving forward with the 
draft.



Best,

Ron



From: Doug Arnold <doug.arn...@meinberg-usa.com>
Sent: Friday, May 24, 2024 12:40 AM
To: Ron Cohen <r...@marvell.com>; tictoc@ietf.org
Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs



Prioritize security for external emails: Confirm sender and content safety 
before clicking links or opening attachments



Hi Ron,



I excluded NATs because I don't think that they are common in networks where 
enterprise profile PTP is used. So I just didn't want to address them,



I wouldn't say the same about TCs.  Some TC implementations do change the 
source address, and some don't.  I've seen both kinds at PTP plugfests.  That 
is why the language in the draft says TCs might change the source address.  I 
think that this is important for network operators to know.  That is why I want 
that statement in there.



Technically speaking TCs do not forward frames/packets containing PTP event 
messages.  Instead, they take them up the PTP layer, alter them, send them back 
down to the data link or network layers and then transmit new frames/packets.  
That is officially true even in 1-step cut-through when the implementation 
combines all of these steps. At the PTP layer we call this retransmission, but 
that is not how it is viewed 

Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-28 Thread Ron / BCLUG

Ron / BCLUG wrote on 2024-05-27 18:10:

you'll love both the runit and s6 init
systems.


That's great, I didn't know they ran startup stuff in parallel.

Is it achieved through "script_name &" or something else?


Answering myself, runit looks kinda nifty according to this:

https://en.wikipedia.org/wiki/Runit


It's actually init + services management, which is nice.


And originally from daemontools, by DJB (Daniel Bernstein), who's quite 
a wizard and has written an impressive number of core utilities (qmail, 
djbdns, etc.).



Kinda sounds like Lennart Poettering, come to think of it.



So, yeah, it looks nice, for sure.

rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


(flink) 02/02: [FLINK-35425][table-common] Support convert freshness to cron expression in full refresh mode

2024-05-28 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 49f22254a78d554ac49810058c209297331129cd
Author: fengli 
AuthorDate: Mon May 27 20:54:39 2024 +0800

[FLINK-35425][table-common] Support convert freshness to cron expression in 
full refresh mode
---
 .../flink/table/utils/IntervalFreshnessUtils.java  | 74 
 .../table/utils/IntervalFreshnessUtilsTest.java| 80 +-
 .../SqlCreateMaterializedTableConverter.java   |  6 ++
 ...erializedTableNodeToOperationConverterTest.java |  9 +++
 4 files changed, 168 insertions(+), 1 deletion(-)
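
For orientation, a rough usage sketch of the new conversion utility, assuming the
IntervalFreshness.ofSecond factory introduced in the companion commit (the sketch
itself is not part of the diff below):

import org.apache.flink.table.catalog.IntervalFreshness;
import org.apache.flink.table.utils.IntervalFreshnessUtils;

public class FreshnessToCronSketch {
    public static void main(String[] args) {
        // FRESHNESS = INTERVAL '30' SECOND in full refresh mode maps onto the
        // Quartz-style template "0/%s * * * * ? *", i.e. fire every 30 seconds.
        System.out.println(
                IntervalFreshnessUtils.convertFreshnessToCron(
                        IntervalFreshness.ofSecond("30"))); // prints: 0/30 * * * * ? *

        // Intervals that are not factors of the unit's upper bound (e.g. 7 seconds,
        // since 60 % 7 != 0) are rejected with a ValidationException.
    }
}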

diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/IntervalFreshnessUtils.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/IntervalFreshnessUtils.java
index 121200098ec..cd58bff4d91 100644
--- 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/IntervalFreshnessUtils.java
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/IntervalFreshnessUtils.java
@@ -31,6 +31,15 @@ import java.time.Duration;
 @Internal
 public class IntervalFreshnessUtils {
 
+private static final String SECOND_CRON_EXPRESSION_TEMPLATE = "0/%s * * * * ? *";
+private static final String MINUTE_CRON_EXPRESSION_TEMPLATE = "0 0/%s * * * ? *";
+private static final String HOUR_CRON_EXPRESSION_TEMPLATE = "0 0 0/%s * * ? *";
+private static final String ONE_DAY_CRON_EXPRESSION_TEMPLATE = "0 0 0 * * ? *";
+
+private static final long SECOND_CRON_UPPER_BOUND = 60;
+private static final long MINUTE_CRON_UPPER_BOUND = 60;
+private static final long HOUR_CRON_UPPER_BOUND = 24;
+
 private IntervalFreshnessUtils() {}
 
 @VisibleForTesting
@@ -69,4 +78,69 @@ public class IntervalFreshnessUtils {
 intervalFreshness.getTimeUnit()));
 }
 }
+
+/**
+ * This is an util method that is used to convert the freshness of materialized table to cron
+ * expression in full refresh mode. Since freshness and cron expression cannot be converted
+ * equivalently, there are currently only a limited patterns of freshness that can be converted
+ * to cron expression.
+ */
+public static String convertFreshnessToCron(IntervalFreshness intervalFreshness) {
+switch (intervalFreshness.getTimeUnit()) {
+case SECOND:
+return validateAndConvertCron(
+intervalFreshness,
+SECOND_CRON_UPPER_BOUND,
+SECOND_CRON_EXPRESSION_TEMPLATE);
+case MINUTE:
+return validateAndConvertCron(
+intervalFreshness,
+MINUTE_CRON_UPPER_BOUND,
+MINUTE_CRON_EXPRESSION_TEMPLATE);
+case HOUR:
+return validateAndConvertCron(
+intervalFreshness, HOUR_CRON_UPPER_BOUND, HOUR_CRON_EXPRESSION_TEMPLATE);
+case DAY:
+return validateAndConvertDayCron(intervalFreshness);
+default:
+throw new ValidationException(
+String.format(
+"Unknown freshness time unit: %s.",
+intervalFreshness.getTimeUnit()));
+}
+}
+
+private static String validateAndConvertCron(
+IntervalFreshness intervalFreshness, long cronUpperBound, String cronTemplate) {
+long interval = Long.parseLong(intervalFreshness.getInterval());
+IntervalFreshness.TimeUnit timeUnit = intervalFreshness.getTimeUnit();
+// Freshness must be less than cronUpperBound for corresponding time unit when convert it
+// to cron expression
+if (interval >= cronUpperBound) {
+throw new ValidationException(
+String.format(
+"In full refresh mode, freshness must be less than %s when the time unit is %s.",
+cronUpperBound, timeUnit));
+}
+// Freshness must be factors of cronUpperBound for corresponding time unit
+if (cronUpperBound % interval != 0) {
+throw new ValidationException(
+String.format(
+"In full refresh mode, only freshness that are factors of %s are currently supported when the time unit is %s.",
+cronUpperBound, timeUnit));
+}
+
+return String.format(cronTemplate, interval);
+}
+
+private static String validateAndConvertDayCron(IntervalFreshness intervalFreshness) {
+// Since the number of days in each month is different, only one day of freshness is
+  

(flink) 01/02: [FLINK-35425][table-common] Introduce IntervalFreshness to support materialized table full refresh mode

2024-05-28 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 61a68bc9dc74926775dd546af64fe176782f70ba
Author: fengli 
AuthorDate: Fri May 24 12:24:49 2024 +0800

[FLINK-35425][table-common] Introduce IntervalFreshness to support 
materialized table full refresh mode
---
 .../catalog/CatalogBaseTableResolutionTest.java|  10 +-
 .../table/catalog/CatalogMaterializedTable.java|  19 +++-
 .../flink/table/catalog/CatalogPropertiesUtil.java |  20 +++-
 .../catalog/DefaultCatalogMaterializedTable.java   |   7 +-
 .../flink/table/catalog/IntervalFreshness.java | 104 +
 .../catalog/ResolvedCatalogMaterializedTable.java  |   5 +-
 .../flink/table/utils/IntervalFreshnessUtils.java  |  72 ++
 .../table/utils/IntervalFreshnessUtilsTest.java|  67 +
 .../SqlCreateMaterializedTableConverter.java   |   9 +-
 .../planner/utils/MaterializedTableUtils.java  |  16 ++--
 ...erializedTableNodeToOperationConverterTest.java |   4 +-
 .../catalog/TestFileSystemCatalogTest.java |   6 +-
 12 files changed, 302 insertions(+), 37 deletions(-)

diff --git 
a/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogBaseTableResolutionTest.java
 
b/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogBaseTableResolutionTest.java
index 72a22c22935..a9436ac21df 100644
--- 
a/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogBaseTableResolutionTest.java
+++ 
b/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogBaseTableResolutionTest.java
@@ -38,7 +38,6 @@ import org.junit.jupiter.api.Test;
 
 import javax.annotation.Nullable;
 
-import java.time.Duration;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashMap;
@@ -235,8 +234,8 @@ class CatalogBaseTableResolutionTest {
 
 assertThat(resolvedCatalogMaterializedTable.getResolvedSchema())
 .isEqualTo(RESOLVED_MATERIALIZED_TABLE_SCHEMA);
-assertThat(resolvedCatalogMaterializedTable.getFreshness())
-.isEqualTo(Duration.ofSeconds(30));
+assertThat(resolvedCatalogMaterializedTable.getDefinitionFreshness())
+.isEqualTo(IntervalFreshness.ofSecond("30"));
 assertThat(resolvedCatalogMaterializedTable.getDefinitionQuery())
 .isEqualTo(DEFINITION_QUERY);
 assertThat(resolvedCatalogMaterializedTable.getLogicalRefreshMode())
@@ -424,7 +423,8 @@ class CatalogBaseTableResolutionTest {
 properties.put("schema.3.comment", "");
 properties.put("schema.primary-key.name", "primary_constraint");
 properties.put("schema.primary-key.columns", "id");
-properties.put("freshness", "PT30S");
+properties.put("freshness-interval", "30");
+properties.put("freshness-unit", "SECOND");
 properties.put("logical-refresh-mode", "CONTINUOUS");
 properties.put("refresh-mode", "CONTINUOUS");
 properties.put("refresh-status", "INITIALIZING");
@@ -454,7 +454,7 @@ class CatalogBaseTableResolutionTest {
 .partitionKeys(partitionKeys)
 .options(Collections.emptyMap())
 .definitionQuery(definitionQuery)
-.freshness(Duration.ofSeconds(30))
+.freshness(IntervalFreshness.ofSecond("30"))
 .logicalRefreshMode(CatalogMaterializedTable.LogicalRefreshMode.AUTOMATIC)
 .refreshMode(CatalogMaterializedTable.RefreshMode.CONTINUOUS)
 .refreshStatus(CatalogMaterializedTable.RefreshStatus.INITIALIZING)
diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogMaterializedTable.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogMaterializedTable.java
index 51856cc859e..1b41ed0ddb9 100644
--- 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogMaterializedTable.java
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogMaterializedTable.java
@@ -30,6 +30,8 @@ import java.util.List;
 import java.util.Map;
 import java.util.Optional;
 
+import static org.apache.flink.table.utils.IntervalFreshnessUtils.convertFreshnessToDuration;
+
 /**
  * Represents the unresolved metadata of a materialized table in a {@link Catalog}.
  *
@@ -113,9 +115,18 @@ public interface CatalogMaterializedTable extends CatalogBaseTable {
 String getDefinitionQuery();
 
 /**
- * Get the freshness of materialized table which is used to determine the physical refresh mode.
+ 

(flink) branch master updated (6c417719972 -> 49f22254a78)

2024-05-28 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 6c417719972 [hotfix] Fix modification conflict between FLINK-35465 and 
FLINK-35359
 new 61a68bc9dc7 [FLINK-35425][table-common] Introduce IntervalFreshness to 
support materialized table full refresh mode
 new 49f22254a78 [FLINK-35425][table-common] Support convert freshness to 
cron expression in full refresh mode

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../catalog/CatalogBaseTableResolutionTest.java|  10 +-
 .../table/catalog/CatalogMaterializedTable.java|  19 ++-
 .../flink/table/catalog/CatalogPropertiesUtil.java |  20 ++-
 .../catalog/DefaultCatalogMaterializedTable.java   |   7 +-
 .../flink/table/catalog/IntervalFreshness.java | 104 +++
 .../catalog/ResolvedCatalogMaterializedTable.java  |   5 +-
 .../flink/table/utils/IntervalFreshnessUtils.java  | 146 +
 .../table/utils/IntervalFreshnessUtilsTest.java| 145 
 .../SqlCreateMaterializedTableConverter.java   |  15 ++-
 .../planner/utils/MaterializedTableUtils.java  |  16 ++-
 ...erializedTableNodeToOperationConverterTest.java |  13 +-
 .../catalog/TestFileSystemCatalogTest.java |   6 +-
 12 files changed, 469 insertions(+), 37 deletions(-)
 create mode 100644 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/IntervalFreshness.java
 create mode 100644 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/IntervalFreshnessUtils.java
 create mode 100644 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/utils/IntervalFreshnessUtilsTest.java



Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-27 Thread Ron / BCLUG

Steve Litt wrote on 2024-05-27 05:24:


If you like parallelism,


It is a compelling idea...



you'll love both the runit and s6 init
systems.


That's great, I didn't know they ran startup stuff in parallel.

Is it achieved through "script_name &" or something else?



Try em, you'll like em.


Not really keen on swapping out everything just for an init system, when 
I already have parallelism and a whole bunch more.



Plus, as mentioned elsewhere, having suffered through OS/2 vs Windows, 
and early days of Linux, I prefer to stick to more mainstream stuff 
these days.



But again, I think it's great those init systems use parallelism and am 
curious how they do it.



Thanks!

rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-27 Thread Ron / BCLUG

Jonathan Drews wrote on 2024-05-27 14:43:


There's a lot of cross-over with servers and software between the FLOSS
families.

>>

How would you know if you don't run FreeBSD or OpenBSD?


Because I'm not stupid?

I mean, that's a really dumb question; are you disputing the overlap 
between Linux and BSD systems?



Things like Nick's shell scripting presentations have lots of overlap 
between FLOSS systems and are immensely enjoyable and informative.





When the questions are BSD specific, I don't say anything.


You just told me how OpenBSD didn't have tools similar to systemd when
you have no working experience of OpenBSD tools such as hostctl,
smtpctl, sysctl, rcctl etc.


I *asked* if it was possible to get a list like the example I gave.

You answered "I have logs".

Without specific tools to parse the logs, that means "no".


Are you intentionally misunderstanding that? Are you not disclosing such 
tools for some reason?  Do you have reading comprehension difficulties?



It was a simple question that wasn't inflammatory, just a "how to ...?" 
question.


Maybe there is a tool that generates just such a listing in the BSD 
world. That'd be great, I'd like to hear about it.



But you're clearly the wrong person to engage with on such things.


rb



___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-27 Thread Ron / BCLUG

Jonathan Drews wrote on 2024-05-27 12:09:

  This is a list devoted to helping people with *BSD systems. If you 
have no intention of using it, why are you even here?


There's a lot of cross-over with servers and software between the FLOSS 
families.


I try to contribute answers to questions (like your inventory management 
one, for example) when no one else chimes in.



When the questions are BSD specific, I don't say anything.

But I don't disparage BSD, I have no problem with it (them?).


Also, someone needs to challenge the "Linux is becoming a poor 
implementation of Window!!1!" comments.



Hope that helps.

rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-27 Thread Ron / BCLUG

Jonathan Drews wrote on 2024-05-27 10:59:


The boot time was so slow that it was obvious.


If different processes were involved in the boot sequence, that just may 
have an effect on time-to-desktop, but since it's left unaddressed, I 
guess we'll never know.




Is something like that even possible on non-systemd machines?


I have log files in /var/log on OpenBSD.


So the answer is "no".

Having log files and processing them to extract startup times,
sequencing, etc. is quite different.


Have you even installed OpenBSD or FreeBSD? Have you ever used *BSD 
for longer than one day?


Installed, yes. Used, no.

I used OS/2 back in the day and have experienced the hassle of using
niche software that isn't well supported and do not wish to subject
myself to that again unless there's a compelling reason.

It was a bit of a hassle initially using Linux as a daily driver, but
it's gotten so much better in the past 10-ish years.


rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-27 Thread Ron / BCLUG

Jonathan Drews wrote on 2024-05-23 19:54:


I don't know what the cause was but I could never get scanning (xsane)
to work on either Linux Mint or Kubuntu.


Scanning has been a solved problem in Linux for a decade or two, so it's 
hard to know what went wrong, nor what purpose is served bringing it up 
really.




One of the claims about systemd is that it would provide faster boot
up. However, my Devuan Linux boots faster than either KdeNeon or
Kubuntu or Linux Mint. All three were installed on the same T480
laptop, which now runs Devuan.


All running KDE?  With the exact same packages installed? Seems like 
apples to oranges without that info.


What times did you measure for them?


Parallelism is pretty much always going to be faster, all else being equal.


General question for the list: how does one diagnose which process(es) 
slow down booting up on non-systemd hosts?


I run `systemd-analyze blame` and get a nice list like this:

1min 10.758s plocate-updatedb.service
 31.549s apt-daily.service
 31.283s apt-daily-upgrade.service
 15.423s fstrim.service
  7.902s dev-loop2.device
  6.296s snapd.service
  4.059s systemd-networkd-wait-online.service
  3.830s systemd-udev-settle.service
  3.819s smartmontools.service
  3.433s zfs-import-cache.service
  2.432s postfix@-.service

I can see at a glance exactly what is going on with my boot sequence timing.

Is something like that even possible on non-systemd machines?







Finally there is the xz exploit, which has a writeup:
https://marc.info/?l=openbsd-misc&m=171179460913574&w=2

it leads in with a quote to remember -

"This dependency existed not because of a deliberate design decision
by the developers of OpenSSH, but because of a kludge added by some
Linux distributions to integrate the tool with the operating
system's newfangled orchestration service, systemd."


"kludge". "newfangled".

That's quite the biased take on it, not worth the time it took to read it.


The xz exploit was a nation-state attack targeting sshd via xz-utils as 
a vector, then pivoting via systemd's dynamic linking of xz.


Everyone knows that if one is targeted by nation-state actors, it's 
pretty much game over.


Defenders need 100% success, attackers only need 1 success.



As for systemd linking to xz-utils, everyone realizes that log files get 
compressed, I hope?



When software statically links libraries, people complain because:

* multiple versions statically linked "waste disk space"
* with dynamic linking, a vulnerability only needs one library to be 
patched for all apps to be patched


The flip side is, one compromised library and lots of apps are 
vulnerable, I guess.


There isn't really a Right Answer™ to statically vs dynamically linking.


Anyway, systemd had a patch committed that would statically link 
xz-utils, just waiting for distributions to bundle it, when the xz-utils 
hack happened. FWIW.



rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


(flink) branch master updated (4b342da6d14 -> 90e2d6cfeea)

2024-05-26 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 4b342da6d14 [FLINK-35426][table-planner] Change the distribution of 
DynamicFilteringDataCollector to Broadcast
 add 90e2d6cfeea [FLINK-35342][table] Fix the unstable 
MaterializedTableStatementITCase test due to wrong job status check logic

No new revisions were added by this update.

Summary of changes:
 .../gateway/service/MaterializedTableStatementITCase.java  | 10 ++
 1 file changed, 10 insertions(+)



[TICTOC]Re: Enterprise Profile: Support for Non standard TCs

2024-05-26 Thread Ron Cohen
Hi Doug,

Thanks for the reference. This note was added in the 2019 version, and I 
believe requires further discussion/clarifications, but I would like to keep 
the focus on the UDP/IP encapsulation, which is the one required by the 
Enterprise profile.

"All messages, including PTP messages, shall be transmitted and received in 
conformance with the standards governing the transport, network, and physical 
layers of the communication paths used."

An IEEE-1588 compliant TC supporting UDP/IP encapsulation must either modify 
the source-IP address of event messages or must not modify the address. Annex E 
of 1588-2019 is the normative specification of this encapsulation.
If an E2E TC changes the source IPv4 address of an event message, it must 
re-calculate the IPv4 header checksum as well. This is an important 
consideration in HW implementations. Update of the IPv4 header checksum is not 
mentioned in Annex E (or anywhere else in the spec). My point is that it is not 
specified in Annex-E because a TC must not modify the IP header fields 
protected by the IPv4 header checksum.
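
To make the point concrete, here is a minimal sketch (mine, not from the draft or
from 1588) of the RFC 1071 one's-complement checksum a TC would have to recompute,
or incrementally patch per RFC 1624, if it rewrote the source IP field:

// Hypothetical illustration only: RFC 1071-style one's-complement checksum over an
// IPv4 header. A TC that rewrote the source address field would have to recompute
// this (or patch it incrementally) before retransmitting the packet.
public final class Ipv4HeaderChecksum {

    public static int checksum(byte[] header) {
        long sum = 0;
        for (int i = 0; i + 1 < header.length; i += 2) {
            // Skip the checksum field itself (bytes 10-11) when computing a fresh value.
            if (i == 10) {
                continue;
            }
            sum += ((header[i] & 0xFF) << 8) | (header[i + 1] & 0xFF);
        }
        // Fold the carries back into the low 16 bits (one's-complement addition).
        while ((sum >> 16) != 0) {
            sum = (sum & 0xFFFF) + (sum >> 16);
        }
        return (int) (~sum & 0xFFFF);
    }
}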

AFAIK, the IEEE-1588-2019 standard does not specify the need for Clock-ID to 
delay-resp mapping to support UDP/IP encapsulation either, for the same reason; 
it is not required for standard E2E TC implementations.

If we are not in agreement what is the mandatory behavior of Annex-E TC with 
regards to source IP address, I suggest to first ratify it with other members 
of the WG / with other established TC vendors before moving forward with the 
draft.

Best,
Ron

From: Doug Arnold 
Sent: Friday, May 24, 2024 12:40 AM
To: Ron Cohen ; tictoc@ietf.org
Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs

Prioritize security for external emails: Confirm sender and content safety 
before clicking links or opening attachments
________
Hi Ron,

I excluded NATs because I don't think that they are common in networks where 
enterprise profile PTP is used. So I just didn't want to address them,

I wouldn't say the same about TCs.  Some TC implementations do change the 
source address, and some don't.  I've seen both kinds at PTP plugfests.  That 
is why the language in the draft says TCs might change the source address.  I 
think that this is important for network operators to know.  That is why I want 
that statement in there.

Technically speaking TCs do not forward frames/packets containing PTP event 
messages.  Instead, they take them up the PTP layer, alter them, send them back 
down to the data link or network layers and then transmit new frames/packets.  
That is officially true even in 1-step cut-through when the implementation 
combines all of these steps. At the PTP layer we call this retransmission, but 
that is not how it is viewed by the layers below.  IEEE 802.1Q is explicit 
about this, and the IEEE 802.1 working group sent a message to the 1588 WG 
asking us to point this out in the 2019 edition of 1588.

IEEE 1588-2019 subclause 7.3.1 starts with these two paragraphs:
"All messages, including PTP messages, shall be transmitted and received in 
conformance with the
standards governing the transport, network, and physical layers of the 
communication paths used.

NOTE-As an example, consider IEEE 1588 PTP Instances, specifically including 
Transparent Clocks, running on
IEEE 802.1Q communication paths. Suppose we have two Boundary Clocks separated 
by a Transparent Clock. The
Transparent Clock entity (the PTP stack running above the MAC layer) is 
required to insert the appropriate MAC
address of the Transparent Clock into the sourceAddress field of the Ethernet 
header for ALL messages it transmits.
Other communication protocols can have similar requirements."

Regards,
Doug
________
From: Ron Cohen <r...@marvell.com>
Sent: Wednesday, May 22, 2024 11:57 PM
To: Doug Arnold <doug.arn...@meinberg-usa.com>; tictoc@ietf.org
Subject: RE: Enterprise Profile: Support for Non standard TCs


Hi Doug,



The draft states that deployments with NAT are out of scope of the document.

"In IPv4 networks some clocks might be hidden behind a NAT, which hides their 
IP addresses from the rest of the network. Note also that the use of NATs may 
place limitations on the topology of PTP networks, depending on the port 
forwarding scheme employed. Details of implementing PTP with NATs are out of 
scope of this document."

A PTP TC that is a bridge per 802.1q or an IPv4/6 router must not change the 
source IP address of PTP delay requests.



I've been working with TC solutions for more than 10 years. Both 1-step PTP TCs 
in HW (as well as 2-step in HW+SW) and none modified the source IP address of 
E2E delay requests, when working as either a bridge or router.

This is the case for the products of the company I currently work for as well.



My input 

Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-25 Thread Ron / BCLUG

Steve Litt wrote on 2024-05-25 01:25:


That being said I don't think it calls for a full boycott of Linux,

>

Thanks Kyle. Like you, I don't think systemd calls for a boycott on
Linux, and I hadn't intended to imply it.


Cheers Steve, Kyle, et al.


I just wanted to say, despite the spirited debates, it's absolutely 
wonderful that there's top-notch software available for free that we all 
get to use in the way we choose to use it.



Thanks to all the contributors, and thanks to the list for giving us all 
a place to chat, rant, rave, and in the end, discuss our topics of interest!



rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-24 Thread Ron / BCLUG

Kyle Willett wrote on 2024-05-23 21:31:


One piece of software can't be that good at so many different tasks!


I'm not sure that logic holds up:

"Fedora can't be that good at so many different tasks"

"Linux kernel can't be that good at so many different tasks"




GNU utilities - contains logging tools, mcron cron job implementation, 
grub, ...




That's not really an apples-to-apples comparison, but packaging a bunch 
of tools under one moniker isn't uncommon.





> sudo replacement with run0

What did you think about the discussion (was it on this list?) about 
suid and the inherent risks with the (allegedly) spotty implementations 
of that vis-á-vis sudo?


It was over my head, but there were issues raised and sudo CVE patches
mentioned.


Someone with very deep knowledge of the topic proposed the run0 vs sudo 
and had some valid-looking reasons for doing so.




Now, granted systemd utils show up in a *lot* of places, giving valid 
reason to be curious about why.


On the other hand, a services management system probably should handle a 
lot of different functionality.



And, some of those new utilities have great features, i.e.:

* show me all log messages from postfix from 2 boots ago *only*

* show me all the "cron" jobs in order of when they next launch, the 
time elapsed since last launch,...


* show me a list of all services that start at boot time and how long 
they took to become active (wow, I just noticed it took 30.566s for 
apt-daily-upgrade.service to come up)







Admittedly, I'm not a fan of resolvectl and some other stuff, and more 
often than not use cron, not timers.



Cheers,

rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


(flink) branch master updated (0737220959f -> 71e6746727a)

2024-05-23 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 0737220959f [FLINK-35216] Support for RETURNING clause of JSON_QUERY
 add 0ec6302cff4 [FLINK-35347][table-common] Introduce RefreshWorkflow 
related implementation to support full refresh mode for materialized table
 add 62b8fee5208 [FLINK-35347][table] Introduce embedded scheduler to 
support full refresh mode for materialized table
 add 71e6746727a [FLINK-35347][table] Introduce EmbeddedWorkflowScheduler 
plugin based on embedded scheduler

No new revisions were added by this update.

Summary of changes:
 flink-table/flink-sql-gateway/pom.xml  |  26 ++
 .../table/gateway/rest/SqlGatewayRestEndpoint.java |  60 ++-
 .../CreateEmbeddedSchedulerWorkflowHandler.java|  98 
 .../DeleteEmbeddedSchedulerWorkflowHandler.java|  75 +++
 .../ResumeEmbeddedSchedulerWorkflowHandler.java|  75 +++
 .../SuspendEmbeddedSchedulerWorkflowHandler.java   |  75 +++
 .../AbstractEmbeddedSchedulerWorkflowHeaders.java  |  63 +++
 .../CreateEmbeddedSchedulerWorkflowHeaders.java}   |  65 ++-
 .../DeleteEmbeddedSchedulerWorkflowHeaders.java|  50 ++
 .../ResumeEmbeddedSchedulerWorkflowHeaders.java|  50 ++
 .../SuspendEmbeddedSchedulerWorkflowHeaders.java   |  50 ++
 .../header/session/ConfigureSessionHeaders.java|   4 +-
 .../header/statement/CompleteStatementHeaders.java |   4 +-
 ...CreateEmbeddedSchedulerWorkflowRequestBody.java | 105 +
 ...reateEmbeddedSchedulerWorkflowResponseBody.java |  53 +++
 .../EmbeddedSchedulerWorkflowRequestBody.java  |  55 +++
 .../rest/util/SqlGatewayRestAPIVersion.java|   5 +-
 .../gateway/workflow/EmbeddedRefreshHandler.java   |  84 
 .../workflow/EmbeddedRefreshHandlerSerializer.java |  45 ++
 .../workflow/EmbeddedWorkflowScheduler.java| 235 ++
 .../workflow/EmbeddedWorkflowSchedulerFactory.java |  67 +++
 .../flink/table/gateway/workflow/WorkflowInfo.java | 125 +
 .../scheduler/EmbeddedQuartzScheduler.java | 229 +
 .../workflow/scheduler/QuartzSchedulerUtils.java   | 125 +
 .../workflow/scheduler/SchedulerException.java}|  14 +-
 .../src/main/resources/META-INF/NOTICE |   9 +
 .../org.apache.flink.table.factories.Factory   |   1 +
 .../table/gateway/rest/RestAPIITCaseBase.java  |   6 +-
 .../rest/util/TestingSqlGatewayRestEndpoint.java   |   4 +-
 .../workflow/EmbeddedRefreshHandlerTest.java}  |  28 +-
 .../workflow/EmbeddedSchedulerRelatedITCase.java   | 350 ++
 .../gateway/workflow/QuartzSchedulerUtilsTest.java |  83 
 .../resources/sql_gateway_rest_api_v3.snapshot | 519 +
 .../table/refresh/ContinuousRefreshHandler.java|   2 +
 .../workflow/CreatePeriodicRefreshWorkflow.java|  85 
 ...owException.java => ResumeRefreshWorkflow.java} |  19 +-
 ...wException.java => SuspendRefreshWorkflow.java} |  19 +-
 .../flink/table/workflow/WorkflowException.java|   5 +-
 flink-table/pom.xml|   1 +
 39 files changed, 2887 insertions(+), 81 deletions(-)
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHandler.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/DeleteEmbeddedSchedulerWorkflowHandler.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHandler.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/SuspendEmbeddedSchedulerWorkflowHandler.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/AbstractEmbeddedSchedulerWorkflowHeaders.java
 copy 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/{statement/CompleteStatementHeaders.java
 => materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHeaders.java} 
(51%)
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/DeleteEmbeddedSchedulerWorkflowHeaders.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHeaders.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/SuspendEmbeddedSchedulerWorkflowHeaders.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/material

Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-23 Thread Ron / BCLUG

Steve Litt wrote on 2024-05-23 18:06:


I'll address his central point, which is that systemd has many
benefits. My rebuttal is that nobody needs that kind of complexity.


Computers are complex, imagine that.



Most systemd features can and have been done better and simpler other
ways.


Asserts facts not in evidence; show your evidence.

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: PG 12.2 ERROR: cannot freeze committed xmax

2024-05-23 Thread Ron Johnson
On Thu, May 23, 2024 at 9:41 AM bruno da silva  wrote:

> Hello,
> I have a deployment with PG 12.2 reporting ERROR: cannot freeze committed
> xmax
> using Red Hat Enterprise Linux 8.9.
>
> What is the recommended to find any bug fixes that the version 12.2 had
> that could have caused this error.
>

https://www.postgresql.org/docs/release/

You're missing *four years* of bug fixes.

> Could this error be caused by OS/Hardware related issues?
>

Four years of bug fixes is more likely the answer.


Re: [Semibug] OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-23 Thread Ron / BCLUG

Steve Litt wrote on 2024-05-23 02:53:


LibreOffice' reason for existence is to interact with MS Office
documents. If it can't do that, why use it?


The blame for poor interaction lies with Microsoft, 100%.

Also, another major reason for LibreOffice is to have a full featured 
office suite that is *not* Microsoft Office, one that runs natively on 
all OSs. Which it has succeeded at nicely.




 From my perspective, LibreOffice suffers from the same problem now
afflicting most Linux distributions: Trying to be easy for Windows
people. Systemd


But systemd has absolutely nothing to do with being easy for Windows 
people. It exists to provide a services lifecycle management system.


Just because you dislike both of them does not mean the two are related 
somehow.




Once again, I'll link "The Tragedy of systemd" by Benno Rice, FreeBSD 
developer.  I'm still waiting for an anti-systemd person to address one 
single point he raised:


The presentation at linux.conf.au:


https://www.youtube.com/watch?v=o_AIw9bGogo


Specifically, "The arguments against systemd that people tend to 
advance", starting with "it violates the Unix philosophy":



https://youtu.be/o_AIw9bGogo?si=0xJ0-JpXGEBGpW0K&t=1040



The slide show (note its domain):

> 
https://papers.freebsd.org/2018/bsdcan/rice-The_Tragedy_of_systemd.files/rice-The_Tragedy_of_systemd.pdf



___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] LO backups & OO [was OT: is there any office package (especially spreadsheet) that lets me choose a PEN color]

2024-05-23 Thread Ron / BCLUG

CAREY SCHUG wrote on 2024-05-23 08:07:


LO vs OO (topic 1)

I was pissed when I was told I "had" to convert from OO to LO.

LO was buggy (see below)

later found a friend stayed with OO and was happy.

any opinions on which of LO  and OO  (or others) do the best job on reading in 
XL or other formats?


LibreOffice is the project created when most of the devs forked Open Office
(an Oracle product at the time).


It's been refactored and has received rapid development updates.

OpenOffice was handed off by Oracle to the Apache Foundation, and
virtually no one works on it other than a few commits by IBM employees
(IBM has distributed OO in the past so wanted it to survive in some form).



LibreOffice beats Open Office by every metric available.

I can't speak to "XL" (XLS?) format specifically, but assume nothing has 
changed in that regard in 10 years for OO.



rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Off line HTML Documents and Safari Browser

2024-05-23 Thread Ron Canazzi

Hi Group,

I have a document that I downloaded and sent to my iPhone via e-mail 
attachment.  I saved it to files. When I try to open it, Voice Dream 
Reader grabs it and opens it. How can I allow Safari Browser to open it?


--
Signature:
For a nation to admit it has done grievous wrongs and will strive to correct 
them for the betterment of all is no vice;
For a nation to claim it has always been great, needs no improvement  and to 
cling to its past achievements is no virtue!

--
The following information is important for all members of the V iPhone list.

If you have any questions or concerns about the running of this list, or if you 
feel that a member's post is inappropriate, please contact the owners or 
moderators directly rather than posting on the list itself.

Your V iPhone list moderator is Mark Taylor.  Mark can be reached at:  
mk...@ucla.edu.  Your list owner is Cara Quinn - you can reach Cara at 
caraqu...@caraquinn.com

The archives for this list can be searched at:
http://www.mail-archive.com/viphone@googlegroups.com/
--- 
You received this message because you are subscribed to the Google Groups "VIPhone" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to viphone+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/viphone/27ed8dc7-f09d-4893-b567-7c4ad29561c7%40roadrunner.com.


Re: [Semibug] internal number storage in libre office calc [was: LibreOffice is summing incorrectly]

2024-05-23 Thread Ron / BCLUG

CAREY SCHUG wrote on 2024-05-22 22:55:


OK, my spreadsheet is only 1.3 MB


1.3MB is miniscule in relation to any disk in the past 10 (20?) years.

What's your time worth?




single precision calculation is faster too.


How many microseconds could you save and how much time are you willing 
to invest into that?




(3) perhaps I could have all integers (100 times bigger than the desired
number), and shift the decimal while displaying, but if I have to learn
to type in everything, that is a LOT more work (mostly i now enter
numbers like 5 or 3 or 2.1 or .12 so typing 500, 300, 210 and 12 would
be more typing) and an even longer learning curve...


Yeah, it's going to cause errors in data entry, just to potentially 
avoid rounding errors of fractions of cents.




Probably LibreOffice and GnuCash are suitable for your needs (I've never 
looked at GnuCash).


Your file sizes without fiddling could still almost fit on a 3½" floppy. 
Or, eleventeen gazillion of them on a fingernail-sized media card. 
You're okay there too.



Sounds like things are actually fine and trying to optimize away via 
single vs double precision just ain't worth it.




Good luck,

rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-23 Thread Ron / BCLUG

Steve Litt wrote on 2024-05-22 23:26:


This command can be run in 3 seconds


Ctrl+S == saved, 0.3 seconds.

Haven't personally experienced much instability with LO.

Certainly would *not* advise against using it.



> LibreOffice is notorious for randomly, summarily and permanently
> changing styles.

I have had an issue with styles in the past, but that was in documents 
that were opened by other office suite apps too, so I never knew who to 
blame:


a) me?
b) LO?
c) OnlyOffice?
d) All of the above?
e) Something else (10 years of upgrades between edits)?


Maybe it was LO if it's a known issue.

Everything was recoverable.


rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] re tracking changes in libre office spreadsheet [was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color]

2024-05-23 Thread Ron / BCLUG

CAREY SCHUG wrote on 2024-05-22 23:46:


Perhaps version tracking would help with this?

All changes can be tracked and reviewed.


ok, found this page:
https://itsfoss.com/libreoffice-version-control/

it says

click on edit/track changes/record.

--done

click on view/toolbars/track changes


That's wrong, at least for v7.3.7.


Try Edit > Track Changes > Manage


A dialogue pops up with a list of all changes since recording started.

One can click through the list to highlight individual changes, and 
Accept or Reject them.



It's pretty nice.




I am on version 7.3.7.2, is that too old?


It works in this version (since about version 4.0 in 2013), but the link 
you found has (currently) invalid info.




when I do a general search for what is the current "libre office spreadsheet", 
I get libre office (overall) is 24.2, so clearly a different series of numbers.


As of February, they're going with the year.month format, which is kinda 
nice, once one knows what's going on.




rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] LibreOffice is summing incorrectly

2024-05-22 Thread Ron / BCLUG

CAREY SCHUG wrote on 2024-05-22 14:27:


I typed 4.73 into a cell, it was actually stored that way


LibreOffice Calc (which uses 64-bit double-precision numbers 
internally)


https://help.libreoffice.org/latest/en-US/text/scalc/01/calculation_accuracy.html


never have more than 5 (actually 4.5, meaning 199.99 to .01) 
significant digits, so single precision could make the spreadsheet a 
LOT smaller.


Since all numbers are stored as 64 bit double-precision, it doesn't look
like there's a way to reduce spreadsheet size by fiddling with storage.



If I define a cell as ONLY a date OR a time, will it insist on
storing it the internal clock format, which requires double
precision?


Yes. From the link above:

internally, any time is a fraction of a day, 12:00 (noon) being 
represented as 0.5.




Or better yet, store (the non date/time) as 100x integers, meaning
.02 would be stored as 2, and 5 would be stored as 500.


If you input numbers as n*100 and display them back as n÷100, that might
work nicely. I've heard of that technique used in financial transactions.
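
As a rough sketch of that scaled-integer idea (nothing LibreOffice-specific,
just plain Java for illustration):

// Amounts are held as long "cents" (the n*100 values), so addition is exact
// integer arithmetic; converting back to a display value only happens when
// formatting the result.
public final class CentsDemo {
    public static void main(String[] args) {
        long[] cents = {473, 12, 500};   // 4.73, 0.12 and 5.00 entered as n*100
        long total = 0;
        for (long c : cents) {
            total += c;                  // no binary rounding drift
        }
        System.out.printf("total = %d.%02d%n", total / 100, total % 100);  // total = 9.85
    }
}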



For displaying, I would like to DISPLAY in non-scientific format,
but with a limited number of significant digits


Number formatting supports custom formats, so that's do-able.



Inherent Accuracy Problem

LibreOffice Calc, just like most other spreadsheet software, uses
floating-point math capabilities available on hardware. Given that
most contemporary hardware uses binary floating-point arithmetic with
limited precision defined in IEEE 754 standard, many decimal numbers
- including as simple as 0.1 - cannot be precisely represented in
LibreOffice Calc (which uses 64-bit double-precision numbers
internally).



That link is pretty interesting, and I didn't realize time formats were 
susceptible to rounding issues; I expected them to be stored in Unix 
epoch format.
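
For anyone curious, the double-precision behaviour the help page describes is
easy to reproduce in a few lines of plain Java (again, nothing Calc-specific):

import java.math.BigDecimal;

// 0.1 has no exact binary representation, so ten additions of 0.1 drift slightly;
// 0.5 (noon as a fraction of a day) is a power of two and is stored exactly.
public final class DoubleRoundingDemo {
    public static void main(String[] args) {
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1;
        }
        System.out.println(sum == 1.0);          // false
        System.out.println(new BigDecimal(sum)); // 0.99999999999999988897769753748...
        System.out.println(new BigDecimal(0.5)); // 0.5 exactly
    }
}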



rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


[TICTOC]Re: Enterprise Profile: Support for Non standard TCs

2024-05-22 Thread Ron Cohen
Hi Doug,

The draft states that deployments with NAT are out of scope of the document.

"In IPv4 networks some clocks might be hidden behind a NAT, which hides their 
IP addresses from the rest of the network. Note also that the use of NATs may 
place limitations on the topology of PTP networks, depending on the port 
forwarding scheme employed. Details of implementing PTP with NATs are out of 
scope of this document."
A PTP TC that is a bridge per 802.1q or an IPv4/6 router must not change the 
source IP address of PTP delay requests.

I've been working with TC solutions for more than 10 years. Both 1-step PTP TCs 
in HW (as well as 2-step in HW+SW) and none modified the source IP address of 
E2E delay requests, when working as either a bridge or router.
This is the case for the products of the company I currently work for as well.

My input is that per my understanding the following is not true for standard 
TCs:

"This is important since Transparent Clocks will treat PTP messages that are 
altered at the PTP application layer as new IP packets and new Layer 2 frames 
when the PTP messages are retransmitted."

And with NAT services out of scope, this part should be removed in my opinion 
too:

"In PTP Networks that contain Transparent Clocks, timeTransmitters might 
receive Delay Request messages that no longer contains the IP Addresses of the 
timeReceivers. This is because Transparent Clocks might replace the IP address 
of Delay Requests with their own IP address after updating the Correction 
Fields. For this deployment scenario timeTransmitters will need to have 
configured tables of timeReceivers' IP addresses and associated Clock 
Identities in order to send Delay Responses to the correct PTP Nodes"

I don't have further new input beyond that.

Best,
Ron

From: Doug Arnold 
Sent: Thursday, May 23, 2024 12:05 AM
To: Ron Cohen ; tictoc@ietf.org
Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs

Prioritize security for external emails: Confirm sender and content safety 
before clicking links or opening attachments
____
Hello Ron,

For Ethernet - IEEE 802.1Q, I can't remember the RFCs for IPv4 and IPv6 but you 
can look them up.

Here is the thing. I understand from a network layer model perspective a TC 
should not change the payload for a frame/packet and just forward it.  However, 
there is no other way to do a cut-through 1-step TC. I pointed that out to the 
folks in IEEE 802.1 but they ignored me.  I know for a fact that multiple 
companies' implementations of TCs do not replace the source address before 
retransmitting.  I don't blame them.  The standards are preventing a valuable 
use case just to preserve the purity of their layer model.  I would be 
surprised if 1588 is the only technology that needs to change message fields on 
the fly in a cut through switch.

Regards,
Doug
____
From: Ron Cohen <r...@marvell.com>
Sent: Wednesday, May 22, 2024 2:58 AM
To: Doug Arnold <doug.arn...@meinberg-usa.com>; tictoc@ietf.org
Subject: RE: Enterprise Profile: Support for Non standard TCs


Hi Doug,



TC are not supposed to change source IP address of delay requests.



If the TC is a layer2 switch/bridge, it must not modify the source MAC address 
while forwarding and must never touch the layer3 addresses.

If the TC is a layer3 IP router, it must not modify the source IP address while 
forwarding and must change the source MAC address to the MAC address of its 
egress port.



If the TC is a layer4 device, e.g., a NAT device, it modifies the source IP 
address of messages as it is its functionality. It may be the case that such 
functionality is required in the enterprise. My point is that it is far from 
obvious and the draft needs to elaborate why it's needed.



>> This is required by the standards that specify the transport networks.

I would appreciate if you point to the relevant standards.



The draft states that additional support is required for this deployment 
scenario:

"For this deployment scenario timeTransmitters will need to have configured 
tables of timeReceivers' IP addresses and associated Clock Identities in order 
to send Delay Responses to the correct PTP Nodes"



These tables would be part of the IEEE 1588 spec if this TC behavior were 
standard. It is not trivial to add support for these tables in HW if you want 
to support scale and speed.



Best,

Ron



From: Doug Arnold <doug.arn...@meinberg-usa.com>
Sent: Wednesday, May 22, 2024 12:36 AM
To: Ron Cohen <r...@marvell.com>; tictoc@ietf.org
Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs




___

Re: [Semibug] OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-22 Thread Ron / BCLUG

CAREY SCHUG wrote on 2024-05-22 15:34:


I would like to choose a PEN color, e.g. red, no matter what I enter or
change, no matter where, it will be in the pen color.


As Carl mentioned, LibreOffice supports colourizing text.



so I can make a group of changes, then go back and verify them, when
confirmed, change everything to black.


Perhaps version tracking would help with this?

All changes can be tracked and reviewed.


Also, there's the ability to add comments, which might help with the 
review process.



rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


[Int-area] ICMP Considerations

2024-05-22 Thread Ron Bonica
Folks,

Over the years, I have written several forwarding plane documents that mention 
ICMP. During the review of these documents, people have raised issues like the 
following:


  * Shouldn't we mention that ICMP message delivery is not reliable?
  * Shouldn't we mention that ICMP messages are rate limited?
  * How is the ICMP message processed at its destination?

In each of these documents, I have added an ICMP considerations section to 
address these issues. Rather than repeating that text in every document we 
write in the future, I have abstracted it into a separate document.

If anyone would like to contribute to this document, it can be found at 
https://github.com/ronbonica/ICMP

Please send a private email if you are interested in contributing to the 
document.


Ron





___
Int-area mailing list -- int-area@ietf.org
To unsubscribe send an email to int-area-le...@ietf.org


Re: search_path and SET ROLE

2024-05-22 Thread Ron Johnson
On Wed, May 22, 2024 at 2:02 PM Isaac Morland 
wrote:

> On Wed, 22 May 2024 at 13:48, Ron Johnson  wrote:
>
> As a superuser administrator, I need to be able to see ALL tables in ALL
>> schemas when running "\dt", not just the ones in "$user" and public.  And I
>> need it to act consistently across all the systems.
>>
>
> \dt *.*
>

Also shows information_schema, pg_catalog, and pg_toast.  I can adjust to
that, though.


> But I am skeptical how often you really want this in a real database with
> more than a few tables. Surely \dn+ followed by \dt [schemaname].* for a
> few strategically chosen [schemaname] would be more useful?
>

More than you'd think.  I'm always looking up the definition of this table
or that table (mostly for indices and keys), and I never remember which
schema they're in.


Re: search_path wildcard?

2024-05-22 Thread Ron Johnson
On Wed, May 22, 2024 at 1:58 PM Tom Lane  wrote:

> Ron Johnson  writes:
> > That would be a helpful feature for administrators, when there are
> multiple
> > schemas in multiple databases, on multiple servers: superusers get ALTER
> > ROLE foo SET SEARCH_PATH  = '*'; and they're done with it.
>
> ... and they're pwned within five minutes by any user with the wits
> to create a trojan-horse function or operator.  Generally speaking,
> you want admins to run with a minimal search path not a maximal one.
>

Missing tables when running "\dt" is a bigger hassle.


Re: search_path wildcard?

2024-05-22 Thread Ron Johnson
On Wed, May 22, 2024 at 12:53 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, May 22, 2024, 10:36 Ron Johnson  wrote:
>
>> This doesn't work, and I've found nothing similar:
>> ALTER ROLE foo SET SEARCH_PATH  = '*';
>>
>
> Correct, you cannot do that.
>

That would be a helpful feature for administrators, when there are multiple
schemas in multiple databases, on multiple servers: superusers get ALTER
ROLE foo SET SEARCH_PATH  = '*'; and they're done with it.


Re: search_path and SET ROLE

2024-05-22 Thread Ron Johnson
On Wed, May 22, 2024 at 1:10 PM Tom Lane  wrote:

> Ron Johnson  writes:
> > It seems that the search_path of the role that you SET ROLE to does not
> > become the new search_path.
>
> It does for me:
>
> regression=# create role r1;
> CREATE ROLE
> regression=# create schema r1 authorization r1;
> CREATE SCHEMA
> regression=# select current_schemas(true), current_user;
>current_schemas   | current_user
> ---------------------+--------------
>  {pg_catalog,public} | postgres
> (1 row)
>
> regression=# set role r1;
> SET
> regression=> select current_schemas(true), current_user;
> current_schemas | current_user
> ------------------------+--------------
>  {pg_catalog,r1,public} | r1
> (1 row)
>
> regression=> show search_path ;
>search_path
> -----------------
>  "$user", public
> (1 row)
>
> The fine manual says that $user tracks the result of
> CURRENT_USER, and at least in this example it's doing that.
> (I hasten to add that I would not swear there are no
> bugs in this area.)
>
> > Am I missing something, or is that PG's behavior?
>
> I bet what you missed is granting (at least) USAGE on the
> schema to that role.  PG will silently ignore unreadable
> schemas when computing the effective search path.
>

There are multiple schemata in (sometimes) multiple databases on (many)
multiple servers.

As a superuser administrator, I need to be able to see ALL tables in ALL
schemas when running "\dt", not just the ones in "$user" and public.  And I
need it to act consistently across all the systems.

(Heck, none of our schemas are named the same as roles.)

This would be useful for account maintenance:

CREATE ROLE dbagrp SUPERUSER INHERIT NOLOGIN;
ALTER ROLE dbagrp SET search_path = public, dba, sch1, sch2, sch3, sch4;
CREATE USER joe IN GROUP dbagrp INHERIT PASSWORD 'linenoise';

Then, as user joe:
SHOW search_path;
   search_path
-----------------
 "$user", public
(1 row)
SET ROLE dbagrp RELOAD SESSION; -- note the new clause
SHOW search_path;
             search_path
--------------------------------------
 public, dba, sch1, sch2, sch3, sch4
(1 row)

When a new DBA comes on board, add him/her to dbagrp, and they
automagically have everything that dbagrp has.
As it is now, each DBA must individually be given a search_path.  If you
forget, or forget to add some schemas, etc., mistakes get made and time is wasted.
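
Coming back to the "\dt" point above: a catalog query gives the same visibility
without depending on search_path at all; a minimal sketch against
pg_catalog.pg_tables (the schema filter is only illustrative):

-- List every user table regardless of the session's search_path.
SELECT schemaname, tablename
FROM   pg_catalog.pg_tables
WHERE  schemaname NOT IN ('pg_catalog', 'information_schema')
ORDER  BY schemaname, tablename;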


search_path wildcard?

2024-05-22 Thread Ron Johnson
This doesn't work, and I've found nothing similar:
ALTER ROLE foo SET SEARCH_PATH  = '*';

Is there a single SQL statement which will generate a search path based
on information_schema.schemata, or do I have to write an anonymous DO
procedure?
SELECT schema_name FROM information_schema.schemata WHERE schema_name !=
'information_schema' AND schema_name NOT LIKE 'pg_%';
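
For reference, the DO-procedure variant I have in mind would be something like
this minimal, untested sketch (the role name foo and the schema filter are only
placeholders):

DO $$
DECLARE
    path text;
BEGIN
    -- Build a comma-separated, properly quoted schema list.
    SELECT string_agg(quote_ident(schema_name), ', ')
      INTO path
      FROM information_schema.schemata
     WHERE schema_name != 'information_schema'
       AND schema_name NOT LIKE 'pg_%';
    -- Apply it to the role.
    EXECUTE format('ALTER ROLE foo SET search_path = %s', path);
END
$$;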


search_path and SET ROLE

2024-05-22 Thread Ron Johnson
PG 9.6.24 (Soon, I swear!)

It seems that the search_path of the role that you SET ROLE to does not
become the new search_path.

Am I missing something, or is that PG's behavior?

AS USER postgres


$ psql -h 10.143.170.52 -Xac "CREATE ROLE dbagrp SUPERUSER INHERIT NOLOGIN;"
CREATE ROLE dbagrp SUPERUSER INHERIT NOLOGIN;
CREATE ROLE

$ psql -h 10.143.170.52 -Xac "CREATE USER rjohnson IN GROUP dbagrp INHERIT;"
CREATE USER rjohnson IN GROUP dbagrp INHERIT;
CREATE ROLE

[postgres@FISPMONDB001 ~]$ psql -h 10.143.170.52 -Xac "CREATE USER
\"11026270\" IN GROUP dbagrp INHERIT PASSWORD '${NewPass}' VALID UNTIL
'2024-06-30 23:59:59';"
CREATE USER "11026270" IN GROUP dbagrp INHERIT PASSWORD 'linenoise' VALID
UNTIL '2024-06-30 23:59:59';
CREATE ROLE

$ psql -h 10.143.170.52 -Xac "ALTER ROLE dbagrp set search_path = dbagrp,
public, dba, cds, tms;"
ALTER ROLE dbagrp set search_path = dbagrp, public, dba, cds, tms;
ALTER ROLE

AS USER rjohnson


[rjohnson@fpslbxcdsdbppg1 ~]$ psql -dCDSLBXW
psql (9.6.24)
Type "help" for help.

CDSLBXW=> SET ROLE dbagrp;
SET
CDSLBXW=#
CDSLBXW=# SHOW SEARCH_PATH;
   search_path
-----------------
 "$user", public
(1 row)


Back to user postgres
=

$ psql -h 10.143.170.52 -Xac "ALTER ROLE rjohnson set search_path = dbagrp,
public, dba, cds, tms;"
ALTER ROLE rjohnson set search_path = dbagrp, public, dba, cds, tms;
ALTER ROLE

Back to user rjohnson
=

[rjohnson@fpslbxcdsdbppg1 ~]$ psql -dCDSLBXW
psql (9.6.24)
Type "help" for help.

CDSLBXW=>
CDSLBXW=> SET ROLE dbagrp;
SET

CDSLBXW=# SHOW SEARCH_PATH;
  search_path
-------------------------------
 dbagrp, public, dba, cds, tms
(1 row)


Re: DFSort query

2024-05-22 Thread Ron Thomas
My apologies, Kolusu, for the wrong details I provided.

Thanks much for the sample job; it is working well for my requirement.

Regards
 Ron T

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


[TICTOC]Re: Enterprise Profile: Support for Non standard TCs

2024-05-22 Thread Ron Cohen
Hi Doug,

TCs are not supposed to change the source IP address of delay requests.

If the TC is a layer2 switch/bridge, it must not modify the source MAC address 
while forwarding and must never touch the layer3 addresses.
If the TC is a layer3 IP router, it must not modify the source IP address while 
forwarding and must change the source MAC address to the MAC address of its 
egress port.

If the TC is a layer4 device, e.g., a NAT device, it modifies the source IP 
address of messages because that is its function. It may be the case that such 
functionality is required in the enterprise. My point is that this is far from 
obvious, and the draft needs to elaborate on why it is needed.

>> This is required by the standards that specify the transport networks.
I would appreciate it if you could point to the relevant standards.

The draft states that additional support is required for this deployment 
scenario:
"For this deployment scenario timeTransmitters will need to have configured 
tables of timeReceivers' IP addresses and associated Clock Identities in order 
to send Delay Responses to the correct PTP Nodes"

These tables would be part of the IEEE 1588 spec if this TC behavior were 
standard. It is not trivial to add support for these tables in HW if you want 
to support scale and speed.

Best,
Ron

From: Doug Arnold 
Sent: Wednesday, May 22, 2024 12:36 AM
To: Ron Cohen ; tictoc@ietf.org
Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs

________
Hello Ron,

Yes.  A TC is required to change the source address of a message, at least for 
the Ethernet and IP mappings.  This is not an IEEE 1588 decision.  This is 
required by the standards that specify the transport networks: Ethernet (IEEE 
802.1Q), IPv4, and IPv6.  A TC effectively changes the payload of the messages 
from the point of view of L2 and L3, so it is a new frame and a new packet to 
those layers.  I think that IPv4 has an option to alter a message en route, but 
the node is supposed to zero out the source address.

Regards,
Doug

________
From: Ron Cohen <r...@marvell.com>
Sent: Tuesday, May 7, 2024 12:43 PM
To: tictoc@ietf.org
Subject: [TICTOC]Enterprise Profile: Support for Non standard TCs


Hi,



I'm late to the game here. I apologize in advance if this has already been 
discussed and decided:



I can't figure out why the profile needs to support non-standard TCs, or what 
seems to be a strange combination of NAT+TC devices:



"In PTP Networks that contain Transparent Clocks, timeTransmitters

   might receive Delay Request messages that no longer contains the IP

   Addresses of the timeReceivers.  This is because Transparent Clocks

   might replace the IP address of Delay Requests with their own IP

   address after updating the Correction Fields.  For this deployment

   scenario timeTransmitters will need to have configured tables of

   timeReceivers' IP addresses and associated Clock Identities in order

   to send Delay Responses to the correct PTP Nodes."



Is a standard TC allowed to change the source IP address of messages?



There should be a strong reason to require support for such devices in a 
standard profile.



Best,

Ron



/*

*  Ron Cohen

*  Email: r...@marvell.com

*  Mobile: +972.54.5751506

*/


___
TICTOC mailing list -- tictoc@ietf.org
To unsubscribe send an email to tictoc-le...@ietf.org


[TICTOC]Enterprise Profile: Support for Non standard TCs

2024-05-21 Thread Ron Cohen
Hi,

I'm late to the game here. I apologize in advance if this has already been 
discussed and decided:

I can't figure out why the profile needs to support non-standard TCs, or what 
seems to be a strange combination of NAT+TC devices:

"In PTP Networks that contain Transparent Clocks, timeTransmitters
   might receive Delay Request messages that no longer contains the IP
   Addresses of the timeReceivers.  This is because Transparent Clocks
   might replace the IP address of Delay Requests with their own IP
   address after updating the Correction Fields.  For this deployment
   scenario timeTransmitters will need to have configured tables of
   timeReceivers' IP addresses and associated Clock Identities in order
   to send Delay Responses to the correct PTP Nodes."

Is a standard TC allowed to change the source IP address of messages?

There should be a strong reason to require support for such devices in a 
standard profile.

Best,
Ron

/*
*  Ron Cohen
*  Email: r...@marvell.com
*  Mobile: +972.54.5751506
*/

___
TICTOC mailing list -- tictoc@ietf.org
To unsubscribe send an email to tictoc-le...@ietf.org


DFSort query

2024-05-21 Thread Ron Thomas
Hi All-

In the data below, within each cross ref nbr, we need to take the Pacct_NBR of 
the seq nbr = 1 record and pair it with the related acct nbrs from the same set.

In the dataset below, cross ref nbr = 24538 has 2 sets of data and 24531 has 
1 set.



Acct_NBR    Pacct_NBR       LAST_CHANGE_TS              CROSS_REF_NBR   SEQ_NBR
600392811   1762220138659   2024-04-18-10.38.09.570030  24538           1
505756281   1500013748790   2024-04-18-10.38.09.570030  24538           2
59383061    1500013748790   2024-04-18-10.38.09.570030  24538           3
59267071    1500013748790   2024-04-18-10.38.09.570030  24538           4
505756281   1500013748790   2024-01-15-08.05.14.038792  24538           1
59383061    1500013748790   2024-01-15-08.05.14.038792  24538           2
59267071    1500013748790   2024-01-15-08.05.14.038792  24538           3
600392811   1762220138659   2024-01-15-08.05.14.038792  24538           4
600392561   1762220138631   2024-01-15-08.05.14.038792  24531           1

Output 

Acct_NBR    Pacct_NBR
600392811   1762220138659
505756281   1762220138659
59383061    1762220138659
59267071    1762220138659
505756281   1500013748790
59383061    1500013748790
59267071    1500013748790
600392811   1500013748790
600392561   1762220138631

Data size
Acct_NBR 10 bytes
Pacct_NBR 15 bytes
LAST_CHANGE_TS 20 bytes
CROSS_REF_NBR  5 bytes
SEQ_NBR 2 bytes

Could someone please let me know how we can build this data using DFSORT?
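
To pin down the rule I'm after (this is only the grouping logic written as SQL 
for clarity, not a DFSORT answer; input_data and the lower-case names are 
placeholders): each record should take the Pacct_NBR of the SEQ_NBR = 1 record 
in its CROSS_REF_NBR + LAST_CHANGE_TS set.

-- Propagate the Pacct_NBR of the SEQ_NBR = 1 row to every row of its set.
SELECT acct_nbr,
       FIRST_VALUE(pacct_nbr) OVER (
           PARTITION BY cross_ref_nbr, last_change_ts
           ORDER BY seq_nbr) AS pacct_nbr
FROM   input_data;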

Regards
Ron T

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: pg_dump and not MVCC-safe commands

2024-05-20 Thread Ron Johnson
On Mon, May 20, 2024 at 11:54 AM Christophe Pettus  wrote:

>
>
> > On May 20, 2024, at 08:49, PetSerAl  wrote:
> > Basically, you need application cooperation to make
> > consistent live database backup.
>
> If it is critical that you have a completely consistent backup as of a
> particular point in time, and you are not concerned about restoring to a
> different processor architecture, pg_basebackup is a superior solution to
> pg_dump.
>

Single-threaded, and thus dreadfully slow.  I'll stick with PgBackRest.


Re: [DISCUSSION] FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-05-19 Thread Ron Liu
Hi, Lincoln

>  2. Regarding the options in HashAggCodeGenerator, since this new feature
has gone
through a couple of release cycles and could be considered for
PublicEvolving now,
cc @Ron Liu   WDYT?

Thanks for cc'ing me. +1 for making these options public now.

Best,
Ron

Benchao Li  于2024年5月20日周一 13:08写道:

> I agree with Lincoln about the experimental features.
>
> Some of these configurations do not even have proper implementation,
> take 'table.exec.range-sort.enabled' as an example, there was a
> discussion[1] about it before.
>
> [1] https://lists.apache.org/thread/q5h3obx36pf9po28r0jzmwnmvtyjmwdr
>
> Lincoln Lee  于2024年5月20日周一 12:01写道:
> >
> > Hi Jane,
> >
> > Thanks for the proposal!
> >
> > +1 for the changes except for these annotated as experimental ones.
> >
> > For the options annotated as experimental,
> >
> > +1 for the moving of IncrementalAggregateRule & RelNodeBlock.
> >
> > For the rest of the options, there are some suggestions:
> >
> > 1. for the batch related parameters, it's recommended to either delete
> > them (leaving the necessary defaults value in place) or leave them as
> they
> > are. Including:
> > FlinkRelMdRowCount
> > FlinkRexUtil
> > BatchPhysicalSortRule
> > JoinDeriveNullFilterRule
> > BatchPhysicalJoinRuleBase
> > BatchPhysicalSortMergeJoinRule
> >
> > What I understand about the history of these options is that they were
> once
> > used for fine
> > tuning for tpc testing, and the current flink planner no longer relies on
> > these internal
> > options when testing tpc[1]. In addition, these options are too obscure
> for
> > SQL users,
> > and some of them are actually magic numbers.
> >
> > 2. Regarding the options in HashAggCodeGenerator, since this new feature
> > has gone
> > through a couple of release cycles and could be considered for
> > PublicEvolving now,
> > cc @Ron Liu   WDYT?
> >
> > 3. Regarding WindowEmitStrategy, IIUC it is currently unsupported on TVF
> > window, so
> > it's recommended to keep it untouched for now and follow up in
> > FLINK-29692[2]. cc @Xuyang 
> >
> > [1]
> >
> https://github.com/ververica/flink-sql-benchmark/blob/master/tools/common/flink-conf.yaml
> > [2] https://issues.apache.org/jira/browse/FLINK-29692
> >
> >
> > Best,
> > Lincoln Lee
> >
> >
> > Yubin Li  于2024年5月17日周五 10:49写道:
> >
> > > Hi Jane,
> > >
> > > Thank Jane for driving this proposal !
> > >
> > > This makes sense for users, +1 for that.
> > >
> > > Best,
> > > Yubin
> > >
> > > On Thu, May 16, 2024 at 3:17 PM Jark Wu  wrote:
> > > >
> > > > Hi Jane,
> > > >
> > > > Thanks for the proposal. +1 from my side.
> > > >
> > > >
> > > > Best,
> > > > Jark
> > > >
> > > > On Thu, 16 May 2024 at 10:28, Xuannan Su 
> wrote:
> > > >
> > > > > Hi Jane,
> > > > >
> > > > > Thanks for driving this effort! And +1 for the proposed changes.
> > > > >
> > > > > I have one comment on the migration plan.
> > > > >
> > > > > For options to be moved to another module/package, I think we have
> to
> > > > > mark the old option deprecated in 1.20 for it to be removed in 2.0,
> > > > > according to the API compatibility guarantees[1]. We can introduce
> the
> > > > > new option in 1.20 with the same option key in the intended class.
> > > > > WDYT?
> > > > >
> > > > > Best,
> > > > > Xuannan
> > > > >
> > > > > [1]
> > > > >
> > >
> https://nightlies.apache.org/flink/flink-docs-master/docs/ops/upgrading/#api-compatibility-guarantees
> > > > >
> > > > >
> > > > >
> > > > > On Wed, May 15, 2024 at 6:20 PM Jane Chan 
> > > wrote:
> > > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I'd like to start a discussion on FLIP-457: Improve Table/SQL
> > > > > Configuration
> > > > > > for Flink 2.0 [1]. This FLIP revisited all Table/SQL
> configurations
> > > to
> > > > > > improve user-friendliness and maintainability as Flink moves
> toward
> > > 2.0.
> > > > > >
> > > > > > I am looking forward to your feedback.
> > > > > >
> > > > > > Best regards,
> > > > > > Jane
> > > > > >
> > > > > > [1]
> > > > > >
> > > > >
> > >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992
> > > > >
> > >
>
>
>
> --
>
> Best,
> Benchao Li
>


Which iPhone App Allows The Creation Of level 1 2 and so on of Headings that is accessible with VoiceOver?

2024-05-18 Thread Ron Canazzi

Hi Group,

I want to make a document as part of a game I am creating that will 
quickly allow me to access aspects of the game via a document using 
header navigation. I tried using Microsoft Word for PC and moving it to 
the iPhone, but the header navigation seemed broken. Only a few words 
would appear on each line where MS Word had created a heading.


Which app on the iPhone itself would be used to create various levels of 
headings that would be accessible with VoiceOver?


--
Signature:
For a nation to admit it has done grievous wrongs and will strive to correct 
them for the betterment of all is no vice;
For a nation to claim it has always been great, needs no improvement  and to 
cling to its past achievements is no virtue!

--
The following information is important for all members of the V iPhone list.

If you have any questions or concerns about the running of this list, or if you 
feel that a member's post is inappropriate, please contact the owners or 
moderators directly rather than posting on the list itself.

Your V iPhone list moderator is Mark Taylor.  Mark can be reached at:  
mk...@ucla.edu.  Your list owner is Cara Quinn - you can reach Cara at 
caraqu...@caraquinn.com

The archives for this list can be searched at:
http://www.mail-archive.com/viphone@googlegroups.com/
--- 
You received this message because you are subscribed to the Google Groups "VIPhone" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to viphone+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/viphone/d9a737e0-dec2-b5e1-e6db-3d91fadc5096%40roadrunner.com.


[cctalk] Re: Mylar/Sponge Keyboard Repair Kits

2024-05-17 Thread Ron Pool via cctalk
TexElec makes and sells replacement "foam and foil" discs for those keyboards.  
See https://texelec.com/product/foam-capacitive-pads-keytronic/ .  They are 
usually shown as on backorder.  The one time I ordered a set, they were on 
backorder and arrived a few weeks after I placed the order.  I wouldn't 
recommend waiting for them to be in stock before ordering as that might require 
a VERY long wait.

-- Ron Pool

-Original Message-
From: Marvin Johnston via cctalk  
Sent: Friday, May 17, 2024 5:49 AM
To: cctalk@classiccmp.org
Cc: Marvin Johnston 
Subject: [cctalk] Mylar/Sponge Keyboard Repair Kits

I've got a couple of keyboards where the sponge has disintegrated to the 
point they no longer work. The latest one is a Vector 3 keyboard and I 
would love to get it fixed.

Can repair kits still be purchased and/or are the instructions for 
making those sponge/mylar pieces available?

Thanks!

Marvin





(flink) branch master updated: [FLINK-35346][table-common] Introduce workflow scheduler interface for materialized table

2024-05-16 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 1378979f02e [FLINK-35346][table-common] Introduce workflow scheduler 
interface for materialized table
1378979f02e is described below

commit 1378979f02eed55bbf3f91b08ec166d55b2c42a6
Author: Ron 
AuthorDate: Thu May 16 19:41:54 2024 +0800

[FLINK-35346][table-common] Introduce workflow scheduler interface for 
materialized table

[FLINK-35346][table-common] Introduce workflow scheduler interface for 
materialized table

This closes #24767
---
 .../apache/flink/table/factories/FactoryUtil.java  |   9 +-
 .../table/factories/WorkflowSchedulerFactory.java  |  56 +++
 .../factories/WorkflowSchedulerFactoryUtil.java| 156 ++
 .../table/workflow/CreateRefreshWorkflow.java  |  29 
 .../table/workflow/DeleteRefreshWorkflow.java  |  48 ++
 .../table/workflow/ModifyRefreshWorkflow.java  |  40 +
 .../flink/table/workflow/RefreshWorkflow.java  |  34 
 .../flink/table/workflow/WorkflowException.java|  37 +
 .../flink/table/workflow/WorkflowScheduler.java|  91 +++
 .../workflow/TestWorkflowSchedulerFactory.java | 175 +
 .../workflow/WorkflowSchedulerFactoryUtilTest.java | 107 +
 .../org.apache.flink.table.factories.Factory   |   1 +
 12 files changed, 782 insertions(+), 1 deletion(-)

diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java
index d8d6d7e9000..5d66b23c3d8 100644
--- 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java
@@ -167,6 +167,13 @@ public final class FactoryUtil {
 + "tasks to advance their watermarks 
without the need to wait for "
 + "watermarks from this source while it is 
idle.");
 
+public static final ConfigOption WORKFLOW_SCHEDULER_TYPE =
+ConfigOptions.key("workflow-scheduler.type")
+.stringType()
+.noDefaultValue()
+.withDescription(
+"Specify the workflow scheduler type that is used 
for materialized table.");
+
 /**
  * Suffix for keys of {@link ConfigOption} in case a connector requires 
multiple formats (e.g.
  * for both key and value).
@@ -903,7 +910,7 @@ public final class FactoryUtil {
 return loadResults;
 }
 
-private static String stringifyOption(String key, String value) {
+public static String stringifyOption(String key, String value) {
 if (GlobalConfiguration.isSensitive(key)) {
 value = HIDDEN_CONTENT;
 }
diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/WorkflowSchedulerFactory.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/WorkflowSchedulerFactory.java
new file mode 100644
index 000..72e144f7d19
--- /dev/null
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/WorkflowSchedulerFactory.java
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.factories;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.table.workflow.WorkflowScheduler;
+
+import java.util.Map;
+
+/**
+ * A factory to create a {@link WorkflowScheduler} instance.
+ *
+ * See {@link Factory} for more information about the general design of a 
factory.
+ */
+@PublicEvolving
+public interface WorkflowSchedulerFactory extends Factory {
+
+/** Create a workflow scheduler instance which interacts with external 
scheduler service. */
+ 

Re: [EVDL] 46 Pure EVs for sale, Teslas competition.

2024-05-15 Thread Ron Solberg via EV
It seems that since 2017, Tesla has gone into reverse on their original 
master plan.

So let China take the lead/heat for pushing out ICE cars. In five or ten years 
the ICE folks can adjust/catch up. No bailout needed. The pressure on Musk is 
reduced, and Optimus can go to Mars and/or drive Robotaxis, a win-win except 
for the carbon problem.

Ron Solberg

> On May 14, 2024, at 7:22 PM, EV List Lackey via EV  wrote:
> 
> On 14 May 2024 at 10:35, Rush via EV wrote:
> 
>> I think that anybody having any knowledge of how a business is conducted
>> would say that 'yes, profit is a good thing'.
> 
> Let's restore the context:
> 
>> AND still make a hefty profit on each car
> 
> As I understood it, and someone correct me if this is wrong, the original 
> Tesla "master plan" was to get to mass market EVs.  They'd start with 
> building luxury EVs for rich people, and use the presumably *hefty* profits 
> from that venture to design and build EVs for the rest of us.
> 
> That plan was written a long time ago - maybe 2008?  Again, someone please 
> help me out here.
> 
> The Model 3 was introduced 7 years ago, in 2017.  That was real progress 
> toward affordable EVs, 9 years on from the master plan's inception.  Not 
> bad.
> 
> Is that master plan still their guide?  If so, what progress have they made 
> on it since?
> 
> Not the Model Y (2020).  It's more expensive.
> 
> I'm pretty sure it's not the Cybertruck (2023), either.
> 
> It seems that since 2017, Tesla has gone into reverse on their original 
> master plan.
> 
> Their recent investor call suggested pretty strongly that they're going to 
> start using their EV profits less to develop EVs, and more to develop AI, 
> autonomy software, and robotaxis.
> 
> Their recent layoffs seem to confirm that direction.
> 
> What do you think of this?
> 
> Is it a good thing?
> 
> Is it likely to be permanent, or is it just another Elon Musk shot-from-the-
> hip that he'll change next month or next year?
> 
> David Roden, EVDL moderator & general lackey
> 
> To reach me, don't reply to this message; I won't get it.  Use my 
> offlist address here : http://evdl.org/help/index.html#supt
> 
> = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = 
> 
> If economists wished to study the horse, they wouldn't go and look at 
> horses. They'd sit in their studies and say to themselves, "What would 
> I do if I were a horse?"
> 
>  -- Ely Devons
> 
> = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = 
> 
> ___
> Address messages to ev@lists.evdl.org
> No other addresses in TO and CC fields
> HELP: http://www.evdl.org/help/
> 

___
Address messages to ev@lists.evdl.org
No other addresses in TO and CC fields
HELP: http://www.evdl.org/help/



(flink) 01/04: [FLINK-35193][table] Support drop materialized table syntax

2024-05-14 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 8551ef39e0387f723a72299cc73aaaf827cf74bf
Author: Feng Jin 
AuthorDate: Mon May 13 20:06:41 2024 +0800

[FLINK-35193][table] Support drop materialized table syntax
---
 .../src/main/codegen/data/Parser.tdd   |  1 +
 .../src/main/codegen/includes/parserImpls.ftl  | 30 ++
 .../sql/parser/ddl/SqlDropMaterializedTable.java   | 68 ++
 .../flink/sql/parser/utils/ParserResource.java |  3 +
 .../MaterializedTableStatementParserTest.java  | 25 
 5 files changed, 127 insertions(+)

diff --git a/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd 
b/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd
index 81b3412954c..883b6aec1b2 100644
--- a/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd
+++ b/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd
@@ -76,6 +76,7 @@
 "org.apache.flink.sql.parser.ddl.SqlDropCatalog"
 "org.apache.flink.sql.parser.ddl.SqlDropDatabase"
 "org.apache.flink.sql.parser.ddl.SqlDropFunction"
+"org.apache.flink.sql.parser.ddl.SqlDropMaterializedTable"
 "org.apache.flink.sql.parser.ddl.SqlDropPartitions"
 
"org.apache.flink.sql.parser.ddl.SqlDropPartitions.AlterTableDropPartitionsContext"
 "org.apache.flink.sql.parser.ddl.SqlDropTable"
diff --git 
a/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl 
b/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
index bdc97818914..b2a5ea02d0f 100644
--- a/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
+++ b/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
@@ -1801,6 +1801,34 @@ SqlCreate SqlCreateMaterializedTable(Span s, boolean 
replace, boolean isTemporar
 }
 }
 
+/**
+* Parses a DROP MATERIALIZED TABLE statement.
+*/
+SqlDrop SqlDropMaterializedTable(Span s, boolean replace, boolean isTemporary) 
:
+{
+SqlIdentifier tableName = null;
+boolean ifExists = false;
+}
+{
+
+ {
+ if (isTemporary) {
+ throw SqlUtil.newContextException(
+ getPos(),
+ 
ParserResource.RESOURCE.dropTemporaryMaterializedTableUnsupported());
+ }
+ }
+ 
+
+ifExists = IfExistsOpt()
+
+tableName = CompoundIdentifier()
+
+{
+return new SqlDropMaterializedTable(s.pos(), tableName, ifExists);
+}
+}
+
 /**
 * Parses alter materialized table.
 */
@@ -2427,6 +2455,8 @@ SqlDrop SqlDropExtended(Span s, boolean replace) :
 (
 drop = SqlDropCatalog(s, replace)
 |
+drop = SqlDropMaterializedTable(s, replace, isTemporary)
+|
 drop = SqlDropTable(s, replace, isTemporary)
 |
 drop = SqlDropView(s, replace, isTemporary)
diff --git 
a/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlDropMaterializedTable.java
 
b/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlDropMaterializedTable.java
new file mode 100644
index 000..ec9439fb13a
--- /dev/null
+++ 
b/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlDropMaterializedTable.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.sql.parser.ddl;
+
+import org.apache.calcite.sql.SqlDrop;
+import org.apache.calcite.sql.SqlIdentifier;
+import org.apache.calcite.sql.SqlKind;
+import org.apache.calcite.sql.SqlNode;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.SqlSpecialOperator;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.calcite.sql.parser.SqlParserPos;
+import org.apache.calcite.util.ImmutableNullableList;
+
+import java.util.List;
+
+/** DROP MATERIALIZED TABLE DDL sql call. */
+public class SqlDropMaterializedTable extends SqlDrop {
+
+private static final SqlOperator OPERATOR =
+new SqlSpecialOperator("DROP MATERIALIZED TABLE", 
SqlKind.DRO

(flink) 03/04: [FLINK-35193][table] Support execution of drop materialized table

2024-05-14 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 51b744bca1bdf53385152ed237f2950525046488
Author: Feng Jin 
AuthorDate: Mon May 13 20:08:38 2024 +0800

[FLINK-35193][table] Support execution of drop materialized table
---
 .../MaterializedTableManager.java  | 115 +-
 .../service/operation/OperationExecutor.java   |   9 +
 .../service/MaterializedTableStatementITCase.java  | 241 ++---
 .../apache/flink/table/catalog/CatalogManager.java |   4 +-
 4 files changed, 328 insertions(+), 41 deletions(-)

diff --git 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
index b4ba12b8755..a51b1885c98 100644
--- 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
+++ 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
@@ -20,6 +20,7 @@ package 
org.apache.flink.table.gateway.service.materializedtable;
 
 import org.apache.flink.annotation.Internal;
 import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.JobStatus;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.table.api.ValidationException;
 import org.apache.flink.table.catalog.CatalogMaterializedTable;
@@ -34,6 +35,7 @@ import org.apache.flink.table.gateway.api.results.ResultSet;
 import org.apache.flink.table.gateway.service.operation.OperationExecutor;
 import org.apache.flink.table.gateway.service.result.ResultFetcher;
 import org.apache.flink.table.gateway.service.utils.SqlExecutionException;
+import org.apache.flink.table.operations.command.DescribeJobOperation;
 import org.apache.flink.table.operations.command.StopJobOperation;
 import 
org.apache.flink.table.operations.materializedtable.AlterMaterializedTableChangeOperation;
 import 
org.apache.flink.table.operations.materializedtable.AlterMaterializedTableRefreshOperation;
@@ -93,6 +95,9 @@ public class MaterializedTableManager {
 } else if (op instanceof AlterMaterializedTableResumeOperation) {
 return callAlterMaterializedTableResume(
 operationExecutor, handle, 
(AlterMaterializedTableResumeOperation) op);
+} else if (op instanceof DropMaterializedTableOperation) {
+return callDropMaterializedTableOperation(
+operationExecutor, handle, 
(DropMaterializedTableOperation) op);
 }
 
 throw new SqlExecutionException(
@@ -146,8 +151,7 @@ public class MaterializedTableManager {
 materializedTableIdentifier,
 e);
 operationExecutor.callExecutableOperation(
-handle,
-new 
DropMaterializedTableOperation(materializedTableIdentifier, true, false));
+handle, new 
DropMaterializedTableOperation(materializedTableIdentifier, true));
 throw e;
 }
 }
@@ -170,7 +174,8 @@ public class MaterializedTableManager {
 materializedTable.getSerializedRefreshHandler(),
 
operationExecutor.getSessionContext().getUserClassloader());
 
-String savepointPath = stopJobWithSavepoint(operationExecutor, handle, 
refreshHandler);
+String savepointPath =
+stopJobWithSavepoint(operationExecutor, handle, 
refreshHandler.getJobId());
 
 ContinuousRefreshHandler updateRefreshHandler =
 new ContinuousRefreshHandler(
@@ -183,9 +188,12 @@ public class MaterializedTableManager {
 CatalogMaterializedTable.RefreshStatus.SUSPENDED,
 
materializedTable.getRefreshHandlerDescription().orElse(null),
 serializeContinuousHandler(updateRefreshHandler));
+List tableChanges = new ArrayList<>();
+tableChanges.add(
+
TableChange.modifyRefreshStatus(CatalogMaterializedTable.RefreshStatus.ACTIVATED));
 AlterMaterializedTableChangeOperation 
alterMaterializedTableChangeOperation =
 new AlterMaterializedTableChangeOperation(
-tableIdentifier, Collections.emptyList(), 
updatedMaterializedTable);
+tableIdentifier, tableChanges, 
updatedMaterializedTable);
 
 operationExecutor.callExecutableOperation(handle, 
alterMaterializedTableChangeOperation);
 
@@ -284,8 +292,7 @@ public class MaterializedTableManager {
 // drop materialized table while submit flink streaming job occur 
exception. Thus, weak
 // atomicity is guar

(flink) 04/04: [FLINK-35342][table] Fix MaterializedTableStatementITCase test can check for wrong status

2024-05-14 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 94d861b08fef1e350d80a3f5f0f63168d327bc64
Author: Feng Jin 
AuthorDate: Tue May 14 11:18:40 2024 +0800

[FLINK-35342][table] Fix MaterializedTableStatementITCase test can check 
for wrong status
---
 .../service/MaterializedTableStatementITCase.java| 20 +++-
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git 
a/flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/service/MaterializedTableStatementITCase.java
 
b/flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/service/MaterializedTableStatementITCase.java
index 105c51ea597..dd7d25e124c 100644
--- 
a/flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/service/MaterializedTableStatementITCase.java
+++ 
b/flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/service/MaterializedTableStatementITCase.java
@@ -272,7 +272,7 @@ public class MaterializedTableStatementITCase {
 waitUntilAllTasksAreRunning(
 restClusterClient, 
JobID.fromHexString(activeRefreshHandler.getJobId()));
 
-// check the background job is running
+// verify the background job is running
 String describeJobDDL = String.format("DESCRIBE JOB '%s'", 
activeRefreshHandler.getJobId());
 OperationHandle describeJobHandle =
 service.executeStatement(sessionHandle, describeJobDDL, -1, 
new Configuration());
@@ -653,7 +653,7 @@ public class MaterializedTableStatementITCase {
 assertThat(suspendMaterializedTable.getRefreshStatus())
 .isEqualTo(CatalogMaterializedTable.RefreshStatus.SUSPENDED);
 
-// check background job is stopped
+// verify background job is stopped
 byte[] refreshHandler = 
suspendMaterializedTable.getSerializedRefreshHandler();
 ContinuousRefreshHandler suspendRefreshHandler =
 ContinuousRefreshHandlerSerializer.INSTANCE.deserialize(
@@ -667,7 +667,7 @@ public class MaterializedTableStatementITCase {
 List jobResults = fetchAllResults(service, sessionHandle, 
describeJobHandle);
 
assertThat(jobResults.get(0).getString(2).toString()).isEqualTo("FINISHED");
 
-// check savepoint is created
+// verify savepoint is created
 assertThat(suspendRefreshHandler.getRestorePath()).isNotEmpty();
 String actualSavepointPath = 
suspendRefreshHandler.getRestorePath().get();
 
@@ -692,7 +692,17 @@ public class MaterializedTableStatementITCase {
 assertThat(resumedCatalogMaterializedTable.getRefreshStatus())
 .isEqualTo(CatalogMaterializedTable.RefreshStatus.ACTIVATED);
 
-// check background job is running
+waitUntilAllTasksAreRunning(
+restClusterClient,
+JobID.fromHexString(
+ContinuousRefreshHandlerSerializer.INSTANCE
+.deserialize(
+resumedCatalogMaterializedTable
+.getSerializedRefreshHandler(),
+getClass().getClassLoader())
+.getJobId()));
+
+// verify background job is running
 refreshHandler = 
resumedCatalogMaterializedTable.getSerializedRefreshHandler();
 ContinuousRefreshHandler resumeRefreshHandler =
 ContinuousRefreshHandlerSerializer.INSTANCE.deserialize(
@@ -706,7 +716,7 @@ public class MaterializedTableStatementITCase {
 jobResults = fetchAllResults(service, sessionHandle, 
describeResumeJobHandle);
 
assertThat(jobResults.get(0).getString(2).toString()).isEqualTo("RUNNING");
 
-// check resumed job is restored from savepoint
+// verify resumed job is restored from savepoint
 Optional actualRestorePath =
 getJobRestoreSavepointPath(restClusterClient, resumeJobId);
 assertThat(actualRestorePath).isNotEmpty();



(flink) 02/04: [FLINK-35193][table] Support convert drop materialized table node to operation

2024-05-14 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit fd333941553c68c36e1460102ab023f80a5b1362
Author: Feng Jin 
AuthorDate: Mon May 13 20:07:39 2024 +0800

[FLINK-35193][table] Support convert drop materialized table node to 
operation
---
 .../DropMaterializedTableOperation.java|  6 ++--
 .../SqlDropMaterializedTableConverter.java | 41 ++
 .../operations/converters/SqlNodeConverters.java   |  1 +
 ...erializedTableNodeToOperationConverterTest.java | 21 +++
 4 files changed, 65 insertions(+), 4 deletions(-)

diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/DropMaterializedTableOperation.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/DropMaterializedTableOperation.java
index e5eee557bfc..46dd86ad96b 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/DropMaterializedTableOperation.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/DropMaterializedTableOperation.java
@@ -33,9 +33,8 @@ import java.util.Map;
 public class DropMaterializedTableOperation extends DropTableOperation
 implements MaterializedTableOperation {
 
-public DropMaterializedTableOperation(
-ObjectIdentifier tableIdentifier, boolean ifExists, boolean 
isTemporary) {
-super(tableIdentifier, ifExists, isTemporary);
+public DropMaterializedTableOperation(ObjectIdentifier tableIdentifier, 
boolean ifExists) {
+super(tableIdentifier, ifExists, false);
 }
 
 @Override
@@ -43,7 +42,6 @@ public class DropMaterializedTableOperation extends 
DropTableOperation
 Map params = new LinkedHashMap<>();
 params.put("identifier", getTableIdentifier());
 params.put("IfExists", isIfExists());
-params.put("isTemporary", isTemporary());
 
 return OperationUtils.formatWithChildren(
 "DROP MATERIALIZED TABLE",
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/SqlDropMaterializedTableConverter.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/SqlDropMaterializedTableConverter.java
new file mode 100644
index 000..6501dc0c453
--- /dev/null
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/SqlDropMaterializedTableConverter.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.operations.converters;
+
+import org.apache.flink.sql.parser.ddl.SqlDropMaterializedTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.UnresolvedIdentifier;
+import org.apache.flink.table.operations.Operation;
+import 
org.apache.flink.table.operations.materializedtable.DropMaterializedTableOperation;
+
+/** A converter for {@link SqlDropMaterializedTable}. */
+public class SqlDropMaterializedTableConverter
+implements SqlNodeConverter {
+@Override
+public Operation convertSqlNode(
+SqlDropMaterializedTable sqlDropMaterializedTable, ConvertContext 
context) {
+UnresolvedIdentifier unresolvedIdentifier =
+
UnresolvedIdentifier.of(sqlDropMaterializedTable.fullTableName());
+ObjectIdentifier identifier =
+
context.getCatalogManager().qualifyIdentifier(unresolvedIdentifier);
+// Currently we don't support temporary materialized table, so 
isTemporary is always false
+return new DropMaterializedTableOperation(
+identifier, sqlDropMaterializedTable.getIfExists());
+}
+}
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/SqlNodeConverters.java
 
b/flink-table/flink-table-planner/

(flink) branch master updated (65d31e26534 -> 94d861b08fe)

2024-05-14 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 65d31e26534 [FLINK-33986][runtime] Extend ShuffleMaster to support 
snapshot and restore state.
 new 8551ef39e03 [FLINK-35193][table] Support drop materialized table syntax
 new fd333941553 [FLINK-35193][table] Support convert drop materialized 
table node to operation
 new 51b744bca1b [FLINK-35193][table] Support execution of drop 
materialized table
 new 94d861b08fe [FLINK-35342][table] Fix MaterializedTableStatementITCase 
test can check for wrong status

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../MaterializedTableManager.java  | 115 -
 .../service/operation/OperationExecutor.java   |   9 +
 .../service/MaterializedTableStatementITCase.java  | 261 ++---
 .../src/main/codegen/data/Parser.tdd   |   1 +
 .../src/main/codegen/includes/parserImpls.ftl  |  30 +++
 ...pCatalog.java => SqlDropMaterializedTable.java} |  40 ++--
 .../flink/sql/parser/utils/ParserResource.java |   3 +
 .../MaterializedTableStatementParserTest.java  |  25 ++
 .../apache/flink/table/catalog/CatalogManager.java |   4 +-
 .../DropMaterializedTableOperation.java|   6 +-
 ...java => SqlDropMaterializedTableConverter.java} |  20 +-
 .../operations/converters/SqlNodeConverters.java   |   1 +
 ...erializedTableNodeToOperationConverterTest.java |  21 ++
 13 files changed, 455 insertions(+), 81 deletions(-)
 copy 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/{SqlDropCatalog.java
 => SqlDropMaterializedTable.java} (68%)
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/{SqlAlterMaterializedTableSuspendConverter.java
 => SqlDropMaterializedTableConverter.java} (59%)



[RESULT][VOTE] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table

2024-05-13 Thread Ron Liu
Hi, Dev

I'm happy to announce that FLIP-448: Introduce Pluggable Workflow Scheduler
Interface for Materialized Table[1] has been accepted with 8 approving
votes (4 binding) [2].

- Xuyang
- Feng Jin
- Lincoln Lee(binding)
- Jark Wu(binding)
- Ron Liu(binding)
- Shengkai Fang(binding)
- Keith Lee
- Ahmed Hamdy

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table
[2] https://lists.apache.org/thread/8qvh3brgvo46xprv4mxq4kyhyy0tsvny

Best,
Ron


Re: [9fans] Balancing Progress and Accessibility in the Plan 9 Community. (Was: [9fans] Interoperating between 9legacy and 9front)

2024-05-13 Thread ron minnich
On Sun, May 12, 2024 at 10:55 PM ibrahim via 9fans <9fans@9fans.net> wrote:

>
>
> Please correct me if I'm wrong.
>

In my opinion, you are wrong. And that's as far as I will stay involved in 
this discussion.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tcf128fa955b8aafc-M918765fe95c422bafdedbbf1
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Balancing Progress and Accessibility in the Plan 9 Community. (Was: [9fans] Interoperating between 9legacy and 9front)

2024-05-12 Thread ron minnich
On Sun, May 12, 2024 at 8:53 PM ibrahim via 9fans <9fans@9fans.net> wrote:

> Not a single developer who uses plan9 for distributed systems, commercial
> products will dare to use a system like 9front as the sources. The reason
> is quite simple :
>
> You ignore copyrights as you please and distributed 9front under an MIT
> license long before Nokia as the owner of it decided to do so. You did that
> at a time when plan9 was placed under GPL
>

I do not agree with what you are saying here. I was involved in the license
discussions starting in 2003, and was involved in both the GPL release and
the more recent MIT license release. The choice of license, both times, was
made by the same person in Bell Labs, even as the Bell Labs corporate
parent changed. In fact, in 2013, we were *required* to use the GPL,
whereas in the later release, the GPL was specifically mentioned as a
license we could *not* use. I won't pretend to understand why.

At no time in all this was there any evidence of incorrect behavior on the
part of 9front. None. Zip. Zero. Zed. They have always been careful to
follow the rules.

Further, when people in 9front wrote new code, they released it under MIT,
and Cinap among others was very kind in letting Harvey use it.

So, Ibrahim,  I can not agree with your statement here.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tcf128fa955b8aafc-M3d0b948ec892b2d0de94a895
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


(flink) 01/01: [FLINK-35197][table] Support the execution of suspend, resume materialized table in continuous refresh mode

2024-05-12 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit e4972c003f68da6dc4066459d4c6e5d981f07e96
Author: Feng Jin 
AuthorDate: Thu May 9 16:26:12 2024 +0800

[FLINK-35197][table] Support the execution of suspend, resume materialized 
table in continuous refresh mode

This closes #24765
---
 .../MaterializedTableManager.java  | 215 ++-
 .../service/MaterializedTableStatementITCase.java  | 302 -
 .../MaterializedTableManagerTest.java  |  39 +++
 .../table/refresh/ContinuousRefreshHandler.java|  22 +-
 4 files changed, 561 insertions(+), 17 deletions(-)

diff --git 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
index ff0670462e0..b4ba12b8755 100644
--- 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
+++ 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
@@ -34,8 +34,11 @@ import org.apache.flink.table.gateway.api.results.ResultSet;
 import org.apache.flink.table.gateway.service.operation.OperationExecutor;
 import org.apache.flink.table.gateway.service.result.ResultFetcher;
 import org.apache.flink.table.gateway.service.utils.SqlExecutionException;
+import org.apache.flink.table.operations.command.StopJobOperation;
 import 
org.apache.flink.table.operations.materializedtable.AlterMaterializedTableChangeOperation;
 import 
org.apache.flink.table.operations.materializedtable.AlterMaterializedTableRefreshOperation;
+import 
org.apache.flink.table.operations.materializedtable.AlterMaterializedTableResumeOperation;
+import 
org.apache.flink.table.operations.materializedtable.AlterMaterializedTableSuspendOperation;
 import 
org.apache.flink.table.operations.materializedtable.CreateMaterializedTableOperation;
 import 
org.apache.flink.table.operations.materializedtable.DropMaterializedTableOperation;
 import 
org.apache.flink.table.operations.materializedtable.MaterializedTableOperation;
@@ -46,17 +49,23 @@ import 
org.apache.flink.table.types.logical.LogicalTypeFamily;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
+import java.util.Optional;
 import java.util.Set;
+import java.util.stream.Collectors;
 
 import static org.apache.flink.api.common.RuntimeExecutionMode.BATCH;
 import static org.apache.flink.api.common.RuntimeExecutionMode.STREAMING;
+import static 
org.apache.flink.configuration.CheckpointingOptions.SAVEPOINT_DIRECTORY;
 import static org.apache.flink.configuration.DeploymentOptions.TARGET;
 import static org.apache.flink.configuration.ExecutionOptions.RUNTIME_MODE;
 import static org.apache.flink.configuration.PipelineOptions.NAME;
+import static 
org.apache.flink.configuration.StateRecoveryOptions.SAVEPOINT_PATH;
 import static 
org.apache.flink.streaming.api.environment.ExecutionCheckpointingOptions.CHECKPOINTING_INTERVAL;
 import static 
org.apache.flink.table.api.internal.TableResultInternal.TABLE_RESULT_OK;
 import static 
org.apache.flink.table.catalog.CatalogBaseTable.TableKind.MATERIALIZED_TABLE;
@@ -78,6 +87,12 @@ public class MaterializedTableManager {
 } else if (op instanceof AlterMaterializedTableRefreshOperation) {
 return callAlterMaterializedTableRefreshOperation(
 operationExecutor, handle, 
(AlterMaterializedTableRefreshOperation) op);
+} else if (op instanceof AlterMaterializedTableSuspendOperation) {
+return callAlterMaterializedTableSuspend(
+operationExecutor, handle, 
(AlterMaterializedTableSuspendOperation) op);
+} else if (op instanceof AlterMaterializedTableResumeOperation) {
+return callAlterMaterializedTableResume(
+operationExecutor, handle, 
(AlterMaterializedTableResumeOperation) op);
 }
 
 throw new SqlExecutionException(
@@ -115,6 +130,105 @@ public class MaterializedTableManager {
 CatalogMaterializedTable catalogMaterializedTable =
 createMaterializedTableOperation.getCatalogMaterializedTable();
 
+try {
+executeContinuousRefreshJob(
+operationExecutor,
+handle,
+catalogMaterializedTable,
+materializedTableIdentifier,
+Collections.emptyMap(),
+Optional.empty());
+} catch (Exception e) {
+// drop

(flink) branch master updated (9fe8d7bf870 -> e4972c003f6)

2024-05-12 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 9fe8d7bf870 [FLINK-35198][table] Support manual refresh materialized 
table
 add e80c2864db5 [FLINK-35197][table] Fix incomplete serialization and 
deserialization of materialized tables
 add 3b6e8db11fe [FLINK-35197][table] Support convert alter materialized 
table suspend/resume nodes to operations
 new e4972c003f6 [FLINK-35197][table] Support the execution of suspend, 
resume materialized table in continuous refresh mode

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../MaterializedTableManager.java  | 215 ++-
 .../service/MaterializedTableStatementITCase.java  | 302 -
 .../MaterializedTableManagerTest.java  |  39 +++
 ... => AlterMaterializedTableResumeOperation.java} |  43 ++-
 .../AlterMaterializedTableSuspendOperation.java}   |  23 +-
 .../catalog/CatalogBaseTableResolutionTest.java|  73 -
 .../flink/table/catalog/CatalogPropertiesUtil.java |  10 +-
 .../table/refresh/ContinuousRefreshHandler.java|  22 +-
 ... SqlAlterMaterializedTableResumeConverter.java} |  36 ++-
 ...SqlAlterMaterializedTableSuspendConverter.java} |  22 +-
 .../operations/converters/SqlNodeConverters.java   |   2 +
 ...erializedTableNodeToOperationConverterTest.java |  40 ++-
 12 files changed, 735 insertions(+), 92 deletions(-)
 copy 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/{AlterMaterializedTableRefreshOperation.java
 => AlterMaterializedTableResumeOperation.java} (56%)
 copy 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/{command/ShowJobsOperation.java
 => materializedtable/AlterMaterializedTableSuspendOperation.java} (63%)
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/{SqlAlterMaterializedTableRefreshConverter.java
 => SqlAlterMaterializedTableResumeConverter.java} (54%)
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/{SqlAlterMaterializedTableRefreshConverter.java
 => SqlAlterMaterializedTableSuspendConverter.java} (69%)



(flink) branch master updated (86c8304d735 -> 9fe8d7bf870)

2024-05-11 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 86c8304d735 [FLINK-35041][test] Fix the 
IncrementalRemoteKeyedStateHandleTest.testSharedStateReRegistration failed
 add 9fe8d7bf870 [FLINK-35198][table] Support manual refresh materialized 
table

No new revisions were added by this update.

Summary of changes:
 .../MaterializedTableManager.java  | 144 -
 .../service/MaterializedTableStatementITCase.java  | 238 +
 .../gateway/service/SqlGatewayServiceITCase.java   |  30 +--
 .../MaterializedTableManagerTest.java  |  54 +
 .../service/utils/SqlGatewayServiceTestUtil.java   |  19 ++
 .../sql/parser/ddl/SqlAlterMaterializedTable.java  |   4 +
 .../ddl/SqlAlterMaterializedTableRefresh.java  |  10 +-
 .../flink/table/operations/OperationUtils.java |   6 +-
 .../AlterMaterializedTableRefreshOperation.java|  68 ++
 ...SqlAlterMaterializedTableRefreshConverter.java} |  31 ++-
 .../operations/converters/SqlNodeConverters.java   |   1 +
 ...erializedTableNodeToOperationConverterTest.java |  30 +++
 12 files changed, 590 insertions(+), 45 deletions(-)
 create mode 100644 
flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManagerTest.java
 create mode 100644 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/AlterMaterializedTableRefreshOperation.java
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/{SqlAlterTableDropPartitionConverter.java
 => SqlAlterMaterializedTableRefreshConverter.java} (53%)



Re: Unnecessary buffer usage with multicolumn index, row comparison, and equality constraint

2024-05-10 Thread Ron Johnson
On Fri, May 10, 2024 at 11:28 PM WU Yan <4wu...@gmail.com> wrote:

> Hi everyone, first time here. Please kindly let me know if this is not the
> right place to ask.
>
> I notice a simple query can read a lot of buffer blocks in a meaningless
> way, when
> 1. there is an index scan on a multicolumn index
> 2. there is row constructor comparison in the Index Cond
> 3. there is also an equality constraint on the leftmost column of the
> multicolumn index
>
>
> ## How to reproduce
>
> I initially noticed it on AWS Aurora RDS, but it can be reproduced in
> docker container as well.
> ```bash
> docker run --name test-postgres -e POSTGRES_PASSWORD=mysecretpassword -d
> -p 5432:5432 postgres:16.3
> ```
>
> Create a table with a multicolumn index. Populate 12 million rows with
> random integers.
> ```sql
> CREATE TABLE t(a int, b int);
> CREATE INDEX my_idx ON t USING BTREE (a, b);
>
> INSERT INTO t(a, b)
> SELECT
> (random() * 123456)::int AS a,
> (random() * 123456)::int AS b
> FROM
> generate_series(1, 12345678);
>
> ANALYZE t;
> ```
>
> Simple query that uses the multicolumn index.
> ```
> postgres=# explain (analyze, buffers) select * from t where row(a, b) >
> row(123450, 123450) and a = 0 order by a, b;
>

Out of curiosity, why "where row(a, b) > row(123450, 123450)" instead of "where
a > 123450 and b > 123450"?
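
Worth noting, in case it helps: the two predicates are not equivalent. A
row-constructor comparison is lexicographic, so row(a, b) > row(123450, 123450)
also matches rows where a = 123450 and b > 123450, which the column-wise form
does not. A quick illustration:

-- Lexicographic: true, the first fields are equal and 123451 > 123450.
SELECT ROW(123450, 123451) > ROW(123450, 123450);   -- true
-- Column-wise AND: false, 123450 > 123450 does not hold.
SELECT 123450 > 123450 AND 123451 > 123450;         -- false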


Re: Avoid inserting \beginL

2024-05-10 Thread Ron Yutkin
>
> You can specify in the Language package "always babel" and pass the option
> in the document class options.
>
>
How do I specify options in the document class options?
My usepackage line is as follows:

\usepackage[bidi=basic, layout=tabular, provide=*]{babel}

Which I assume I should comment out. I then have:

\babelprovide[main, import]{hebrew}

\babelprovide{rl}

Which I assume I shouldn't comment out.
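
If I understand that correctly, the generated preamble would end up roughly
like this (just a sketch, assuming a standard article class and that babel
picks up key=value options passed as global class options; I have not tested
it):

\documentclass[bidi=basic, layout=tabular, provide=*]{article}
\usepackage{babel}
\babelprovide[main, import]{hebrew}
\babelprovide{rl}

i.e. the options move from the \usepackage line into the class options, and
babel itself is loaded without options of its own.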


Thanks!
-- 
lyx-users mailing list
lyx-users@lists.lyx.org
http://lists.lyx.org/mailman/listinfo/lyx-users


Avoid inserting \beginL

2024-05-10 Thread Ron Yutkin
Hi,

I'm using LyX for university and I'm trying to write a document in Hebrew,
but I had to switch to babel and LuaLaTeX because of some problematic
packages. My setup is as follows:
1. Document Settings > Language > Language package > none
2. LaTeX preamble:

\usepackage[bidi=basic, layout=tabular, provide=*]{babel}

\babelprovide[main, import]{hebrew}

\babelprovide{rl}

3. File > Export > PDF (LuaTeX)


Once I try to export, I get "undefined control sequence" errors on the
following commands: \beginL, \endL, \beginR, \endR, \R, \L

It seems like LyX adds these around numbers and words in English.


To mitigate this issue I tried changing the document language to English
under Document Settings > Language, and also Ctrl+A > right click > Language
> English, which then compiles successfully and looks correct thanks to
babel.

But when I do that, the text in LyX is backwards and I can't work like
that, so I have to switch the language to English every time I want to
export and then switch back to Hebrew when I want to edit my document.

Another problem is that when I switch to English, LyX swaps the parentheses,
so they end up swapped in the final PDF.

It's not a very fun way to use LyX.


Is there a way to tell LyX not to insert those commands? Or somehow
decouple the document's display language from the compile language? (And
set the compile language to English and the display language to Hebrew.)


Thanks.
-- 
lyx-users mailing list
lyx-users@lists.lyx.org
http://lists.lyx.org/mailman/listinfo/lyx-users


Topband: T32JV on160 and 80 morning of 5/10

2024-05-10 Thread Ron Spencer via Topband
George, T32JV, was quite loud this morning on 160. I worked him from here in NC 
at 1000Z. Pretty easy copy with the array listening NW to escape most of the 
QRN from the ongoing storms in Georgia. George was not nearly as loud on 3523 
when I worked him a few minutes later. 



As Dave, W0FLS, posted, nice to see some life left in the band. George, your 
RIB certainly works very, very well. Always a strong signal. Thanks for getting 
on!



Ron

N4XD


Sent using https://www.zoho.com/mail/
_
Searchable Archives: http://www.contesting.com/_topband - Topband Reflector


Re: Re: [VOTE] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table

2024-05-09 Thread Ron Liu
+1(binding)

Best,
Ron

Jark Wu  wrote on Fri, May 10, 2024 at 09:51:

> +1 (binding)
>
> Best,
> Jark
>
> On Thu, 9 May 2024 at 21:27, Lincoln Lee  wrote:
>
> > +1 (binding)
> >
> > Best,
> > Lincoln Lee
> >
> >
> > > Feng Jin  wrote on Thu, May 9, 2024 at 19:45:
> >
> > > +1 (non-binding)
> > >
> > >
> > > Best,
> > > Feng
> > >
> > >
> > > On Thu, May 9, 2024 at 7:37 PM Xuyang  wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > >
> > > > --
> > > >
> > > > Best!
> > > > Xuyang
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > At 2024-05-09 13:57:07, "Ron Liu"  wrote:
> > > > >Sorry for the re-post, just to format this email content.
> > > > >
> > > > >Hi Dev
> > > > >
> > > > >Thank you to everyone for the feedback on FLIP-448: Introduce
> > Pluggable
> > > > >Workflow Scheduler Interface for Materialized Table[1][2].
> > > > >I'd like to start a vote for it. The vote will be open for at least
> 72
> > > > >hours unless there is an objection or not enough votes.
> > > > >
> > > > >[1]
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table
> > > > >
> > > > >[2]
> https://lists.apache.org/thread/57xfo6p25rbrhcg01dhyok46zt6jc5q1
> > > > >
> > > > >Best,
> > > > >Ron
> > > > >
> > > > >Ron Liu  wrote on Thu, May 9, 2024 at 13:52:
> > > > >
> > > > >> Hi Dev, Thank you to everyone for the feedback on FLIP-448:
> > Introduce
> > > > >> Pluggable Workflow Scheduler Interface for Materialized
> Table[1][2].
> > > I'd
> > > > >> like to start a vote for it. The vote will be open for at least 72
> > > hours
> > > > >> unless there is an objection or not enough votes. [1]
> > > > >>
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table
> > > > >>
> > > > >> [2]
> > https://lists.apache.org/thread/57xfo6p25rbrhcg01dhyok46zt6jc5q1
> > > > >> Best, Ron
> > > > >>
> > > >
> > >
> >
>


(flink) branch release-1.19 updated: [FLINK-35184][table-runtime] Fix mini-batch join hash collision when use InputSideHasNoUniqueKeyBundle (#24749)

2024-05-09 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch release-1.19
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.19 by this push:
 new 17e7c3eaf14 [FLINK-35184][table-runtime] Fix mini-batch join hash 
collision when use InputSideHasNoUniqueKeyBundle (#24749)
17e7c3eaf14 is described below

commit 17e7c3eaf14b6c63f55d28a308e30ad6a3a80c95
Author: Roman Boyko 
AuthorDate: Fri May 10 10:57:45 2024 +0700

[FLINK-35184][table-runtime] Fix mini-batch join hash collision when use 
InputSideHasNoUniqueKeyBundle (#24749)
---
 .../bundle/InputSideHasNoUniqueKeyBundle.java  | 25 --
 .../join/stream/StreamingJoinOperatorTestBase.java |  4 +-
 .../stream/StreamingMiniBatchJoinOperatorTest.java | 95 +-
 3 files changed, 93 insertions(+), 31 deletions(-)

diff --git 
a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/bundle/InputSideHasNoUniqueKeyBundle.java
 
b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/bundle/InputSideHasNoUniqueKeyBundle.java
index b5738835b95..fdc9e1d5193 100644
--- 
a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/bundle/InputSideHasNoUniqueKeyBundle.java
+++ 
b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/bundle/InputSideHasNoUniqueKeyBundle.java
@@ -96,15 +96,26 @@ public class InputSideHasNoUniqueKeyBundle extends 
BufferBundle leftTypeInfo =
+protected InternalTypeInfo leftTypeInfo =
 InternalTypeInfo.of(
 RowType.of(
 new LogicalType[] {
@@ -57,7 +57,7 @@ public abstract class StreamingJoinOperatorTestBase {
 new LogicalType[] {new CharType(false, 20), new 
CharType(true, 10)},
 new String[] {"line_order_id0", 
"line_order_ship_mode"}));
 
-protected final RowDataKeySelector leftKeySelector =
+protected RowDataKeySelector leftKeySelector =
 HandwrittenSelectorUtil.getRowDataSelector(
 new int[] {1},
 leftTypeInfo.toRowType().getChildren().toArray(new 
LogicalType[0]));
diff --git 
a/flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/runtime/operators/join/stream/StreamingMiniBatchJoinOperatorTest.java
 
b/flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/runtime/operators/join/stream/StreamingMiniBatchJoinOperatorTest.java
index 62b8116a0b0..7e92f72cf5e 100644
--- 
a/flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/runtime/operators/join/stream/StreamingMiniBatchJoinOperatorTest.java
+++ 
b/flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/runtime/operators/join/stream/StreamingMiniBatchJoinOperatorTest.java
@@ -25,13 +25,13 @@ import 
org.apache.flink.table.runtime.operators.bundle.trigger.CountCoBundleTrig
 import org.apache.flink.table.runtime.operators.join.FlinkJoinType;
 import 
org.apache.flink.table.runtime.operators.join.stream.state.JoinInputSideSpec;
 import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
+import org.apache.flink.table.types.logical.BigIntType;
 import org.apache.flink.table.types.logical.CharType;
 import org.apache.flink.table.types.logical.LogicalType;
 import org.apache.flink.table.types.logical.RowType;
 import org.apache.flink.table.utils.HandwrittenSelectorUtil;
 import org.apache.flink.types.RowKind;
 
-import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.Tag;
 import org.junit.jupiter.api.Test;
 import org.junit.jupiter.api.TestInfo;
@@ -55,27 +55,6 @@ public final class StreamingMiniBatchJoinOperatorTest 
extends StreamingJoinOpera
 private RowDataKeySelector leftUniqueKeySelector;
 private RowDataKeySelector rightUniqueKeySelector;
 
-@BeforeEach
-public void beforeEach(TestInfo testInfo) throws Exception {
-rightTypeInfo =
-InternalTypeInfo.of(
-RowType.of(
-new LogicalType[] {
-new CharType(false, 20),
-new CharType(false, 20),
-new CharType(true, 10)
-},
-new String[] {
-"order_id#", "line_order_id0", 
"line_order_ship_mode"
-}));
-
-rightKeySelector =
-HandwrittenSelectorUtil.getRowDataSelector(
-new int[] {1},
-rightTypeInfo.toRowType().getChildren().toArray(new 
LogicalType[0]));
-super.beforeEach(testInfo);
-}
-
 @

Re: [VOTE] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table

2024-05-08 Thread Ron Liu
Sorry for the re-post, just to format this email content.

Hi Dev

Thank you to everyone for the feedback on FLIP-448: Introduce Pluggable
Workflow Scheduler Interface for Materialized Table[1][2].
I'd like to start a vote for it. The vote will be open for at least 72
hours unless there is an objection or not enough votes.

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table

[2] https://lists.apache.org/thread/57xfo6p25rbrhcg01dhyok46zt6jc5q1

Best,
Ron

Ron Liu  wrote on Thu, May 9, 2024 at 13:52:

> Hi Dev, Thank you to everyone for the feedback on FLIP-448: Introduce
> Pluggable Workflow Scheduler Interface for Materialized Table[1][2]. I'd
> like to start a vote for it. The vote will be open for at least 72 hours
> unless there is an objection or not enough votes. [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table
>
> [2] https://lists.apache.org/thread/57xfo6p25rbrhcg01dhyok46zt6jc5q1
> Best, Ron
>


[VOTE] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table

2024-05-08 Thread Ron Liu
Hi Dev, Thank you to everyone for the feedback on FLIP-448: Introduce
Pluggable Workflow Scheduler Interface for Materialized Table[1][2]. I'd
like to start a vote for it. The vote will be open for at least 72 hours
unless there is an objection or not enough votes. [1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table

[2] https://lists.apache.org/thread/57xfo6p25rbrhcg01dhyok46zt6jc5q1
Best, Ron


Attempting to Activate Account with Gmail Address

2024-05-08 Thread Ron Gordon
Per Post Edit Apr-2024 by TerryE, I am warning the forum through the "AOO dev 
mailing list" that I have applied for an account.
Unfortunately, I have a Gmail address.

My proposed User ID is RGordon3503

AOO Ver 4.1.15 on MacOS 14.4.1

Thank you,

Ron Gordon
rucanoe...@gmail.com






Re: [DISCUSS] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table

2024-05-08 Thread Ron Liu
Hi, Dev

Thank you all for joining this thread and giving your comments and
suggestions; they have helped improve this proposal, and I look forward to
further feedback.
If there are no further comments, I'd like to close the discussion and
start the vote one day later.

Best,
Ron

Ron Liu  wrote on Tue, May 7, 2024 at 20:51:

> Hi, dev
>
> Following the recent PoC[1], and drawing on the excellent code design
> within Flink, I have made the following optimizations to the Public
> Interfaces section of FLIP:
>
> 1. I have renamed WorkflowOperation to RefreshWorkflow. This change better
> conveys its purpose. RefreshWorkflow is used to provide the necessary
> information required for creating, modifying, and deleting workflows. Using
> WorkflowOperation could mislead people into thinking it is a command
> operation, whereas in fact, it does not represent an operation but merely
> provides the essential context information for performing operations on
> workflows. The specific operations are completed within WorkflowScheduler.
> Additionally, I felt that using WorkflowOperation could potentially
> conflict with the Operation[2] interface in the table.
> 2. I have refined the signatures of the modifyRefreshWorkflow and
> deleteRefreshWorkflow interface methods in WorkflowScheduler. The parameter
> T refreshHandler is now provided by ModifyRefreshWorkflow and
> deleteRefreshWorkflow, which makes the overall interface design more
> symmetrical and clean.
>
> [1] https://github.com/lsyldliu/flink/tree/FLIP-448-PoC
> [2]
> https://github.com/apache/flink/blob/29736b8c01924b7da03d4bcbfd9c812a8e5a08b4/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/Operation.java
>
> Best,
> Ron
>
> Ron Liu  wrote on Tue, May 7, 2024 at 14:30:
>
>> > 4. It appears that in the section on `public interfaces`, within
>> `WorkflowOperation`, `CreatePeriodicWorkflowOperation` should be changed to
>>
>> `CreateWorkflowOperation`, right?
>>
>> After discussing with Xuyang offline, we need to support periodic
>> workflow and one-time workflow, they need different information, for
>> example, periodic workflow needs cron expression, one-time workflow needs
>> refresh partition, downstream cascade materialized table, etc. Therefore,
>> CreateWorkflowOperation correspondingly will have two different
>> implementation classes, which will be cleaner for both the implementer and
>> the caller.
>>
>> Best,
>> Ron
>>
>> Ron Liu  wrote on Mon, May 6, 2024 at 20:48:
>>
>>> Hi, Xuyang
>>>
>>> Thanks for joining this discussion
>>>
>>> > 1. In the sequence diagram, it appears that there is a missing step
>>> for obtaining the refresh handler from the catalog during the suspend
>>> operation.
>>>
>>> Good catch
>>>
>>> > 2. The term "cascade refresh" does not seem to be mentioned in
>>> FLIP-435. The workflow it creates is marked as a "one-time workflow". This
>>> is different
>>>
>>> from a "periodic workflow," and it appears to be a one-off execution. Is
>>> this actually referring to the Refresh command in FLIP-435?
>>>
>>> The cascade refresh is a future work, we don't propose the corresponding
>>> syntax in FLIP-435. However, intuitively, it would be an extension of the
>>> Refresh command in FLIP-435.
>>>
>>> > 3. The workflow-scheduler.type has no default value; should it be set
>>> to CRON by default?
>>>
>>> Firstly, CRON is not a workflow scheduler. Secondly, I believe that
>>> configuring the Scheduler should be an action that users are aware of, and
>>> default values should not be set.
>>>
>>> > 4. It appears that in the section on `public interfaces`, within
>>> `WorkflowOperation`, `CreatePeriodicWorkflowOperation` should be changed to
>>>
>>> `CreateWorkflowOperation`, right?
>>>
>>> Sorry, I don't get your point. Can you give more description?
>>>
>>> Best,
>>> Ron
>>>
>>> Xuyang  wrote on Mon, May 6, 2024 at 20:26:
>>>
>>>> Hi, Ron.
>>>>
>>>> Thanks for driving this. After reading the entire flip, I have the
>>>> following questions:
>>>>
>>>>
>>>>
>>>>
>>>> 1. In the sequence diagram, it appears that there is a missing step for
>>>> obtaining the refresh handler from the catalog during the suspend 
>>>> operation.
>>>>
>>>>
>>>>
>>>>
>>>> 2. The term "cascade refresh

Forcing INTERVAL days display, even if the interval is less than one day

2024-05-07 Thread Ron Johnson
PG 9.6.24, if relevant.  (Hopefully we're migrating next month.)

Displaying how long ago a date was is easy, but interval casts "helpfully"
suppress the "X days" part if the interval is less than one day.

How do I make it display "days ago", even when days ago is zero?
Explicitly casting "day to second" didn't work.

CDSLBXW=# with
tables as
(
select schemaname||'.'||relname as table_name
 , greatest(last_vacuum, last_autovacuum) as latest_vacuum
from pg_stat_user_tables
)
select table_name, latest_vacuum,
       date_trunc('second', (current_timestamp - latest_vacuum))::interval
           day to second as vacuumed_ago
from tables
order by latest_vacuum desc
limit 30;
           table_name           |         latest_vacuum         |  vacuumed_ago
--------------------------------+-------------------------------+-----------------
 cds.x937_file                  | 2024-05-07 10:53:38.971431-04 | 00:01:45
 cds.lockbox_end_of_day         | 2024-05-07 10:53:38.758813-04 | 00:01:45
 dba.index_bloat_2stg           | 2024-05-07 10:49:09.196655-04 | 00:06:15
 dba.index_bloat_1stg           | 2024-05-07 10:49:03.153449-04 | 00:06:21
 dba.table_bloat_2stg           | 2024-05-07 10:48:56.681218-04 | 00:06:28
 dba.table_bloat_1stg           | 2024-05-07 10:48:50.233984-04 | 00:06:34
 cds.x937_cash_letter           | 2024-05-07 10:45:38.763453-04 | 00:09:45
 tms.batch                      | 2024-05-07 10:37:50.758763-04 | 00:17:33
 cds.cdslockbox                 | 2024-05-07 10:35:38.625663-04 | 00:19:46
 tms.item_mapping               | 2024-05-07 10:29:09.16413-04  | 00:26:15
 public.job                     | 2024-05-07 10:03:38.270296-04 | 00:51:46
 cds.mail_out_address           | 2024-05-07 09:55:38.269805-04 | 00:59:46
 cds.rebatching_rule            | 2024-05-07 09:38:38.062069-04 | 01:16:46
 cds.cds_job_history            | 2024-05-07 09:16:40.071253-04 | 01:38:44
 tms.document                   | 2024-05-07 08:01:15.545398-04 | 02:54:09
 cds.cdsdocument                | 2024-05-07 08:00:13.793372-04 | 02:55:10
 cds.all_day_event_trigger      | 2024-05-07 07:54:38.202722-04 | 03:00:46
 public.job_history             | 2024-05-07 01:45:25.265417-04 | 09:09:59
 tms.chk_image                  | 2024-05-06 15:39:12.708045-04 | 19:16:12
 tms.transaction                | 2024-05-06 15:38:32.878078-04 | 19:16:51
 tms.payment                    | 2024-05-06 14:10:17.76129-04  | 20:45:06
 public.schedule                | 2024-05-05 00:00:49.160792-04 | 2 days 10:54:35
 tms.gl_ticket_image            | 2024-05-04 23:55:05.632414-04 | 2 days 11:00:19
 tms.alerted_watchlist          | 2024-05-04 23:55:05.62597-04  | 2 days 11:00:19
 cds.balancing_record_imagerps  | 2024-05-04 23:55:05.625671-04 | 2 days 11:00:19
 cds.balancing_record_publisher | 2024-05-04 23:55:05.618346-04 | 2 days 11:00:19
 tms.credit_card                | 2024-05-04 23:55:05.617497-04 | 2 days 11:00:19
 tms.chk_original_image         | 2024-05-04 23:55:05.607952-04 | 2 days 11:00:19
 cds.billing_volume_header      | 2024-05-04 23:55:05.60093-04  | 2 days 11:00:19
 cds.balancing_publisher_batch  | 2024-05-04 23:55:05.590679-04 | 2 days 11:00:19
(30 rows)
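
One workaround is to format the interval by hand instead of relying on the
cast: extract(day from ...) returns the days field of a timestamp difference
(0 when it is under a day), and to_char() can render the time part. A sketch
of the same query written that way (not verified on 9.6):

with tables as
(
select schemaname||'.'||relname as table_name
     , greatest(last_vacuum, last_autovacuum) as latest_vacuum
from pg_stat_user_tables
)
select table_name, latest_vacuum,
       extract(day from current_timestamp - latest_vacuum)::int
           || ' days '
           || to_char(current_timestamp - latest_vacuum, 'HH24:MI:SS') as vacuumed_ago
from tables
order by latest_vacuum desc
limit 30;

That yields e.g. "0 days 00:01:45" and "2 days 11:00:19" consistently.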


Re: [DISCUSS] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table

2024-05-07 Thread Ron Liu
Hi, dev

Following the recent PoC[1], and drawing on the excellent code design
within Flink, I have made the following optimizations to the Public
Interfaces section of FLIP:

1. I have renamed WorkflowOperation to RefreshWorkflow. This change better
conveys its purpose. RefreshWorkflow is used to provide the necessary
information required for creating, modifying, and deleting workflows. Using
WorkflowOperation could mislead people into thinking it is a command
operation, whereas in fact, it does not represent an operation but merely
provides the essential context information for performing operations on
workflows. The specific operations are completed within WorkflowScheduler.
Additionally, I felt that using WorkflowOperation could potentially
conflict with the Operation[2] interface in the table.
2. I have refined the signatures of the modifyRefreshWorkflow and
deleteRefreshWorkflow interface methods in WorkflowScheduler. The parameter
T refreshHandler is now provided by ModifyRefreshWorkflow and
deleteRefreshWorkflow, which makes the overall interface design more
symmetrical and clean.

[1] https://github.com/lsyldliu/flink/tree/FLIP-448-PoC
[2]
https://github.com/apache/flink/blob/29736b8c01924b7da03d4bcbfd9c812a8e5a08b4/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/Operation.java
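
To make the shape concrete, the refined interfaces would look roughly like the
sketch below. This is only an illustration of the description above; names not
mentioned in this thread (CreateRefreshWorkflow, the getRefreshHandler
accessor, and so on) are assumptions, not the final FLIP-448 API:

import org.apache.flink.table.refresh.RefreshHandler;

// Sketch only, not the final FLIP-448 signatures.
public interface WorkflowScheduler<T extends RefreshHandler> {

    // Creating a refresh workflow returns the scheduler-specific handler
    // that is stored in the catalog and later used to locate the workflow.
    T createRefreshWorkflow(CreateRefreshWorkflow workflow) throws Exception;

    // For modify/delete, the refresh handler is carried by the workflow
    // object itself instead of being passed as a separate argument.
    void modifyRefreshWorkflow(ModifyRefreshWorkflow<T> workflow) throws Exception;

    void deleteRefreshWorkflow(DeleteRefreshWorkflow<T> workflow) throws Exception;
}

// Context objects handed to the scheduler (assumed names and shapes).
interface RefreshWorkflow {}

interface CreateRefreshWorkflow extends RefreshWorkflow {}

interface ModifyRefreshWorkflow<T extends RefreshHandler> extends RefreshWorkflow {
    T getRefreshHandler();
}

interface DeleteRefreshWorkflow<T extends RefreshHandler> extends RefreshWorkflow {
    T getRefreshHandler();
}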

Best,
Ron

Ron Liu  wrote on Tue, May 7, 2024 at 14:30:

> > 4. It appears that in the section on `public interfaces`, within
> `WorkflowOperation`, `CreatePeriodicWorkflowOperation` should be changed to
>
> `CreateWorkflowOperation`, right?
>
> After discussing with Xuyang offline, we need to support periodic workflow
> and one-time workflow, they need different information, for example,
> periodic workflow needs cron expression, one-time workflow needs refresh
> partition, downstream cascade materialized table, etc. Therefore,
> CreateWorkflowOperation correspondingly will have two different
> implementation classes, which will be cleaner for both the implementer and
> the caller.
>
> Best,
> Ron
>
> Ron Liu  wrote on Mon, May 6, 2024 at 20:48:
>
>> Hi, Xuyang
>>
>> Thanks for joining this discussion
>>
>> > 1. In the sequence diagram, it appears that there is a missing step for
>> obtaining the refresh handler from the catalog during the suspend operation.
>>
>> Good catch
>>
>> > 2. The term "cascade refresh" does not seem to be mentioned in
>> FLIP-435. The workflow it creates is marked as a "one-time workflow". This
>> is different
>>
>> from a "periodic workflow," and it appears to be a one-off execution. Is
>> this actually referring to the Refresh command in FLIP-435?
>>
>> The cascade refresh is a future work, we don't propose the corresponding
>> syntax in FLIP-435. However, intuitively, it would be an extension of the
>> Refresh command in FLIP-435.
>>
>> > 3. The workflow-scheduler.type has no default value; should it be set
>> to CRON by default?
>>
>> Firstly, CRON is not a workflow scheduler. Secondly, I believe that
>> configuring the Scheduler should be an action that users are aware of, and
>> default values should not be set.
>>
>> > 4. It appears that in the section on `public interfaces`, within
>> `WorkflowOperation`, `CreatePeriodicWorkflowOperation` should be changed to
>>
>> `CreateWorkflowOperation`, right?
>>
>> Sorry, I don't get your point. Can you give more description?
>>
>> Best,
>> Ron
>>
>> Xuyang  wrote on Mon, May 6, 2024 at 20:26:
>>
>>> Hi, Ron.
>>>
>>> Thanks for driving this. After reading the entire flip, I have the
>>> following questions:
>>>
>>>
>>>
>>>
>>> 1. In the sequence diagram, it appears that there is a missing step for
>>> obtaining the refresh handler from the catalog during the suspend operation.
>>>
>>>
>>>
>>>
>>> 2. The term "cascade refresh" does not seem to be mentioned in FLIP-435.
>>> The workflow it creates is marked as a "one-time workflow". This is
>>> different
>>>
>>> from a "periodic workflow," and it appears to be a one-off execution. Is
>>> this actually referring to the Refresh command in FLIP-435?
>>>
>>>
>>>
>>>
>>> 3. The workflow-scheduler.type has no default value; should it be set to
>>> CRON by default?
>>>
>>>
>>>
>>>
>>> 4. It appears that in the section on `public interfaces`, within
>>> `WorkflowOperation`, `CreatePeriodicWorkflowOperation` should be changed to
>>>
>>> `CreateWorkflow

(flink) 01/08: [FLINK-35195][test/test-filesystem] test-filesystem Catalog support create generic table

2024-05-07 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 84f0632b15c2a192aa22a525c7b4937f80f20a34
Author: fengli 
AuthorDate: Tue Apr 30 16:23:25 2024 +0800

[FLINK-35195][test/test-filesystem] test-filesystem Catalog support create 
generic table
---
 .../file/testutils/TestFileSystemTableFactory.java | 35 +-
 .../testutils/catalog/TestFileSystemCatalog.java   | 26 +--
 .../catalog/TestFileSystemCatalogITCase.java   | 79 +-
 .../catalog/TestFileSystemCatalogTest.java | 38 +++
 4 files changed, 170 insertions(+), 8 deletions(-)

diff --git 
a/flink-test-utils-parent/flink-table-filesystem-test-utils/src/main/java/org/apache/flink/table/file/testutils/TestFileSystemTableFactory.java
 
b/flink-test-utils-parent/flink-table-filesystem-test-utils/src/main/java/org/apache/flink/table/file/testutils/TestFileSystemTableFactory.java
index aa5cd5e17bb..c7009af581c 100644
--- 
a/flink-test-utils-parent/flink-table-filesystem-test-utils/src/main/java/org/apache/flink/table/file/testutils/TestFileSystemTableFactory.java
+++ 
b/flink-test-utils-parent/flink-table-filesystem-test-utils/src/main/java/org/apache/flink/table/file/testutils/TestFileSystemTableFactory.java
@@ -22,10 +22,14 @@ import org.apache.flink.annotation.Internal;
 import org.apache.flink.connector.file.table.FileSystemTableFactory;
 import org.apache.flink.connector.file.table.TestFileSystemTableSource;
 import org.apache.flink.connector.file.table.factories.BulkReaderFormatFactory;
+import org.apache.flink.table.connector.sink.DynamicTableSink;
 import org.apache.flink.table.connector.source.DynamicTableSource;
 import org.apache.flink.table.factories.DeserializationFormatFactory;
 import org.apache.flink.table.factories.Factory;
 import org.apache.flink.table.factories.FactoryUtil;
+import org.apache.flink.table.file.testutils.catalog.TestFileSystemCatalog;
+
+import java.util.Collections;
 
 /** Test filesystem {@link Factory}. */
 @Internal
@@ -40,9 +44,21 @@ public class TestFileSystemTableFactory extends 
FileSystemTableFactory {
 
 @Override
 public DynamicTableSource createDynamicTableSource(Context context) {
+final boolean isFileSystemTable =
+
TestFileSystemCatalog.isFileSystemTable(context.getCatalogTable().getOptions());
+if (!isFileSystemTable) {
+return FactoryUtil.createDynamicTableSource(
+null,
+context.getObjectIdentifier(),
+context.getCatalogTable(),
+Collections.emptyMap(),
+context.getConfiguration(),
+context.getClassLoader(),
+context.isTemporary());
+}
+
 FactoryUtil.TableFactoryHelper helper = 
FactoryUtil.createTableFactoryHelper(this, context);
 validate(helper);
-
 return new TestFileSystemTableSource(
 context.getObjectIdentifier(),
 context.getPhysicalRowDataType(),
@@ -51,4 +67,21 @@ public class TestFileSystemTableFactory extends 
FileSystemTableFactory {
 discoverDecodingFormat(context, BulkReaderFormatFactory.class),
 discoverDecodingFormat(context, 
DeserializationFormatFactory.class));
 }
+
+@Override
+public DynamicTableSink createDynamicTableSink(Context context) {
+final boolean isFileSystemTable =
+
TestFileSystemCatalog.isFileSystemTable(context.getCatalogTable().getOptions());
+if (!isFileSystemTable) {
+return FactoryUtil.createDynamicTableSink(
+null,
+context.getObjectIdentifier(),
+context.getCatalogTable(),
+Collections.emptyMap(),
+context.getConfiguration(),
+context.getClassLoader(),
+context.isTemporary());
+}
+return super.createDynamicTableSink(context);
+}
 }
diff --git 
a/flink-test-utils-parent/flink-table-filesystem-test-utils/src/main/java/org/apache/flink/table/file/testutils/catalog/TestFileSystemCatalog.java
 
b/flink-test-utils-parent/flink-table-filesystem-test-utils/src/main/java/org/apache/flink/table/file/testutils/catalog/TestFileSystemCatalog.java
index 490dd29d608..6d64ecee032 100644
--- 
a/flink-test-utils-parent/flink-table-filesystem-test-utils/src/main/java/org/apache/flink/table/file/testutils/catalog/TestFileSystemCatalog.java
+++ 
b/flink-test-utils-parent/flink-table-filesystem-test-utils/src/main/java/org/apache/flink/table/file/testutils/catalog/TestFileSystemCatalog.java
@@ -18,6 +18,7 @@
 
 package org.apache.flink.table.file.testutils.catalog;
 
+import org.apache.flink.annotation.Internal;
 import org.apache.flink.annotation.VisibleForTesting;
 import org.apache.flink.api.java.tuple.Tuple4;
 import

(flink) 06/08: [FLINK-35195][table] Introduce MaterializedTableChange to support update materialized table refresh status and RefreshHandler

2024-05-07 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 192e1e8fb04c3a8c88673fde1b66dd359b5b0fe0
Author: fengli 
AuthorDate: Mon May 6 20:21:09 2024 +0800

[FLINK-35195][table] Introduce MaterializedTableChange to support update 
materialized table refresh status and RefreshHandler
---
 .../apache/flink/table/catalog/CatalogManager.java |   3 +-
 .../operations/ddl/AlterTableChangeOperation.java  |   6 +-
 .../AlterMaterializedTableChangeOperation.java | 107 ++
 .../AlterMaterializedTableOperation.java   |  42 
 .../apache/flink/table/catalog/TableChange.java| 120 +
 5 files changed, 275 insertions(+), 3 deletions(-)

diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java
index 9e7bf5ec007..51b69c650eb 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java
@@ -1151,7 +1151,8 @@ public final class CatalogManager implements 
CatalogRegistry, AutoCloseable {
 (catalog, path) -> {
 final CatalogBaseTable resolvedTable = 
resolveCatalogBaseTable(table);
 catalog.alterTable(path, resolvedTable, ignoreIfNotExists);
-if (resolvedTable instanceof CatalogTable) {
+if (resolvedTable instanceof CatalogTable
+|| resolvedTable instanceof 
CatalogMaterializedTable) {
 catalogModificationListeners.forEach(
 listener ->
 listener.onEvent(
diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/AlterTableChangeOperation.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/AlterTableChangeOperation.java
index 158bdd22121..7a597415235 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/AlterTableChangeOperation.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/AlterTableChangeOperation.java
@@ -58,7 +58,9 @@ public class AlterTableChangeOperation extends 
AlterTableOperation {
 @Override
 public String asSummaryString() {
 String changes =
-
tableChanges.stream().map(this::toString).collect(Collectors.joining(",\n"));
+tableChanges.stream()
+.map(AlterTableChangeOperation::toString)
+.collect(Collectors.joining(",\n"));
 return String.format(
 "ALTER TABLE %s%s\n%s",
 ignoreIfTableNotExists ? "IF EXISTS " : "",
@@ -66,7 +68,7 @@ public class AlterTableChangeOperation extends 
AlterTableOperation {
 changes);
 }
 
-private String toString(TableChange tableChange) {
+public static String toString(TableChange tableChange) {
 if (tableChange instanceof TableChange.SetOption) {
 TableChange.SetOption setChange = (TableChange.SetOption) 
tableChange;
 return String.format("  SET '%s' = '%s'", setChange.getKey(), 
setChange.getValue());
diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/AlterMaterializedTableChangeOperation.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/AlterMaterializedTableChangeOperation.java
new file mode 100644
index 000..49f220a8ddc
--- /dev/null
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/AlterMaterializedTableChangeOperation.java
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the L

(flink) 07/08: [FLINK-35195][table] Introduce DropMaterializedTableOperation to support drop materialized table

2024-05-07 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit e0d342a2a6ba320c1fe0f7a42239254d25f95fd5
Author: fengli 
AuthorDate: Mon May 6 20:22:19 2024 +0800

[FLINK-35195][table] Introduce DropMaterializedTableOperation to support 
drop materialized table
---
 .../DropMaterializedTableOperation.java| 54 ++
 1 file changed, 54 insertions(+)

diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/DropMaterializedTableOperation.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/DropMaterializedTableOperation.java
new file mode 100644
index 000..e5eee557bfc
--- /dev/null
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/DropMaterializedTableOperation.java
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.operations.materializedtable;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.operations.Operation;
+import org.apache.flink.table.operations.OperationUtils;
+import org.apache.flink.table.operations.ddl.DropTableOperation;
+
+import java.util.Collections;
+import java.util.LinkedHashMap;
+import java.util.Map;
+
+/** Operation to describe a DROP MATERIALIZED TABLE statement. */
+@Internal
+public class DropMaterializedTableOperation extends DropTableOperation
+implements MaterializedTableOperation {
+
+public DropMaterializedTableOperation(
+ObjectIdentifier tableIdentifier, boolean ifExists, boolean 
isTemporary) {
+super(tableIdentifier, ifExists, isTemporary);
+}
+
+@Override
+public String asSummaryString() {
+Map params = new LinkedHashMap<>();
+params.put("identifier", getTableIdentifier());
+params.put("IfExists", isIfExists());
+params.put("isTemporary", isTemporary());
+
+return OperationUtils.formatWithChildren(
+"DROP MATERIALIZED TABLE",
+params,
+Collections.emptyList(),
+Operation::asSummaryString);
+}
+}



(flink) 05/08: [FLINK-35195][table] Introduce ContinuousRefreshHandler and serializer for continuous refresh mode

2024-05-07 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit c99eb54ce8f362d970c173a2a579e8fc28ac
Author: fengli 
AuthorDate: Mon May 6 20:19:49 2024 +0800

[FLINK-35195][table] Introduce ContinuousRefreshHandler and serializer for 
continuous refresh mode
---
 .../table/refresh/ContinuousRefreshHandler.java| 50 ++
 .../ContinuousRefreshHandlerSerializer.java| 44 +++
 2 files changed, 94 insertions(+)

diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/refresh/ContinuousRefreshHandler.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/refresh/ContinuousRefreshHandler.java
new file mode 100644
index 000..60a92bed02e
--- /dev/null
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/refresh/ContinuousRefreshHandler.java
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.refresh;
+
+import org.apache.flink.annotation.Internal;
+
+import java.io.Serializable;
+
+/** Embedded continuous refresh handler of Flink streaming job for 
materialized table. */
+@Internal
+public class ContinuousRefreshHandler implements RefreshHandler, Serializable {
+
+// TODO: add clusterId for yarn and k8s resource manager
+private final String executionTarget;
+private final String jobId;
+
+public ContinuousRefreshHandler(String executionTarget, String jobId) {
+this.executionTarget = executionTarget;
+this.jobId = jobId;
+}
+
+public String getExecutionTarget() {
+return executionTarget;
+}
+
+public String getJobId() {
+return jobId;
+}
+
+@Override
+public String asSummaryString() {
+return String.format("{\n executionTarget: %s,\n jobId: %s\n}", 
executionTarget, jobId);
+}
+}
diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/refresh/ContinuousRefreshHandlerSerializer.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/refresh/ContinuousRefreshHandlerSerializer.java
new file mode 100644
index 000..f62ccc99e09
--- /dev/null
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/refresh/ContinuousRefreshHandlerSerializer.java
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.refresh;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.util.InstantiationUtil;
+
+import java.io.IOException;
+
+/** Serializer for {@link ContinuousRefreshHandler}. */
+@Internal
+public class ContinuousRefreshHandlerSerializer
+implements RefreshHandlerSerializer {
+
+public static final ContinuousRefreshHandlerSerializer INSTANCE =
+new ContinuousRefreshHandlerSerializer();
+
+@Override
+public byte[] serialize(ContinuousRefreshHandler refreshHandler) throws 
IOException {
+return InstantiationUtil.serializeObject(refreshHandler);
+}
+
+@Override
+public ContinuousRefreshHandler deserialize(byte[] serializedBytes, 
ClassLoader cl)
+throws IOException, ClassNotFoundException {
+return InstantiationUtil.deserializeObject(serializedBytes, cl);
+}
+}



(flink) 08/08: [FLINK-35195][table] Support execute CreateMaterializedTableOperation for continuous refresh mode in SqlGateway

2024-05-07 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 29736b8c01924b7da03d4bcbfd9c812a8e5a08b4
Author: fengli 
AuthorDate: Mon May 6 20:24:16 2024 +0800

[FLINK-35195][table] Support execute CreateMaterializedTableOperation for 
continuous refresh mode in SqlGateway
---
 flink-table/flink-sql-gateway/pom.xml  |   6 +
 .../MaterializedTableManager.java  | 182 ++
 .../service/operation/OperationExecutor.java   |  25 +-
 .../service/MaterializedTableStatementITCase.java  | 274 +
 4 files changed, 483 insertions(+), 4 deletions(-)

diff --git a/flink-table/flink-sql-gateway/pom.xml 
b/flink-table/flink-sql-gateway/pom.xml
index 1a50d665a18..61f1e75942e 100644
--- a/flink-table/flink-sql-gateway/pom.xml
+++ b/flink-table/flink-sql-gateway/pom.xml
@@ -127,6 +127,12 @@
test-jar
test

+
+org.apache.flink
+flink-table-filesystem-test-utils
+${project.version}
+test
+
 
 
 
diff --git 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
new file mode 100644
index 000..fed60634a3a
--- /dev/null
+++ 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.gateway.service.materializedtable;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.CatalogMaterializedTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.TableChange;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.gateway.api.operation.OperationHandle;
+import org.apache.flink.table.gateway.api.results.ResultSet;
+import org.apache.flink.table.gateway.service.operation.OperationExecutor;
+import org.apache.flink.table.gateway.service.result.ResultFetcher;
+import org.apache.flink.table.gateway.service.utils.SqlExecutionException;
+import 
org.apache.flink.table.operations.materializedtable.AlterMaterializedTableChangeOperation;
+import 
org.apache.flink.table.operations.materializedtable.CreateMaterializedTableOperation;
+import 
org.apache.flink.table.operations.materializedtable.DropMaterializedTableOperation;
+import 
org.apache.flink.table.operations.materializedtable.MaterializedTableOperation;
+import org.apache.flink.table.refresh.ContinuousRefreshHandler;
+import org.apache.flink.table.refresh.ContinuousRefreshHandlerSerializer;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.apache.flink.api.common.RuntimeExecutionMode.STREAMING;
+import static org.apache.flink.configuration.DeploymentOptions.TARGET;
+import static org.apache.flink.configuration.ExecutionOptions.RUNTIME_MODE;
+import static org.apache.flink.configuration.PipelineOptions.NAME;
+import static 
org.apache.flink.streaming.api.environment.ExecutionCheckpointingOptions.CHECKPOINTING_INTERVAL;
+import static 
org.apache.flink.table.api.internal.TableResultInternal.TABLE_RESULT_OK;
+
+/** Manager is responsible for execute the {@link MaterializedTableOperation}. 
*/
+@Internal
+public class MaterializedTableManager {
+
+private static final Logger LOG = 
LoggerFactory.getLogger(MaterializedTableManager.class);
+
+public static ResultFetcher callMaterializedTableOperation(
+OperationExecutor operationExecutor,
+OperationHandle handle,
+MaterializedTableOperation op,
+String statement) {
+if (op instanceof CreateMateri

(flink) 04/08: [FLINK-35195][table] Convert SqlCreateMaterializedTable node to CreateMaterializedTableOperation

2024-05-07 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit e28e495cdd3e0e7cbb58685cb09e1fa08af7223e
Author: fengli 
AuthorDate: Mon May 6 20:17:57 2024 +0800

[FLINK-35195][table] Convert SqlCreateMaterializedTable node to 
CreateMaterializedTableOperation
---
 .../flink/sql/parser/SqlConstraintValidator.java   |   2 +-
 .../sql/parser/ddl/SqlCreateMaterializedTable.java |   1 -
 .../CreateMaterializedTableOperation.java  |  76 ++
 .../MaterializedTableOperation.java|  26 +++
 .../planner/operations/SqlNodeConvertContext.java  |   8 +
 .../SqlCreateMaterializedTableConverter.java   | 210 +
 .../operations/converters/SqlNodeConverter.java|   5 +
 .../operations/converters/SqlNodeConverters.java   |   1 +
 .../planner/utils/MaterializedTableUtils.java  |  98 
 ...erializedTableNodeToOperationConverterTest.java | 259 +
 .../SqlNodeToOperationConversionTestBase.java  |   2 +-
 .../SqlRTASNodeToOperationConverterTest.java   |   2 +-
 12 files changed, 686 insertions(+), 4 deletions(-)

diff --git 
a/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/SqlConstraintValidator.java
 
b/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/SqlConstraintValidator.java
index 8a9a7727b54..f157e5034a8 100644
--- 
a/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/SqlConstraintValidator.java
+++ 
b/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/SqlConstraintValidator.java
@@ -89,7 +89,7 @@ public class SqlConstraintValidator {
 }
 
 /** Check table constraint. */
-private static void validate(SqlTableConstraint constraint) throws 
SqlValidateException {
+public static void validate(SqlTableConstraint constraint) throws 
SqlValidateException {
 if (constraint.isUnique()) {
 throw new SqlValidateException(
 constraint.getParserPosition(), "UNIQUE constraint is not 
supported yet");
diff --git 
a/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlCreateMaterializedTable.java
 
b/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlCreateMaterializedTable.java
index 1630a0f0117..eae6f1fcba9 100644
--- 
a/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlCreateMaterializedTable.java
+++ 
b/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlCreateMaterializedTable.java
@@ -132,7 +132,6 @@ public class SqlCreateMaterializedTable extends SqlCreate {
 return freshness;
 }
 
-@Nullable
 public Optional getRefreshMode() {
 return Optional.ofNullable(refreshMode);
 }
diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/CreateMaterializedTableOperation.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/CreateMaterializedTableOperation.java
new file mode 100644
index 000..d4eff00254d
--- /dev/null
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/CreateMaterializedTableOperation.java
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.operations.materializedtable;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.internal.TableResultImpl;
+import org.apache.flink.table.api.internal.TableResultInternal;
+import org.apache.flink.table.catalog.CatalogMaterializedTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ResolvedCatalogMaterializedTable;
+import org.apache.flink.table.operations.Operation;
+import org.apache.flink.table.operations.OperationUtils;
+import org.apache.flink.table.operations.ddl.CreateOperation;
+
+import java.util.Collections;
+import java.util.LinkedHashMap;
+import java.util.Map;
+
+/** Operation to describe a CREATE MATERIALIZED TABLE st

(flink) 03/08: [FLINK-35195][table] Convert CatalogMaterializedTable to CatalogTable to generate execution plan for planner

2024-05-07 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit d8491c0f9c07f0d3d5e1428cad54902acb6ae0d0
Author: fengli 
AuthorDate: Mon May 6 20:12:54 2024 +0800

[FLINK-35195][table] Convert CatalogMaterializedTable to CatalogTable to 
generate execution plan for planner
---
 .../flink/table/catalog/ContextResolvedTable.java  | 26 ++
 .../catalog/ResolvedCatalogMaterializedTable.java  | 13 +++
 .../planner/catalog/DatabaseCalciteSchema.java |  3 ++-
 .../operations/SqlNodeToOperationConversion.java   |  4 +++-
 4 files changed, 44 insertions(+), 2 deletions(-)

diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/ContextResolvedTable.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/ContextResolvedTable.java
index e7b9e5f0835..70a0b5c16d0 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/ContextResolvedTable.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/ContextResolvedTable.java
@@ -142,6 +142,20 @@ public final class ContextResolvedTable {
 return (T) resolvedTable.getOrigin();
 }
 
+/**
+ * Convert the {@link ResolvedCatalogMaterializedTable} in {@link 
ContextResolvedTable} to
+ * {@link ResolvedCatalogTable }.
+ */
+public ContextResolvedTable toCatalogTable() {
+if (resolvedTable.getTableKind() == 
CatalogBaseTable.TableKind.MATERIALIZED_TABLE) {
+return ContextResolvedTable.permanent(
+objectIdentifier,
+catalog,
+((ResolvedCatalogMaterializedTable) 
resolvedTable).toResolvedCatalogTable());
+}
+return this;
+}
+
 /**
  * Copy the {@link ContextResolvedTable}, replacing the underlying {@link 
CatalogTable} options.
  */
@@ -150,6 +164,12 @@ public final class ContextResolvedTable {
 throw new ValidationException(
 String.format("View '%s' cannot be enriched with new 
options.", this));
 }
+if (resolvedTable.getTableKind() == 
CatalogBaseTable.TableKind.MATERIALIZED_TABLE) {
+return ContextResolvedTable.permanent(
+objectIdentifier,
+catalog,
+((ResolvedCatalogMaterializedTable) 
resolvedTable).copy(newOptions));
+}
 return new ContextResolvedTable(
 objectIdentifier,
 catalog,
@@ -159,6 +179,12 @@ public final class ContextResolvedTable {
 
 /** Copy the {@link ContextResolvedTable}, replacing the underlying {@link 
ResolvedSchema}. */
 public ContextResolvedTable copy(ResolvedSchema newSchema) {
+if (resolvedTable.getTableKind() == 
CatalogBaseTable.TableKind.MATERIALIZED_TABLE) {
+throw new ValidationException(
+String.format(
+"Materialized table '%s' cannot be copied with new 
schema %s.",
+this, newSchema));
+}
 return new ContextResolvedTable(
 objectIdentifier,
 catalog,
diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/ResolvedCatalogMaterializedTable.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/ResolvedCatalogMaterializedTable.java
index a0206af3111..f876cd74c4d 100644
--- 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/ResolvedCatalogMaterializedTable.java
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/ResolvedCatalogMaterializedTable.java
@@ -182,4 +182,17 @@ public class ResolvedCatalogMaterializedTable
 + resolvedSchema
 + '}';
 }
+
+/** Convert this object to a {@link ResolvedCatalogTable} object for 
planner optimize query. */
+public ResolvedCatalogTable toResolvedCatalogTable() {
+return new ResolvedCatalogTable(
+CatalogTable.newBuilder()
+.schema(getUnresolvedSchema())
+.comment(getComment())
+.partitionKeys(getPartitionKeys())
+.options(getOptions())
+.snapshot(getSnapshot().orElse(null))
+.build(),
+getResolvedSchema());
+}
 }
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/catalog/DatabaseCalciteSchema.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/catalog/DatabaseCalciteSchema.java
index 7ba1e04d83e..d3e738ae5ff 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/catalog/DatabaseCalciteSchema.java
+++ 
b/fli

(flink) 02/08: [FLINK-35195][table] Introduce materialized table reflated config options

2024-05-07 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit b037f56b10c7dce4505ed6f4e28030350742a523
Author: fengli 
AuthorDate: Mon May 6 20:07:09 2024 +0800

[FLINK-35195][table] Introduce materialized table reflated config options
---
 docs/content.zh/docs/dev/table/config.md   |  6 +++
 docs/content/docs/dev/table/config.md  |  6 +++
 .../materialized_table_config_configuration.html   | 24 +
 .../api/config/MaterializedTableConfigOptions.java | 59 ++
 4 files changed, 95 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/config.md 
b/docs/content.zh/docs/dev/table/config.md
index f1a0be083c5..1748fde1b72 100644
--- a/docs/content.zh/docs/dev/table/config.md
+++ b/docs/content.zh/docs/dev/table/config.md
@@ -134,6 +134,12 @@ Flink SQL> SET 'table.exec.mini-batch.size' = '5000';
 
 {{< generated/table_config_configuration >}}
 
+### Materialized Table 配置
+
+以下配置可以用于调整 Materialized Table 的行为。
+
+{{< generated/materialized_table_config_configuration >}}
+
 ### SQL Client 配置
 
 以下配置可以用于调整 sql client 的行为。
diff --git a/docs/content/docs/dev/table/config.md 
b/docs/content/docs/dev/table/config.md
index 697d820db4c..51a264414c8 100644
--- a/docs/content/docs/dev/table/config.md
+++ b/docs/content/docs/dev/table/config.md
@@ -149,6 +149,12 @@ The following options can be used to adjust the behavior 
of the table planner.
 
 {{< generated/table_config_configuration >}}
 
+### Materialized Table Options
+
+The following options can be used to adjust the behavior of the materialized 
table.
+
+{{< generated/materialized_table_config_configuration >}}
+
 ### SQL Client Options
 
 The following options can be used to adjust the behavior of the sql client.
diff --git 
a/docs/layouts/shortcodes/generated/materialized_table_config_configuration.html
 
b/docs/layouts/shortcodes/generated/materialized_table_config_configuration.html
new file mode 100644
index 000..d5829bf3224
--- /dev/null
+++ 
b/docs/layouts/shortcodes/generated/materialized_table_config_configuration.html
@@ -0,0 +1,24 @@
+
+
+
+Key
+Default
+Type
+Description
+
+
+
+
+
materialized-table.refresh-mode.freshness-threshold Batch Streaming
+30 min
+Duration
+Specifies a time threshold for determining the materialized 
table refresh mode. If the materialized table defined FRESHNESS is below this 
threshold, it run in continuous mode. Otherwise, it switches to full refresh 
mode.
+
+
+partition.fields.#.date-formatter Batch Streaming
+(none)
+String
+Specifies the time partition formatter for the partitioned 
materialized table, where '#' denotes a string-based partition field name. This 
serves as a hint to the framework regarding which partition to refresh in full 
refresh mode.
+
+
+
diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/MaterializedTableConfigOptions.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/MaterializedTableConfigOptions.java
new file mode 100644
index 000..b08466e05ab
--- /dev/null
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/MaterializedTableConfigOptions.java
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.config;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.annotation.docs.Documentation;
+import org.apache.flink.configuration.ConfigOption;
+
+import java.time.Duration;
+
+import static org.apache.flink.configuration.ConfigOptions.key;
+
+/**
+ * This class holds {@link org.apache.flink.configuration.ConfigOption}s used 
by table module for
+ * materialized table.
+ */
+@PublicEvolving
+public class MaterializedTableConfigOptions {
+
+private MaterializedTableConfigOptions() {}
+
+public static final String PAR

(flink) branch master updated (ea4112aefa7 -> 29736b8c019)

2024-05-07 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from ea4112aefa7 [FLINK-35161][state] Implement StateExecutor for 
ForStStateBackend
 new 84f0632b15c [FLINK-35195][test/test-filesystem] test-filesystem 
Catalog support create generic table
 new b037f56b10c [FLINK-35195][table] Introduce materialized table reflated 
config options
 new d8491c0f9c0 [FLINK-35195][table] Convert CatalogMaterializedTable to 
CatalogTable to generate execution plan for planner
 new e28e495cdd3 [FLINK-35195][table] Convert SqlCreateMaterializedTable 
node to CreateMaterializedTableOperation
 new c99eb54ce8f [FLINK-35195][table] Introduce ContinuousRefreshHandler 
and serializer for continuous refresh mode
 new 192e1e8fb04 [FLINK-35195][table] Introduce MaterializedTableChange to 
support update materialized table refresh status and RefreshHandler
 new e0d342a2a6b [FLINK-35195][table] Introduce 
DropMaterializedTableOperation to support drop materialized table
 new 29736b8c019 [FLINK-35195][table] Support execute 
CreateMaterializedTableOperation for continuous refresh mode in SqlGateway

The 8 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/content.zh/docs/dev/table/config.md   |   6 +
 docs/content/docs/dev/table/config.md  |   6 +
 .../materialized_table_config_configuration.html   |  24 ++
 flink-table/flink-sql-gateway/pom.xml  |   6 +
 .../MaterializedTableManager.java  | 182 ++
 .../service/operation/OperationExecutor.java   |  25 +-
 .../service/MaterializedTableStatementITCase.java  | 274 +
 .../flink/sql/parser/SqlConstraintValidator.java   |   2 +-
 .../sql/parser/ddl/SqlCreateMaterializedTable.java |   1 -
 .../api/config/MaterializedTableConfigOptions.java |  59 +
 .../apache/flink/table/catalog/CatalogManager.java |   3 +-
 .../flink/table/catalog/ContextResolvedTable.java  |  26 ++
 .../operations/ddl/AlterTableChangeOperation.java  |   6 +-
 .../AlterMaterializedTableChangeOperation.java | 107 
 .../AlterMaterializedTableOperation.java   |  42 
 .../CreateMaterializedTableOperation.java  |  76 ++
 .../DropMaterializedTableOperation.java|  54 
 .../MaterializedTableOperation.java|  26 ++
 .../catalog/ResolvedCatalogMaterializedTable.java  |  13 +
 .../apache/flink/table/catalog/TableChange.java| 120 +
 .../table/refresh/ContinuousRefreshHandler.java|  50 
 .../ContinuousRefreshHandlerSerializer.java|  44 
 .../planner/catalog/DatabaseCalciteSchema.java |   3 +-
 .../planner/operations/SqlNodeConvertContext.java  |   8 +
 .../operations/SqlNodeToOperationConversion.java   |   4 +-
 .../SqlCreateMaterializedTableConverter.java   | 210 
 .../operations/converters/SqlNodeConverter.java|   5 +
 .../operations/converters/SqlNodeConverters.java   |   1 +
 .../planner/utils/MaterializedTableUtils.java  |  98 
 ...erializedTableNodeToOperationConverterTest.java | 259 +++
 .../SqlNodeToOperationConversionTestBase.java  |   2 +-
 .../SqlRTASNodeToOperationConverterTest.java   |   2 +-
 .../file/testutils/TestFileSystemTableFactory.java |  35 ++-
 .../testutils/catalog/TestFileSystemCatalog.java   |  26 +-
 .../catalog/TestFileSystemCatalogITCase.java   |  79 +-
 .../catalog/TestFileSystemCatalogTest.java |  38 +++
 36 files changed, 1901 insertions(+), 21 deletions(-)
 create mode 100644 
docs/layouts/shortcodes/generated/materialized_table_config_configuration.html
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
 create mode 100644 
flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/service/MaterializedTableStatementITCase.java
 create mode 100644 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/MaterializedTableConfigOptions.java
 create mode 100644 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/AlterMaterializedTableChangeOperation.java
 create mode 100644 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/AlterMaterializedTableOperation.java
 create mode 100644 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/CreateMaterializedTableOperation.java
 create mode 100644 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operatio

Re: [DISCUSS] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table

2024-05-07 Thread Ron Liu
> 4. It appears that in the section on `public interfaces`, within
`WorkflowOperation`, `CreatePeriodicWorkflowOperation` should be changed to

`CreateWorkflowOperation`, right?

After discussing with Xuyang offline: we need to support both periodic
workflows and one-time workflows, and they need different information. For
example, a periodic workflow needs a cron expression, while a one-time
workflow needs the refresh partition, the downstream cascaded materialized
tables, etc. Therefore, CreateWorkflowOperation will have two different
implementation classes (a rough sketch follows below), which will be cleaner
for both the implementer and the caller.
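
To make the split concrete, a sketch of what I have in mind. All class and
field names below are illustrative placeholders, not the final FLIP-448 API:

// Illustrative sketch only; the real interfaces will be defined in the FLIP.
import java.util.List;
import java.util.Map;

interface WorkflowOperation {}

interface CreateWorkflowOperation extends WorkflowOperation {}

/** Periodic workflow: carries the schedule, e.g. a cron expression. */
final class CreatePeriodicWorkflowOperation implements CreateWorkflowOperation {
    private final String materializedTableIdentifier;
    private final String cronExpression;

    CreatePeriodicWorkflowOperation(String tableIdentifier, String cronExpression) {
        this.materializedTableIdentifier = tableIdentifier;
        this.cronExpression = cronExpression;
    }

    public String getCronExpression() {
        return cronExpression;
    }
}

/** One-time workflow: carries the refresh partition and downstream cascade tables. */
final class CreateOneTimeWorkflowOperation implements CreateWorkflowOperation {
    private final String materializedTableIdentifier;
    private final Map<String, String> refreshPartitionSpec;
    private final List<String> downstreamCascadeTables;

    CreateOneTimeWorkflowOperation(
            String tableIdentifier,
            Map<String, String> refreshPartitionSpec,
            List<String> downstreamCascadeTables) {
        this.materializedTableIdentifier = tableIdentifier;
        this.refreshPartitionSpec = refreshPartitionSpec;
        this.downstreamCascadeTables = downstreamCascadeTables;
    }

    public Map<String, String> getRefreshPartitionSpec() {
        return refreshPartitionSpec;
    }

    public List<String> getDownstreamCascadeTables() {
        return downstreamCascadeTables;
    }
}

This way the scheduler implementation only receives the fields that are
meaningful for the kind of workflow it is asked to create.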

Best,
Ron

Ron Liu  于2024年5月6日周一 20:48写道:

> Hi, Xuyang
>
> Thanks for joining this discussion
>
> > 1. In the sequence diagram, it appears that there is a missing step for
> obtaining the refresh handler from the catalog during the suspend operation.
>
> Good catch
>
> > 2. The term "cascade refresh" does not seem to be mentioned in FLIP-435.
> The workflow it creates is marked as a "one-time workflow". This is
> different
>
> from a "periodic workflow," and it appears to be a one-off execution. Is
> this actually referring to the Refresh command in FLIP-435?
>
> Cascade refresh is future work; we don't propose the corresponding
> syntax in FLIP-435. However, intuitively, it would be an extension of the
> Refresh command in FLIP-435.
>
> > 3. The workflow-scheduler.type has no default value; should it be set to
> CRON by default?
>
> Firstly, CRON is not a workflow scheduler. Secondly, I believe that
> configuring the Scheduler should be an action that users are aware of, and
> default values should not be set.
>
> > 4. It appears that in the section on `public interfaces`, within
> `WorkflowOperation`, `CreatePeriodicWorkflowOperation` should be changed to
>
> `CreateWorkflowOperation`, right?
>
> Sorry, I don't get your point. Can you give more description?
>
> Best,
> Ron
>
> Xuyang  于2024年5月6日周一 20:26写道:
>
>> Hi, Ron.
>>
>> Thanks for driving this. After reading the entire flip, I have the
>> following questions:
>>
>>
>>
>>
>> 1. In the sequence diagram, it appears that there is a missing step for
>> obtaining the refresh handler from the catalog during the suspend operation.
>>
>>
>>
>>
>> 2. The term "cascade refresh" does not seem to be mentioned in FLIP-435.
>> The workflow it creates is marked as a "one-time workflow". This is
>> different
>>
>> from a "periodic workflow," and it appears to be a one-off execution. Is
>> this actually referring to the Refresh command in FLIP-435?
>>
>>
>>
>>
>> 3. The workflow-scheduler.type has no default value; should it be set to
>> CRON by default?
>>
>>
>>
>>
>> 4. It appears that in the section on `public interfaces`, within
>> `WorkflowOperation`, `CreatePeriodicWorkflowOperation` should be changed to
>>
>> `CreateWorkflowOperation`, right?
>>
>>
>>
>>
>> --
>>
>> Best!
>> Xuyang
>>
>>
>>
>>
>>
>> At 2024-04-22 14:41:39, "Ron Liu"  wrote:
>> >Hi, Dev
>> >
>> >I would like to start a discussion about FLIP-448: Introduce Pluggable
>> >Workflow Scheduler Interface for Materialized Table.
>> >
>> >In FLIP-435[1], we proposed Materialized Table, which has two types of
>> data
>> >refresh modes: Full Refresh & Continuous Refresh Mode. In Full Refresh
>> >mode, the Materialized Table relies on a workflow scheduler to perform
>> >periodic refresh operation to achieve the desired data freshness.
>> >
>> >There are numerous open-source workflow schedulers available, with
>> popular
>> >ones including Airflow and DolphinScheduler. To enable Materialized Table
>> >to work with different workflow schedulers, we propose a pluggable
>> workflow
>> >scheduler interface for Materialized Table in this FLIP.
>> >
>> >For more details, see FLIP-448 [2]. Looking forward to your feedback.
>> >
>> >[1] https://lists.apache.org/thread/c1gnn3bvbfs8v1trlf975t327s4rsffs
>> >[2]
>> >
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table
>> >
>> >Best,
>> >Ron
>>
>


Re: [DISCUSS] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table

2024-05-06 Thread Ron Liu
Hi, Xuyang

Thanks for joining this discussion

> 1. In the sequence diagram, it appears that there is a missing step for
obtaining the refresh handler from the catalog during the suspend operation.

Good catch

> 2. The term "cascade refresh" does not seem to be mentioned in FLIP-435.
The workflow it creates is marked as a "one-time workflow". This is
different

from a "periodic workflow," and it appears to be a one-off execution. Is
this actually referring to the Refresh command in FLIP-435?

Cascade refresh is future work; we don't propose the corresponding
syntax in FLIP-435. However, intuitively, it would be an extension of the
Refresh command in FLIP-435.

> 3. The workflow-scheduler.type has no default value; should it be set to
CRON by default?

Firstly, CRON is not a workflow scheduler. Secondly, I believe that
configuring the Scheduler should be an action that users are aware of, and
default values should not be set.

> 4. It appears that in the section on `public interfaces`, within
`WorkflowOperation`, `CreatePeriodicWorkflowOperation` should be changed to

`CreateWorkflowOperation`, right?

Sorry, I don't get your point. Can you give more description?

Best,
Ron

Xuyang  于2024年5月6日周一 20:26写道:

> Hi, Ron.
>
> Thanks for driving this. After reading the entire flip, I have the
> following questions:
>
>
>
>
> 1. In the sequence diagram, it appears that there is a missing step for
> obtaining the refresh handler from the catalog during the suspend operation.
>
>
>
>
> 2. The term "cascade refresh" does not seem to be mentioned in FLIP-435.
> The workflow it creates is marked as a "one-time workflow". This is
> different
>
> from a "periodic workflow," and it appears to be a one-off execution. Is
> this actually referring to the Refresh command in FLIP-435?
>
>
>
>
> 3. The workflow-scheduler.type has no default value; should it be set to
> CRON by default?
>
>
>
>
> 4. It appears that in the section on `public interfaces`, within
> `WorkflowOperation`, `CreatePeriodicWorkflowOperation` should be changed to
>
> `CreateWorkflowOperation`, right?
>
>
>
>
> --
>
> Best!
> Xuyang
>
>
>
>
>
> At 2024-04-22 14:41:39, "Ron Liu"  wrote:
> >Hi, Dev
> >
> >I would like to start a discussion about FLIP-448: Introduce Pluggable
> >Workflow Scheduler Interface for Materialized Table.
> >
> >In FLIP-435[1], we proposed Materialized Table, which has two types of
> data
> >refresh modes: Full Refresh & Continuous Refresh Mode. In Full Refresh
> >mode, the Materialized Table relies on a workflow scheduler to perform
> >periodic refresh operation to achieve the desired data freshness.
> >
> >There are numerous open-source workflow schedulers available, with popular
> >ones including Airflow and DolphinScheduler. To enable Materialized Table
> >to work with different workflow schedulers, we propose a pluggable
> workflow
> >scheduler interface for Materialized Table in this FLIP.
> >
> >For more details, see FLIP-448 [2]. Looking forward to your feedback.
> >
> >[1] https://lists.apache.org/thread/c1gnn3bvbfs8v1trlf975t327s4rsffs
> >[2]
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table
> >
> >Best,
> >Ron
>


(flink) branch release-1.17 updated: [FLINK-34379][table] Fix OutOfMemoryError with large queries

2024-05-05 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch release-1.17
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.17 by this push:
 new d2f93a5527b [FLINK-34379][table] Fix OutOfMemoryError with large 
queries
d2f93a5527b is described below

commit d2f93a5527b05583fc97bbae511ca0ac95325c02
Author: Jeyhun Karimov 
AuthorDate: Tue Apr 2 00:24:02 2024 +0200

[FLINK-34379][table] Fix OutOfMemoryError with large queries
---
 .../utils/DynamicPartitionPruningUtils.java|   9 +-
 .../DynamicPartitionPruningProgramTest.java|  85 +++
 .../program/DynamicPartitionPruningProgramTest.xml | 618 +
 3 files changed, 711 insertions(+), 1 deletion(-)

diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/utils/DynamicPartitionPruningUtils.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/utils/DynamicPartitionPruningUtils.java
index 90f7b40bc0b..089e1fd 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/utils/DynamicPartitionPruningUtils.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/utils/DynamicPartitionPruningUtils.java
@@ -61,8 +61,10 @@ import org.apache.calcite.util.ImmutableIntList;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collections;
+import java.util.HashSet;
 import java.util.List;
 import java.util.Optional;
+import java.util.Set;
 import java.util.stream.Collectors;
 
 /** Planner utils for Dynamic partition Pruning. */
@@ -115,7 +117,7 @@ public class DynamicPartitionPruningUtils {
 private final RelNode relNode;
 private boolean hasFilter;
 private boolean hasPartitionedScan;
-private final List tables = new ArrayList<>();
+private final Set tables = new HashSet<>();
 
 public DppDimSideChecker(RelNode relNode) {
 this.relNode = relNode;
@@ -235,9 +237,14 @@ public class DynamicPartitionPruningUtils {
 if (tables.size() == 0) {
 tables.add(catalogTable);
 } else {
+boolean hasAdded = false;
 for (ContextResolvedTable thisTable : new ArrayList<>(tables)) 
{
+if (hasAdded) {
+break;
+}
 if 
(!thisTable.getIdentifier().equals(catalogTable.getIdentifier())) {
 tables.add(catalogTable);
+hasAdded = true;
 }
 }
 }
diff --git 
a/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/optimize/program/DynamicPartitionPruningProgramTest.java
 
b/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/optimize/program/DynamicPartitionPruningProgramTest.java
index 8e957e2958a..c7ab3e40ef8 100644
--- 
a/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/optimize/program/DynamicPartitionPruningProgramTest.java
+++ 
b/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/optimize/program/DynamicPartitionPruningProgramTest.java
@@ -18,6 +18,7 @@
 
 package org.apache.flink.table.planner.plan.optimize.program;
 
+import org.apache.flink.table.api.Table;
 import org.apache.flink.table.api.TableConfig;
 import org.apache.flink.table.api.config.OptimizerConfigOptions;
 import org.apache.flink.table.catalog.ObjectPath;
@@ -30,6 +31,11 @@ import org.apache.flink.table.planner.utils.TableTestBase;
 import org.junit.Before;
 import org.junit.Test;
 
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.apache.flink.table.api.Expressions.col;
+
 /**
  * Tests for rules that extend {@link FlinkDynamicPartitionPruningProgram} to 
create {@link
  * 
org.apache.flink.table.planner.plan.nodes.physical.batch.BatchPhysicalDynamicFilteringTableSourceScan}.
@@ -80,6 +86,85 @@ public class DynamicPartitionPruningProgramTest extends 
TableTestBase {
 + ")");
 }
 
+@Test
+public void testLargeQueryPlanShouldNotOutOfMemoryWithTableApi() {
+// TABLE_OPTIMIZER_DYNAMIC_FILTERING_ENABLED is already enabled
+List selectStmts = new ArrayList<>();
+for (int i = 0; i < 100; i++) {
+util.tableEnv()
+.executeSql(
+"CREATE TABLE IF NOT EXISTS table"
++ i
++ "(att STRING,filename STRING) "
++ "with("
++ " 'connector' = 'values', "
++ " 'runtime-sourc

Re: [DISCUSS] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table

2024-05-04 Thread Ron Liu
Hi, Lincoln

Thanks for joining this discussion.

After rethinking, I think your suggestion makes sense. Although deleting the
workflow on the scheduler currently only needs the RefreshHandler, if we
support cascading deletion in the future, the DeleteWorkflowOperation can
provide the necessary information without introducing a new interface.
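
To make that concrete, a minimal sketch of what such an operation could carry.
The names are placeholders, not the final FLIP-448 API, and the refresh
handler is simplified to its serialized string form here:

// Sketch only: a delete operation that leaves room for cascading deletion.
import java.util.Collections;
import java.util.List;

interface WorkflowOperation {}

final class DeleteWorkflowOperation implements WorkflowOperation {
    // Identifies the workflow previously registered on the scheduler.
    private final String serializedRefreshHandler;
    // Reserved for a future cascading-deletion feature.
    private final List<String> downstreamCascadeTables;

    DeleteWorkflowOperation(String serializedRefreshHandler) {
        this(serializedRefreshHandler, Collections.emptyList());
    }

    DeleteWorkflowOperation(
            String serializedRefreshHandler, List<String> downstreamCascadeTables) {
        this.serializedRefreshHandler = serializedRefreshHandler;
        this.downstreamCascadeTables = downstreamCascadeTables;
    }

    public String getSerializedRefreshHandler() {
        return serializedRefreshHandler;
    }

    public List<String> getDownstreamCascadeTables() {
        return downstreamCascadeTables;
    }
}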

I've updated the public interface section of FLIP.

Best,
Ron

Lincoln Lee  于2024年4月30日周二 21:27写道:

> Thanks Ron for starting this flip! It will complete the user story for
> flip-435[1].
>
> Regarding the WorkflowOperation, I have a question about whether we
> should add Delete/DropWorkflowOperation as well for when the
> Materialized Table is dropped or refresh mode changed from full to
> continuous?
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-435%3A+Introduce+a+New+Materialized+Table+for+Simplifying+Data+Pipelines?src=contextnavpagetreemode
>
>
> Best,
> Lincoln Lee
>
>
>  于2024年4月30日周二 15:37写道:
>
> > Hello Ron, thank you for your detailed answers!
> >
> > For the Visitor pattern, I thought about it the other way around, so that
> > operations visit the scheduler, and not vice-versa :) In this way
> > operations can get the required information in order to be executed in a
> > tailored way.
> >
> > Thank you for your effort, but, as you say:
> > > furthermore, I think the current does not see the benefits of the time,
> > simpler instead of better, similar to the design of
> > CatalogModificationEvent[2] and CatalogModificationListener[3], the
> > developer only needs instanceof judgment.
> >
> > In java, most of the times, `instanceof` is considered an anti-pattern,
> > that's why I was also thinking about a command pattern (every operations
> > defines an `execute` method). However, I also understand this part is not
> > crucial for the FLIP under discussion, and the implementation details can
> > simply wait for the PRs to come.
> >
> > > After discussing with Shengkai offline, there is no need for this REST
> > API
> > to support multiple tables to be refreshed at the same time, so it would
> be
> > more appropriate to put the materialized table identifier in the path of
> > the URL, thanks for the suggestion.
> >
> > Very good!
> >
> > Thank you!
> > On Apr 29, 2024 at 05:04 +0200, Ron Liu , wrote:
> > > Hi, Lorenzo
> > >
> > > > I have a question there: how can the gateway update the
> refreshHandler
> > in
> > > the Catalog before getting it from the scheduler?
> > >
> > > The refreshHandler in CatalogMaterializedTable is null before getting it
> > from
> > > the scheduler, you can look at the CatalogMaterializedTable.Builder[1]
> > for
> > > more details.
> > >
> > > > You have a typo here: WorkflowScheudler -> WorkflowScheduler :)
> > >
> > > Fix it now, thanks very much.
> > >
> > > > For the operations part, I still think that the FLIP would benefit
> from
> > > providing a specific pattern for operations. You could either propose a
> > > command pattern [1] or a visitor pattern (where the scheduler visits
> the
> > > operation to get relevant info) [2] for those operations at your
> choice.
> > >
> > > Thank you for your input, I find it very useful. I tried to understand
> > your
> > > thinking through code and implemented the following pseudo code using
> the
> > > visitor design pattern:
> > > 1. first defined WorkflowOperationVisitor, providing several overloaded
> > > visit methods.
> > >
> > > public interface WorkflowOperationVisitor {
> > >
> > > <T> T visit(CreateWorkflowOperation<T> createWorkflowOperation);
> > >
> > > void visit(ModifyWorkflowOperation operation);
> > > }
> > >
> > > 2. then in the WorkflowOperation add the accept method.
> > >
> > > @PublicEvolving
> > > public interface WorkflowOperation {
> > >
> > > void accept(WorkflowOperationVisitor visitor);
> > > }
> > >
> > >
> > > 3. in the WorkflowScheduler call the implementation class of
> > > WorkflowOperationVisitor, complete the corresponding operations.
> > >
> > > I recognize this design pattern purely from a code design point of
> view,
> > > but from the point of our specific scenario:
> > > 1. For CreateWorkflowOperation, the visit method needs to return
> > > RefreshHandler, for ModifyWorkflowOperation

Re: [External] : Re: New candidate JEP: 471: Deprecate the Memory-Access Methods in sun.misc.Unsafe for Removal

2024-05-03 Thread Ron Pressler


> On 3 May 2024, at 18:33, David Lloyd  wrote:
> 
> 
> On Fri, May 3, 2024 at 10:12 AM Mark Reinhold  
> wrote:
> https://openjdk.org/jeps/471
> 
>   Summary: Deprecate the memory-access methods in sun.misc.Unsafe for
>   removal in a future release.
> 
> 
> We still use Unsafe fairly often in various Red Hat products (primarily 
> because our baseline support JDK for these products is typically 11 or 17 at 
> present), in a variety of ways for a variety of reasons. Most of these uses 
> of Unsafe should be transitionable to `MemorySegment` using multi-release 
> JARs, and a bit of exploratory work has already been done on this. However 
> there is one unfortunate exception (that I know of).
> 
> In order to avoid false sharing in certain specific high-concurrency 
> situations, I have lately used arrays to space out certain value locations by 
> using the smallest data cache line size (which is detected via an existing 
> library) and dividing it by the array scale to determine the length of array 
> to allocate in order to accommodate these values. I then use multiples of the 
> cache line size (in bytes), offset from the array base, to locate the 
> elements to access.
> 
> It is possible to continue this more or less as-is for primitive types (at 
> least, it is if one assumes certain facts around primitive data type size and 
> alignment to be true), but for objects, without knowing their size, we can't 
> know how much padding to reserve around the value location to ensure that the 
> contended values are not falsely shared.
> 
> I seem to recall (years ago now so I might be a bit fuzzy on it) that the 
> lack of public API around `@Contended` was mildly controversial in the past. 
> The proposed remedy was to use arrays for this purpose, if I recall 
> correctly. However there does not seem to be any good way to do this anymore 
> (at least for objects) without simply guessing, and this seems like a small 
> but significant hole in this plan as it stands for now.
> 
> It seems to me that the JDK could fill this gap by introducing some API which 
> can construct and provide access to an array or something like it, with 
> striding and/or alignment guarantees that each element will reside on a 
> separate data cache line (or barring that, perhaps using a minimum 
> per-element size and/or alignment that is given as an argument to the 
> factory), and with the gamut of atomic accessors via a `VarHandle` or 
> similar. This could be especially valuable if/when objects start coming in a 
> variety of shapes and sizes in memory, once value types hit.
> 
> Could such a thing be added into the plan?
> 
> -- 
> - DML • he/him

[redirecting to core-libs]

Adding some VarHandle operation that takes into account the cache lines size is 
interesting — although preserving cache-line *alignment* could be tricky as the 
GC relocates arrays, so an array element that’s at the start of a cache line at 
time t0 might not be at the start of a cache line at time t1 — but that’s 
unrelated to this JEP.

What is related to this JEP is that you’re using Unsafe to determine the size 
of an oop (in particular, to tell if oops are compressed or not). Is that what 
you’re asking for?
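
For reference, a minimal sketch of that kind of probe, assuming reflective
access to sun.misc.Unsafe is still available (illustrative only):

import java.lang.reflect.Field;
import sun.misc.Unsafe;

public final class OopSizeProbe {
    public static void main(String[] args) throws Exception {
        // Conventional reflective access to the Unsafe singleton.
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        // The spacing between adjacent Object[] elements is the oop size:
        // 4 bytes with compressed oops on 64-bit JVMs, 8 bytes otherwise.
        int oopSize = unsafe.arrayIndexScale(Object[].class);
        System.out.println("oop size = " + oopSize
                + (oopSize == 4 ? " (likely compressed oops)" : ""));
    }
}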

— Ron

Re: Freelance Opportunity : Django Developer for building A Dialer

2024-05-03 Thread Raunak Ron
I am Interested.

On Monday 29 April 2024 at 19:45:53 UTC+5:30 Pankaj Saini wrote:

> I am interested in this position.
>
> I have an experience in Django Development with strong Python.
>
> On Tue, Apr 2, 2024, 10:49 PM Abhishek J  
> wrote:
>
>> Dear Developers,
>>
>> I need to build an android and IOS phone dialer similar to Truecaller.
>>
>> We are seeking experienced and dedicated candidates who are proficient in 
>> Django and possess a keen interest in contributing to this impactful 
>> initiative.
>>
>> We look forward to the opportunity to collaborate with talented 
>> individuals who are passionate about creating innovative solutions in the 
>> education sector.
>>
>> Thank you for considering this opportunity.
>>
>> -- 
>>
> You received this message because you are subscribed to the Google Groups 
>> "Django users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to django-users...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/django-users/CAKkngwDHBygGho4gkHRhNkpVJf_d2UOkHQ%3DemN3BtcFSVRU8sA%40mail.gmail.com
>>  
>> 
>> .
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-users/b0b71507-abfb-4c45-8701-92ef9f972affn%40googlegroups.com.


Re: Ora2pg Delta Migration: Oracle to PostgreSQL

2024-05-02 Thread Ron Johnson
On Thu, May 2, 2024 at 8:28 PM Amit Sharma  wrote:

> Hello,
>
> Has anyone tried delta/incremental data migration for Oracle to PostgreSQL
> using Ora2pg? Or what are the best options to run delta migration for
> Oracle to PostgreSQL?
>

What do the ora2pg docs say about whether or not that feature is
implemented?  (It wasn't when I last used it in 2022.)


[jira] [Updated] (SPARK-48091) Using `explode` together with `transform` in the same select statement causes aliases in the transformed column to be ignored

2024-05-02 Thread Ron Serruya (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Serruya updated SPARK-48091:

Description: 
When using an `explode` function, and `transform` function in the same select 
statement, aliases used inside the transformed column are ignored.

This behaviour only happens using the pyspark API, and not when using the SQL 
API

 
{code:java}
from pyspark.sql import functions as F

# Create the df
df = spark.createDataFrame([
{"id": 1, "array1": ['a', 'b'], 'array2': [2,3,4]}
]){code}
Good case, where all aliases are used

 
{code:java}
df.select(
F.transform(
'array2',
lambda x: F.struct(x.alias("some_alias"), 
F.col("id").alias("second_alias"))
).alias("new_array2")
).printSchema() 

root
 |-- new_array2: array (nullable = true)
 ||-- element: struct (containsNull = false)
 |||-- some_alias: long (nullable = true)
 |||-- second_alias: long (nullable = true){code}
Bad case: when using explode, the aliases inside the transformed column are 
ignored, `id` is kept instead of `second_alias`, and `x_17` is used 
instead of `some_alias`

 

 
{code:java}
df.select(
F.explode("array1").alias("exploded"),
F.transform(
'array2',
lambda x: F.struct(x.alias("some_alias"), 
F.col("id").alias("second_alias"))
).alias("new_array2")
).printSchema()

root
 |-- exploded: string (nullable = true)
 |-- new_array2: array (nullable = true)
 ||-- element: struct (containsNull = false)
 |||-- x_17: long (nullable = true)
 |||-- id: long (nullable = true) {code}
 

 

 

When using the SQL API instead, it works fine
{code:java}
spark.sql(
"""
select explode(array1) as exploded, transform(array2, x-> struct(x as 
some_alias, id as second_alias)) as array2 from {df}
""", df=df
).printSchema()

root
 |-- exploded: string (nullable = true)
 |-- array2: array (nullable = true)
 ||-- element: struct (containsNull = false)
 |||-- some_alias: long (nullable = true)
 |||-- second_alias: long (nullable = true) {code}
 

Workaround: for now, using F.named_struct can be used as a workaround


> Using `explode` together with `transform` in the same select statement causes 
> aliases in the transformed column to be ignored
> -
>
> Key: SPARK-48091
>     URL: https://issues.apache.org/jira/browse/SPARK-48091
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>  

Re: Prevent users from executing pg_dump against tables

2024-05-02 Thread Ron Johnson
On Thu, May 2, 2024 at 1:47 AM RAJAMOHAN  wrote:

> Hello all,
>
> In our production db infrastructure, we have one read_only role which has
> read privileges against all tables in schema A.
>
> We are planning to grant this role to some developers for viewing the
> data, but also I want to limit the users from executing statements like
> copy or using pg_dump. Main reason being I don't want the data to be copied
> from the database to their local machines.
>
> I tried by implementing triggers, but was not able to figure out a way to
> restrict the pg_dump and allow only select statements.
>

> Is there a way to implement this? Please advise.
> 
>

If you can query a table, then you can save the query contents to your
local context.  That's a fundamental law of nature, since you gave them
read privs.

For example:
psql --host=SomeEC2Node $DB -Xc "SELECT * FROM read_only_table;" >
read_only_table.txt

That even works on Windows.


[jira] [Created] (SPARK-48091) Using `explode` together with `transform` in the same select statement causes aliases in the transformed column to be ignored

2024-05-02 Thread Ron Serruya (Jira)
Ron Serruya created SPARK-48091:
---

 Summary: Using `explode` together with `transform` in the same 
select statement causes aliases in the transformed column to be ignored
 Key: SPARK-48091
 URL: https://issues.apache.org/jira/browse/SPARK-48091
 Project: Spark
  Issue Type: Bug
  Components: PySpark
Affects Versions: 3.5.1, 3.5.0, 3.4.0
 Environment: Python 3.10, 3.12, OSX 14.4 and Databricks DBR 13.3, 
14.3, Pyspark 3.4.0, 3.5.0, 3.5.1
Reporter: Ron Serruya


When using an `explode` function, and `transform` function in the same select 
statement, aliases used inside the transformed column are ignored.

This behaviour only happens using the pyspark API, and not when using the SQL 
API

 
{code:java}
from pyspark.sql import functions as F

# Create the df
df = spark.createDataFrame([
{"id": 1, "array1": ['a', 'b'], 'array2': [2,3,4]}
]){code}
Good case, where all aliases are used

 
{code:java}
df.select(
F.transform(
'array2',
lambda x: F.struct(x.alias("some_alias"), 
F.col("id").alias("second_alias"))
).alias("new_array2")
).printSchema() 

root
 |-- new_array2: array (nullable = true)
 ||-- element: struct (containsNull = false)
 |||-- some_alias: long (nullable = true)
 |||-- second_alias: long (nullable = true){code}
Bad case: when using explode, the aliases inside the transformed column are 
ignored, `id` is kept instead of `second_alias`, and `x_17` is used 
instead of `some_alias`

 

 
{code:java}
df.select(
F.explode("array1").alias("exploded"),
F.transform(
'array2',
lambda x: F.struct(x.alias("some_alias"), 
F.col("id").alias("second_alias"))
).alias("new_array2")
).printSchema()

root
 |-- exploded: string (nullable = true)
 |-- new_array2: array (nullable = true)
 ||-- element: struct (containsNull = false)
 |||-- x_17: long (nullable = true)
 |||-- id: long (nullable = true) {code}
 

 

 

When using the SQL API instead, it works fine
{code:java}
spark.sql(
"""
select explode(array1) as exploded, transform(array2, x-> struct(x as 
some_alias, id as second_alias)) as array2 from {df}
""", df=df
).printSchema()

root
 |-- exploded: string (nullable = true)
 |-- array2: array (nullable = true)
 ||-- element: struct (containsNull = false)
 |||-- some_alias: long (nullable = true)
 |||-- second_alias: long (nullable = true) {code}
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[PATCH] ash: fix parsing of alias expansion + bash features

2024-05-02 Thread Ron Yorston
An alias expansion immediately followed by '<' and a newline is
parsed incorrectly:

   ~ $ alias x='echo yo'
   ~ $ x<
   yo
   ~ $
   sh: syntax error: unexpected newline

The echo is executed and an error is printed on the next command
submission.  In dash the echo isn't executed and the error is
reported immediately:

   $ alias x='echo yo'
   $ x<
   dash: 3: Syntax error: newline unexpected
   $

The difference between BusyBox and dash is that BusyBox supports
bash-style process substitution and output redirection.  These
require checking for '<(', '>(' and '&>' in readtoken1().

In the case above, when the end of the alias is found, the '<' and
the following newline are both read to check for '<('.  Since
there's no match both characters are pushed back.

The next input is obtained by reading the expansion of the alias.
Once this string is exhausted the next call to __pgetc() calls
preadbuffer() which pops the string, reverts to the previous input
and recursively calls __pgetc().  This request is satisfied from
the pungetc buffer.  But the first __pgetc() doesn't know this:
it sees the character has come from preadbuffer() so it (incorrectly)
updates the pungetc buffer.

Resolve the issue by moving the code to pop the string and fetch
the next character up from preadbuffer() into __pgetc().

function old new   delta
pgetc 28 589+561
__pgetc  607   --607
--
(add/remove: 0/1 grow/shrink: 1/0 up/down: 561/-607)  Total: -46 bytes

Signed-off-by: Ron Yorston 
---
 shell/ash.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/shell/ash.c b/shell/ash.c
index 4ca4c6c56..5df0ba625 100644
--- a/shell/ash.c
+++ b/shell/ash.c
@@ -10934,11 +10934,6 @@ preadbuffer(void)
char *q;
int more;
 
-   if (unlikely(g_parsefile->strpush)) {
-   popstring();
-   return __pgetc();
-   }
-
if (g_parsefile->buf == NULL) {
pgetc_debug("preadbuffer PEOF1");
return PEOF;
@@ -11053,8 +11048,13 @@ static int __pgetc(void)
 
if (--g_parsefile->left_in_line >= 0)
c = (unsigned char)*g_parsefile->next_to_pgetc++;
-   else
+   else {
+   if (unlikely(g_parsefile->strpush)) {
+   popstring();
+   return __pgetc();
+   }
c = preadbuffer();
+   }
 
g_parsefile->lastc[1] = g_parsefile->lastc[0];
g_parsefile->lastc[0] = c;
-- 
2.44.0

___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: Linked directory or explicit reference

2024-05-02 Thread Ron Johnson
On Thu, May 2, 2024 at 12:50 AM Senor Cervesa 
wrote:
[snip]

>  I'm not sure what would trigger "directory not empty".


The lost+found directory.


Pgadmin4 for arm64 Linux

2024-05-02 Thread Ron Jewell
I'm running Linux on VMware installed on my iMac. Is there an arm64 
pgadmin4 for Linux? If not, could you suggest an alternative?

Thanks

Ron. 





installing Pgadmin4 onto linux using the download pgadmin4-8.5-arm64.dmg

2024-05-01 Thread Ron Jewell
I've seen lots of videos on installing pgadmin4 onto Linux using curl followed 
by an add-key step, and others using sudo apt install, but nothing starting 
from pgadmin4-8.5-arm64.dmg.

But I haven't seen anything anywhere on downloading a DMG file and installing 
pgadmin4 from it on an arm64 computer running Linux.

Could someone in support work up a video on installing arm64 pgadmin4 from a 
DMG download, not on a Mac but onto Linux?

Surely I'm not the only one out there trying to install pgadmin4 from 
pgadmin4-8.5-arm64.dmg on an arm64 Linux OS.

Thanks

Ron






Re: Posgresql 14 and CarbonBlack on RHEL8?

2024-05-01 Thread Ron Johnson
On Tue, Apr 30, 2024 at 10:07 PM Tom Lane  wrote:

> Ron Johnson  writes:
> > When running stress tests on the systems (in prod, during the maintenance
> > window), 171K events/second are generated on the RHEL8 servers, and CB
> > needs (according to top(1)) 325% of CPU to handle that, and still
> dropping
> > 92% of them.
> > The RHEL6 system doesn't bat an eye at running the exact same test (36
> cron
> > jobs running psql executing SELECT statements).
>
> Is JIT enabled on the newer system?  If so try turning it off, or else
> raise the associated cost settings.  We've seen lots of reports of
> workloads where, by default, the planner is too aggressive about
> applying JIT.
>

A puzzling suggestion.  Why should it impact AV software?

At one point, I disabled JIT to test its impact on PG, performance was a
bit of a wash (some queries were a bit faster, some were a bit slower), but
I didn't monitor CB.

Just now, I did ALTER SYSTEM SET jit='off'; and re-ran the stress test.  No
impact to CarbonBlack.


Re: [PATCH v2] fixdep: add fstat error handling

2024-05-01 Thread Ron Yorston
Sam James  wrote:
>David Leonard  writes:
>> I worry that the fprintf() may destroy the errno which perror() uses,
>> so you could get a random error message.
>> Perhaps remove the fprintf(s) completely? Because the context should be
>> clear enough from the filename alone that perror displays.
>
>Ah, a great point. Any preference between just stripping the fprintfs vs
>a better argument to perror, as we do in some places (but not very
>consistently)?

Or:

   fprintf(stderr, "fixdep: fstat %s %s\n", depfile, strerror(errno));

Cheers,

Ron
___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Posgresql 14 and CarbonBlack on RHEL8?

2024-04-30 Thread Ron Johnson
(CarbonBlack is cross-platform AV software sold by VMware.)

Currently we're running PG 9.6.24 on RHEL 6.10 with CB (version unknown to
me) in production, and testing PG 14.11 on RHEL 8.9 with CB 2.15.2
(hopefully going into production next month).

Both old and new VMs are 32 CPU with 128GB RAM.
Nothing but PG, CB and itsm software runs on these systems.

When running stress tests on the systems (in prod, during the maintenance
window), 171K events/second are generated on the RHEL8 servers, and CB
needs (according to top(1)) 325% of CPU to handle that, and still drops
92% of them.
The RHEL6 system doesn't bat an eye at running the exact same test (36 cron
jobs running psql executing SELECT statements).

The small RHEL8/PG14 non-prod systems show similar load when lots of SELECT
statements run.

Has anyone else seen this?  If so, how did you resolve it?


Re: Linked directory or explicit reference

2024-04-30 Thread Ron Johnson
On Tue, Apr 30, 2024 at 7:00 PM Senor Cervesa 
wrote:

> Hi All;
>
> When doing an initial install of PostgreSQL on RHEL 7 or 8 derived OS via
> rpm, what are pros, cons and recommendations of these 2 procedures for
> utilizing a second disk?
>
> Secondary SSD or RAID mounted at /disk2.
>
> Option #1
>
>1. install the rpm which creates basic user and home
>2. Create symlink /var/lib/pgsql/15/data --> /disk2/data
>3. initdb with no special options
>
> Or Option #2
>
>1. install the rpm which creates basic user and home
>2. initdb with --pgdata=/disk2/data
>Probably using included 'postgresql-12-setup' script
>
> I also link /var/lib/pgsql/data  --> ../15/data so automation can
> reference postgresql.conf without knowing version (legacy stuff).
>

In my experience, the PgBackRest restore feature does not like symlinks.


> The install is automated with a bash script which handles several options
> including whether there is a second disk for DB. Scripting the install with
> or without the second disk is straight forward but I'm concerned with
> either scenario causing unforeseen differences.
>
> I don't think there's a benefit to using tablespace here but I have no
> experience with it. The systemd service is configured with a dependency on
> the disk mount so I don't think there are different risks for starting
> postgres with missing data directory.
>
> I've run postgres in both scenarios and not had any issues. I'm interested
> in comments from others on their experience using these or other options.
>
Is the mount point just "/disk2" when using "--pgdata=/disk2/data"?  I've
gotten "directory not empty" errors when the mount point is
"/Database/x.y/data".


(flink) branch master updated: [FLINK-35194][table] Support describe job statement for SqlGateway

2024-04-30 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 44528e0ee9f [FLINK-35194][table] Support describe job statement for 
SqlGateway
44528e0ee9f is described below

commit 44528e0ee9fbed11b5417253534078d60fed3a12
Author: xuyang 
AuthorDate: Fri Apr 26 20:29:56 2024 +0800

[FLINK-35194][table] Support describe job statement for SqlGateway

This closes #24728
---
 .../service/operation/OperationExecutor.java   | 53 +
 .../gateway/service/SqlGatewayServiceITCase.java   | 51 +
 .../src/main/codegen/data/Parser.tdd   |  5 +-
 .../src/main/codegen/includes/parserImpls.ftl  | 18 ++
 .../flink/sql/parser/dql/SqlDescribeJob.java   | 66 ++
 .../flink/sql/parser/FlinkSqlParserImplTest.java   |  6 ++
 .../operations/command/DescribeJobOperation.java   | 52 +
 .../converters/SqlDescribeJobConverter.java| 32 +++
 .../operations/converters/SqlNodeConverters.java   |  1 +
 .../table/planner/calcite/FlinkPlannerImpl.scala   |  3 +-
 10 files changed, 285 insertions(+), 2 deletions(-)

diff --git 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/operation/OperationExecutor.java
 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/operation/OperationExecutor.java
index 945265089c3..c50ba8c2bbf 100644
--- 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/operation/OperationExecutor.java
+++ 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/operation/OperationExecutor.java
@@ -83,6 +83,7 @@ import 
org.apache.flink.table.operations.StatementSetOperation;
 import org.apache.flink.table.operations.UnloadModuleOperation;
 import org.apache.flink.table.operations.UseOperation;
 import org.apache.flink.table.operations.command.AddJarOperation;
+import org.apache.flink.table.operations.command.DescribeJobOperation;
 import org.apache.flink.table.operations.command.ExecutePlanOperation;
 import org.apache.flink.table.operations.command.RemoveJarOperation;
 import org.apache.flink.table.operations.command.ResetOperation;
@@ -481,6 +482,8 @@ public class OperationExecutor {
 return callStopJobOperation(tableEnv, handle, (StopJobOperation) 
op);
 } else if (op instanceof ShowJobsOperation) {
 return callShowJobsOperation(tableEnv, handle, (ShowJobsOperation) 
op);
+} else if (op instanceof DescribeJobOperation) {
+return callDescribeJobOperation(tableEnv, handle, 
(DescribeJobOperation) op);
 } else if (op instanceof RemoveJarOperation) {
 return callRemoveJar(handle, ((RemoveJarOperation) op).getPath());
 } else if (op instanceof AddJarOperation
@@ -774,6 +777,56 @@ public class OperationExecutor {
 resultRows);
 }
 
+public ResultFetcher callDescribeJobOperation(
+TableEnvironmentInternal tableEnv,
+OperationHandle operationHandle,
+DescribeJobOperation describeJobOperation)
+throws SqlExecutionException {
+Configuration configuration = tableEnv.getConfig().getConfiguration();
+Duration clientTimeout = 
configuration.get(ClientOptions.CLIENT_TIMEOUT);
+String jobId = describeJobOperation.getJobId();
+Optional jobStatusOp =
+runClusterAction(
+configuration,
+operationHandle,
+clusterClient -> {
+try {
+JobID expectedJobId = 
JobID.fromHexString(jobId);
+return clusterClient.listJobs()
+.get(clientTimeout.toMillis(), 
TimeUnit.MILLISECONDS)
+.stream()
+.filter(job -> 
expectedJobId.equals(job.getJobId()))
+.findFirst();
+} catch (Exception e) {
+throw new SqlExecutionException(
+String.format(
+"Failed to get job %s in the 
cluster.", jobId),
+e);
+}
+});
+
+if (!jobStatusOp.isPresent()) {
+throw new SqlExecutionException(
+String.format("Described job %s does not exist in the 
cluster.", jobId));
+}
+JobStatusMessage job = jobStatusOp.get();
+
+RowData resultRow =
+GenericRowData.of(
+S

(flink) branch master updated: [FLINK-35184][table-runtime] Fix mini-batch join hash collision when use InputSideHasNoUniqueKeyBundle

2024-04-29 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new f543cc543e9 [FLINK-35184][table-runtime] Fix mini-batch join hash 
collision when use InputSideHasNoUniqueKeyBundle
f543cc543e9 is described below

commit f543cc543e9b0eb05415095190e86d3b22cdf1a4
Author: Roman Boyko 
AuthorDate: Tue Apr 23 12:13:58 2024 +0700

[FLINK-35184][table-runtime] Fix mini-batch join hash collision when use 
InputSideHasNoUniqueKeyBundle

This closes #24703
---
 .../bundle/InputSideHasNoUniqueKeyBundle.java  | 25 --
 .../join/stream/StreamingJoinOperatorTestBase.java |  4 +-
 .../stream/StreamingMiniBatchJoinOperatorTest.java | 95 +-
 3 files changed, 93 insertions(+), 31 deletions(-)

diff --git a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/bundle/InputSideHasNoUniqueKeyBundle.java b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/bundle/InputSideHasNoUniqueKeyBundle.java
index b5738835b95..fdc9e1d5193 100644
--- a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/bundle/InputSideHasNoUniqueKeyBundle.java
+++ b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/bundle/InputSideHasNoUniqueKeyBundle.java
@@ -96,15 +96,26 @@ public class InputSideHasNoUniqueKeyBundle extends 
BufferBundle leftTypeInfo =
+protected InternalTypeInfo leftTypeInfo =
 InternalTypeInfo.of(
 RowType.of(
 new LogicalType[] {
@@ -57,7 +57,7 @@ public abstract class StreamingJoinOperatorTestBase {
                             new LogicalType[] {new CharType(false, 20), new CharType(true, 10)},
                             new String[] {"line_order_id0", "line_order_ship_mode"}));
 
-    protected final RowDataKeySelector leftKeySelector =
+    protected RowDataKeySelector leftKeySelector =
             HandwrittenSelectorUtil.getRowDataSelector(
                     new int[] {1},
                     leftTypeInfo.toRowType().getChildren().toArray(new LogicalType[0]));
diff --git a/flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/runtime/operators/join/stream/StreamingMiniBatchJoinOperatorTest.java b/flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/runtime/operators/join/stream/StreamingMiniBatchJoinOperatorTest.java
index 62b8116a0b0..7e92f72cf5e 100644
--- a/flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/runtime/operators/join/stream/StreamingMiniBatchJoinOperatorTest.java
+++ b/flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/runtime/operators/join/stream/StreamingMiniBatchJoinOperatorTest.java
@@ -25,13 +25,13 @@ import org.apache.flink.table.runtime.operators.bundle.trigger.CountCoBundleTrig
 import org.apache.flink.table.runtime.operators.join.FlinkJoinType;
 import org.apache.flink.table.runtime.operators.join.stream.state.JoinInputSideSpec;
 import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
+import org.apache.flink.table.types.logical.BigIntType;
 import org.apache.flink.table.types.logical.CharType;
 import org.apache.flink.table.types.logical.LogicalType;
 import org.apache.flink.table.types.logical.RowType;
 import org.apache.flink.table.utils.HandwrittenSelectorUtil;
 import org.apache.flink.types.RowKind;
 
-import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.Tag;
 import org.junit.jupiter.api.Test;
 import org.junit.jupiter.api.TestInfo;
@@ -55,27 +55,6 @@ public final class StreamingMiniBatchJoinOperatorTest extends StreamingJoinOpera
 private RowDataKeySelector leftUniqueKeySelector;
 private RowDataKeySelector rightUniqueKeySelector;
 
-    @BeforeEach
-    public void beforeEach(TestInfo testInfo) throws Exception {
-        rightTypeInfo =
-                InternalTypeInfo.of(
-                        RowType.of(
-                                new LogicalType[] {
-                                    new CharType(false, 20),
-                                    new CharType(false, 20),
-                                    new CharType(true, 10)
-                                },
-                                new String[] {
-                                    "order_id#", "line_order_id0", "line_order_ship_mode"
-                                }));
-
-        rightKeySelector =
-                HandwrittenSelectorUtil.getRowDataSelector(
-                        new int[] {1},
-                        rightTypeInfo.toRowType().getChildren().toArray(new LogicalType[0]));
-        super.beforeEach(testInfo);
-    }
-
 @

(flink) branch master updated: [FLINK-35191][table-api] Support alter materialized table related syntax: suspend, resume, refresh, set and reset

2024-04-29 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 330f524d185 [FLINK-35191][table-api] Support alter materialized table related syntax: suspend, resume, refresh, set and reset
330f524d185 is described below

commit 330f524d185d575ceb679a6c587e9c39612e844c
Author: Feng Jin 
AuthorDate: Mon Apr 29 10:21:12 2024 +0800

    [FLINK-35191][table-api] Support alter materialized table related syntax: suspend, resume, refresh, set and reset

This closes #24737
---
 .../src/main/codegen/data/Parser.tdd   |  12 ++
 .../src/main/codegen/includes/parserImpls.ftl  |  94 +++
 .../sql/parser/ddl/SqlAlterMaterializedTable.java  |  61 +++
 .../ddl/SqlAlterMaterializedTableFreshness.java|  60 +++
 .../ddl/SqlAlterMaterializedTableOptions.java  |  67 
 .../ddl/SqlAlterMaterializedTableRefresh.java  |  61 +++
 .../ddl/SqlAlterMaterializedTableRefreshMode.java  |  62 +++
 .../parser/ddl/SqlAlterMaterializedTableReset.java |  67 
 .../ddl/SqlAlterMaterializedTableResume.java   |  74 
 .../ddl/SqlAlterMaterializedTableSuspend.java  |  47 ++
 .../flink/sql/parser/utils/ParserResource.java |   2 +-
 .../MaterializedTableStatementParserTest.java  | 187 -
 12 files changed, 792 insertions(+), 2 deletions(-)

diff --git a/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd b/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd
index dfb43353a4b..100e9edd2fb 100644
--- a/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd
+++ b/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd
@@ -35,6 +35,14 @@
 "org.apache.flink.sql.parser.ddl.SqlAddPartitions.AlterTableAddPartitionContext"
 "org.apache.flink.sql.parser.ddl.SqlAlterDatabase"
 "org.apache.flink.sql.parser.ddl.SqlAlterFunction"
+"org.apache.flink.sql.parser.ddl.SqlAlterMaterializedTable"
+"org.apache.flink.sql.parser.ddl.SqlAlterMaterializedTableFreshness"
+"org.apache.flink.sql.parser.ddl.SqlAlterMaterializedTableOptions"
+"org.apache.flink.sql.parser.ddl.SqlAlterMaterializedTableRefreshMode"
+"org.apache.flink.sql.parser.ddl.SqlAlterMaterializedTableReset"
+"org.apache.flink.sql.parser.ddl.SqlAlterMaterializedTableRefresh"
+"org.apache.flink.sql.parser.ddl.SqlAlterMaterializedTableResume"
+"org.apache.flink.sql.parser.ddl.SqlAlterMaterializedTableSuspend"
 "org.apache.flink.sql.parser.ddl.SqlAlterTable"
 "org.apache.flink.sql.parser.ddl.SqlAlterTable.AlterTableContext"
 "org.apache.flink.sql.parser.ddl.SqlAlterTableAdd"
@@ -191,6 +199,9 @@
 "STATISTICS"
 "STOP"
 "STRING"
+"SUSPEND"
+"REFRESH"
+"RESUME"
 "TABLES"
 "TIMESTAMP_LTZ"
 "TRY_CAST"
@@ -581,6 +592,7 @@
 "SqlShowCreate()"
 "SqlReplaceTable()"
 "SqlRichDescribeTable()"
+"SqlAlterMaterializedTable()"
 "SqlAlterTable()"
 "SqlAlterView()"
 "SqlShowModules()"
diff --git a/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl b/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
index b52f41aa951..95509e7b8da 100644
--- a/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
+++ b/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
@@ -1779,6 +1779,100 @@ SqlCreate SqlCreateMaterializedTable(Span s, boolean replace, boolean isTemporar
 }
 }
 
+/**
+* Parses alter materialized table.
+*/
+SqlAlterMaterializedTable SqlAlterMaterializedTable() :
+{
+    SqlParserPos startPos;
+    SqlIdentifier tableIdentifier;
+    SqlNodeList propertyList = SqlNodeList.EMPTY;
+    SqlNodeList propertyKeyList = SqlNodeList.EMPTY;
+    SqlNodeList partSpec = SqlNodeList.EMPTY;
+    SqlNode freshness = null;
+}
+{
+    <ALTER> <MATERIALIZED> <TABLE> { startPos = getPos(); }
+    tableIdentifier = CompoundIdentifier()
+    (
+        <SUSPEND>
+        {
+            return new SqlAlterMaterializedTableSuspend(startPos, tableIdentifier);
+        }
+    |
+        <RESUME>
+        [ <WITH> propertyList = TableProperties() ]
+        {
+            return new SqlAlterMaterializedTableResume(
+                startPos,
+                tableIdentifier,
+                propertyList);
+        }
+    |
+        <REFRESH>
+        [ <PARTITION> {
+            partSpec = new SqlNodeList(getPos());
+            PartitionSpecCommaList(partSpec);
+        }
+        ]
+        {
+            return new SqlAlterMat
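(The diff is cut off here in the archive. For context, the statements this grammar accepts look roughly like the following; this is sketched from the commit message and the parser rule names above, so the exact clause shapes are an approximation rather than a quote from the patch, and the table name is a placeholder:

ALTER MATERIALIZED TABLE my_catalog.my_db.my_mt SUSPEND;
ALTER MATERIALIZED TABLE my_catalog.my_db.my_mt RESUME WITH ('key' = 'value');
ALTER MATERIALIZED TABLE my_catalog.my_db.my_mt REFRESH PARTITION (ds = '2024-04-29');
ALTER MATERIALIZED TABLE my_catalog.my_db.my_mt SET ('key' = 'value');
ALTER MATERIALIZED TABLE my_catalog.my_db.my_mt RESET ('key');)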

Re: [DISCUSS] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table

2024-04-28 Thread Ron Liu
Hi, Lorenzo

> I have a question there: how can the gateway update the refreshHandler in
the Catalog before getting it from the scheduler?

The refreshHandler in CatalogMaterializedTable is null before it is obtained from
the scheduler; you can look at CatalogMaterializedTable.Builder[1] for
more details.

> You have a typo here: WorkflowScheudler -> WorkflowScheduler :)

Fixed it now, thanks very much.

> For the operations part, I still think that the FLIP would benefit from
providing a specific pattern for operations. You could either propose a
command pattern [1] or a visitor pattern (where the scheduler visits the
operation to get relevant info) [2] for those operations at your choice.

Thank you for your input, I find it very useful. I tried to understand your
thinking through code and implemented the following pseudo code using the
visitor design pattern:
1. First, define WorkflowOperationVisitor, providing several overloaded
visit methods:

public interface WorkflowOperationVisitor {

    <T> T visit(CreateWorkflowOperation<T> createWorkflowOperation);

    void visit(ModifyWorkflowOperation operation);
}

2. Then, add the accept method to WorkflowOperation:

@PublicEvolving
public interface WorkflowOperation {

    void accept(WorkflowOperationVisitor visitor);
}


3. Finally, in the WorkflowScheduler, call the implementation class of
WorkflowOperationVisitor to complete the corresponding operations.
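
For illustration, step 3 would boil down to something like this (the visitor implementation class name is just a placeholder):

// inside a concrete WorkflowScheduler implementation
WorkflowOperationVisitor visitor = new MySchedulerOperationVisitor();
workflowOperation.accept(visitor);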

I can see the value of this design pattern purely from a code design point of
view, but looking at our specific scenario:

1. For CreateWorkflowOperation the visit method needs to return a
RefreshHandler, while for ModifyWorkflowOperation (such as suspend and resume)
the visit method doesn't need to return anything. So the visit signatures
can't be unified across the different WorkflowOperation types, and I think the
visitor may not be applicable here.

2. In addition, using the visitor pattern adds complexity for the
WorkflowScheduler implementer, who needs to implement one more interface,
WorkflowOperationVisitor. This interface is not used by the engine itself, so
I don't see any benefit from this design at the moment.

3. Furthermore, I don't see the benefit at this time; simpler is better here.
Similar to the design of CatalogModificationEvent[2] and
CatalogModificationListener[3], the developer only needs an instanceof check,
as sketched below.

To summarize, I don't think there is a need to introduce a command or visitor
pattern at present.
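
As a rough sketch of that instanceof-based alternative (the method name and the exception message are made up; the operation types and WorkflowException come from the pseudo code and discussion above):

public RefreshHandler handle(WorkflowOperation operation) throws WorkflowException {
    if (operation instanceof CreateWorkflowOperation) {
        // create the workflow in the external scheduler and return its RefreshHandler
        return createWorkflow((CreateWorkflowOperation) operation);
    } else if (operation instanceof ModifyWorkflowOperation) {
        // suspend / resume / modify an existing workflow, nothing to return
        modifyWorkflow((ModifyWorkflowOperation) operation);
        return null;
    } else {
        throw new WorkflowException("Unsupported workflow operation: " + operation);
    }
}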

> About the REST API, I will wait for your offline discussion :)

After discussing with Shengkai offline, there is no need for this REST API
to support refreshing multiple tables at the same time, so it is more
appropriate to put the materialized table identifier in the path of
the URL. Thanks for the suggestion.
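
Concretely, the refresh endpoint would then take a shape roughly like the following (the exact path segments and field names below are my assumption, not a final decision):

POST /sessions/:session_handle/materialized-tables/:identifier/refresh
{
  "scheduleTime": "2024-04-28 00:00:00"
}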

[1]
https://github.com/apache/flink/blob/e412402ca4dfc438e28fb990dc53ea7809430aee/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogMaterializedTable.java#L264
[2]
https://github.com/apache/flink/blob/b1544e4e513d2b75b350c20dbb1c17a8232c22fd/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/listener/CatalogModificationEvent.java#L28
[3]
https://github.com/apache/flink/blob/b1544e4e513d2b75b350c20dbb1c17a8232c22fd/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/listener/CatalogModificationListener.java#L31

Best,
Ron

Ron Liu  于2024年4月28日周日 23:53写道:

> Hi, Shengkai
>
> Thanks for your feedback and suggestion, it looks very useful for this
> proposal, regarding your question I made the following optimization:
>
> > *WorkflowScheduler*
> > 1. How to get the exception details if `modifyRefreshWorkflow` fails?
> > 2. Could you give us an example about how to configure the scheduler?
>
> 1. Added a new WorkflowException, WorkflowScheduler's related method
> signature will throw WorkflowException, when creating or modifying Workflow
> encountered an exception, so that the framework will sense and deal with it.
>
> 2. Added a new Configuration section, introduced a new Option, and gave an
> example of how to define the Scheduler in flink-conf.yaml.
>
> > *SQL Gateway*
> > 1. SqlGatewayService requires Session as the input, but the REST API
> doesn't need any Session information.
> > 2. Use "-" instead of "_" in the REST URI and camel case for fields in
> request/response
> > 3. Do we need scheduleTime and scheduleTimeFormat together?
>
> 1. If it is designed as a synchronous API, it may lead to network jitter,
> thread resource exhaustion and other problems, which I have not considered
> before. The asynchronous API, although increasing the cost of use for the
> user, is friendly to the SqlGatewayService, as well as the Client thread
> resources. In summary as discussed offline, so I also tend to think that
> all

Re: [DISCUSS] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table

2024-04-28 Thread Ron Liu
Hi, Shengkai

Thanks for your feedback and suggestion, it looks very useful for this
proposal, regarding your question I made the following optimization:

> *WorkflowScheduler*
> 1. How to get the exception details if `modifyRefreshWorkflow` fails?
> 2. Could you give us an example about how to configure the scheduler?

1. Added a new WorkflowException. WorkflowScheduler's related method
signatures will throw WorkflowException when creating or modifying a workflow
runs into an exception, so that the framework can detect and handle it (a
rough signature sketch follows after point 2).

2. Added a new Configuration section, introduced a new Option, and gave an
example of how to define the Scheduler in flink-conf.yaml.
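
Regarding point 1, a minimal sketch of how the throwing signature could look (the generic bound and parameter type are my assumptions based on the earlier pseudo code; only modifyRefreshWorkflow and WorkflowException are from the discussion):

public interface WorkflowScheduler<T extends RefreshHandler> {

    // fails with WorkflowException so that the framework can detect and handle it
    void modifyRefreshWorkflow(ModifyWorkflowOperation operation) throws WorkflowException;
}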

> *SQL Gateway*
> 1. SqlGatewayService requires Session as the input, but the REST API
doesn't need any Session information.
> 2. Use "-" instead of "_" in the REST URI and camel case for fields in
request/response
> 3. Do we need scheduleTime and scheduleTimeFormat together?

1. If it is designed as a synchronous API, it may run into network jitter,
thread resource exhaustion and other problems, which I had not considered
before. An asynchronous API, although increasing the cost of use for the
user, is friendly to the SqlGatewayService as well as to the client's thread
resources. In summary, as discussed offline, I also tend to think that
all APIs of SqlGateway should be unified: all should be asynchronous
APIs and bound to a session. I have updated the REST API section in the FLIP.

2. Thanks for the reminder, it has been updated.

3. After rethinking, I think it can indeed be simpler: there is no need to
pass in a custom time format. scheduleTime can be unified to the SQL
standard timestamp format 'yyyy-MM-dd HH:mm:ss', which is able to satisfy the
time-related needs of materialized tables.

Based on your feedback, I have optimized and updated the FLIP related
section.

Best,
Ron


Shengkai Fang  于2024年4月28日周日 15:47写道:

> Hi, Liu.
>
> Thanks for your proposal. I have some questions about the FLIP:
>
> *WorkflowScheduler*
>
> 1. How to get the exception details if `modifyRefreshWorkflow` fails?
> 2. Could you give us an example about how to configure the scheduler?
>
> *SQL Gateway*
>
> 1. SqlGatewayService requires Session as the input, but the REST API
> doesn't need any Session information.
>
> From the perspective of a gateway developer, I tend to unify the API of the
> SQL gateway, binding all concepts to the session. On the one hand, this
> approach allows us to reduce maintenance and understanding costs, as we
> only need to maintain one set of architecture to complete basic concepts.
> On the other hand, the benefits of an asynchronous architecture are
> evident: we maintain state on the server side. If the request is a long
> connection, even in the face of network layer jitter, we can still find the
> original result through session and operation handles.
>
> Using asynchronous APIs may increase the development cost for users, but
> from a platform perspective, if a request remains in a blocking state for a
> long time, it also becomes a burden on the platform's JVM. This is because
> thread switching and maintenance require certain resources.
>
> 2. Use "-" instead of "_" in the REST URI and camel case for fields in
> request/response
>
> Please follow the Flink REST Design.
>
> 3. Do we need scheduleTime and scheduleTimeFormat together?
>
> I think we can use SQL timestamp format or ISO timestamp format. It is not
> necessary to pass time in any specific format.
>
> https://en.wikipedia.org/wiki/ISO_8601
>
> Best,
> Shengkai
>

