[jira] [Updated] (SPARK-48555) Support Column type for several SQL functions in scala and python

2024-06-06 Thread Ron Serruya (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Serruya updated SPARK-48555:

Priority: Major  (was: Minor)

> Support Column type for several SQL functions in scala and python
> -
>
> Key: SPARK-48555
> URL: https://issues.apache.org/jira/browse/SPARK-48555
> Project: Spark
>  Issue Type: New Feature
>  Components: Connect, PySpark, Spark Core
>Affects Versions: 3.5.1
>Reporter: Ron Serruya
>Priority: Major
>
> Currently, several SQL functions accept both native types and Columns, but 
> only accept native types in their scala/python APIs:
> * array_remove (works in SQL, scala, not in python)
> * array_position (works in SQL, scala, not in python)
> * map_contains_key (works in SQL, scala, not in python)
> * substring (works only in SQL)
> For example, this is possible in SQL:
> {code:python}
> spark.sql("select array_remove(col1, col2) from values(array(1,2,3), 2)")
> {code}
> But not in python:
> {code:python}
> df.select(F.array_remove(F.col("col1"), F.col("col2")))
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



Re: Logical replication type- WAL recovery fails and changes the size of wal segment in archivedir

2024-06-06 Thread Ron Johnson
On Wed, Jun 5, 2024 at 6:26 AM Laurenz Albe 
wrote:

> On Wed, 2024-06-05 at 06:36 +, Meera Nair wrote:
> > 2024-06-05 11:41:32.369 IST [54369] LOG:  restored log file
> "00050001006A" from archive
> > 2024-06-05 11:41:33.112 IST [54369] LOG:  restored log file
> "00050001006B" from archive
> > cp: cannot stat ‘/home/pgsql/wmaster/00050001006C’: No such
> file or directory
> > 2024-06-05 11:41:33.167 IST [54369] LOG:  redo done at 1/6B000100
> > 2024-06-05 11:41:33.172 IST [54369] FATAL:  archive file
> "00050001006B" has wrong size: 0 instead of 16777216
> > 2024-06-05 11:41:33.173 IST [54367] LOG:  startup process (PID 54369)
> exited with exit code 1
> > 2024-06-05 11:41:33.173 IST [54367] LOG:  terminating any other active
> server processes
> > 2024-06-05 11:41:33.174 IST [54375] FATAL:  archive command was
> terminated by signal 3: Quit
> > 2024-06-05 11:41:33.174 IST [54375] DETAIL:  The failed archive command
> was: cp pg_wal/00050001006B
> /home/pgsql/wmaster/00050001006B
> > 2024-06-05 11:41:33.175 IST [54367] LOG:  archiver process (PID 54375)
> exited with exit code 1
> > 2024-06-05 11:41:33.177 IST [54367] LOG:  database system is shut down
> >
> > Here ‘/home/pgsql/wmaster’ is my archivedir (the folder where WAL
> segments are restored from)
> >
> > Before attempting start, size of
> > 00050001006B file was 16 MB.
> > After failing to detect 00050001006C, there is a FATAL error
> saying wrong size for 00050001006B
> > Now the size of 00050001006B is observed as 2 MB. Size of
> all other WAL segments remain 16 MB.
> >
> > -rw--- 1 postgres postgres  2359296 Jun  5 11:34
> 00050001006B
>
> That looks like you have "archive_mode = always", and "archive_command"
> writes
> back to the archive.  Don't do that.
>

In fact, don't write your own PITR backup process.  Use something like
PgBackRest or BarMan.


Re: Can't Remote connection by IpV6

2024-06-06 Thread Ron Johnson
On Thu, Jun 6, 2024 at 11:03 AM Adrian Klaver 
wrote:

> On 6/6/24 07:46, Marcelo Marloch wrote:
> > Hi everyone, is it possible to remote connect through IpV6? IpV4 works
> > fine but I cant connect through V6
> >
> > postgresql.conf is set to listen on all addresses, and pg_hba.conf is set with
> > "host all all :: md5". I've tried ::/0 and ::0/0 but had no success
>
> Is the firewall open for IPv6 connections to the Postgres port?
>

netcat (the nmap project ships it as ncat) is great for this.  There's a Windows client, too.
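
For a quick client-side check, a minimal Python sketch (the host name here is
hypothetical; 5432 is the default Postgres port) that forces the connection
attempt over IPv6:

import socket

host = "db.example.com"   # hypothetical server name
port = 5432               # default Postgres port

# Resolve AAAA records only and attempt a plain TCP connect over IPv6;
# a refused or filtered port raises OSError here.
for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
        host, port, socket.AF_INET6, socket.SOCK_STREAM):
    with socket.socket(family, socktype, proto) as s:
        s.settimeout(5)
        s.connect(sockaddr)
        print("IPv6 TCP connection to", sockaddr, "succeeded")
    break

If that connects but psql still fails, the problem is more likely pg_hba.conf
than the network.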


[jira] [Updated] (SPARK-48091) Using `explode` together with `transform` in the same select statement causes aliases in the transformed column to be ignored

2024-06-06 Thread Ron Serruya (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Serruya updated SPARK-48091:

Description: 
When using the `explode` function and the `transform` function in the same select 
statement, aliases used inside the transformed column are ignored.

This behavior only happens when using the pyspark and scala APIs, but not when 
using the SQL API

 
{code:java}
from pyspark.sql import functions as F

# Create the df
df = spark.createDataFrame([
{"id": 1, "array1": ['a', 'b'], 'array2': [2,3,4]}
]){code}
Good case, where all aliases are used

 
{code:java}
df.select(
F.transform(
'array2',
lambda x: F.struct(x.alias("some_alias"), 
F.col("id").alias("second_alias"))
).alias("new_array2")
).printSchema() 

root
 |-- new_array2: array (nullable = true)
 ||-- element: struct (containsNull = false)
 |||-- some_alias: long (nullable = true)
 |||-- second_alias: long (nullable = true){code}
Bad case: when using explode, the aliases inside the transformed column are 
ignored; `id` is kept instead of `second_alias`, and `x_17` is used 
instead of `some_alias`

 

 
{code:java}
df.select(
F.explode("array1").alias("exploded"),
F.transform(
'array2',
lambda x: F.struct(x.alias("some_alias"), 
F.col("id").alias("second_alias"))
).alias("new_array2")
).printSchema()

root
 |-- exploded: string (nullable = true)
 |-- new_array2: array (nullable = true)
 ||-- element: struct (containsNull = false)
 |||-- x_17: long (nullable = true)
 |||-- id: long (nullable = true) {code}
 

 {code:scala}
import org.apache.spark.sql.functions._
var df2 = df.select(array(lit(1), lit(2), lit(3)).as("my_array"), array(lit(1), 
lit(2), lit(3)).as("my_array2"))

df2.select(
  explode($"my_array").as("exploded"),
  transform($"my_array2", (x) => struct(x.as("data"))).as("my_struct")
).printSchema
{code}


{noformat}
root
 |-- exploded: integer (nullable = false)
 |-- my_struct: array (nullable = false)
 ||-- element: struct (containsNull = false)
 |||-- x_2: integer (nullable = false)
{noformat}


 

When using the SQL API instead, it works fine
{code:java}
spark.sql(
"""
select explode(array1) as exploded, transform(array2, x-> struct(x as 
some_alias, id as second_alias)) as array2 from {df}
""", df=df
).printSchema()

root
 |-- exploded: string (nullable = true)
 |-- array2: array (nullable = true)
 ||-- element: struct (containsNull = false)
 |||-- some_alias: long (nullable = true)
 |||-- second_alias: long (nullable = true) {code}
 

Workaround: for now, F.named_struct can be used instead
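
For reference, a minimal sketch of that named_struct workaround (assuming the df 
created above and PySpark 3.5+, where F.named_struct is available):
{code:python}
from pyspark.sql import functions as F

# Workaround sketch: named_struct takes explicit (name, value) pairs,
# so no alias resolution is involved inside transform().
df.select(
    F.explode("array1").alias("exploded"),
    F.transform(
        "array2",
        lambda x: F.named_struct(
            F.lit("some_alias"), x,
            F.lit("second_alias"), F.col("id"),
        ),
    ).alias("new_array2"),
).printSchema()
{code}
With this, the element fields should keep the names some_alias and second_alias 
even when explode appears in the same select.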

  was:
When using the `explode` function and the `transform` function in the same select 
statement, aliases used inside the transformed column are ignored.

This behaviour only happens when using the pyspark API, and not when using the SQL 
API

 
{code:java}
from pyspark.sql import functions as F

# Create the df
df = spark.createDataFrame([
{"id": 1, "array1": ['a', 'b'], 'array2': [2,3,4]}
]){code}
Good case, where all aliases are used

 
{code:java}
df.select(
F.transform(
'array2',
lambda x: F.struct(x.alias("some_alias"), 
F.col("id").alias("second_alias"))
).alias("new_array2")
).printSchema() 

root
 |-- new_array2: array (nullable = true)
 ||-- element: struct (containsNull = false)
 |||-- some_alias: long (nullable = true)
 |||-- second_alias: long (nullable = true){code}
Bad case: when using explode, the aliases inside the transformed column are 
ignored; `id` is kept instead of `second_alias`, and `x_17` is used 
instead of `some_alias`

 

 
{code:java}
df.select(
F.explode("array1").alias("exploded"),
F.transform(
'array2',
lambda x: F.struct(x.alias("some_alias"), 
F.col("id").alias("second_alias"))
).alias("new_array2")
).printSchema()

root
 |-- exploded: string (nullable = true)
 |-- new_array2: array (nullable = true)
 ||-- element: struct (containsNull = false)
 |||-- x_17: long (nullable = true)
 |||-- id: long (nullable = true) {code}
 

 

 

When using the SQL API instead, it works fine
{code:java}
spark.sql(
"""
select explode(array1) as exploded, transform(array2, x-> struct(x as 
some_alias, id as second_alias)) as array2 from {df}
""", df=df
).printSchema()

root
 |-- exploded: string (nullable = true)
 |-- array2: array (n

[jira] [Updated] (SPARK-48091) Using `explode` together with `transform` in the same select statement causes aliases in the transformed column to be ignored

2024-06-06 Thread Ron Serruya (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Serruya updated SPARK-48091:

Environment: Scala 2.12.15, Python 3.10, 3.12, OSX 14.4 and Databricks DBR 
13.3, 14.3, Pyspark 3.4.0, 3.5.0, 3.5.1   (was: Python 3.10, 3.12, OSX 14.4 and 
Databricks DBR 13.3, 14.3, Pyspark 3.4.0, 3.5.0, 3.5.1)

> Using `explode` together with `transform` in the same select statement causes 
> aliases in the transformed column to be ignored
> -
>
> Key: SPARK-48091
> URL: https://issues.apache.org/jira/browse/SPARK-48091
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.4.0, 3.5.0, 3.5.1
> Environment: Scala 2.12.15, Python 3.10, 3.12, OSX 14.4 and 
> Databricks DBR 13.3, 14.3, Pyspark 3.4.0, 3.5.0, 3.5.1 
>Reporter: Ron Serruya
>Priority: Minor
>  Labels: alias
>
> When using the `explode` function and the `transform` function in the same select 
> statement, aliases used inside the transformed column are ignored.
> This behavior only happens when using the pyspark and scala APIs, but not 
> when using the SQL API
>  
> {code:java}
> from pyspark.sql import functions as F
> # Create the df
> df = spark.createDataFrame([
> {"id": 1, "array1": ['a', 'b'], 'array2': [2,3,4]}
> ]){code}
> Good case, where all aliases are used
>  
> {code:java}
> df.select(
> F.transform(
> 'array2',
> lambda x: F.struct(x.alias("some_alias"), 
> F.col("id").alias("second_alias"))
> ).alias("new_array2")
> ).printSchema() 
> root
>  |-- new_array2: array (nullable = true)
>  ||-- element: struct (containsNull = false)
>  |||-- some_alias: long (nullable = true)
>  |||-- second_alias: long (nullable = true){code}
> Bad case: when using explode, the aliases inside the transformed column are 
> ignored; `id` is kept instead of `second_alias`, and `x_17` is used 
> instead of `some_alias`
>  
>  
> {code:java}
> df.select(
> F.explode("array1").alias("exploded"),
> F.transform(
> 'array2',
> lambda x: F.struct(x.alias("some_alias"), 
> F.col("id").alias("second_alias"))
> ).alias("new_array2")
> ).printSchema()
> root
>  |-- exploded: string (nullable = true)
>  |-- new_array2: array (nullable = true)
>  ||-- element: struct (containsNull = false)
>  |||-- x_17: long (nullable = true)
>  |||-- id: long (nullable = true) {code}
>  
>  {code:scala}
> import org.apache.spark.sql.functions._
> var df2 = df.select(array(lit(1), lit(2), lit(3)).as("my_array"), 
> array(lit(1), lit(2), lit(3)).as("my_array2"))
> df2.select(
>   explode($"my_array").as("exploded"),
>   transform($"my_array2", (x) => struct(x.as("data"))).as("my_struct")
> ).printSchema
> {code}
> {noformat}
> root
>  |-- exploded: integer (nullable = false)
>  |-- my_struct: array (nullable = false)
>  ||-- element: struct (containsNull = false)
>  |||-- x_2: integer (nullable = false)
> {noformat}
>  
> When using the SQL API instead, it works fine
> {code:java}
> spark.sql(
> """
> select explode(array1) as exploded, transform(array2, x-> struct(x as 
> some_alias, id as second_alias)) as array2 from {df}
> """, df=df
> ).printSchema()
> root
>  |-- exploded: string (nullable = true)
>  |-- array2: array (nullable = true)
>  ||-- element: struct (containsNull = false)
>  |||-- some_alias: long (nullable = true)
>  |||-- second_alias: long (nullable = true) {code}
>  
> Workaround: for now, F.named_struct can be used instead



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-48091) Using `explode` together with `transform` in the same select statement causes aliases in the transformed column to be ignored

2024-06-06 Thread Ron Serruya (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Serruya updated SPARK-48091:

Component/s: Spark Core
 (was: PySpark)

> Using `explode` together with `transform` in the same select statement causes 
> aliases in the transformed column to be ignored
> -
>
> Key: SPARK-48091
> URL: https://issues.apache.org/jira/browse/SPARK-48091
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.4.0, 3.5.0, 3.5.1
> Environment: Python 3.10, 3.12, OSX 14.4 and Databricks DBR 13.3, 
> 14.3, Pyspark 3.4.0, 3.5.0, 3.5.1
>Reporter: Ron Serruya
>Priority: Minor
>  Labels: alias
>
> When using the `explode` function and the `transform` function in the same select 
> statement, aliases used inside the transformed column are ignored.
> This behaviour only happens when using the pyspark API, and not when using the SQL 
> API
>  
> {code:java}
> from pyspark.sql import functions as F
> # Create the df
> df = spark.createDataFrame([
> {"id": 1, "array1": ['a', 'b'], 'array2': [2,3,4]}
> ]){code}
> Good case, where all aliases are used
>  
> {code:java}
> df.select(
> F.transform(
> 'array2',
> lambda x: F.struct(x.alias("some_alias"), 
> F.col("id").alias("second_alias"))
> ).alias("new_array2")
> ).printSchema() 
> root
>  |-- new_array2: array (nullable = true)
>  ||-- element: struct (containsNull = false)
>  |||-- some_alias: long (nullable = true)
>  |||-- second_alias: long (nullable = true){code}
> Bad case: when using explode, the aliases inside the transformed column are 
> ignored; `id` is kept instead of `second_alias`, and `x_17` is used 
> instead of `some_alias`
>  
>  
> {code:java}
> df.select(
> F.explode("array1").alias("exploded"),
> F.transform(
> 'array2',
> lambda x: F.struct(x.alias("some_alias"), 
> F.col("id").alias("second_alias"))
> ).alias("new_array2")
> ).printSchema()
> root
>  |-- exploded: string (nullable = true)
>  |-- new_array2: array (nullable = true)
>  ||-- element: struct (containsNull = false)
>  |||-- x_17: long (nullable = true)
>  |||-- id: long (nullable = true) {code}
>  
>  
>  
> When using the SQL API instead, it works fine
> {code:java}
> spark.sql(
> """
> select explode(array1) as exploded, transform(array2, x-> struct(x as 
> some_alias, id as second_alias)) as array2 from {df}
> """, df=df
> ).printSchema()
> root
>  |-- exploded: string (nullable = true)
>  |-- array2: array (nullable = true)
>  ||-- element: struct (containsNull = false)
>  |||-- some_alias: long (nullable = true)
>  |||-- second_alias: long (nullable = true) {code}
>  
> Workaround: for now, F.named_struct can be used instead



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-48091) Using `explode` together with `transform` in the same select statement causes aliases in the transformed column to be ignored

2024-06-06 Thread Ron Serruya (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Serruya updated SPARK-48091:

Labels: alias  (was: PySpark alias)

> Using `explode` together with `transform` in the same select statement causes 
> aliases in the transformed column to be ignored
> -
>
> Key: SPARK-48091
> URL: https://issues.apache.org/jira/browse/SPARK-48091
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 3.4.0, 3.5.0, 3.5.1
> Environment: Python 3.10, 3.12, OSX 14.4 and Databricks DBR 13.3, 
> 14.3, Pyspark 3.4.0, 3.5.0, 3.5.1
>Reporter: Ron Serruya
>Priority: Minor
>  Labels: alias
>
> When using the `explode` function and the `transform` function in the same select 
> statement, aliases used inside the transformed column are ignored.
> This behaviour only happens when using the pyspark API, and not when using the SQL 
> API
>  
> {code:java}
> from pyspark.sql import functions as F
> # Create the df
> df = spark.createDataFrame([
> {"id": 1, "array1": ['a', 'b'], 'array2': [2,3,4]}
> ]){code}
> Good case, where all aliases are used
>  
> {code:java}
> df.select(
> F.transform(
> 'array2',
> lambda x: F.struct(x.alias("some_alias"), 
> F.col("id").alias("second_alias"))
> ).alias("new_array2")
> ).printSchema() 
> root
>  |-- new_array2: array (nullable = true)
>  ||-- element: struct (containsNull = false)
>  |||-- some_alias: long (nullable = true)
>  |||-- second_alias: long (nullable = true){code}
> Bad case: when using explode, the aliases inside the transformed column are 
> ignored; `id` is kept instead of `second_alias`, and `x_17` is used 
> instead of `some_alias`
>  
>  
> {code:java}
> df.select(
> F.explode("array1").alias("exploded"),
> F.transform(
> 'array2',
> lambda x: F.struct(x.alias("some_alias"), 
> F.col("id").alias("second_alias"))
> ).alias("new_array2")
> ).printSchema()
> root
>  |-- exploded: string (nullable = true)
>  |-- new_array2: array (nullable = true)
>  ||-- element: struct (containsNull = false)
>  |||-- x_17: long (nullable = true)
>  |||-- id: long (nullable = true) {code}
>  
>  
>  
> When using the SQL API instead, it works fine
> {code:java}
> spark.sql(
> """
> select explode(array1) as exploded, transform(array2, x-> struct(x as 
> some_alias, id as second_alias)) as array2 from {df}
> """, df=df
> ).printSchema()
> root
>  |-- exploded: string (nullable = true)
>  |-- array2: array (nullable = true)
>  ||-- element: struct (containsNull = false)
>  |||-- some_alias: long (nullable = true)
>  |||-- second_alias: long (nullable = true) {code}
>  
> Workaround: for now, F.named_struct can be used instead



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[Int-area] Re: Reverse Traceroute Alternative

2024-06-06 Thread Ron Bonica

Authors,

Just a reminder regarding the one significant issue that was raised during our 
phone call:

When a reverse traceroute message reaches its destination (i.e., the 
initiating node), what prevents it from being delivered to an application?


  Ron



Juniper Business Use Only
___
Int-area mailing list -- int-area@ietf.org
To unsubscribe send an email to int-area-le...@ietf.org


[jira] [Created] (SPARK-48555) Support Column type for several SQL functions in scala and python

2024-06-06 Thread Ron Serruya (Jira)
Ron Serruya created SPARK-48555:
---

 Summary: Support Column type for several SQL functions in scala 
and python
 Key: SPARK-48555
 URL: https://issues.apache.org/jira/browse/SPARK-48555
 Project: Spark
  Issue Type: New Feature
  Components: Connect, PySpark, Spark Core
Affects Versions: 3.5.1
Reporter: Ron Serruya


Currently, several SQL functions accept both native types and Columns, but only 
accept native types in their scala/python APIs:

* array_remove (works in SQL, scala, not in python)
* array_position (works in SQL, scala, not in python)
* map_contains_key (works in SQL, scala, not in python)
* substring (works only in SQL)

For example, this is possible in SQL:

{code:python}
spark.sql("select array_remove(col1, col2) from values(array(1,2,3), 2)")
{code}

But not in python:

{code:python}
df.select(F.array_remove(F.col("col1"), F.col("col2")))
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-48555) Support Column type for several SQL functions in scala and python

2024-06-06 Thread Ron Serruya (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Serruya updated SPARK-48555:

Description: 
Currently, several SQL functions accept both native types and Columns, but only 
accept native types in their scala/python APIs:

* array_remove (works in SQL, scala, not in python)
* array_position (works in SQL, scala, not in python)
* map_contains_key (works in SQL, scala, not in python)
* substring (works only in SQL)

For example, this is possible in SQL:

{code:python}
spark.sql("select array_remove(col1, col2) from values(array(1,2,3), 2)")
{code}

But not in python:
{code:python}
df.select(F.array_remove(F.col("col1"), F.col("col2")))
{code}
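
Until the python signatures accept Columns, one possible workaround is to route 
the call through F.expr; a minimal sketch (assuming an active SparkSession named 
spark):

{code:python}
from pyspark.sql import functions as F

# Workaround sketch: go through the SQL expression parser, which already
# accepts a column reference as the second argument of array_remove.
df = spark.createDataFrame([([1, 2, 3], 2)], ["col1", "col2"])
df.select(F.expr("array_remove(col1, col2)").alias("removed")).show()
{code}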

  was:
Currently, several SQL functions accept both native types and Columns, but only 
accept native types in their scala/python APIs:

* array_remove (works in SQL, scala, not in python)
* array_position (works in SQL, scala, not in python)
* map_contains_key (works in SQL, scala, not in python)
* substring (works only in SQL)

For example, this is possible in SQL:

{code:python}
spark.sql("select array_remove(col1, col2) from values(array(1,2,3), 2)")
{code}

{code:python}
df.select(F.array_remove(F.col("col1"), F.col("col2")))
{code}


> Support Column type for several SQL functions in scala and python
> -
>
> Key: SPARK-48555
> URL: https://issues.apache.org/jira/browse/SPARK-48555
> Project: Spark
>  Issue Type: New Feature
>  Components: Connect, PySpark, Spark Core
>Affects Versions: 3.5.1
>Reporter: Ron Serruya
>Priority: Minor
>
> Currently, several SQL functions accept both native types and Columns, but 
> only accept native types in their scala/python APIs:
> * array_remove (works in SQL, scala, not in python)
> * array_position (works in SQL, scala, not in python)
> * map_contains_key (works in SQL, scala, not in python)
> * substring (works only in SQL)
> For example, this is possible in SQL:
> {code:python}
> spark.sql("select array_remove(col1, col2) from values(array(1,2,3), 2)")
> {code}
> But not in python:
> {code:python}
> df.select(F.array_remove(F.col("col1"), F.col("col2")))
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



(flink) branch master updated (9708f9fd657 -> f462926ad9c)

2024-06-06 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 9708f9fd657 [FLINK-35501] Use common IO thread pool for RocksDB data 
transfer
 add 42289bd2c69 [FLINK-35201][table] Support the execution of drop 
materialized table in full refresh mode
 add c862fa60119 [FLINK-35201][table] Enhance function names in 
MaterializedTableStatementITCase for better readability
 add f462926ad9c [FLINK-35201][table] Remove unnecessary logs in 
MaterializedTableManager

No new revisions were added by this update.

Summary of changes:
 .../MaterializedTableManager.java  | 133 +++--
 .../scheduler/EmbeddedQuartzScheduler.java |   5 -
 .../AbstractMaterializedTableStatementITCase.java  |  23 ++--
 .../service/MaterializedTableStatementITCase.java  | 110 +++--
 .../workflow/EmbeddedSchedulerRelatedITCase.java   |  27 +
 5 files changed, 182 insertions(+), 116 deletions(-)



Re: Heading Level Access In Safari Browser

2024-06-06 Thread Ron Canazzi

Hi Richard,

This is on the iPhone. I don't understand this arrows and shift key 
business.



On 5/31/2024 12:57 PM, Richard Turner wrote:
When on a web site or in your html file, turn on quicknav if it isn't 
on using left+right arrows together.
Then, press VO+q to turn on single letter quicknav.  I have the VO 
command as control+Options so control+Options+q.
Then, you can use the single numbers, or h for the next heading, shift 
plus h for previous, or even shift+1 for the previous heading level 1, etc.

HTH,

Richard, USA

“Grandma always told us, “Be careful when you pray for patience. God 
stores it on the other side of Hell and you will have to go through 
Hell to get it.”


-- Cedrick Bridgeforth

My web site: https://www.turner42.com/




On May 31, 2024, at 9:18 AM, Mario Eiland  wrote:

Use the rotor while in the Safari app and look for headings. Once 
you hear headings then flick down with one finger and that should 
take you from heading to heading. To go up flick up.
If you can't find the heading option in the rotor then you must add 
it in the VoiceOver rotor settings.


Good luck!

-Original Message-
From: viphone@googlegroups.com  On Behalf 
Of Ron Canazzi

Sent: Friday, May 31, 2024 8:42 AM
To: ViPhone List 
Subject: Heading Level Access In Safari Browser

Hi Group,

I finally was able to change some settings in Safari Browser on the 
iPhone to get it to display HTML files that are stored locally on the 
iPhone.  I  created my modified Dice Football Game Play Sheet by 
using headings to more quickly navigate from play list to play list. 
 I have the lists separated into running plays, kicking plays, 
passing plays and conversions at level one and the various list of 
items such as short pass, long pass and screen pass for the passing 
plays and the running plays such as left end run, right tackle play 
and reverse at heading level two.


Is there any way to navigate by heading levels using a quick number 
scheme such as is done on Windows desktops with quick key number 
navigation such as number one for heading level one and number two 
for heading level two on the iPhone?


Thanks for any help.

--
Signature:
For a nation to admit it has done grievous wrongs and will strive to 
correct them for the betterment of all is no vice; For a nation to 
claim it has always been great, needs no improvement  and to cling to 
its past achievements is no virtue!



--
The following information is important for all members of the V iPhone 
list.


If you have any questions or concerns about the running of this list, 
or if you feel that a member's post is inappropriate, please contact 
the owners or moderators directly rather than posting on the list itself.


Your V iPhone list moderator is Mark Taylor. Mark can be reached at: 
mk...@ucla.edu. Your list owner is Cara Quinn - you can reach Cara at 
caraqu...@caraquinn.com


The archives for this list can be searched at:
http://www.mail-archive.com/viphone@googlegroups.com/
---
You received this message because you are subscribed to the Google 
Groups "VIPhone" group.
To unsubscribe from this group 

(flink) 02/03: [FLINK-35200][table] Support the execution of suspend, resume materialized table in full refresh mode

2024-06-04 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 9b51711d00a2e1bd93f5a474b9c99b542aaf27cf
Author: Feng Jin 
AuthorDate: Sat Jun 1 23:43:54 2024 +0800

[FLINK-35200][table] Support the execution of suspend, resume materialized 
table in full refresh mode

This closes #24877
---
 .../MaterializedTableManager.java  | 308 ++---
 .../AbstractMaterializedTableStatementITCase.java  |  12 +-
 ...GatewayRestEndpointMaterializedTableITCase.java |  10 +-
 .../service/MaterializedTableStatementITCase.java  | 130 -
 4 files changed, 337 insertions(+), 123 deletions(-)

diff --git 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
index eeb6b5109e3..ea2a56e2010 100644
--- 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
+++ 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
@@ -56,6 +56,9 @@ import 
org.apache.flink.table.refresh.RefreshHandlerSerializer;
 import org.apache.flink.table.types.logical.LogicalTypeFamily;
 import org.apache.flink.table.workflow.CreatePeriodicRefreshWorkflow;
 import org.apache.flink.table.workflow.CreateRefreshWorkflow;
+import org.apache.flink.table.workflow.ModifyRefreshWorkflow;
+import org.apache.flink.table.workflow.ResumeRefreshWorkflow;
+import org.apache.flink.table.workflow.SuspendRefreshWorkflow;
 import org.apache.flink.table.workflow.WorkflowScheduler;
 
 import org.slf4j.Logger;
@@ -173,10 +176,10 @@ public class MaterializedTableManager {
 CatalogMaterializedTable materializedTable =
 createMaterializedTableOperation.getCatalogMaterializedTable();
 if (CatalogMaterializedTable.RefreshMode.CONTINUOUS == 
materializedTable.getRefreshMode()) {
-createMaterializedInContinuousMode(
+createMaterializedTableInContinuousMode(
 operationExecutor, handle, 
createMaterializedTableOperation);
 } else {
-createMaterializedInFullMode(
+createMaterializedTableInFullMode(
 operationExecutor, handle, 
createMaterializedTableOperation);
 }
 // Just return ok for unify different refresh job info of continuous 
and full mode, user
@@ -184,7 +187,7 @@ public class MaterializedTableManager {
 return ResultFetcher.fromTableResult(handle, TABLE_RESULT_OK, false);
 }
 
-private void createMaterializedInContinuousMode(
+private void createMaterializedTableInContinuousMode(
 OperationExecutor operationExecutor,
 OperationHandle handle,
 CreateMaterializedTableOperation createMaterializedTableOperation) 
{
@@ -207,17 +210,21 @@ public class MaterializedTableManager {
 } catch (Exception e) {
 // drop materialized table while submit flink streaming job occur 
exception. Thus, weak
 // atomicity is guaranteed
-LOG.warn(
-"Submit continuous refresh job occur exception, drop 
materialized table {}.",
-materializedTableIdentifier,
-e);
 operationExecutor.callExecutableOperation(
 handle, new 
DropMaterializedTableOperation(materializedTableIdentifier, true));
-throw e;
+LOG.error(
+"Submit continuous refresh job for materialized table {} 
occur exception.",
+materializedTableIdentifier,
+e);
+throw new SqlExecutionException(
+String.format(
+"Submit continuous refresh job for materialized 
table %s occur exception.",
+materializedTableIdentifier),
+e);
 }
 }
 
-private void createMaterializedInFullMode(
+private void createMaterializedTableInFullMode(
 OperationExecutor operationExecutor,
 OperationHandle handle,
 CreateMaterializedTableOperation createMaterializedTableOperation) 
{
@@ -258,12 +265,13 @@ public class MaterializedTableManager {
 handle,
 materializedTableIdentifier,
 catalogMaterializedTable,
+CatalogMaterializedTable.RefreshStatus.ACTIVATED,
 refreshHandler.asSummaryString(),
 serializedRefreshHandler);
 } catch (Exception e) {
 // drop materialized table while create refresh workflo

(flink) branch master updated (62f9de806ac -> 2e158fe300f)

2024-06-04 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 62f9de806ac fixup! [FLINK-35351][checkpoint] Fix fail during restore 
from unaligned checkpoint with custom partitioner
 new 8d1e043b0c4 [FLINK-35200][table] Add dynamic options for 
ResumeRefreshWorkflow
 new 9b51711d00a [FLINK-35200][table] Support the execution of suspend, 
resume materialized table in full refresh mode
 new 2e158fe300f [FLINK-35200][table] Fix missing clusterInfo in 
materialized table refresh rest API return value

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../ResumeEmbeddedSchedulerWorkflowHandler.java|  17 +-
 .../ResumeEmbeddedSchedulerWorkflowHeaders.java|  42 ++-
 ...esumeEmbeddedSchedulerWorkflowRequestBody.java} |  24 +-
 .../MaterializedTableManager.java  | 341 +++--
 .../table/gateway/service/utils/Constants.java |   1 +
 .../workflow/EmbeddedWorkflowScheduler.java|  17 +-
 .../scheduler/EmbeddedQuartzScheduler.java |  50 ++-
 .../AbstractMaterializedTableStatementITCase.java  |  12 +-
 ...GatewayRestEndpointMaterializedTableITCase.java |  96 --
 .../service/MaterializedTableStatementITCase.java  | 130 +++-
 .../workflow/EmbeddedSchedulerRelatedITCase.java   |  14 +-
 .../resources/sql_gateway_rest_api_v3.snapshot |   8 +-
 .../table/workflow/ResumeRefreshWorkflow.java  |  11 +-
 13 files changed, 601 insertions(+), 162 deletions(-)
 copy 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/scheduler/{EmbeddedSchedulerWorkflowRequestBody.java
 => ResumeEmbeddedSchedulerWorkflowRequestBody.java} (73%)



(flink) 01/03: [FLINK-35200][table] Add dynamic options for ResumeRefreshWorkflow

2024-06-04 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 8d1e043b0c4277582b8862c2bc3314631eec4a7b
Author: Feng Jin 
AuthorDate: Sat Jun 1 23:43:07 2024 +0800

[FLINK-35200][table] Add dynamic options for ResumeRefreshWorkflow

This closes #24877
---
 .../ResumeEmbeddedSchedulerWorkflowHandler.java| 17 --
 .../ResumeEmbeddedSchedulerWorkflowHeaders.java| 42 -
 ...ResumeEmbeddedSchedulerWorkflowRequestBody.java | 71 ++
 .../workflow/EmbeddedWorkflowScheduler.java| 17 --
 .../scheduler/EmbeddedQuartzScheduler.java | 50 ++-
 .../workflow/EmbeddedSchedulerRelatedITCase.java   | 14 -
 .../resources/sql_gateway_rest_api_v3.snapshot |  8 ++-
 .../table/workflow/ResumeRefreshWorkflow.java  | 11 +++-
 8 files changed, 212 insertions(+), 18 deletions(-)

diff --git 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHandler.java
 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHandler.java
index 4d0979946b8..d5030367839 100644
--- 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHandler.java
+++ 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHandler.java
@@ -25,7 +25,7 @@ import 
org.apache.flink.runtime.rest.messages.EmptyResponseBody;
 import org.apache.flink.runtime.rest.messages.MessageHeaders;
 import org.apache.flink.table.gateway.api.SqlGatewayService;
 import 
org.apache.flink.table.gateway.rest.handler.AbstractSqlGatewayRestHandler;
-import 
org.apache.flink.table.gateway.rest.message.materializedtable.scheduler.EmbeddedSchedulerWorkflowRequestBody;
+import 
org.apache.flink.table.gateway.rest.message.materializedtable.scheduler.ResumeEmbeddedSchedulerWorkflowRequestBody;
 import org.apache.flink.table.gateway.rest.util.SqlGatewayRestAPIVersion;
 import 
org.apache.flink.table.gateway.workflow.scheduler.EmbeddedQuartzScheduler;
 
@@ -34,13 +34,16 @@ import 
org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpResponseSt
 import javax.annotation.Nonnull;
 import javax.annotation.Nullable;
 
+import java.util.Collections;
 import java.util.Map;
 import java.util.concurrent.CompletableFuture;
 
 /** Handler to resume workflow in embedded scheduler. */
 public class ResumeEmbeddedSchedulerWorkflowHandler
 extends AbstractSqlGatewayRestHandler<
-EmbeddedSchedulerWorkflowRequestBody, EmptyResponseBody, 
EmptyMessageParameters> {
+ResumeEmbeddedSchedulerWorkflowRequestBody,
+EmptyResponseBody,
+EmptyMessageParameters> {
 
 private final EmbeddedQuartzScheduler quartzScheduler;
 
@@ -49,7 +52,7 @@ public class ResumeEmbeddedSchedulerWorkflowHandler
 EmbeddedQuartzScheduler quartzScheduler,
 Map responseHeaders,
 MessageHeaders<
-EmbeddedSchedulerWorkflowRequestBody,
+ResumeEmbeddedSchedulerWorkflowRequestBody,
 EmptyResponseBody,
 EmptyMessageParameters>
 messageHeaders) {
@@ -60,12 +63,16 @@ public class ResumeEmbeddedSchedulerWorkflowHandler
 @Override
 protected CompletableFuture handleRequest(
 @Nullable SqlGatewayRestAPIVersion version,
-@Nonnull HandlerRequest 
request)
+@Nonnull 
HandlerRequest request)
 throws RestHandlerException {
 String workflowName = request.getRequestBody().getWorkflowName();
 String workflowGroup = request.getRequestBody().getWorkflowGroup();
+Map dynamicOptions = 
request.getRequestBody().getDynamicOptions();
 try {
-quartzScheduler.resumeScheduleWorkflow(workflowName, 
workflowGroup);
+quartzScheduler.resumeScheduleWorkflow(
+workflowName,
+workflowGroup,
+dynamicOptions == null ? Collections.emptyMap() : 
dynamicOptions);
 return 
CompletableFuture.completedFuture(EmptyResponseBody.getInstance());
 } catch (Exception e) {
 throw new RestHandlerException(
diff --git 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHeaders.java
 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHeaders.java
index dface14468c..cc5

(flink) 03/03: [FLINK-35200][table] Fix missing clusterInfo in materialized table refresh rest API return value

2024-06-04 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 2e158fe300f6e93ba9b3d600e0237237ac0b2131
Author: Feng Jin 
AuthorDate: Tue Jun 4 01:56:08 2024 +0800

[FLINK-35200][table] Fix missing clusterInfo in materialized table refresh 
rest API return value

This closes #24877
---
 .../MaterializedTableManager.java  | 35 -
 .../table/gateway/service/utils/Constants.java |  1 +
 ...GatewayRestEndpointMaterializedTableITCase.java | 86 ++
 3 files changed, 104 insertions(+), 18 deletions(-)

diff --git 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
index ea2a56e2010..4c35e211e0d 100644
--- 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
+++ 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
@@ -23,15 +23,20 @@ import org.apache.flink.annotation.VisibleForTesting;
 import org.apache.flink.api.common.JobStatus;
 import org.apache.flink.configuration.CheckpointingOptions;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.table.api.DataTypes;
 import org.apache.flink.table.api.ValidationException;
 import org.apache.flink.table.api.config.TableConfigOptions;
 import org.apache.flink.table.catalog.CatalogMaterializedTable;
+import org.apache.flink.table.catalog.Column;
 import org.apache.flink.table.catalog.ObjectIdentifier;
 import org.apache.flink.table.catalog.ResolvedCatalogBaseTable;
 import org.apache.flink.table.catalog.ResolvedCatalogMaterializedTable;
 import org.apache.flink.table.catalog.ResolvedSchema;
 import org.apache.flink.table.catalog.TableChange;
+import org.apache.flink.table.data.GenericMapData;
+import org.apache.flink.table.data.GenericRowData;
 import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
 import org.apache.flink.table.factories.WorkflowSchedulerFactoryUtil;
 import org.apache.flink.table.gateway.api.operation.OperationHandle;
 import org.apache.flink.table.gateway.api.results.ResultSet;
@@ -94,6 +99,8 @@ import static 
org.apache.flink.table.api.internal.TableResultInternal.TABLE_RESU
 import static 
org.apache.flink.table.catalog.CatalogBaseTable.TableKind.MATERIALIZED_TABLE;
 import static 
org.apache.flink.table.factories.WorkflowSchedulerFactoryUtil.WORKFLOW_SCHEDULER_PREFIX;
 import static 
org.apache.flink.table.gateway.api.endpoint.SqlGatewayEndpointFactoryUtils.getEndpointConfig;
+import static 
org.apache.flink.table.gateway.service.utils.Constants.CLUSTER_INFO;
+import static org.apache.flink.table.gateway.service.utils.Constants.JOB_ID;
 import static org.apache.flink.table.utils.DateTimeUtils.formatTimestampString;
 import static 
org.apache.flink.table.utils.IntervalFreshnessUtils.convertFreshnessToCron;
 
@@ -594,11 +601,33 @@ public class MaterializedTableManager {
 dynamicOptions);
 
 try {
-LOG.debug(
-"Begin to manually refreshing the materialization table 
{}, statement: {}",
+LOG.info(
+"Begin to manually refreshing the materialized table {}, 
statement: {}",
 materializedTableIdentifier,
 insertStatement);
-return operationExecutor.executeStatement(handle, customConfig, 
insertStatement);
+ResultFetcher resultFetcher =
+operationExecutor.executeStatement(handle, customConfig, 
insertStatement);
+
+List results = fetchAllResults(resultFetcher);
+String jobId = results.get(0).getString(0).toString();
+String executeTarget =
+
operationExecutor.getSessionContext().getSessionConf().get(TARGET);
+Map clusterInfo = new HashMap<>();
+clusterInfo.put(
+StringData.fromString(TARGET.key()), 
StringData.fromString(executeTarget));
+// TODO get clusterId
+
+return ResultFetcher.fromResults(
+handle,
+ResolvedSchema.of(
+Column.physical(JOB_ID, DataTypes.STRING()),
+Column.physical(
+CLUSTER_INFO,
+DataTypes.MAP(DataTypes.STRING(), 
DataTypes.STRING(,
+Collections.singletonList(
+GenericRowData.of(
+StringData.fromString(jobId),
+new 

Re: Purpose of pg_dump tar archive format?

2024-06-04 Thread Ron Johnson
On Tue, Jun 4, 2024 at 3:47 PM Gavin Roy  wrote:

>
> On Tue, Jun 4, 2024 at 3:15 PM Ron Johnson 
> wrote:
>
>>
>> But why tar instead of custom? That was part of my original question.
>>
>
> I've found it pretty useful for programmatically accessing data in a dump
> for large databases outside of the normal pg_dump/pg_restore workflow. You
> don't have to seek through one large binary file to get to the data section
> to get at the data.
>

Interesting.  Please explain, though, since a big tarball _is_ "one large
binary file" that you have to sequentially scan.  (I don't know the
internal structure of custom format files, and whether they have file
pointers to each table.)

Is it because you need individual .dat "COPY" files for something other
than loading into PG tables (since pg_restore --table= does that, too),
and directory format archives can be inconvenient?
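
For what it's worth, here is a minimal Python sketch of that kind of programmatic 
access (assuming a hypothetical dump.tar produced by pg_dump -Ft, which stores a 
toc.dat plus one <dumpid>.dat member per table's data):

import tarfile

# Sketch: list the members of a tar-format dump and stream one table's data
# without extracting the whole archive to disk.
with tarfile.open("dump.tar") as tar:            # hypothetical file name
    for member in tar.getmembers():
        print(member.name, member.size)          # e.g. toc.dat, 3834.dat, ...

    data = tar.extractfile("3834.dat")           # hypothetical table-data member
    if data is not None:
        for line in data:                        # rows in COPY text format
            print(line.decode(), end="")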


[jira] [Commented] (OLINGO-1624) Serialization performance regression in Olingo 5

2024-06-04 Thread Ron Passerini (Jira)


[ 
https://issues.apache.org/jira/browse/OLINGO-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17852208#comment-17852208
 ] 

Ron Passerini commented on OLINGO-1624:
---

I've attached a patch for 5.0 that will 
 # Still handle the fix for OLINGO-1167
 # Remove the performance problem referenced in this Jira

 

> Serialization performance regression in Olingo 5
> 
>
> Key: OLINGO-1624
> URL: https://issues.apache.org/jira/browse/OLINGO-1624
> Project: Olingo
>  Issue Type: Bug
>  Components: odata4-commons
>Affects Versions: (Java) V4 4.10.0, Version (Java) V4 5.0.0
>Reporter: Florent Albert
>Priority: Major
> Attachments: 
> 0001-OLINGO-1624-Fix-performance-issue-for-resolving-EdmP.patch
>
>
> Olingo 4.10 (via OLINGO-1167) introduced a performance regression. Commit 
> [https://github.com/apache/olingo-odata4/commit/ce5028d24f220ad0f60b5ac023c10e7b88b7c806]
>   now makes resolution of EdmPrimitiveTypeKind create and suppress an 
> exception for any non primitive type.
> Construction in EdmTypeInfo in 4.10 and 5.0 is very expensive and causes 
> severe performance degradation on large datasets. For the same dataset, 
> ODataJsonSerializer.getEdmProperty() spends <200 ms in Olingo 4.9 and ~3000 
> ms in Olingo 5 (15x slower).
> This same issue was originally reported in Olingo 4.2 and fixed in 4.7 
> (via OLINGO-1357 and 
> [https://github.com/apache/olingo-odata4/pull/51/files]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (OLINGO-1624) Serialization performance regression in Olingo 5

2024-06-04 Thread Ron Passerini (Jira)


 [ 
https://issues.apache.org/jira/browse/OLINGO-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Passerini updated OLINGO-1624:
--
Attachment: 0001-OLINGO-1624-Fix-performance-issue-for-resolving-EdmP.patch

> Serialization performance regression in Olingo 5
> 
>
> Key: OLINGO-1624
> URL: https://issues.apache.org/jira/browse/OLINGO-1624
> Project: Olingo
>  Issue Type: Bug
>  Components: odata4-commons
>Affects Versions: (Java) V4 4.10.0, Version (Java) V4 5.0.0
>Reporter: Florent Albert
>Priority: Major
> Attachments: 
> 0001-OLINGO-1624-Fix-performance-issue-for-resolving-EdmP.patch
>
>
> Olingo 4.10 (via OLINGO-1167) introduced a performance regression. Commit 
> [https://github.com/apache/olingo-odata4/commit/ce5028d24f220ad0f60b5ac023c10e7b88b7c806]
>   now makes resolution of EdmPrimitiveTypeKind create and suppress an 
> exception for any non primitive type.
> Construction in EdmTypeInfo in 4.10 and 5.0 is very expensive and causes 
> severe performance degradation on large datasets. For the same dataset, 
> ODataJsonSerializer.getEdmProperty() spends <200 ms in Olingo 4.9 and ~3000 
> ms in Olingo 5 (15x slower).
> This same issue was originally reported in Olingo 4.2 and fixed in 4.7 
> (via OLINGO-1357 and 
> [https://github.com/apache/olingo-odata4/pull/51/files]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (OLINGO-1625) The serializers have performance issues when Entities contain very large numbers of Properties

2024-06-04 Thread Ron Passerini (Jira)


 [ 
https://issues.apache.org/jira/browse/OLINGO-1625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Passerini updated OLINGO-1625:
--
  Flags: Patch
Description: 
I've implemented an OData service that serves up some large datasets in a 
streaming fashion. Some of those datasets have large numbers of fields (over 
1,000). When I requested one of them which was around 350M in size, it took way 
longer than expected.

I profiled the request in IntelliJ's profiler and found that over 75% of the 
CPU cycles were spent in String.equals() comparing column names in the 
serializers. This is because there is an O(N^2) issue that for every column 
selected (in my case all of them) it will iterate across the entire list of 
entity properties looking for the one with the same name.

I have already implemented a fix whereby before doing the property 
serialization, the serializer builds a hash map of property-name-to-property, 
making the resulting algorithm O(N) with the number of properties being 
serialized.

After profiling the change, again in IntelliJ's profiler, the String.equals() 
which was over 75% before, is now under 1%.

I will be creating a patch and attaching it momentarily.
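
For illustration only (this is not the Olingo patch itself), a small Python sketch 
of the lookup change described above, with hypothetical property and column names:

{code:python}
# Hypothetical stand-ins for the entity's properties and the selected columns.
properties = [{"name": f"col{i}", "value": i} for i in range(1000)]
selected = [f"col{i}" for i in range(1000)]

# O(N^2): for every selected column, scan the whole property list by name.
slow = [next(p for p in properties if p["name"] == name) for name in selected]

# O(N): build the name -> property map once, then do constant-time lookups.
by_name = {p["name"]: p for p in properties}
fast = [by_name[name] for name in selected]

assert slow == fast
{code}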

  was:
I've implemented an OData service that serves up some large datasets in a 
streaming fashion. Some of those datasets have large numbers of fields (over 
1,000). When I requested one of them which was around 350M in size, it took way 
longer than expected.

I profiled the request in IntelliJ's profiler and found that over 75% of the 
CPU cycles were spent in String.equals() comparing column names in the 
serializers. This is because there is an O(N^2) issue that for every column 
selected (in my case all of them) it will iterate across the entire list of 
entity properties looking for the one with the same name.

I have already implemented a fix whereby before doing the property 
serialization, the serializer builds a hash map of property-name-to-property, 
making the resulting algorithm O(N) with the number of properties being 
serialized.

After profiling the change, again in IntelliJ's profiler, the String.equals() 
which was over 75% before, is now under 1%.

I will be creating a patch and attaching it momentarily.

 

 


Patch now attached.

 

> The serializers have performance issues when Entities contain very large 
> numbers of Properties
> --
>
> Key: OLINGO-1625
> URL: https://issues.apache.org/jira/browse/OLINGO-1625
> Project: Olingo
>  Issue Type: Bug
>  Components: odata4-server
>Affects Versions: Version (Java) V4 5.0.0
>Reporter: Ron Passerini
>Priority: Major
>  Labels: json, performance, serialization, xml
> Fix For: (Java) V4 5.0.1
>
> Attachments: 
> 0001-OLINGO-1625-Fix-performance-problem-in-serializers-f.patch
>
>
> I've implemented an OData service that serves up some large datasets in a 
> streaming fashion. Some of those datasets have large numbers of fields (over 
> 1,000). When I requested one of them which was around 350M in size, it took 
> way longer than expected.
> I profiled the request in IntelliJ's profiler and found that over 75% of the 
> CPU cycles were spent in String.equals() comparing column names in the 
> serializers. This is because there is an O(N^2) issue that for every column 
> selected (in my case all of them) it will iterate across the entire list of 
> entity properties looking for the one with the same name.
> I have already implemented a fix whereby before doing the property 
> serialization, the serializer builds a hash map of property-name-to-property, 
> making the resulting algorithm O(N) with the number of properties being 
> serialized.
> After profiling the change, again in IntelliJ's profiler, the String.equals() 
> which was over 75% before, is now under 1%.
> I will be creating a patch and attaching it momentarily.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (OLINGO-1625) The serializers have performance issues when Entities contain very large numbers of Properties

2024-06-04 Thread Ron Passerini (Jira)


 [ 
https://issues.apache.org/jira/browse/OLINGO-1625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Passerini updated OLINGO-1625:
--
Attachment: 0001-OLINGO-1625-Fix-performance-problem-in-serializers-f.patch

> The serializers have performance issues when Entities contain very large 
> numbers of Properties
> --
>
> Key: OLINGO-1625
> URL: https://issues.apache.org/jira/browse/OLINGO-1625
> Project: Olingo
>  Issue Type: Bug
>  Components: odata4-server
>Affects Versions: Version (Java) V4 5.0.0
>Reporter: Ron Passerini
>Priority: Major
>  Labels: json, performance, serialization, xml
> Fix For: (Java) V4 5.0.1
>
> Attachments: 
> 0001-OLINGO-1625-Fix-performance-problem-in-serializers-f.patch
>
>
> I've implemented an OData service that serves up some large datasets in a 
> streaming fashion. Some of those datasets have large numbers of fields (over 
> 1,000). When I requested one of them which was around 350M in size, it took 
> way longer than expected.
> I profiled the request in IntelliJ's profiler and found that over 75% of the 
> CPU cycles were spent in String.equals() comparing column names in the 
> serializers. This is because there is an O(N^2) issue that for every column 
> selected (in my case all of them) it will iterate across the entire list of 
> entity properties looking for the one with the same name.
> I have already implemented a fix whereby before doing the property 
> serialization, the serializer builds a hash map of property-name-to-property, 
> making the resulting algorithm O(N) with the number of properties being 
> serialized.
> After profiling the change, again in IntelliJ's profiler, the String.equals() 
> which was over 75% before, is now under 1%.
> I will be creating a patch and attaching it momentarily.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Purpose of pg_dump tar archive format?

2024-06-04 Thread Ron Johnson
On Tue, Jun 4, 2024 at 2:55 PM Rob Sargent  wrote:

>
>
> On 6/4/24 11:40, Shaheed Haque wrote:
> >
> > We use it. I bet lots of others do too.
> >
> >
>
> Of course.  There are lots of small, real, useful databases in the wild.
>

But why tar instead of custom? That was part of my original question.


[jira] [Created] (OLINGO-1625) The serializers have performance issues when Entities contain very large numbers of Properties

2024-06-04 Thread Ron Passerini (Jira)
Ron Passerini created OLINGO-1625:
-

 Summary: The serializers have performance issues when Entities 
contain very large numbers of Properties
 Key: OLINGO-1625
 URL: https://issues.apache.org/jira/browse/OLINGO-1625
 Project: Olingo
  Issue Type: Bug
  Components: odata4-server
Affects Versions: Version (Java) V4 5.0.0
Reporter: Ron Passerini
 Fix For: (Java) V4 5.0.1


I've implemented an OData service that serves up some large datasets in a 
streaming fashion. Some of those datasets have large numbers of fields (over 
1,000). When I requested one of them which was around 350M in size, it took way 
longer than expected.

I profiled the request in IntelliJ's profiler and found that over 75% of the 
CPU cycles were spent in String.equals() comparing column names in the 
serializers. This is because there is an O(N^2) issue that for every column 
selected (in my case all of them) it will iterate across the entire list of 
entity properties looking for the one with the same name.

I have already implemented a fix whereby before doing the property 
serialization, the serializer builds a hash map of property-name-to-property, 
making the resulting algorithm O(N) with the number of properties being 
serialized.

After profiling the change, again in IntelliJ's profiler, the String.equals() 
which was over 75% before, is now under 1%.

I will be creating a patch and attaching it momentarily.

 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Purpose of pg_dump tar archive format?

2024-06-04 Thread Ron Johnson
On Tue, Jun 4, 2024 at 10:43 AM Adrian Klaver 
wrote:

> On 6/4/24 05:13, Ron Johnson wrote:
> > It doesn't support compression nor restore reordering like the custom
> > format, so I'm having trouble seeing why it still exists (at least
> > without a doc warning that it's obsolete).
>
> pg_dump -d test -U postgres -Ft  | gzip --stdout > test.tgz
>

Who's got meaningful databases that small anymore?

And if you've got meaningfully sized databases, open port 5432 and move
them using pg_dump.
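
For comparison, a minimal sketch of the custom-format path being discussed
(standard pg_dump/pg_restore flags; the database name and file names are
illustrative):

pg_dump -Fc -Z 9 -d test -f test.dump
pg_restore -j 4 -d test test.dump

The custom format compresses as it writes, and pg_restore's parallel restore
(-j) is only supported for the custom and directory formats, which is part of
the gap with tar raised at the start of this thread.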


Purpose of pg_dump tar archive format?

2024-06-04 Thread Ron Johnson
It doesn't support compression nor restore reordering like the custom
format, so I'm having trouble seeing why it still exists (at least without
a doc warning that it's obsolete).


Re: Postgresql 16.3 Out Of Memory

2024-06-03 Thread Ron Johnson
On Mon, Jun 3, 2024 at 9:12 AM Greg Sabino Mullane 
wrote:

> On Mon, Jun 3, 2024 at 6:19 AM Radu Radutiu  wrote:
>
>> Do you have any idea how to further debug the problem?
>>
>
> Putting aside the issue of non-reclaimed memory for now, can you show us
> the actual query? The explain analyze you provided shows it doing an awful
> lot of joins and then returning 14+ million rows to the client. Surely the
> client does not need that many rows?
>

And the query cost is really high.  "Did you ANALYZE the instance after
conversion?" was my first question.


Re: RFR: 8330846: Add stacks of mounted virtual threads to the HotSpot thread dump [v6]

2024-06-03 Thread Ron Pressler
On Mon, 3 Jun 2024 11:26:27 GMT, Inigo Mediavilla Saiz  wrote:

>> Print the stack traces of mounted virtual threads when calling `jcmd  
>> Thread.print`.
>
> Inigo Mediavilla Saiz has updated the pull request incrementally with one 
> additional commit since the last revision:
> 
>   Add indentation for virtual thread stack

About the output format:

1. The text `Carrying virtual thread #N` should appear, as it does, in the header 
of the output for the platform thread.
2. The stack for the mounted virtual thread should appear, indented *after* the 
stack of the platform thread, with the header `Mounted virtual thread #N`.

-

PR Comment: https://git.openjdk.org/jdk/pull/19482#issuecomment-2145162783


[kdenlive] [Bug 487950] New: Title dialog "+X" and "+Y" fields max out at 5000

2024-06-03 Thread Ron
https://bugs.kde.org/show_bug.cgi?id=487950

Bug ID: 487950
   Summary: Title dialog "+X" and "+Y" fields max out at 5000
Classification: Applications
   Product: kdenlive
   Version: 24.05.0
  Platform: Appimage
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: User Interface
  Assignee: j...@kdenlive.org
  Reporter: kdenlive-b...@contact.dot-oz.net
  Target Milestone: ---

Hi,

Occasionally I use animated titles to create scrolling credits and the like,
and occasionally those lists are long.  There doesn't seem to be any problem
with making them arbitrarily large in any direction - but the displayed X,Y
coordinate for content maxes out at 5000.

You can place it beyond that by dragging and dropping, but you can't tweak or
see the exact placement point using those input dialog fields, which makes
precise placement harder than it needs to be.

For the same reason it would be nice if the guide lines expanded to cover all
the space used by title elements, not just the visible viewport area, and if
the increments of the snap-to grid were configurable.

Cheers,
Ron

-- 
You are receiving this mail because:
You are watching all bug changes.

[kdenlive] [Bug 487947] New: Subtitle .srt files are not autosaved with the project file

2024-06-03 Thread Ron
https://bugs.kde.org/show_bug.cgi?id=487947

Bug ID: 487947
   Summary: Subtitle .srt files are not autosaved with the project
file
Classification: Applications
   Product: kdenlive
   Version: 24.02.2
  Platform: Appimage
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: User Interface
  Assignee: j...@kdenlive.org
  Reporter: kdenlive-b...@contact.dot-oz.net
  Target Milestone: ---

Hi!

This might arguably be a feature request - but:

On the (getting rarer! :) occasions when some action crashes kdenlive, being
able to rely on it having autosaved all but the last few moments of work, and
able to reliably restore them when you restart, makes those crashes *much* less
frustrating than they otherwise might be.

But I discovered recently that changes to the subtitles are *not* saved and do
not get restored.  Any changes to them since the last manual save will be lost.

It would be nice if they get automatically backed up along with the project
files.

Thanks!
Ron

-- 
You are receiving this mail because:
You are watching all bug changes.

(flink) branch master updated (f3a3f926c6c -> e4fa72d9e48)

2024-06-01 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from f3a3f926c6c [FLINK-35483][runtime] Fix unstable BatchJobRecoveryTest.
 new 309e3246e02 [FLINK-35199][table] Remove dynamic options and add 
initialization configuration to CreatePeriodicRefreshWorkflow
 new e4fa72d9e48 [FLINK-35199][table] Support the execution of create 
materialized table in full refresh mode

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../table/client/gateway/SingleSessionManager.java |   1 +
 .../table/gateway/rest/SqlGatewayRestEndpoint.java |   6 +
 .../CreateEmbeddedSchedulerWorkflowHandler.java|   5 +-
 ...CreateEmbeddedSchedulerWorkflowRequestBody.java |  15 +-
 .../gateway/service/context/SessionContext.java|  36 ++-
 .../MaterializedTableManager.java  | 248 +
 .../service/operation/OperationExecutor.java   |  27 ++-
 .../table/gateway/service/session/Session.java |   4 +
 .../service/session/SessionManagerImpl.java|   1 +
 .../workflow/EmbeddedWorkflowScheduler.java|   2 +-
 .../flink/table/gateway/workflow/WorkflowInfo.java |  12 +-
 .../scheduler/EmbeddedQuartzScheduler.java | 248 -
 .../AbstractMaterializedTableStatementITCase.java  |  47 +++-
 ...GatewayRestEndpointMaterializedTableITCase.java |   8 -
 .../rest/util/SqlGatewayRestEndpointExtension.java |   4 +
 .../service/MaterializedTableStatementITCase.java  | 106 +++--
 .../gateway/workflow/QuartzSchedulerUtilsTest.java |  11 +-
 .../resources/sql_gateway_rest_api_v3.snapshot |   2 +-
 .../workflow/CreatePeriodicRefreshWorkflow.java|  10 +-
 19 files changed, 679 insertions(+), 114 deletions(-)



(flink) 01/02: [FLINK-35199][table] Remove dynamic options and add initialization configuration to CreatePeriodicRefreshWorkflow

2024-06-01 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 309e3246e0232a0a363aa44ab6d5524133f8f548
Author: Feng Jin 
AuthorDate: Fri May 31 11:41:12 2024 +0800

[FLINK-35199][table] Remove dynamic options and add initialization 
configuration to CreatePeriodicRefreshWorkflow
---
 .../scheduler/CreateEmbeddedSchedulerWorkflowHandler.java |  5 +++--
 .../CreateEmbeddedSchedulerWorkflowRequestBody.java   | 15 +++
 .../table/gateway/workflow/EmbeddedWorkflowScheduler.java |  2 +-
 .../apache/flink/table/gateway/workflow/WorkflowInfo.java | 12 +++-
 .../table/gateway/workflow/QuartzSchedulerUtilsTest.java  | 11 ---
 .../src/test/resources/sql_gateway_rest_api_v3.snapshot   |  2 +-
 .../table/workflow/CreatePeriodicRefreshWorkflow.java | 10 +-
 7 files changed, 36 insertions(+), 21 deletions(-)

diff --git 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHandler.java
 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHandler.java
index b52094a39e6..9a7071ab935 100644
--- 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHandler.java
+++ 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHandler.java
@@ -72,14 +72,15 @@ public class CreateEmbeddedSchedulerWorkflowHandler
 String materializedTableIdentifier =
 request.getRequestBody().getMaterializedTableIdentifier();
 String cronExpression = request.getRequestBody().getCronExpression();
-Map dynamicOptions = 
request.getRequestBody().getDynamicOptions();
+Map initConfig = 
request.getRequestBody().getInitConfig();
 Map executionConfig = 
request.getRequestBody().getExecutionConfig();
 String customScheduleTime = 
request.getRequestBody().getCustomScheduleTime();
 String restEndpointURL = request.getRequestBody().getRestEndpointUrl();
 WorkflowInfo workflowInfo =
 new WorkflowInfo(
 materializedTableIdentifier,
-dynamicOptions == null ? Collections.emptyMap() : 
dynamicOptions,
+Collections.emptyMap(),
+initConfig == null ? Collections.emptyMap() : 
initConfig,
 executionConfig == null ? Collections.emptyMap() : 
executionConfig,
 customScheduleTime,
 restEndpointURL);
diff --git 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowRequestBody.java
 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowRequestBody.java
index e0628933560..d0ebf3201ba 100644
--- 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowRequestBody.java
+++ 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowRequestBody.java
@@ -34,7 +34,7 @@ public class CreateEmbeddedSchedulerWorkflowRequestBody 
implements RequestBody {
 
 private static final String FIELD_NAME_MATERIALIZED_TABLE = 
"materializedTableIdentifier";
 private static final String FIELD_NAME_CRON_EXPRESSION = "cronExpression";
-private static final String FIELD_NAME_DYNAMIC_OPTIONS = "dynamicOptions";
+private static final String FIELD_NAME_INIT_CONFIG = "initConfig";
 private static final String FIELD_NAME_EXECUTION_CONFIG = 
"executionConfig";
 private static final String FIELD_NAME_SCHEDULE_TIME = 
"customScheduleTime";
 private static final String FIELD_NAME_REST_ENDPOINT_URL = 
"restEndpointUrl";
@@ -45,9 +45,8 @@ public class CreateEmbeddedSchedulerWorkflowRequestBody 
implements RequestBody {
 @JsonProperty(FIELD_NAME_CRON_EXPRESSION)
 private final String cronExpression;
 
-@JsonProperty(FIELD_NAME_DYNAMIC_OPTIONS)
-@Nullable
-private final Map dynamicOptions;
+@JsonProperty(FIELD_NAME_INIT_CONFIG)
+private final Map initConfig;
 
 @JsonProperty(FIELD_NAME_EXECUTION_CONFIG)
 @Nullable
@@ -63,14 +62,14 @@ public class CreateEmbeddedSchedulerWorkflowRequestBody 
implements RequestBody {
 public CreateEmbeddedSchedulerWorkflowRequestBody(
 @JsonProperty(FIELD_NAME_MATERIALIZED_

(flink) 02/02: [FLINK-35199][table] Support the execution of create materialized table in full refresh mode

2024-06-01 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit e4fa72d9e480664656818395741c37a9995f9334
Author: Feng Jin 
AuthorDate: Fri May 31 11:52:18 2024 +0800

[FLINK-35199][table] Support the execution of create materialized table in 
full refresh mode
---
 .../table/client/gateway/SingleSessionManager.java |   1 +
 .../table/gateway/rest/SqlGatewayRestEndpoint.java |   6 +
 .../gateway/service/context/SessionContext.java|  36 ++-
 .../MaterializedTableManager.java  | 248 +
 .../service/operation/OperationExecutor.java   |  27 ++-
 .../table/gateway/service/session/Session.java |   4 +
 .../service/session/SessionManagerImpl.java|   1 +
 .../scheduler/EmbeddedQuartzScheduler.java | 248 -
 .../AbstractMaterializedTableStatementITCase.java  |  47 +++-
 ...GatewayRestEndpointMaterializedTableITCase.java |   8 -
 .../rest/util/SqlGatewayRestEndpointExtension.java |   4 +
 .../service/MaterializedTableStatementITCase.java  | 106 +++--
 12 files changed, 643 insertions(+), 93 deletions(-)

diff --git 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/SingleSessionManager.java
 
b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/SingleSessionManager.java
index 27b1ccaa484..9c7e7dee0bb 100644
--- 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/SingleSessionManager.java
+++ 
b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/SingleSessionManager.java
@@ -96,6 +96,7 @@ public class SingleSessionManager implements SessionManager {
 sessionHandle,
 environment,
 operationExecutorService));
+session.open();
 return session;
 }
 
diff --git 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/SqlGatewayRestEndpoint.java
 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/SqlGatewayRestEndpoint.java
index 2e24b967850..2fa462ade85 100644
--- 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/SqlGatewayRestEndpoint.java
+++ 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/SqlGatewayRestEndpoint.java
@@ -18,6 +18,7 @@
 
 package org.apache.flink.table.gateway.rest;
 
+import org.apache.flink.annotation.VisibleForTesting;
 import org.apache.flink.api.java.tuple.Tuple2;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.runtime.rest.RestServerEndpoint;
@@ -83,6 +84,11 @@ public class SqlGatewayRestEndpoint extends 
RestServerEndpoint implements SqlGat
 quartzScheduler = new EmbeddedQuartzScheduler();
 }
 
+@VisibleForTesting
+public EmbeddedQuartzScheduler getQuartzScheduler() {
+return quartzScheduler;
+}
+
 @Override
 protected List> 
initializeHandlers(
 CompletableFuture localAddressFuture) {
diff --git 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/context/SessionContext.java
 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/context/SessionContext.java
index fa9ae05220f..cf1597ecea9 100644
--- 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/context/SessionContext.java
+++ 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/context/SessionContext.java
@@ -38,6 +38,7 @@ import 
org.apache.flink.table.gateway.api.endpoint.EndpointVersion;
 import org.apache.flink.table.gateway.api.session.SessionEnvironment;
 import org.apache.flink.table.gateway.api.session.SessionHandle;
 import org.apache.flink.table.gateway.api.utils.SqlGatewayException;
+import 
org.apache.flink.table.gateway.service.materializedtable.MaterializedTableManager;
 import org.apache.flink.table.gateway.service.operation.OperationExecutor;
 import org.apache.flink.table.gateway.service.operation.OperationManager;
 import org.apache.flink.table.gateway.service.utils.SqlExecutionException;
@@ -237,6 +238,18 @@ public class SessionContext {
 statementSetOperations.add(operation);
 }
 
+public void open() {
+try {
+sessionState.materializedTableManager.open();
+} catch (Exception e) {
+LOG.error(
+String.format(
+"Failed to open the materialized table manager for 
the session %s.",
+sessionId),
+e);
+}
+}
+
 // 

 
 /** Close resources, e.g. catalogs. */
@@ -268,6 +281,15 @@ public class S

Re: ERROR: found xmin from before relfrozenxid; MultiXactid does no longer exist -- apparent wraparound

2024-06-01 Thread Ron Johnson
On Fri, May 31, 2024 at 1:25 PM Alanoly Andrews  wrote:

> Yes, and I know that upgrading the Postgres version is the stock answer
> for situations like this. The upgrade is in the works.
>

*Patching* was the solution.  It takes *five minutes*.
Here's how I did it (since our RHEL systems are blocked from the Internet,
and I had to manually d/l the relevant RPMs):
$ sudo -iu postgres pg_ctl stop -wt -mfast
$ sudo yum install PG96.24_RHEL6/*rpm
$ sudo -iu postgres pg_ctl start -wt

You'll have a bit of effort finding the PG10 repository, since it's EOL,
but it can be found.


[kdenlive] [Bug 485356] External proxy preset, error when setting multiple profiles

2024-05-31 Thread Ron
https://bugs.kde.org/show_bug.cgi?id=485356

Ron  changed:

   What|Removed |Added

Version|git-master  |24.05.0
   Platform|Debian stable   |Appimage

-- 
You are receiving this mail because:
You are watching all bug changes.

Heading Level Access In Safari Browser

2024-05-31 Thread Ron Canazzi

Hi Group,

I finally was able to change some settings in Safari Browser on the 
iPhone to get it to display HTML files that are stored locally on the 
iPhone.  I created my modified Dice Football Game Play Sheet using 
headings to more quickly navigate from play list to play list.  I have 
the lists separated into running plays, kicking plays, passing plays and 
conversions at heading level one, with the various lists of items, such as short 
pass, long pass and screen pass for the passing plays, and left end run, 
right tackle play and reverse for the running plays, at heading 
level two.


Is there any way on the iPhone to navigate by heading level using a quick number 
scheme, as is done on Windows desktops with quick-key number 
navigation, for example pressing one for heading level one and two for 
heading level two?


Thanks for any help.

--
Signature:
For a nation to admit it has done grievous wrongs and will strive to correct 
them for the betterment of all is no vice;
For a nation to claim it has always been great, needs no improvement  and to 
cling to its past achievements is no virtue!

--
The following information is important for all members of the V iPhone list.

If you have any questions or concerns about the running of this list, or if you 
feel that a member's post is inappropriate, please contact the owners or 
moderators directly rather than posting on the list itself.

Your V iPhone list moderator is Mark Taylor.  Mark can be reached at:  
mk...@ucla.edu.  Your list owner is Cara Quinn - you can reach Cara at 
caraqu...@caraquinn.com

The archives for this list can be searched at:
http://www.mail-archive.com/viphone@googlegroups.com/
--- 
You received this message because you are subscribed to the Google Groups "VIPhone" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to viphone+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/viphone/7a3d2c9c-6deb-8621-6d2a-105199764add%40roadrunner.com.


[kdenlive] [Bug 485356] External proxy preset, error when setting multiple profiles

2024-05-31 Thread Ron
https://bugs.kde.org/show_bug.cgi?id=485356

--- Comment #1 from Ron  ---
Hi,

Just a followup to this now that the external proxy editing dialog has been
enabled in 24.05.0.

The bug I noted here is still present in the 24.05.0 appimage.  I can see it
manifest if I just
open that dialog and then flick between the various preset options.

If you select the GoPro or Insta option (or any with multiple profiles) then
select a different
profile, then flick back to the GoPro or Insta one (without closing the
dialog), you'll see that
each time you go back to the multiple profile preset, the options in it get
shuffled around
and appear in the wrong fields.

Cheers,
Ron

-- 
You are receiving this mail because:
You are watching all bug changes.

(flink) branch master updated (ce0b61f376b -> 2c35e48addf)

2024-05-29 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from ce0b61f376b [FLINK-35351][checkpoint] Clean up and unify code for the 
custom partitioner test case
 add bc14d551e04 [FLINK-35195][test/test-filesystem] test-filesystem 
support partition.fields option
 add 2c35e48addf [FLINK-35348][table] Introduce refresh materialized table 
rest api

No new revisions were added by this update.

Summary of changes:
 .../file/table/FileSystemTableFactory.java |   2 +-
 .../flink/table/gateway/api/SqlGatewayService.java |  28 ++
 .../gateway/api/utils/MockedSqlGatewayService.java |  14 +
 .../table/gateway/rest/SqlGatewayRestEndpoint.java |  15 +
 .../RefreshMaterializedTableHandler.java   |  95 
 .../RefreshMaterializedTableHeaders.java   |  96 
 .../MaterializedTableIdentifierPathParameter.java  |  46 ++
 .../RefreshMaterializedTableParameters.java|  56 +++
 .../RefreshMaterializedTableRequestBody.java   |  99 
 .../RefreshMaterializedTableResponseBody.java  |  43 ++
 .../gateway/service/SqlGatewayServiceImpl.java |  31 ++
 .../MaterializedTableManager.java  | 127 -
 .../service/operation/OperationExecutor.java   |  24 +
 .../AbstractMaterializedTableStatementITCase.java  | 339 +
 ...GatewayRestEndpointMaterializedTableITCase.java | 187 +++
 .../service/MaterializedTableStatementITCase.java  | 535 +++--
 .../MaterializedTableManagerTest.java  |  77 ++-
 .../resources/sql_gateway_rest_api_v3.snapshot |  57 +++
 .../api/config/MaterializedTableConfigOptions.java |   2 +
 .../file/testutils/TestFileSystemTableFactory.java |  16 +
 .../testutils/TestFileSystemTableFactoryTest.java  |   3 +
 21 files changed, 1602 insertions(+), 290 deletions(-)
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/RefreshMaterializedTableHandler.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/RefreshMaterializedTableHeaders.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/MaterializedTableIdentifierPathParameter.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/RefreshMaterializedTableParameters.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/RefreshMaterializedTableRequestBody.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/RefreshMaterializedTableResponseBody.java
 create mode 100644 
flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/AbstractMaterializedTableStatementITCase.java
 create mode 100644 
flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/rest/SqlGatewayRestEndpointMaterializedTableITCase.java



[TICTOC]Re: Enterprise Profile: Support for Non standard TCs

2024-05-28 Thread Ron Cohen
Hi Doug,

This draft intends to be a standard track RFC.

Can an enterprise-profile compliant TC modify the source IP address of event 
messages?

Is an enterprise-profile compliant time-transmitter (i.e. Master or Boundary 
clocks) required to support configuration of clock-id to ip-address mappings?

Thanks
Ron

From: Doug Arnold 
Sent: Tuesday, May 28, 2024 4:52 PM
To: Ron Cohen ; tictoc@ietf.org
Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs

Prioritize security for external emails: Confirm sender and content safety 
before clicking links or opening attachments

Hello Ron,

The enterprise profile draft does not state that TCs MUST modify the source 
addresses of PTP event messages. Nor does it state that TCs MUST NOT modify the 
source addresses.  It is merely pointing out that, in the field, a PTP instance 
can receive PTP event messages with either the source address of the parent 
clock or the source address of a TC in the communication path.  I think that 
this is critically important information for implementors of PTP capable 
devices and should remain in the draft.

I personally prefer TC implementations that do not modify the source address, 
as that is more helpful for people deploying and maintaining PTP networks.  
However, some TC vendors have told me that they don't do that because they 
believe that it violates the standards of the transport network (IP and/or 
Ethernet).  From a layer model architecture point of view, they have a point:

PTP

UDP

IP

Ethernet

Any packet payload sent up to the PTP layer, modified, sent back down the stack 
and retransmitted would be a new packet and a new frame.

Regards,
Doug


From: Ron Cohen 
Sent: Sunday, May 26, 2024 7:44 AM
To: Doug Arnold ; tictoc@ietf.org
Subject: RE: Enterprise Profile: Support for Non standard TCs


Hi Doug,



Thanks for the reference. This note was added in the 2019 version, and I 
believe requires further discussion/clarifications, but I would like to keep 
the focus on the UDP/IP encapsulation, which is the one required by the 
Enterprise profile.



"All messages, including PTP messages, shall be transmitted and received in 
conformance with the standards governing the transport, network, and physical 
layers of the communication paths used."



An IEEE-1588 compliant TC supporting UDP/IP encapsulation must either modify 
the source-IP address of event messages or must not modify the address. Annex E 
of 1588-2019 is the normative specification of this encapsulation.

If an E2E TC changes the source IPv4 address of an event message, it must 
re-calculate the IPv4 header checksum as well. This is an important 
consideration in HW implementations. Update of the IPv4 header checksum is not 
mentioned in Annex E (or anywhere else in the spec). My point is that it is not 
specified in Annex-E because a TC must not modify the IP header fields 
protected by the IPv4 header checksum.



AFAIK, the IEEE-1588-2019 standard does not specify the need for Clock-ID to 
delay-resp mapping to support UDP/IP encapsulation either, for the same reason; 
it is not required for standard E2E TC implementations.



If we are not in agreement what is the mandatory behavior of Annex-E TC with 
regards to source IP address, I suggest to first ratify it with other members 
of the WG / with other established TC vendors before moving forward with the 
draft.



Best,

Ron



From: Doug Arnold 
Sent: Friday, May 24, 2024 12:40 AM
To: Ron Cohen ; tictoc@ietf.org
Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs



Prioritize security for external emails: Confirm sender and content safety 
before clicking links or opening attachments



Hi Ron,



I excluded NATs because I don't think that they are common in networks where 
enterprise profile PTP is used. So I just didn't want to address them.



I wouldn't say the same about TCs.  Some TC implementations do change the 
source address, and some don't.  I've seen both kinds at PTP plugfests.  That 
is why the language in the draft says TCs might change the source address.  I 
think that this is important for network operators to know.  That is why I want 
that statement in there.



Technically speaking TCs do not forward frames/packets containing PTP event 
messages.  Instead, they take them up to the PTP layer, alter them, send them back 
down to the data link or network layers and then transmit new frames/packets.  
That is officially true even in 1-step cut-through when the implementation 
combines all of these steps. At the PTP layer we call this retransmission, but 
that is not how it is viewed 

Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-28 Thread Ron / BCLUG

Ron / BCLUG wrote on 2024-05-27 18:10:

you'll love both the runit and s6 init
systems.


That's great, I didn't know they ran startup stuff in parallel.

Is it achieved through "script_name &" or something else?


Answering myself, runit looks kinda nifty according to this:

https://en.wikipedia.org/wiki/Runit


It's actually init + services management, which is nice.


And originally from daemontools, by DJB (Daniel Bernstein), who's quite 
a wizard and has written an impressive number of core utilities (qmail, 
djbdns, etc.).



Kinda sounds like Lennart Poettering, come to think of it.



So, yeah, it looks nice, for sure.

rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


(flink) 02/02: [FLINK-35425][table-common] Support convert freshness to cron expression in full refresh mode

2024-05-28 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 49f22254a78d554ac49810058c209297331129cd
Author: fengli 
AuthorDate: Mon May 27 20:54:39 2024 +0800

[FLINK-35425][table-common] Support convert freshness to cron expression in 
full refresh mode
---
 .../flink/table/utils/IntervalFreshnessUtils.java  | 74 
 .../table/utils/IntervalFreshnessUtilsTest.java| 80 +-
 .../SqlCreateMaterializedTableConverter.java   |  6 ++
 ...erializedTableNodeToOperationConverterTest.java |  9 +++
 4 files changed, 168 insertions(+), 1 deletion(-)
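
For reference, a minimal usage sketch of the conversion added here, based on the
cron templates visible in the diff below (assumes flink-table-common on the
classpath; the class name is illustrative):

import org.apache.flink.table.catalog.IntervalFreshness;

import static org.apache.flink.table.utils.IntervalFreshnessUtils.convertFreshnessToCron;

public class FreshnessToCronSketch {
    public static void main(String[] args) {
        // A 30-second freshness maps to a fire-every-30-seconds cron expression.
        System.out.println(convertFreshnessToCron(IntervalFreshness.ofSecond("30")));
        // prints: 0/30 * * * * ? *
        // Intervals that are not factors of the unit's upper bound (e.g. 45 seconds,
        // since 60 % 45 != 0) are rejected with a ValidationException.
    }
}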

diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/IntervalFreshnessUtils.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/IntervalFreshnessUtils.java
index 121200098ec..cd58bff4d91 100644
--- 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/IntervalFreshnessUtils.java
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/IntervalFreshnessUtils.java
@@ -31,6 +31,15 @@ import java.time.Duration;
 @Internal
 public class IntervalFreshnessUtils {
 
+private static final String SECOND_CRON_EXPRESSION_TEMPLATE = "0/%s * * * 
* ? *";
+private static final String MINUTE_CRON_EXPRESSION_TEMPLATE = "0 0/%s * * 
* ? *";
+private static final String HOUR_CRON_EXPRESSION_TEMPLATE = "0 0 0/%s * * 
? *";
+private static final String ONE_DAY_CRON_EXPRESSION_TEMPLATE = "0 0 0 * * 
? *";
+
+private static final long SECOND_CRON_UPPER_BOUND = 60;
+private static final long MINUTE_CRON_UPPER_BOUND = 60;
+private static final long HOUR_CRON_UPPER_BOUND = 24;
+
 private IntervalFreshnessUtils() {}
 
 @VisibleForTesting
@@ -69,4 +78,69 @@ public class IntervalFreshnessUtils {
 intervalFreshness.getTimeUnit()));
 }
 }
+
+/**
+ * This is an util method that is used to convert the freshness of 
materialized table to cron
+ * expression in full refresh mode. Since freshness and cron expression 
cannot be converted
+ * equivalently, there are currently only a limited patterns of freshness 
that can be converted
+ * to cron expression.
+ */
+public static String convertFreshnessToCron(IntervalFreshness 
intervalFreshness) {
+switch (intervalFreshness.getTimeUnit()) {
+case SECOND:
+return validateAndConvertCron(
+intervalFreshness,
+SECOND_CRON_UPPER_BOUND,
+SECOND_CRON_EXPRESSION_TEMPLATE);
+case MINUTE:
+return validateAndConvertCron(
+intervalFreshness,
+MINUTE_CRON_UPPER_BOUND,
+MINUTE_CRON_EXPRESSION_TEMPLATE);
+case HOUR:
+return validateAndConvertCron(
+intervalFreshness, HOUR_CRON_UPPER_BOUND, 
HOUR_CRON_EXPRESSION_TEMPLATE);
+case DAY:
+return validateAndConvertDayCron(intervalFreshness);
+default:
+throw new ValidationException(
+String.format(
+"Unknown freshness time unit: %s.",
+intervalFreshness.getTimeUnit()));
+}
+}
+
+private static String validateAndConvertCron(
+IntervalFreshness intervalFreshness, long cronUpperBound, String 
cronTemplate) {
+long interval = Long.parseLong(intervalFreshness.getInterval());
+IntervalFreshness.TimeUnit timeUnit = intervalFreshness.getTimeUnit();
+// Freshness must be less than cronUpperBound for corresponding time 
unit when convert it
+// to cron expression
+if (interval >= cronUpperBound) {
+throw new ValidationException(
+String.format(
+"In full refresh mode, freshness must be less than 
%s when the time unit is %s.",
+cronUpperBound, timeUnit));
+}
+// Freshness must be factors of cronUpperBound for corresponding time 
unit
+if (cronUpperBound % interval != 0) {
+throw new ValidationException(
+String.format(
+"In full refresh mode, only freshness that are 
factors of %s are currently supported when the time unit is %s.",
+cronUpperBound, timeUnit));
+}
+
+return String.format(cronTemplate, interval);
+}
+
+private static String validateAndConvertDayCron(IntervalFreshness 
intervalFreshness) {
+// Since the number of days in each month is different, only one day 
of freshness is
+  

(flink) 01/02: [FLINK-35425][table-common] Introduce IntervalFreshness to support materialized table full refresh mode

2024-05-28 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 61a68bc9dc74926775dd546af64fe176782f70ba
Author: fengli 
AuthorDate: Fri May 24 12:24:49 2024 +0800

[FLINK-35425][table-common] Introduce IntervalFreshness to support 
materialized table full refresh mode
---
 .../catalog/CatalogBaseTableResolutionTest.java|  10 +-
 .../table/catalog/CatalogMaterializedTable.java|  19 +++-
 .../flink/table/catalog/CatalogPropertiesUtil.java |  20 +++-
 .../catalog/DefaultCatalogMaterializedTable.java   |   7 +-
 .../flink/table/catalog/IntervalFreshness.java | 104 +
 .../catalog/ResolvedCatalogMaterializedTable.java  |   5 +-
 .../flink/table/utils/IntervalFreshnessUtils.java  |  72 ++
 .../table/utils/IntervalFreshnessUtilsTest.java|  67 +
 .../SqlCreateMaterializedTableConverter.java   |   9 +-
 .../planner/utils/MaterializedTableUtils.java  |  16 ++--
 ...erializedTableNodeToOperationConverterTest.java |   4 +-
 .../catalog/TestFileSystemCatalogTest.java |   6 +-
 12 files changed, 302 insertions(+), 37 deletions(-)

diff --git 
a/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogBaseTableResolutionTest.java
 
b/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogBaseTableResolutionTest.java
index 72a22c22935..a9436ac21df 100644
--- 
a/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogBaseTableResolutionTest.java
+++ 
b/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogBaseTableResolutionTest.java
@@ -38,7 +38,6 @@ import org.junit.jupiter.api.Test;
 
 import javax.annotation.Nullable;
 
-import java.time.Duration;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashMap;
@@ -235,8 +234,8 @@ class CatalogBaseTableResolutionTest {
 
 assertThat(resolvedCatalogMaterializedTable.getResolvedSchema())
 .isEqualTo(RESOLVED_MATERIALIZED_TABLE_SCHEMA);
-assertThat(resolvedCatalogMaterializedTable.getFreshness())
-.isEqualTo(Duration.ofSeconds(30));
+assertThat(resolvedCatalogMaterializedTable.getDefinitionFreshness())
+.isEqualTo(IntervalFreshness.ofSecond("30"));
 assertThat(resolvedCatalogMaterializedTable.getDefinitionQuery())
 .isEqualTo(DEFINITION_QUERY);
 assertThat(resolvedCatalogMaterializedTable.getLogicalRefreshMode())
@@ -424,7 +423,8 @@ class CatalogBaseTableResolutionTest {
 properties.put("schema.3.comment", "");
 properties.put("schema.primary-key.name", "primary_constraint");
 properties.put("schema.primary-key.columns", "id");
-properties.put("freshness", "PT30S");
+properties.put("freshness-interval", "30");
+properties.put("freshness-unit", "SECOND");
 properties.put("logical-refresh-mode", "CONTINUOUS");
 properties.put("refresh-mode", "CONTINUOUS");
 properties.put("refresh-status", "INITIALIZING");
@@ -454,7 +454,7 @@ class CatalogBaseTableResolutionTest {
 .partitionKeys(partitionKeys)
 .options(Collections.emptyMap())
 .definitionQuery(definitionQuery)
-.freshness(Duration.ofSeconds(30))
+.freshness(IntervalFreshness.ofSecond("30"))
 
.logicalRefreshMode(CatalogMaterializedTable.LogicalRefreshMode.AUTOMATIC)
 .refreshMode(CatalogMaterializedTable.RefreshMode.CONTINUOUS)
 
.refreshStatus(CatalogMaterializedTable.RefreshStatus.INITIALIZING)
diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogMaterializedTable.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogMaterializedTable.java
index 51856cc859e..1b41ed0ddb9 100644
--- 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogMaterializedTable.java
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogMaterializedTable.java
@@ -30,6 +30,8 @@ import java.util.List;
 import java.util.Map;
 import java.util.Optional;
 
+import static 
org.apache.flink.table.utils.IntervalFreshnessUtils.convertFreshnessToDuration;
+
 /**
  * Represents the unresolved metadata of a materialized table in a {@link 
Catalog}.
  *
@@ -113,9 +115,18 @@ public interface CatalogMaterializedTable extends 
CatalogBaseTable {
 String getDefinitionQuery();
 
 /**
- * Get the freshness of materialized table which is used to determine the 
physical refresh mode.
+ 

(flink) branch master updated (6c417719972 -> 49f22254a78)

2024-05-28 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 6c417719972 [hotfix] Fix modification conflict between FLINK-35465 and 
FLINK-35359
 new 61a68bc9dc7 [FLINK-35425][table-common] Introduce IntervalFreshness to 
support materialized table full refresh mode
 new 49f22254a78 [FLINK-35425][table-common] Support convert freshness to 
cron expression in full refresh mode

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../catalog/CatalogBaseTableResolutionTest.java|  10 +-
 .../table/catalog/CatalogMaterializedTable.java|  19 ++-
 .../flink/table/catalog/CatalogPropertiesUtil.java |  20 ++-
 .../catalog/DefaultCatalogMaterializedTable.java   |   7 +-
 .../flink/table/catalog/IntervalFreshness.java | 104 +++
 .../catalog/ResolvedCatalogMaterializedTable.java  |   5 +-
 .../flink/table/utils/IntervalFreshnessUtils.java  | 146 +
 .../table/utils/IntervalFreshnessUtilsTest.java| 145 
 .../SqlCreateMaterializedTableConverter.java   |  15 ++-
 .../planner/utils/MaterializedTableUtils.java  |  16 ++-
 ...erializedTableNodeToOperationConverterTest.java |  13 +-
 .../catalog/TestFileSystemCatalogTest.java |   6 +-
 12 files changed, 469 insertions(+), 37 deletions(-)
 create mode 100644 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/IntervalFreshness.java
 create mode 100644 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/IntervalFreshnessUtils.java
 create mode 100644 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/utils/IntervalFreshnessUtilsTest.java



Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-27 Thread Ron / BCLUG

Steve Litt wrote on 2024-05-27 05:24:


If you like parallelism,


It is a compelling idea...



you'll love both the runit and s6 init
systems.


That's great, I didn't know they ran startup stuff in parallel.

Is it achieved through "script_name &" or something else?



Try em, you'll like em.


Not really keen on swapping out everything just for an init system, when 
I already have parallelism and a whole bunch more.



Plus, as mentioned elsewhere, having suffered through OS/2 vs Windows, 
and early days of Linux, I prefer to stick to more mainstream stuff 
these days.



But again, I think it's great those init systems use parallelism and am 
curious how they do it.



Thanks!

rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-27 Thread Ron / BCLUG

Jonathan Drews wrote on 2024-05-27 14:43:


There's a lot of cross-over with servers and software between the FLOSS
families.

>>

How would you know if you don't run FreeBSD or OpenBSD?


Because I'm not stupid?

I mean, that's a really dumb question; are you disputing the overlap 
between Linux and BSD systems?



Things like Nick's shell scripting presentations have lots of overlap 
between FLOSS systems and are immensely enjoyable and informative.





When the questions are BSD specific, I don't say anything.


You just told me how OpenBSD didn't have tools similar to systemd when
you have no working experience of OpenBSD tools such as hostctl,
smtpctl, sysctl, rcctl etc.


I *asked* if it was possible to get a list like the example I gave.

You answered "I have logs".

Without specific tools to parse the logs, that means "no".


Are you intentionally misunderstanding that? Are you not disclosing such 
tools for some reason?  Do you have reading comprehension difficulties?



It was a simple question that wasn't inflammatory, just a "how to ...?" 
question.


Maybe there is a tool that generates just such a listing in the BSD 
world. That'd be great, I'd like to hear about it.



But you're clearly the wrong person to engage with on such things.


rb



___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-27 Thread Ron / BCLUG

Jonathan Drews wrote on 2024-05-27 12:09:

  This is a list devoted to helping people with *BSD systems. If you 
have no intention of using it, why are you even here?


There's a lot of cross-over with servers and software between the FLOSS 
families.


I try to contribute answers to questions (like your inventory management 
one, for example) when no one else chimes in.



When the questions are BSD specific, I don't say anything.

But I don't disparage BSD, I have no problem with it (them?).


Also, someone needs to challenge the "Linux is becoming a poor 
implementation of Window!!1!" comments.



Hope that helps.

rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-27 Thread Ron / BCLUG

Jonathan Drews wrote on 2024-05-27 10:59:


The boot time was so slow that it was obvious.


If different processes were involved in the boot sequence, that just may 
have an effect on time-to-desktop, but since it's left unaddressed, I 
guess we'll never know.




Is something like that even possible on non-systemd machines?


I have log files in /var/log on OpenBSD.


So the answer is "no".

Having log files and processing them to extract startup times,
sequencing, etc. is quite different.


Have you even installed OpenBSD or FreeBSD? Have you ever used *BSD 
for longer than one day?


Installed, yes. Used, no.

I used OS/2 back in the day and have experienced the hassle of using
niche software that isn't well supported and do not wish to subject
myself to that again unless there's a compelling reason.

It was a bit of a hassle initially using Linux as a daily driver, but
it's gotten so much better in the past 10-ish years.


rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-27 Thread Ron / BCLUG

Jonathan Drews wrote on 2024-05-23 19:54:


I don't know what the cause was but I could never get scanning (xsane)
to work on either Linux Mint or Kubuntu.


Scanning has been a solved problem in Linux for a decade or two, so it's 
hard to know what went wrong, nor what purpose is served bringing it up 
really.




One of the claims about systemd is that it would provide faster boot
up. However, my Devuan Linux boots faster than either KdeNeon or
Kubuntu or Linux Mint. All three were installed on the same T480
laptop, which now runs Devuan.


All running KDE?  With the exact same packages installed? Seems like 
apples to oranges without that info.


What times did you measure for them?


Parallelism is pretty much always going to be faster, all else being equal.


General question for the list: how does one diagnose which process(es) 
slow down booting up on non-systemd hosts?


I run `systemd-analyze blame` and get a nice list like this:

1min 10.758s plocate-updatedb.service
 31.549s apt-daily.service
 31.283s apt-daily-upgrade.service
 15.423s fstrim.service
  7.902s dev-loop2.device
  6.296s snapd.service
  4.059s systemd-networkd-wait-online.service
  3.830s systemd-udev-settle.service
  3.819s smartmontools.service
  3.433s zfs-import-cache.service
  2.432s postfix@-.service

I can see at a glance exactly what is going on with my boot sequence timing.

Is something like that even possible on non-systemd machines?







Finally there is the xz exploit, which has a writeup:
https://marc.info/?l=openbsd-misc=171179460913574=2

it leads in with a quote to remember -

"This dependency existed not because of a deliberate design decision
by the developers of OpenSSH, but because of a kludge added by some
Linux distributions to integrate the tool with the operating
system's newfangled orchestration service, systemd."


"kludge". "newfangled".

That's quite the biased take on it, not worth the time it took to read it.


The xz exploit was a nation-state attack targeting sshd via xz-utils as 
a vector, then pivoting via systemd's dynamic linking of xz.


Everyone knows that if one is targeted by nation-state actors, it's 
pretty much game over.


Defenders need 100% success, attackers only need 1 success.



As for systemd linking to xz-utils, everyone realizes that log files get 
compressed, I hope?



When software statically links libraries, people complain because:

* multiple versions statically linked "waste disk space"
* with dynamic linking, a vulnerability only needs one library to be 
patched for all apps to be patched


The flip side is, one compromised library and lots of apps are 
vulnerable, I guess.


There isn't really a Right Answer™ to statically vs dynamically linking.


Anyway, systemd had a patch committed that would statically link 
xz-utils, just waiting for distributions to bundle it, when the xz-utils 
hack happened. FWIW.



rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


(flink) branch master updated (4b342da6d14 -> 90e2d6cfeea)

2024-05-26 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 4b342da6d14 [FLINK-35426][table-planner] Change the distribution of 
DynamicFilteringDataCollector to Broadcast
 add 90e2d6cfeea [FLINK-35342][table] Fix the unstable 
MaterializedTableStatementITCase test due to wrong job status check logic

No new revisions were added by this update.

Summary of changes:
 .../gateway/service/MaterializedTableStatementITCase.java  | 10 ++
 1 file changed, 10 insertions(+)



[TICTOC]Re: Enterprise Profile: Support for Non standard TCs

2024-05-26 Thread Ron Cohen
Hi Doug,

Thanks for the reference. This note was added in the 2019 version, and I 
believe requires further discussion/clarifications, but I would like to keep 
the focus on the UDP/IP encapsulation, which is the one required by the 
Enterprise profile.

"All messages, including PTP messages, shall be transmitted and received in 
conformance with the standards governing the transport, network, and physical 
layers of the communication paths used."

An IEEE-1588 compliant TC supporting UDP/IP encapsulation must either modify 
the source-IP address of event messages or must not modify the address. Annex E 
of 1588-2019 is the normative specification of this encapsulation.
If an E2E TC changes the source IPv4 address of an event message, it must 
re-calculate the IPv4 header checksum as well. This is an important 
consideration in HW implementations. Update of the IPv4 header checksum is not 
mentioned in Annex E (or anywhere else in the spec). My point is that it is not 
specified in Annex-E because a TC must not modify the IP header fields 
protected by the IPv4 header checksum.
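
To make the cost concrete: rewriting even one 16-bit word of the header, e.g.
half of the source address, forces a checksum update along the lines of RFC
1624. A minimal sketch of that incremental update, not taken from any TC
implementation:

final class Rfc1624 {
    // Incremental IPv4 header checksum update (RFC 1624, Eqn. 3) when one
    // 16-bit header word changes from oldWord to newWord. Values are treated
    // as unsigned 16-bit quantities held in ints.
    static int update(int oldChecksum, int oldWord, int newWord) {
        int sum = (~oldChecksum & 0xFFFF) + (~oldWord & 0xFFFF) + (newWord & 0xFFFF);
        while ((sum >> 16) != 0) {          // fold carries: one's complement addition
            sum = (sum & 0xFFFF) + (sum >> 16);
        }
        return ~sum & 0xFFFF;
    }
}

Rewriting a full 32-bit source address applies this twice, once per 16-bit half,
on top of updating the UDP checksum if it is in use.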

AFAIK, the IEEE-1588-2019 standard does not specify the need for Clock-ID to 
delay-resp mapping to support UDP/IP encapsulation either, for the same reason; 
it is not required for standard E2E TC implementations.

If we are not in agreement what is the mandatory behavior of Annex-E TC with 
regards to source IP address, I suggest to first ratify it with other members 
of the WG / with other established TC vendors before moving forward with the 
draft.

Best,
Ron

From: Doug Arnold 
Sent: Friday, May 24, 2024 12:40 AM
To: Ron Cohen ; tictoc@ietf.org
Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs

Prioritize security for external emails: Confirm sender and content safety 
before clicking links or opening attachments
________
Hi Ron,

I excluded NATs because I don't think that they are common in networks where 
enterprise profile PTP is used. So I just didn't want to address them.

I wouldn't say the same about TCs.  Some TC implementations do change the 
source address, and some don't.  I've seen both kinds at PTP plugfests.  That 
is why the language in the draft says TCs might change the source address.  I 
think that this is important for network operators to know.  That is why I want 
that statement in there.

Technically speaking TCs do not forward frames/packets containing PTP event 
messages.  Instead, they take them up to the PTP layer, alter them, send them back 
down to the data link or network layers and then transmit new frames/packets.  
That is officially true even in 1-step cut-through when the implementation 
combines all of these steps. At the PTP layer we call this retransmission, but 
that is not how it is viewed by the layers below.  IEEE 802.1Q is explicit 
about this, and the IEEE 802.1 working group sent a message to the 1588 WG 
asking us to point this out in the 2019 edition of 1588.

IEEE 1588-2019 subclause 7.3.1 starts with these two paragraphs:
"All messages, including PTP messages, shall be transmitted and received in 
conformance with the
standards governing the transport, network, and physical layers of the 
communication paths used.

NOTE-As an example, consider IEEE 1588 PTP Instances, specifically including 
Transparent Clocks, running on
IEEE 802.1Q communication paths. Suppose we have two Boundary Clocks separated 
by a Transparent Clock. The
Transparent Clock entity (the PTP stack running above the MAC layer) is 
required to insert the appropriate MAC
address of the Transparent Clock into the sourceAddress field of the Ethernet 
header for ALL messages it transmits.
Other communication protocols can have similar requirements."

Regards,
Doug
________
From: Ron Cohen 
Sent: Wednesday, May 22, 2024 11:57 PM
To: Doug Arnold ; tictoc@ietf.org
mailto:tictoc@ietf.org>>
Subject: RE: Enterprise Profile: Support for Non standard TCs


Hi Doug,



The draft states that deployments with NAT are out of scope of the document.

"In IPv4 networks some clocks might be hidden behind a NAT, which hides their 
IP addresses from the rest of the network. Note also that the use of NATs may 
place limitations on the topology of PTP networks, depending on the port 
forwarding scheme employed. Details of implementing PTP with NATs are out of 
scope of this document."

A PTP TC that is a bridge per 802.1q or an IPv4/6 router must not change the 
source IP address of PTP delay requests.



I've been working with TC solutions for more than 10 years, both 1-step PTP TCs 
in HW and 2-step TCs in HW+SW, and none modified the source IP address of 
E2E delay requests when working as either a bridge or a router.

This is the case for the products of the company I currently work for as well.



My input 

Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-25 Thread Ron / BCLUG

Steve Litt wrote on 2024-05-25 01:25:


That being said I don't think it calls for a full boycott of Linux,

>

Thanks Kyle. Like you, I don't think systemd calls for a boycott on
Linux, and I hadn't intended to imply it.


Cheers Steve, Kyle, et al.


I just wanted to say, despite the spirited debates, it's absolutely 
wonderful that there's top-notch software available for free that we all 
get to use in the way we choose to use it.



Thanks to all the contributors, and thanks to the list for giving us all 
a place to chat, rant, rave, and in the end, discuss our topics of interest!



rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-24 Thread Ron / BCLUG

Kyle Willett wrote on 2024-05-23 21:31:


One piece of software can't be that good at so many different tasks!


I'm not sure that logic holds up:

"Fedora can't be that good at so many different tasks"

"Linux kernel can't be that good at so many different tasks"




GNU utilities - contains logging tools, mcron cron job implementation, 
grub, ...




That's not really an apples-to-apples comparison, but packaging a bunch 
of tools under one moniker isn't uncommon.





> sudo replacement with run0

What did you think about the discussion (was it on this list?) about 
suid and the inherent risks with the (allegedly) spotty implementations 
of that vis-à-vis sudo?


It was over my head, but there were issues raised and sudo CVEs patches 
mentioned.


Someone with very deep knowledge of the topic proposed the run0 vs sudo 
and had some valid-looking reasons for doing so.




Now, granted systemd utils show up in a *lot* of places, giving valid 
reason to be curious about why.


On the other hand, a services management system probably should handle a 
lot of different functionality.



And, some of those new utilities have great features, i.e.:

* show me all log messages from postfix from 2 boots ago *only*

* show me all the "cron" jobs in order of when they next launch, the 
time elapsed since last launch,...


* show me a list of all services that start at boot time and how long 
they took to become active (wow, I just noticed it took 30.566s for 
apt-daily-upgrade.service to come up)







Admittedly, I'm not a fan of resolvectl and some other stuff, and more 
often than not use cron, not timers.



Cheers,

rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


(flink) branch master updated (0737220959f -> 71e6746727a)

2024-05-23 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 0737220959f [FLINK-35216] Support for RETURNING clause of JSON_QUERY
 add 0ec6302cff4 [FLINK-35347][table-common] Introduce RefreshWorkflow 
related implementation to support full refresh mode for materialized table
 add 62b8fee5208 [FLINK-35347][table] Introduce embedded scheduler to 
support full refresh mode for materialized table
 add 71e6746727a [FLINK-35347][table] Introduce EmbeddedWorkflowScheduler 
plugin based on embedded scheduler

No new revisions were added by this update.

Summary of changes:
 flink-table/flink-sql-gateway/pom.xml  |  26 ++
 .../table/gateway/rest/SqlGatewayRestEndpoint.java |  60 ++-
 .../CreateEmbeddedSchedulerWorkflowHandler.java|  98 
 .../DeleteEmbeddedSchedulerWorkflowHandler.java|  75 +++
 .../ResumeEmbeddedSchedulerWorkflowHandler.java|  75 +++
 .../SuspendEmbeddedSchedulerWorkflowHandler.java   |  75 +++
 .../AbstractEmbeddedSchedulerWorkflowHeaders.java  |  63 +++
 .../CreateEmbeddedSchedulerWorkflowHeaders.java}   |  65 ++-
 .../DeleteEmbeddedSchedulerWorkflowHeaders.java|  50 ++
 .../ResumeEmbeddedSchedulerWorkflowHeaders.java|  50 ++
 .../SuspendEmbeddedSchedulerWorkflowHeaders.java   |  50 ++
 .../header/session/ConfigureSessionHeaders.java|   4 +-
 .../header/statement/CompleteStatementHeaders.java |   4 +-
 ...CreateEmbeddedSchedulerWorkflowRequestBody.java | 105 +
 ...reateEmbeddedSchedulerWorkflowResponseBody.java |  53 +++
 .../EmbeddedSchedulerWorkflowRequestBody.java  |  55 +++
 .../rest/util/SqlGatewayRestAPIVersion.java|   5 +-
 .../gateway/workflow/EmbeddedRefreshHandler.java   |  84 
 .../workflow/EmbeddedRefreshHandlerSerializer.java |  45 ++
 .../workflow/EmbeddedWorkflowScheduler.java| 235 ++
 .../workflow/EmbeddedWorkflowSchedulerFactory.java |  67 +++
 .../flink/table/gateway/workflow/WorkflowInfo.java | 125 +
 .../scheduler/EmbeddedQuartzScheduler.java | 229 +
 .../workflow/scheduler/QuartzSchedulerUtils.java   | 125 +
 .../workflow/scheduler/SchedulerException.java}|  14 +-
 .../src/main/resources/META-INF/NOTICE |   9 +
 .../org.apache.flink.table.factories.Factory   |   1 +
 .../table/gateway/rest/RestAPIITCaseBase.java  |   6 +-
 .../rest/util/TestingSqlGatewayRestEndpoint.java   |   4 +-
 .../workflow/EmbeddedRefreshHandlerTest.java}  |  28 +-
 .../workflow/EmbeddedSchedulerRelatedITCase.java   | 350 ++
 .../gateway/workflow/QuartzSchedulerUtilsTest.java |  83 
 .../resources/sql_gateway_rest_api_v3.snapshot | 519 +
 .../table/refresh/ContinuousRefreshHandler.java|   2 +
 .../workflow/CreatePeriodicRefreshWorkflow.java|  85 
 ...owException.java => ResumeRefreshWorkflow.java} |  19 +-
 ...wException.java => SuspendRefreshWorkflow.java} |  19 +-
 .../flink/table/workflow/WorkflowException.java|   5 +-
 flink-table/pom.xml|   1 +
 39 files changed, 2887 insertions(+), 81 deletions(-)
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHandler.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/DeleteEmbeddedSchedulerWorkflowHandler.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHandler.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/SuspendEmbeddedSchedulerWorkflowHandler.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/AbstractEmbeddedSchedulerWorkflowHeaders.java
 copy 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/{statement/CompleteStatementHeaders.java
 => materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHeaders.java} 
(51%)
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/DeleteEmbeddedSchedulerWorkflowHeaders.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHeaders.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/SuspendEmbeddedSchedulerWorkflowHeaders.java
 create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/material

Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-23 Thread Ron / BCLUG

Steve Litt wrote on 2024-05-23 18:06:


I'll address his central point, which is that systemd has many
benefits. My rebuttal is that nobody needs that kind of complexity.


Computers are complex, imagine that.



Most systemd features can and have been done better and simpler other
ways.


Asserts facts not in evidence; show your evidence.

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: PG 12.2 ERROR: cannot freeze committed xmax

2024-05-23 Thread Ron Johnson
On Thu, May 23, 2024 at 9:41 AM bruno da silva  wrote:

> Hello,
> I have a deployment with PG 12.2 reporting ERROR: cannot freeze committed
> xmax
> using Red Hat Enterprise Linux 8.9.
>
> What is the recommended way to find any bug fixes that the version 12.2 had
> that could have caused this error.
>

https://www.postgresql.org/docs/release/

You're missing *four years* of bug fixes.

Could this error be caused by OS/Hardware related issues?
>

 Four years of bug fixes is more likely the answer.


Re: [Semibug] OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-23 Thread Ron / BCLUG

Steve Litt wrote on 2024-05-23 02:53:


LibreOffice' reason for existence is to interact with MS Office
documents. If it can't do that, why use it?


The blame for poor interaction lies with Microsoft, 100%.

Also, another major reason for LibreOffice is to have a full featured 
office suite that is *not* Microsoft Office, one that runs natively on 
all OSs. Which it has succeeded at nicely.




 From my perspective, LibreOffice suffers from the same problem now
afflicting most Linux distributions: Trying to be easy for Windows
people. Systemd


But systemd has absolutely nothing to do with being easy for Windows 
people. It exists to provide a services lifecycle management system.


Just because you dislike both of them does not mean the two are related 
somehow.




Once again, I'll link "The Tragedy of systemd" by Benno Rice, FreeBSD 
developer.  I'm still waiting for an anti-systemd person to address one 
single point he raised:


The presentation at linux.conf.au:


https://www.youtube.com/watch?v=o_AIw9bGogo


Specifically, "The arguments against systemd that people tend to 
advance", starting with "it violates the Unix philosophy":



https://youtu.be/o_AIw9bGogo?si=0xJ0-JpXGEBGpW0K=1040



The slide show (note its domain):

> 
https://papers.freebsd.org/2018/bsdcan/rice-The_Tragedy_of_systemd.files/rice-The_Tragedy_of_systemd.pdf



___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] LO backups & OO [was OT: is there any office package (especially spreadsheet) that lets me choose a PEN color]

2024-05-23 Thread Ron / BCLUG

CAREY SCHUG wrote on 2024-05-23 08:07:


LO vs OO (topic 1)

I was pissed when I was told I "had" to convert from OO to LO.

LO was buggy (see below)

later found a friend stayed with OO and was happy.

any opinions on which of LO  and OO  (or others) do the best job on reading in 
XL or other formats?


LibreOffice is the project where all the devs forked Open Office (an 
Oracle product at the time).


It's been refactored and received rapid development updates.

OpenOffice was abandoned by Oracle to the Apache foundation and 
virtually no one works on it other than a few commits by IBM employees 
(IBM has distributed OO in the past so wanted it to survive in some form).



LibreOffice beats Open Office by every metric available.

I can't speak to "XL" (XLS?) format specifically, but assume nothing has 
changed in that regard in 10 years for OO.



rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Off line HTML Documents and Safari Browser

2024-05-23 Thread Ron Canazzi

Hi Group,

I have a document that I downloaded and sent to my iPhone via e-mail 
attachment.  I saved it to Files. When I try to open it, Voice Dream 
Reader grabs it and opens it. How can I allow Safari Browser to open it?


--
Signature:
For a nation to admit it has done grievous wrongs and will strive to correct 
them for the betterment of all is no vice;
For a nation to claim it has always been great, needs no improvement  and to 
cling to its past achievements is no virtue!

--
The following information is important for all members of the V iPhone list.

If you have any questions or concerns about the running of this list, or if you 
feel that a member's post is inappropriate, please contact the owners or 
moderators directly rather than posting on the list itself.

Your V iPhone list moderator is Mark Taylor.  Mark can be reached at:  
mk...@ucla.edu.  Your list owner is Cara Quinn - you can reach Cara at 
caraqu...@caraquinn.com

The archives for this list can be searched at:
http://www.mail-archive.com/viphone@googlegroups.com/
--- 
You received this message because you are subscribed to the Google Groups "VIPhone" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to viphone+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/viphone/27ed8dc7-f09d-4893-b567-7c4ad29561c7%40roadrunner.com.


Re: [Semibug] internal number storage in libre office calc [was: LibreOffice is summing incorrectly]

2024-05-23 Thread Ron / BCLUG

CAREY SCHUG wrote on 2024-05-22 22:55:


OK, my spreadsheet is only 1.3 MB


1.3MB is minuscule in relation to any disk in the past 10 (20?) years.

What's your time worth?




single precision calculation is faster too.


How many microseconds could you save and how much time are you willing 
to invest into that?




(3) perhaps I could have all integers (100 times bigger than the desired
number), and shift the decimal while displaying, but if I have to learn
to type in everything, that is a LOT more work (mostly i now enter
numbers like 5 or 3 or 2.1 or .12 so typing 500, 300, 210 and 12 would
be more typing) and an even longer learning curve...


Yeah, it's going to cause errors in data entry, just to potentially 
avoid rounding errors of fractions of cents.




Probably LibreOffice and GnuCash are suitable for your needs (I've never 
looked at GnuCash).


Your file sizes without fiddling could still almost fit on a 3½" floppy. 
Or, eleventeen gazillion of them on a fingernail-sized media card. 
You're okay there too.



Sounds like things are actually fine and trying to optimize away via 
single vs double precision just ain't worth it.




Good luck,

rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-23 Thread Ron / BCLUG

Steve Litt wrote on 2024-05-22 23:26:


This command can be run in 3 seconds


Ctrl+S == saved, 0.3 seconds.

Haven't personally experienced much instability with LO.

Certainly would *not* advise against using it.



> LibreOffice is notorious for randomly, summarily and permanently
> changing styles.

I have had an issue with styles in the past, but that was in documents 
that were opened by other office suite apps too, so I never knew who to 
blame:


a) me?
b) LO?
c) OnlyOffice?
d) All of the above?
e) Something else (10 years of upgrades between edits)?


Maybe it was LO if it's a known issue.

Everything was recoverable.


rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] re tracking changes in libre office spreadsheet [was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color]

2024-05-23 Thread Ron / BCLUG

CAREY SCHUG wrote on 2024-05-22 23:46:


Perhaps version tracking would help with this?

All changes can be tracked and reviewed.


ok, found this page:
https://itsfoss.com/libreoffice-version-control/

it says

click on edit/track changes/record.

--done

click on view/toolbars/track changes


That's wrong, at least for v7.3.7.


Try Edit > Track Changes > Manage


A dialogue pops up with a list of all changes since recording started.

One can click through the list to highlight individual changes, and 
Accept or Reject them.



It's pretty nice.




I am on version 7.3.7.2, is that too old?


It works in this version (since about version 4.0 in 2013), but the link 
you found has (currently) invalid info.




when I do a general search for what is the current "libre office spreadsheet", 
I get libre office (overall) is 24.2, so clearly a different series of numbers.


As of February, they're going with the year.month format, which is kinda 
nice, once one knows what's going on.




rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


Re: [Semibug] LibreOffice is summing incorrectly

2024-05-22 Thread Ron / BCLUG

CAREY SCHUG wrote on 2024-05-22 14:27:


I typed 4.73 into a cell, it was actually stored that way


LibreOffice Calc (which uses 64-bit double-precision numbers 
internally)


https://help.libreoffice.org/latest/en-US/text/scalc/01/calculation_accuracy.html


never have more than 5 (actually 4.5, meaning 199.99 to .01) 
significant digits, so single precision could make the spreadsheet a 
LOT smaller.


Since all numbers are stored as 64 bit double-precision, it doesn't look
like there's a way to reduce spreadsheet size by fiddling with storage.



If I define a cell as ONLY a date OR a time, will it insist on
storing it the internal clock format, which requires double
precision?


Yes. From the link above:

internally, any time is a fraction of a day, 12:00 (noon) being 
represented as 0.5.




Or better yet, store (the non date/time) as 100x integers, meaning
.02 would be stored as 2, and 5 would be stored as 500.


If you input numbers as n*100 and display them back as n÷100, that might
work nicely. I've heard of that technique used in financial transactions.
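
A tiny generic sketch of that scaled-integer idea (nothing Calc-specific; the 
values are just the 4.73 / 2.1 / .12 examples from this thread, and the SQL 
(PostgreSQL flavour) is only used here as a convenient calculator for exact types):

-- Amounts kept as whole hundredths; integer addition never hits binary
-- rounding, and the division by 100 happens only at display time.
SELECT 473 + 210 + 12                            AS total_hundredths,   -- 695, exact
       round((473 + 210 + 12)::numeric / 100, 2) AS total_for_display;  -- 6.95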



For displaying, I would like to DISPLAY in non-scientific format,
but with a limited number of significant digits


Number formatting supports custom formats, so that's do-able.



Inherent Accuracy Problem

LibreOffice Calc, just like most other spreadsheet software, uses
floating-point math capabilities available on hardware. Given that
most contemporary hardware uses binary floating-point arithmetic with
limited precision defined in IEEE 754 standard, many decimal numbers
- including as simple as 0.1 - cannot be precisely represented in
LibreOffice Calc (which uses 64-bit double-precision numbers
internally).



That link is pretty interesting, and I didn't realize time formats were 
susceptible to rounding issues; I expected them to be stored in Unix 
epoch format.
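
The comparison trap behind all of this is easy to reproduce anywhere IEEE 754 
doubles are in play; for instance, the same thing in PostgreSQL-flavoured SQL, 
where float8 is a 64-bit double and numeric is an exact decimal type:

-- 0.1 and 0.2 have no exact binary representation, so the double-precision
-- sum is not exactly 0.3; the exact decimal type has no such problem.
SELECT 0.1::float8 + 0.2::float8 = 0.3::float8 AS doubles_equal,   -- false
       0.1::numeric + 0.2::numeric = 0.3       AS numerics_equal;  -- true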



rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


[TICTOC]Re: Enterprise Profile: Support for Non standard TCs

2024-05-22 Thread Ron Cohen
Hi Doug,

The draft states that deployments with NAT are out of scope of the document.

"In IPv4 networks some clocks might be hidden behind a NAT, which hides their 
IP addresses from the rest of the network. Note also that the use of NATs may 
place limitations on the topology of PTP networks, depending on the port 
forwarding scheme employed. Details of implementing PTP with NATs are out of 
scope of this document."
A PTP TC that is a bridge per 802.1q or an IPv4/6 router must not change the 
source IP address of PTP delay requests.

I've been working with TC solutions for more than 10 years, both 1-step PTP TCs 
in HW and 2-step in HW+SW, and none of them modified the source IP address of 
E2E delay requests when working as either a bridge or a router.
This is the case for the products of the company I currently work for as well.

My input is that per my understanding the following is not true for standard 
TCs:

"This is important since Transparent Clocks will treat PTP messages that are 
altered at the PTP application layer as new IP packets and new Layer 2 frames 
when the PTP messages are retransmitted."

And with NAT services out of scope, this part should be removed in my opinion 
too:

"In PTP Networks that contain Transparent Clocks, timeTransmitters might 
receive Delay Request messages that no longer contains the IP Addresses of the 
timeReceivers. This is because Transparent Clocks might replace the IP address 
of Delay Requests with their own IP address after updating the Correction 
Fields. For this deployment scenario timeTransmitters will need to have 
configured tables of timeReceivers' IP addresses and associated Clock 
Identities in order to send Delay Responses to the correct PTP Nodes"

I don't have further new input beyond that.

Best,
Ron

From: Doug Arnold 
Sent: Thursday, May 23, 2024 12:05 AM
To: Ron Cohen ; tictoc@ietf.org
Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs

Prioritize security for external emails: Confirm sender and content safety 
before clicking links or opening attachments
____
Hello Ron,

For Ethernet - IEEE 802.1Q, I can't remember the RFCs for IPv4 and IPv6 but you 
can look them up.

Here is the thing. I understand from a network layer model perspective a TC 
should not change the payload for a frame/packet and just forward it.  However, 
there is no other way to do a cut-through 1-step TC. I pointed that out to the 
folks in IEEE 802.1 but they ignored me.  I know for a fact that multiple 
companies' implementations of TCs do not replace the source address before 
retransmitting.  I don't blame them.  The standards are preventing a valuable 
use case just to preserve the purity of their layer model.  I would be 
surprised if 1588 is the only technology that needs to change message fields on 
the fly in a cut through switch.

Regards,
Doug
____
From: Ron Cohen mailto:r...@marvell.com>>
Sent: Wednesday, May 22, 2024 2:58 AM
To: Doug Arnold 
mailto:doug.arn...@meinberg-usa.com>>; 
tictoc@ietf.org<mailto:tictoc@ietf.org> 
mailto:tictoc@ietf.org>>
Subject: RE: Enterprise Profile: Support for Non standard TCs


Hi Doug,



TCs are not supposed to change the source IP address of delay requests.



If the TC is a layer2 switch/bridge, it must not modify the source MAC address 
while forwarding and must never touch the layer3 addresses.

If the TC is a layer3 IP router, it must not modify the source IP address while 
forwarding and must change the source MAC address to the MAC address of its 
egress port.



If the TC is a layer4 device, e.g., a NAT device, it modifies the source IP 
address of messages as it is its functionality. It may be the case that such 
functionality is required in the enterprise. My point is that it is far from 
obvious and the draft needs to elaborate why it's needed.



>> This is required by the standards that specify the transport networks.

I would appreciate if you point to the relevant standards.



The draft states that additional support is required for this deployment 
scenario:

"For this deployment scenario timeTransmitters will need to have configured 
tables of timeReceivers' IP addresses and associated Clock Identities in order 
to send Delay Responses to the correct PTP Nodes"



These tables would be part of the IEEE1588 spec if this TC behavior was 
standard. It is not trivial to add support for these tables in HW, if you want 
to support scale and speed.



Best,

Ron



From: Doug Arnold 
mailto:doug.arn...@meinberg-usa.com>>
Sent: Wednesday, May 22, 2024 12:36 AM
To: Ron Cohen mailto:r...@marvell.com>>; 
tictoc@ietf.org<mailto:tictoc@ietf.org>
Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs



Prioritize security for external emails: Confirm sender and content safety 
before clicking links or opening attachments

___

Re: [Semibug] OT: is there any office package (especially spreadsheet) that lets me choose a PEN color

2024-05-22 Thread Ron / BCLUG

CAREY SCHUG wrote on 2024-05-22 15:34:


I would like to choose a PEN color, e.g. red, no matter what I enter or
change, no matter where, it will be in the pen color.


As Carl mentioned, LibreOffice supports colourizing text.



so I can make a group of changes, then go back and verify them, when
confirmed, change everything to black.


Perhaps version tracking would help with this?

All changes can be tracked and reviewed.


Also, there's the ability to add comments, which might help with the 
review process.



rb

___
Semibug mailing list
Semibug@lists.nycbug.org
https://lists.nycbug.org:8443/mailman/listinfo/semibug


[Int-area] ICMP Considerations

2024-05-22 Thread Ron Bonica
Folks,

Over the years, I have written several forwarding plane documents that mention 
ICMP. During the review of these documents, people have raised issues like 
the following:


  * shouldn't we mention that ICMP message delivery is not reliable?
  * shouldn't we mention that ICMP messages are rate limited?
  * How is the ICMP message processed at its destination?

In each of these documents, I have added an ICMP considerations section to 
address these issues. Rather than repeating that text in every document we 
write in the future, I have abstracted it into a separate document.

If anyone would like to contribute to this document, it can be found at 
https://github.com/ronbonica/ICMP

Please send a private email if you are interested in contributing to the 
document.


Ron





Juniper Business Use Only
___
Int-area mailing list -- int-area@ietf.org
To unsubscribe send an email to int-area-le...@ietf.org


Re: search_path and SET ROLE

2024-05-22 Thread Ron Johnson
On Wed, May 22, 2024 at 2:02 PM Isaac Morland 
wrote:

> On Wed, 22 May 2024 at 13:48, Ron Johnson  wrote:
>
> As a superuser administrator, I need to be able to see ALL tables in ALL
>> schemas when running "\dt", not just the ones in "$user" and public.  And I
>> need it to act consistently across all the systems.
>>
>
> \dt *.*
>

Also shows information_schema, pg_catalog, and pg_toast.  I can adjust to
that, though.
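
For what it's worth, a plain catalog query sidesteps those (just a sketch; 
pg_tables is a built-in view, and the filter list is whatever you want hidden):

SELECT schemaname, tablename
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
ORDER BY schemaname, tablename;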


> But I am skeptical how often you really want this in a real database with
> more than a few tables. Surely \dn+ followed by \dt [schemaname].* for a
> few strategically chosen [schemaname] would be more useful?
>

More than you'd think.  I'm always looking up the definition of this table
or that table (mostly for indices and keys), and I never remember which
schema they're in.


Re: search_path wildcard?

2024-05-22 Thread Ron Johnson
On Wed, May 22, 2024 at 1:58 PM Tom Lane  wrote:

> Ron Johnson  writes:
> > That would be a helpful feature for administrators, when there are
> multiple
> > schemas in multiple databases, on multiple servers: superusers get ALTER
> > ROLE foo SET SEARCH_PATH  = '*'; and they're done with it.
>
> ... and they're pwned within five minutes by any user with the wits
> to create a trojan-horse function or operator.  Generally speaking,
> you want admins to run with a minimal search path not a maximal one.
>

Missing tables when running "\dt" is a bigger hassle.


Re: search_path wildcard?

2024-05-22 Thread Ron Johnson
On Wed, May 22, 2024 at 12:53 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, May 22, 2024, 10:36 Ron Johnson  wrote:
>
>> This doesn't work, and I've found nothing similar:
>> ALTER ROLE foo SET SEARCH_PATH  = '*';
>>
>
> Correct, you cannot do that.
>

That would be a helpful feature for administrators, when there are multiple
schemas in multiple databases, on multiple servers: superusers get ALTER
ROLE foo SET SEARCH_PATH  = '*'; and they're done with it.


Re: search_path and SET ROLE

2024-05-22 Thread Ron Johnson
On Wed, May 22, 2024 at 1:10 PM Tom Lane  wrote:

> Ron Johnson  writes:
> > It seems that the search_path of the role that you SET ROLE to does not
> > become the new search_path.
>
> It does for me:
>
> regression=# create role r1;
> CREATE ROLE
> regression=# create schema r1 authorization r1;
> CREATE SCHEMA
> regression=# select current_schemas(true), current_user;
>current_schemas   | current_user
> -+--
>  {pg_catalog,public} | postgres
> (1 row)
>
> regression=# set role r1;
> SET
> regression=> select current_schemas(true), current_user;
> current_schemas | current_user
> +--
>  {pg_catalog,r1,public} | r1
> (1 row)
>
> regression=> show search_path ;
>search_path
> -
>  "$user", public
> (1 row)
>
> The fine manual says that $user tracks the result of
> CURRENT_USER, and at least in this example it's doing that.
> (I hasten to add that I would not swear there are no
> bugs in this area.)
>
> > Am I missing something, or is that PG's behavior?
>
> I bet what you missed is granting (at least) USAGE on the
> schema to that role.  PG will silently ignore unreadable
> schemas when computing the effective search path.
>

There are multiple schemata in (sometimes) multiple databases on (many)
multiple servers.

As a superuser administrator, I need to be able to see ALL tables in ALL
schemas when running "\dt", not just the ones in "$user" and public.  And I
need it to act consistently across all the systems.

(Heck, none of our schemas are named the same as roles.)

This would be useful for account maintenance:

CREATE ROLE dbagrp SUPERUSER INHERIT NOLOGIN;
ALTER ROLE dbagrp SET search_path = public, dba, sch1, sch2, sch3, sch4;
CREATE USER joe IN GROUP dbagrp INHERIT PASSWORD = 'linenoise';

Then, as user joe:
SHOW search_path;
   search_path
-
 "$user", public
(1 row)
SET ROLE dbagrp RELOAD SESSION; -- note the new clause
SHOW search_path;
   search_path
---
public , dba, sch1, sch2, sch3, sch4
(1 row)

When a new DBA comes on board, add him/her to dbagrp, and they
automagically have everything that dbagrp has.
Now, each DBA must individually be given a search_path.  If you forget, or
forget to add some schemas, etc., mistakes get made and time is wasted.


search_path wildcard?

2024-05-22 Thread Ron Johnson
This doesn't work, and I've found nothing similar:
ALTER ROLE foo SET SEARCH_PATH  = '*';

Is there a single SQL statement which will generate a search path based
on information_schema.schemata, or do I have to write an anonymous DO
procedure?
SELECT schema_name FROM information_schema.schemata WHERE schema_name !=
'information_schema' AND schema_name NOT LIKE 'pg_%';
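
For reference, one way that query could be folded into a single round trip (a 
sketch only: 'dbagrp' is just an example role name; format, string_agg and 
quote_ident are built-ins, and psql's \gexec, available since 9.6, executes 
whatever the query returns):

SELECT format('ALTER ROLE dbagrp SET search_path = %s',
              string_agg(quote_ident(schema_name), ', '))
  FROM information_schema.schemata
 WHERE schema_name != 'information_schema'
   AND schema_name NOT LIKE 'pg_%'
\gexec

An anonymous DO block with a dynamic EXECUTE would be the server-side
equivalent of the same idea.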


search_path and SET ROLE

2024-05-22 Thread Ron Johnson
PG 9.6.24 (Soon, I swear!)

It seems that the search_path of the role that you SET ROLE to does not
become the new search_path.

Am I missing something, or is that PG's behavior?

AS USER postgres


$ psql -h 10.143.170.52 -Xac "CREATE ROLE dbagrp SUPERUSER INHERIT NOLOGIN;"
CREATE ROLE dbagrp SUPERUSER INHERIT NOLOGIN;
CREATE ROLE

$ psql -h 10.143.170.52 -Xac "CREATE USER rjohnson IN GROUP dbagrp INHERIT;"
CREATE USER rjohnson IN GROUP dbagrp INHERIT;
CREATE ROLE

[postgres@FISPMONDB001 ~]$ psql -h 10.143.170.52 -Xac "CREATE USER
\"11026270\" IN GROUP dbagrp INHERIT PASSWORD '${NewPass}' VALID UNTIL
'2024-06-30 23:59:59';"
CREATE USER "11026270" IN GROUP dbagrp INHERIT PASSWORD 'linenoise' VALID
UNTIL '2024-06-30 23:59:59';
CREATE ROLE

$ psql -h 10.143.170.52 -Xac "ALTER ROLE dbagrp set search_path = dbagrp,
public, dba, cds, tms;"
ALTER ROLE dbagrp set search_path = dbagrp, public, dba, cds, tms;
ALTER ROLE

AS USER rjohnson


[rjohnson@fpslbxcdsdbppg1 ~]$ psql -dCDSLBXW
psql (9.6.24)
Type "help" for help.

CDSLBXW=> SET ROLE dbagrp;
SET
CDSLBXW=#
CDSLBXW=# SHOW SEARCH_PATH;
   search_path
-
 "$user", public
(1 row)


Back to user postgres
=

$ psql -h 10.143.170.52 -Xac "ALTER ROLE rjohnson set search_path = dbagrp,
public, dba, cds, tms;"
ALTER ROLE rjohnson set search_path = dbagrp, public, dba, cds, tms;
ALTER ROLE

Back to user rjohnson
=

[rjohnson@fpslbxcdsdbppg1 ~]$ psql -dCDSLBXW
psql (9.6.24)
Type "help" for help.

CDSLBXW=>
CDSLBXW=> SET ROLE dbagrp;
SET

CDSLBXW=# SHOW SEARCH_PATH;
  search_path
---
 dbagrp, public, dba, cds, tms
(1 row)


Re: DFSort query

2024-05-22 Thread Ron Thomas
My apologies, Kolusu, for the incorrect details I provided.

Thanks much for the sample job; it is working well for my requirement.

Regards
 Ron T

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


[TICTOC]Re: Enterprise Profile: Support for Non standard TCs

2024-05-22 Thread Ron Cohen
Hi Doug,

TCs are not supposed to change the source IP address of delay requests.

If the TC is a layer2 switch/bridge, it must not modify the source MAC address 
while forwarding and must never touch the layer3 addresses.
If the TC is a layer3 IP router, it must not modify the source IP address while 
forwarding and must change the source MAC address to the MAC address of its 
egress port.

If the TC is a layer4 device, e.g., a NAT device, it modifies the source IP 
address of messages as it is its functionality. It may be the case that such 
functionality is required in the enterprise. My point is that it is far from 
obvious and the draft needs to elaborate why it's needed.

>> This is required by the standards that specify the transport networks.
I would appreciate if you point to the relevant standards.

The draft states that additional support is required for this deployment 
scenario:
"For this deployment scenario timeTransmitters will need to have configured 
tables of timeReceivers' IP addresses and associated Clock Identities in order 
to send Delay Responses to the correct PTP Nodes"

These tables would be part of the IEEE1588 spec if this TC behavior was 
standard. It is not trivial to add support for these tables in HW, if you want 
to support scale and speed.

Best,
Ron

From: Doug Arnold 
Sent: Wednesday, May 22, 2024 12:36 AM
To: Ron Cohen ; tictoc@ietf.org
Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs

Prioritize security for external emails: Confirm sender and content safety 
before clicking links or opening attachments
________
Hello Ron,

Yes.  A TC is required to change the source address of a message at least for 
Ethernet and IP mappings.  This is not an IEEE 1588 decision.  This is required 
by the standards that specify the transport networks. Ethernet (IEEE 802.1Q) 
IPv4 and IPv6.  A TC effectively changes the payload of the messages from the 
point of view of L2 and L3, so it is a new frame and new packet to those 
layers.  I think that IPv4 has an option to alter a message in-route, but the 
node is supposed to zero out the source address.

Regards,
Doug

________
From: Ron Cohen mailto:r...@marvell.com>>
Sent: Tuesday, May 7, 2024 12:43 PM
To: tictoc@ietf.org<mailto:tictoc@ietf.org> 
mailto:tictoc@ietf.org>>
Subject: [TICTOC]Enterprise Profile: Support for Non standard TCs


Hi,



I'm late to the game here. I apologize in advance if this has already been 
discussed and decided:



I can't figure out why the profile needs to support non-standard TCs, or what 
seems to be a strange combination of a NAT+TC devices:



"In PTP Networks that contain Transparent Clocks, timeTransmitters

   might receive Delay Request messages that no longer contains the IP

   Addresses of the timeReceivers.  This is because Transparent Clocks

   might replace the IP address of Delay Requests with their own IP

   address after updating the Correction Fields.  For this deployment

   scenario timeTransmitters will need to have configured tables of

   timeReceivers' IP addresses and associated Clock Identities in order

   to send Delay Responses to the correct PTP Nodes."



Is a standard TC allowed to change the source IP address of messages?



There should be a strong reason to require support for such devices in a 
standard profile.



Best,

Ron



/*

*  Ron Cohen

*  Email: r...@marvell.com<mailto:r...@marvell.com>

*  Mobile: +972.54.5751506

*/


___
TICTOC mailing list -- tictoc@ietf.org
To unsubscribe send an email to tictoc-le...@ietf.org


[TICTOC]Enterprise Profile: Support for Non standard TCs

2024-05-21 Thread Ron Cohen
Hi,

I'm late to the game here. I apologize in advance if this has already been 
discussed and decided:

I can't figure out why the profile needs to support non-standard TCs, or what 
seems to be a strange combination of a NAT+TC devices:

"In PTP Networks that contain Transparent Clocks, timeTransmitters
   might receive Delay Request messages that no longer contains the IP
   Addresses of the timeReceivers.  This is because Transparent Clocks
   might replace the IP address of Delay Requests with their own IP
   address after updating the Correction Fields.  For this deployment
   scenario timeTransmitters will need to have configured tables of
   timeReceivers' IP addresses and associated Clock Identities in order
   to send Delay Responses to the correct PTP Nodes."

Is a standard TC allowed to change the source IP address of messages?

There should be a strong reason to require support for such devices in a 
standard profile.

Best,
Ron

/*
*  Ron Cohen
*  Email: r...@marvell.com<mailto:r...@marvell.com>
*  Mobile: +972.54.5751506
*/

___
TICTOC mailing list -- tictoc@ietf.org
To unsubscribe send an email to tictoc-le...@ietf.org


DFSort query

2024-05-21 Thread Ron Thomas
Hi All-

In the data below, within each cross ref nbr, we need to take the Pacct_NBR 
from the seq nbr = 1 row and pair it with the related acct nbrs of that set.

In the dataset below, cross ref nbr = 24538 has 2 sets of data and 24531 has 
1 set.



Acct_NBR    Pacct_NBR       LAST_CHANGE_TS              CROSS_REF_NBR  SEQ_NBR
600392811   1762220138659   2024-04-18-10.38.09.570030  24538          1
505756281   1500013748790   2024-04-18-10.38.09.570030  24538          2
59383061    1500013748790   2024-04-18-10.38.09.570030  24538          3
59267071    1500013748790   2024-04-18-10.38.09.570030  24538          4
505756281   1500013748790   2024-01-15-08.05.14.038792  24538          1
59383061    1500013748790   2024-01-15-08.05.14.038792  24538          2
59267071    1500013748790   2024-01-15-08.05.14.038792  24538          3
600392811   1762220138659   2024-01-15-08.05.14.038792  24538          4
600392561   1762220138631   2024-01-15-08.05.14.038792  24531          1

Output 

Acct_NBR    Pacct_NBR
600392811   1762220138659
505756281   1762220138659
59383061    1762220138659
59267071    1762220138659
505756281   1500013748790
59383061    1500013748790
59267071    1500013748790
600392811   1500013748790
600392561   1762220138631

Data size
Acct_NBR 10 bytes
Pacct_NBR 15 bytes
LAST_CHANGE_TS 20 bytes
CROSS_REF_NBR  5 bytes
SEQ_NBR 2 bytes
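
Expressed as ordinary SQL, the requirement is roughly the following (table name 
is illustrative; in the sample each set is keyed by CROSS_REF_NBR plus 
LAST_CHANGE_TS, and the SEQ_NBR = 1 row supplies the Pacct_NBR for every row of 
its set); what I'm after is the DFSORT equivalent of this:

-- For each (CROSS_REF_NBR, LAST_CHANGE_TS) set, take the Pacct_NBR of the
-- SEQ_NBR = 1 row and pair it with every Acct_NBR in that set.
SELECT ACCT_NBR,
       FIRST_VALUE(PACCT_NBR) OVER (
           PARTITION BY CROSS_REF_NBR, LAST_CHANGE_TS
           ORDER BY SEQ_NBR) AS PACCT_NBR
FROM   XREF_INPUT;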

Could someone please let me know how we can build this data using dfsort ?

Regards
Ron T

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: pg_dump and not MVCC-safe commands

2024-05-20 Thread Ron Johnson
On Mon, May 20, 2024 at 11:54 AM Christophe Pettus  wrote:

>
>
> > On May 20, 2024, at 08:49, PetSerAl  wrote:
> > Basically, you need application cooperation to make
> > consistent live database backup.
>
> If it is critical that you have a completely consistent backup as of a
> particular point in time, and you are not concerned about restoring to a
> different processor architecture, pg_basebackup is a superior solution to
> pg_dump.
>

Single-threaded, and thus dreadfully slow.  I'll stick with PgBackRest.


Re: [DISCUSSION] FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-05-19 Thread Ron Liu
Hi, Lincoln

>  2. Regarding the options in HashAggCodeGenerator, since this new feature
has gone
through a couple of release cycles and could be considered for
PublicEvolving now,
cc @Ron Liu   WDYT?

Thanks for cc'ing me. +1 for making these options public now.

Best,
Ron

Benchao Li  于2024年5月20日周一 13:08写道:

> I agree with Lincoln about the experimental features.
>
> Some of these configurations do not even have proper implementation,
> take 'table.exec.range-sort.enabled' as an example, there was a
> discussion[1] about it before.
>
> [1] https://lists.apache.org/thread/q5h3obx36pf9po28r0jzmwnmvtyjmwdr
>
> Lincoln Lee  于2024年5月20日周一 12:01写道:
> >
> > Hi Jane,
> >
> > Thanks for the proposal!
> >
> > +1 for the changes except for these annotated as experimental ones.
> >
> > For the options annotated as experimental,
> >
> > +1 for the moving of IncrementalAggregateRule & RelNodeBlock.
> >
> > For the rest of the options, there are some suggestions:
> >
> > 1. for the batch related parameters, it's recommended to either delete
> > them (leaving the necessary defaults value in place) or leave them as
> they
> > are. Including:
> > FlinkRelMdRowCount
> > FlinkRexUtil
> > BatchPhysicalSortRule
> > JoinDeriveNullFilterRule
> > BatchPhysicalJoinRuleBase
> > BatchPhysicalSortMergeJoinRule
> >
> > What I understand about the history of these options is that they were
> once
> > used for fine
> > tuning for tpc testing, and the current flink planner no longer relies on
> > these internal
> > options when testing tpc[1]. In addition, these options are too obscure
> for
> > SQL users,
> > and some of them are actually magic numbers.
> >
> > 2. Regarding the options in HashAggCodeGenerator, since this new feature
> > has gone
> > through a couple of release cycles and could be considered for
> > PublicEvolving now,
> > cc @Ron Liu   WDYT?
> >
> > 3. Regarding WindowEmitStrategy, IIUC it is currently unsupported on TVF
> > window, so
> > it's recommended to keep it untouched for now and follow up in
> > FLINK-29692[2]. cc @Xuyang 
> >
> > [1]
> >
> https://github.com/ververica/flink-sql-benchmark/blob/master/tools/common/flink-conf.yaml
> > [2] https://issues.apache.org/jira/browse/FLINK-29692
> >
> >
> > Best,
> > Lincoln Lee
> >
> >
> > Yubin Li  于2024年5月17日周五 10:49写道:
> >
> > > Hi Jane,
> > >
> > > Thank Jane for driving this proposal !
> > >
> > > This makes sense for users, +1 for that.
> > >
> > > Best,
> > > Yubin
> > >
> > > On Thu, May 16, 2024 at 3:17 PM Jark Wu  wrote:
> > > >
> > > > Hi Jane,
> > > >
> > > > Thanks for the proposal. +1 from my side.
> > > >
> > > >
> > > > Best,
> > > > Jark
> > > >
> > > > On Thu, 16 May 2024 at 10:28, Xuannan Su 
> wrote:
> > > >
> > > > > Hi Jane,
> > > > >
> > > > > Thanks for driving this effort! And +1 for the proposed changes.
> > > > >
> > > > > I have one comment on the migration plan.
> > > > >
> > > > > For options to be moved to another module/package, I think we have
> to
> > > > > mark the old option deprecated in 1.20 for it to be removed in 2.0,
> > > > > according to the API compatibility guarantees[1]. We can introduce
> the
> > > > > new option in 1.20 with the same option key in the intended class.
> > > > > WDYT?
> > > > >
> > > > > Best,
> > > > > Xuannan
> > > > >
> > > > > [1]
> > > > >
> > >
> https://nightlies.apache.org/flink/flink-docs-master/docs/ops/upgrading/#api-compatibility-guarantees
> > > > >
> > > > >
> > > > >
> > > > > On Wed, May 15, 2024 at 6:20 PM Jane Chan 
> > > wrote:
> > > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I'd like to start a discussion on FLIP-457: Improve Table/SQL
> > > > > Configuration
> > > > > > for Flink 2.0 [1]. This FLIP revisited all Table/SQL
> configurations
> > > to
> > > > > > improve user-friendliness and maintainability as Flink moves
> toward
> > > 2.0.
> > > > > >
> > > > > > I am looking forward to your feedback.
> > > > > >
> > > > > > Best regards,
> > > > > > Jane
> > > > > >
> > > > > > [1]
> > > > > >
> > > > >
> > >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992
> > > > >
> > >
>
>
>
> --
>
> Best,
> Benchao Li
>


Which iPhone App Allows The Creation Of level 1 2 and so on of Headings that is accessible with VoiceOver?

2024-05-18 Thread Ron Canazzi

Hi Group,

I want to make a document as part of a game I am creating that will 
quickly allow me to access aspects of the game via a document using 
header navigation. I tried using Microsoft Word for PC and moving it to 
the iPhone but the header navigation seemed broken. Only a few words 
would appear on each line for which MS Word had created a heading.


Which app on the iPhone itself would be used to create various levels of 
headings that would be accessible with VoiceOver?


--
Signature:
For a nation to admit it has done grievous wrongs and will strive to correct 
them for the betterment of all is no vice;
For a nation to claim it has always been great, needs no improvement  and to 
cling to its past achievements is no virtue!

--
The following information is important for all members of the V iPhone list.

If you have any questions or concerns about the running of this list, or if you 
feel that a member's post is inappropriate, please contact the owners or 
moderators directly rather than posting on the list itself.

Your V iPhone list moderator is Mark Taylor.  Mark can be reached at:  
mk...@ucla.edu.  Your list owner is Cara Quinn - you can reach Cara at 
caraqu...@caraquinn.com

The archives for this list can be searched at:
http://www.mail-archive.com/viphone@googlegroups.com/
--- 
You received this message because you are subscribed to the Google Groups "VIPhone" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to viphone+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/viphone/d9a737e0-dec2-b5e1-e6db-3d91fadc5096%40roadrunner.com.


[cctalk] Re: Mylar/Sponge Keyboard Repair Kits

2024-05-17 Thread Ron Pool via cctalk
TexElec makes and sells replacement "foam and foil" discs for those keyboards.  
See https://texelec.com/product/foam-capacitive-pads-keytronic/ .  They are 
usually shown as on backorder.  The one time I ordered a set, they were on 
backorder and arrived a few weeks after I placed the order.  I wouldn't 
recommend waiting for them to be in stock before ordering as that might require 
a VERY long wait.

-- Ron Pool

-Original Message-
From: Marvin Johnston via cctalk  
Sent: Friday, May 17, 2024 5:49 AM
To: cctalk@classiccmp.org
Cc: Marvin Johnston 
Subject: [cctalk] Mylar/Sponge Keyboard Repair Kits

I've got a couple of keyboards where the sponge has disintegrated to the 
point they no longer work. The latest one is a Vector 3 keyboard and I 
would love to get it fixed.

Can repair kits still be purchased and/or are the instructions for 
making those sponge/mylar pieces available?

Thanks!

Marvin





(flink) branch master updated: [FLINK-35346][table-common] Introduce workflow scheduler interface for materialized table

2024-05-16 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 1378979f02e [FLINK-35346][table-common] Introduce workflow scheduler 
interface for materialized table
1378979f02e is described below

commit 1378979f02eed55bbf3f91b08ec166d55b2c42a6
Author: Ron 
AuthorDate: Thu May 16 19:41:54 2024 +0800

[FLINK-35346][table-common] Introduce workflow scheduler interface for 
materialized table

[FLINK-35346][table-common] Introduce workflow scheduler interface for 
materialized table

This closes #24767
---
 .../apache/flink/table/factories/FactoryUtil.java  |   9 +-
 .../table/factories/WorkflowSchedulerFactory.java  |  56 +++
 .../factories/WorkflowSchedulerFactoryUtil.java| 156 ++
 .../table/workflow/CreateRefreshWorkflow.java  |  29 
 .../table/workflow/DeleteRefreshWorkflow.java  |  48 ++
 .../table/workflow/ModifyRefreshWorkflow.java  |  40 +
 .../flink/table/workflow/RefreshWorkflow.java  |  34 
 .../flink/table/workflow/WorkflowException.java|  37 +
 .../flink/table/workflow/WorkflowScheduler.java|  91 +++
 .../workflow/TestWorkflowSchedulerFactory.java | 175 +
 .../workflow/WorkflowSchedulerFactoryUtilTest.java | 107 +
 .../org.apache.flink.table.factories.Factory   |   1 +
 12 files changed, 782 insertions(+), 1 deletion(-)

diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java
index d8d6d7e9000..5d66b23c3d8 100644
--- 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java
@@ -167,6 +167,13 @@ public final class FactoryUtil {
 + "tasks to advance their watermarks 
without the need to wait for "
 + "watermarks from this source while it is 
idle.");
 
+public static final ConfigOption WORKFLOW_SCHEDULER_TYPE =
+ConfigOptions.key("workflow-scheduler.type")
+.stringType()
+.noDefaultValue()
+.withDescription(
+"Specify the workflow scheduler type that is used 
for materialized table.");
+
 /**
  * Suffix for keys of {@link ConfigOption} in case a connector requires 
multiple formats (e.g.
  * for both key and value).
@@ -903,7 +910,7 @@ public final class FactoryUtil {
 return loadResults;
 }
 
-private static String stringifyOption(String key, String value) {
+public static String stringifyOption(String key, String value) {
 if (GlobalConfiguration.isSensitive(key)) {
 value = HIDDEN_CONTENT;
 }
diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/WorkflowSchedulerFactory.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/WorkflowSchedulerFactory.java
new file mode 100644
index 000..72e144f7d19
--- /dev/null
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/WorkflowSchedulerFactory.java
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.factories;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.table.workflow.WorkflowScheduler;
+
+import java.util.Map;
+
+/**
+ * A factory to create a {@link WorkflowScheduler} instance.
+ *
+ * See {@link Factory} for more information about the general design of a 
factory.
+ */
+@PublicEvolving
+public interface WorkflowSchedulerFactory extends Factory {
+
+/** Create a workflow scheduler instance which interacts with external 
scheduler service. */
+ 

Re: [EVDL] 46 Pure EVs for sale, Teslas competition.

2024-05-15 Thread Ron Solberg via EV
It seems that since 2017, Tesla has gone into reverse on their original 
master plan.

So let China take the lead/heat for pushing out ICE cars. In five or ten years 
the ICE folks can adjust/catch up. No bailout needed. The pressure on Musk is 
reduced and Optimus can go to Mars and/or drive Robotaxis, a win-win except for 
the carbon problem.

Ron Solberg

> On May 14, 2024, at 7:22 PM, EV List Lackey via EV  wrote:
> 
> On 14 May 2024 at 10:35, Rush via EV wrote:
> 
>> I think that anybody having any knowledge of how a business is conducted
>> would say that 'yes, profit is a good thing'.
> 
> Let's restore the context:
> 
>> AND still make a hefty profit on each car
> 
> As I understood it, and someone correct me if this is wrong, the original 
> Tesla "master plan" was to get to mass market EVs.  They'd start with 
> building luxury EVs for rich people, and use the presumably *hefty* profits 
> from that venture to design and build EVs for the rest of us.
> 
> That plan was written a long time ago - maybe 2008?  Again, someone please 
> help me out here.
> 
> The Model 3 was introduced 7 years ago, in 2017.  That was real progress 
> toward affordable EVs, 9 years on from the master plan's inception.  Not 
> bad.
> 
> Is that master plan still their guide?  If so, what progress have they made 
> on it since?
> 
> Not the Model Y (2020).  It's more expensive.
> 
> I'm pretty sure it's not the Cybertruck (2023), either.
> 
> It seems that since 2017, Tesla has gone into reverse on their original 
> master plan.
> 
> Their recent investor call suggested pretty strongly that they're going to 
> start using their EV profits less to develop EVs, and more to develop AI, 
> autonomy software, and robotaxis.
> 
> Their recent layoffs seem to confirm that direction.
> 
> What do you think of this?
> 
> Is it a good thing?
> 
> Is it likely to be permanent, or is it just another Elon Musk shot-from-the-
> hip that he'll change next month or next year?
> 
> David Roden, EVDL moderator & general lackey
> 
> To reach me, don't reply to this message; I won't get it.  Use my 
> offlist address here : http://evdl.org/help/index.html#supt
> 
> = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = 
> 
> If economists wished to study the horse, they wouldn't go and look at 
> horses. They'd sit in their studies and say to themselves, "What would 
> I do if I were a horse?"
> 
>  -- Ely Devons
> 
> = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = 
> 
> ___
> Address messages to ev@lists.evdl.org
> No other addresses in TO and CC fields
> HELP: http://www.evdl.org/help/
> 

___
Address messages to ev@lists.evdl.org
No other addresses in TO and CC fields
HELP: http://www.evdl.org/help/



(flink) 01/04: [FLINK-35193][table] Support drop materialized table syntax

2024-05-14 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 8551ef39e0387f723a72299cc73aaaf827cf74bf
Author: Feng Jin 
AuthorDate: Mon May 13 20:06:41 2024 +0800

[FLINK-35193][table] Support drop materialized table syntax
---
 .../src/main/codegen/data/Parser.tdd   |  1 +
 .../src/main/codegen/includes/parserImpls.ftl  | 30 ++
 .../sql/parser/ddl/SqlDropMaterializedTable.java   | 68 ++
 .../flink/sql/parser/utils/ParserResource.java |  3 +
 .../MaterializedTableStatementParserTest.java  | 25 
 5 files changed, 127 insertions(+)

diff --git a/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd 
b/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd
index 81b3412954c..883b6aec1b2 100644
--- a/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd
+++ b/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd
@@ -76,6 +76,7 @@
 "org.apache.flink.sql.parser.ddl.SqlDropCatalog"
 "org.apache.flink.sql.parser.ddl.SqlDropDatabase"
 "org.apache.flink.sql.parser.ddl.SqlDropFunction"
+"org.apache.flink.sql.parser.ddl.SqlDropMaterializedTable"
 "org.apache.flink.sql.parser.ddl.SqlDropPartitions"
 
"org.apache.flink.sql.parser.ddl.SqlDropPartitions.AlterTableDropPartitionsContext"
 "org.apache.flink.sql.parser.ddl.SqlDropTable"
diff --git 
a/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl 
b/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
index bdc97818914..b2a5ea02d0f 100644
--- a/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
+++ b/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
@@ -1801,6 +1801,34 @@ SqlCreate SqlCreateMaterializedTable(Span s, boolean 
replace, boolean isTemporar
 }
 }
 
+/**
+* Parses a DROP MATERIALIZED TABLE statement.
+*/
+SqlDrop SqlDropMaterializedTable(Span s, boolean replace, boolean isTemporary) 
:
+{
+SqlIdentifier tableName = null;
+boolean ifExists = false;
+}
+{
+
+ {
+ if (isTemporary) {
+ throw SqlUtil.newContextException(
+ getPos(),
+ 
ParserResource.RESOURCE.dropTemporaryMaterializedTableUnsupported());
+ }
+ }
+ 
+
+ifExists = IfExistsOpt()
+
+tableName = CompoundIdentifier()
+
+{
+return new SqlDropMaterializedTable(s.pos(), tableName, ifExists);
+}
+}
+
 /**
 * Parses alter materialized table.
 */
@@ -2427,6 +2455,8 @@ SqlDrop SqlDropExtended(Span s, boolean replace) :
 (
 drop = SqlDropCatalog(s, replace)
 |
+drop = SqlDropMaterializedTable(s, replace, isTemporary)
+|
 drop = SqlDropTable(s, replace, isTemporary)
 |
 drop = SqlDropView(s, replace, isTemporary)
diff --git 
a/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlDropMaterializedTable.java
 
b/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlDropMaterializedTable.java
new file mode 100644
index 000..ec9439fb13a
--- /dev/null
+++ 
b/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlDropMaterializedTable.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.sql.parser.ddl;
+
+import org.apache.calcite.sql.SqlDrop;
+import org.apache.calcite.sql.SqlIdentifier;
+import org.apache.calcite.sql.SqlKind;
+import org.apache.calcite.sql.SqlNode;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.SqlSpecialOperator;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.calcite.sql.parser.SqlParserPos;
+import org.apache.calcite.util.ImmutableNullableList;
+
+import java.util.List;
+
+/** DROP MATERIALIZED TABLE DDL sql call. */
+public class SqlDropMaterializedTable extends SqlDrop {
+
+private static final SqlOperator OPERATOR =
+new SqlSpecialOperator("DROP MATERIALIZED TABLE", 
SqlKind.DRO

(flink) 03/04: [FLINK-35193][table] Support execution of drop materialized table

2024-05-14 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 51b744bca1bdf53385152ed237f2950525046488
Author: Feng Jin 
AuthorDate: Mon May 13 20:08:38 2024 +0800

[FLINK-35193][table] Support execution of drop materialized table
---
 .../MaterializedTableManager.java  | 115 +-
 .../service/operation/OperationExecutor.java   |   9 +
 .../service/MaterializedTableStatementITCase.java  | 241 ++---
 .../apache/flink/table/catalog/CatalogManager.java |   4 +-
 4 files changed, 328 insertions(+), 41 deletions(-)

diff --git 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
index b4ba12b8755..a51b1885c98 100644
--- 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
+++ 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
@@ -20,6 +20,7 @@ package 
org.apache.flink.table.gateway.service.materializedtable;
 
 import org.apache.flink.annotation.Internal;
 import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.JobStatus;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.table.api.ValidationException;
 import org.apache.flink.table.catalog.CatalogMaterializedTable;
@@ -34,6 +35,7 @@ import org.apache.flink.table.gateway.api.results.ResultSet;
 import org.apache.flink.table.gateway.service.operation.OperationExecutor;
 import org.apache.flink.table.gateway.service.result.ResultFetcher;
 import org.apache.flink.table.gateway.service.utils.SqlExecutionException;
+import org.apache.flink.table.operations.command.DescribeJobOperation;
 import org.apache.flink.table.operations.command.StopJobOperation;
 import 
org.apache.flink.table.operations.materializedtable.AlterMaterializedTableChangeOperation;
 import 
org.apache.flink.table.operations.materializedtable.AlterMaterializedTableRefreshOperation;
@@ -93,6 +95,9 @@ public class MaterializedTableManager {
 } else if (op instanceof AlterMaterializedTableResumeOperation) {
 return callAlterMaterializedTableResume(
 operationExecutor, handle, 
(AlterMaterializedTableResumeOperation) op);
+} else if (op instanceof DropMaterializedTableOperation) {
+return callDropMaterializedTableOperation(
+operationExecutor, handle, 
(DropMaterializedTableOperation) op);
 }
 
 throw new SqlExecutionException(
@@ -146,8 +151,7 @@ public class MaterializedTableManager {
 materializedTableIdentifier,
 e);
 operationExecutor.callExecutableOperation(
-handle,
-new 
DropMaterializedTableOperation(materializedTableIdentifier, true, false));
+handle, new 
DropMaterializedTableOperation(materializedTableIdentifier, true));
 throw e;
 }
 }
@@ -170,7 +174,8 @@ public class MaterializedTableManager {
 materializedTable.getSerializedRefreshHandler(),
 
operationExecutor.getSessionContext().getUserClassloader());
 
-String savepointPath = stopJobWithSavepoint(operationExecutor, handle, 
refreshHandler);
+String savepointPath =
+stopJobWithSavepoint(operationExecutor, handle, 
refreshHandler.getJobId());
 
 ContinuousRefreshHandler updateRefreshHandler =
 new ContinuousRefreshHandler(
@@ -183,9 +188,12 @@ public class MaterializedTableManager {
 CatalogMaterializedTable.RefreshStatus.SUSPENDED,
 
materializedTable.getRefreshHandlerDescription().orElse(null),
 serializeContinuousHandler(updateRefreshHandler));
+List tableChanges = new ArrayList<>();
+tableChanges.add(
+
TableChange.modifyRefreshStatus(CatalogMaterializedTable.RefreshStatus.ACTIVATED));
 AlterMaterializedTableChangeOperation 
alterMaterializedTableChangeOperation =
 new AlterMaterializedTableChangeOperation(
-tableIdentifier, Collections.emptyList(), 
updatedMaterializedTable);
+tableIdentifier, tableChanges, 
updatedMaterializedTable);
 
 operationExecutor.callExecutableOperation(handle, 
alterMaterializedTableChangeOperation);
 
@@ -284,8 +292,7 @@ public class MaterializedTableManager {
 // drop materialized table while submit flink streaming job occur 
exception. Thus, weak
 // atomicity is guar

(flink) 04/04: [FLINK-35342][table] Fix MaterializedTableStatementITCase test can check for wrong status

2024-05-14 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 94d861b08fef1e350d80a3f5f0f63168d327bc64
Author: Feng Jin 
AuthorDate: Tue May 14 11:18:40 2024 +0800

[FLINK-35342][table] Fix MaterializedTableStatementITCase test can check 
for wrong status
---
 .../service/MaterializedTableStatementITCase.java| 20 +++-
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git 
a/flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/service/MaterializedTableStatementITCase.java
 
b/flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/service/MaterializedTableStatementITCase.java
index 105c51ea597..dd7d25e124c 100644
--- 
a/flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/service/MaterializedTableStatementITCase.java
+++ 
b/flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/service/MaterializedTableStatementITCase.java
@@ -272,7 +272,7 @@ public class MaterializedTableStatementITCase {
 waitUntilAllTasksAreRunning(
 restClusterClient, 
JobID.fromHexString(activeRefreshHandler.getJobId()));
 
-// check the background job is running
+// verify the background job is running
 String describeJobDDL = String.format("DESCRIBE JOB '%s'", 
activeRefreshHandler.getJobId());
 OperationHandle describeJobHandle =
 service.executeStatement(sessionHandle, describeJobDDL, -1, 
new Configuration());
@@ -653,7 +653,7 @@ public class MaterializedTableStatementITCase {
 assertThat(suspendMaterializedTable.getRefreshStatus())
 .isEqualTo(CatalogMaterializedTable.RefreshStatus.SUSPENDED);
 
-// check background job is stopped
+// verify background job is stopped
 byte[] refreshHandler = 
suspendMaterializedTable.getSerializedRefreshHandler();
 ContinuousRefreshHandler suspendRefreshHandler =
 ContinuousRefreshHandlerSerializer.INSTANCE.deserialize(
@@ -667,7 +667,7 @@ public class MaterializedTableStatementITCase {
 List jobResults = fetchAllResults(service, sessionHandle, 
describeJobHandle);
 
assertThat(jobResults.get(0).getString(2).toString()).isEqualTo("FINISHED");
 
-// check savepoint is created
+// verify savepoint is created
 assertThat(suspendRefreshHandler.getRestorePath()).isNotEmpty();
 String actualSavepointPath = 
suspendRefreshHandler.getRestorePath().get();
 
@@ -692,7 +692,17 @@ public class MaterializedTableStatementITCase {
 assertThat(resumedCatalogMaterializedTable.getRefreshStatus())
 .isEqualTo(CatalogMaterializedTable.RefreshStatus.ACTIVATED);
 
-// check background job is running
+waitUntilAllTasksAreRunning(
+restClusterClient,
+JobID.fromHexString(
+ContinuousRefreshHandlerSerializer.INSTANCE
+.deserialize(
+resumedCatalogMaterializedTable
+.getSerializedRefreshHandler(),
+getClass().getClassLoader())
+.getJobId()));
+
+// verify background job is running
 refreshHandler = 
resumedCatalogMaterializedTable.getSerializedRefreshHandler();
 ContinuousRefreshHandler resumeRefreshHandler =
 ContinuousRefreshHandlerSerializer.INSTANCE.deserialize(
@@ -706,7 +716,7 @@ public class MaterializedTableStatementITCase {
 jobResults = fetchAllResults(service, sessionHandle, 
describeResumeJobHandle);
 
assertThat(jobResults.get(0).getString(2).toString()).isEqualTo("RUNNING");
 
-// check resumed job is restored from savepoint
+// verify resumed job is restored from savepoint
 Optional actualRestorePath =
 getJobRestoreSavepointPath(restClusterClient, resumeJobId);
 assertThat(actualRestorePath).isNotEmpty();



(flink) 02/04: [FLINK-35193][table] Support convert drop materialized table node to operation

2024-05-14 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit fd333941553c68c36e1460102ab023f80a5b1362
Author: Feng Jin 
AuthorDate: Mon May 13 20:07:39 2024 +0800

[FLINK-35193][table] Support convert drop materialized table node to 
operation
---
 .../DropMaterializedTableOperation.java|  6 ++--
 .../SqlDropMaterializedTableConverter.java | 41 ++
 .../operations/converters/SqlNodeConverters.java   |  1 +
 ...erializedTableNodeToOperationConverterTest.java | 21 +++
 4 files changed, 65 insertions(+), 4 deletions(-)

diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/DropMaterializedTableOperation.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/DropMaterializedTableOperation.java
index e5eee557bfc..46dd86ad96b 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/DropMaterializedTableOperation.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/DropMaterializedTableOperation.java
@@ -33,9 +33,8 @@ import java.util.Map;
 public class DropMaterializedTableOperation extends DropTableOperation
 implements MaterializedTableOperation {
 
-public DropMaterializedTableOperation(
-ObjectIdentifier tableIdentifier, boolean ifExists, boolean 
isTemporary) {
-super(tableIdentifier, ifExists, isTemporary);
+public DropMaterializedTableOperation(ObjectIdentifier tableIdentifier, 
boolean ifExists) {
+super(tableIdentifier, ifExists, false);
 }
 
 @Override
@@ -43,7 +42,6 @@ public class DropMaterializedTableOperation extends 
DropTableOperation
 Map params = new LinkedHashMap<>();
 params.put("identifier", getTableIdentifier());
 params.put("IfExists", isIfExists());
-params.put("isTemporary", isTemporary());
 
 return OperationUtils.formatWithChildren(
 "DROP MATERIALIZED TABLE",
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/SqlDropMaterializedTableConverter.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/SqlDropMaterializedTableConverter.java
new file mode 100644
index 000..6501dc0c453
--- /dev/null
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/SqlDropMaterializedTableConverter.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.operations.converters;
+
+import org.apache.flink.sql.parser.ddl.SqlDropMaterializedTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.UnresolvedIdentifier;
+import org.apache.flink.table.operations.Operation;
+import 
org.apache.flink.table.operations.materializedtable.DropMaterializedTableOperation;
+
+/** A converter for {@link SqlDropMaterializedTable}. */
+public class SqlDropMaterializedTableConverter
+implements SqlNodeConverter {
+@Override
+public Operation convertSqlNode(
+SqlDropMaterializedTable sqlDropMaterializedTable, ConvertContext 
context) {
+UnresolvedIdentifier unresolvedIdentifier =
+
UnresolvedIdentifier.of(sqlDropMaterializedTable.fullTableName());
+ObjectIdentifier identifier =
+
context.getCatalogManager().qualifyIdentifier(unresolvedIdentifier);
+// Currently we don't support temporary materialized table, so 
isTemporary is always false
+return new DropMaterializedTableOperation(
+identifier, sqlDropMaterializedTable.getIfExists());
+}
+}
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/SqlNodeConverters.java
 
b/flink-table/flink-table-planner/

(flink) branch master updated (65d31e26534 -> 94d861b08fe)

2024-05-14 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 65d31e26534 [FLINK-33986][runtime] Extend ShuffleMaster to support 
snapshot and restore state.
 new 8551ef39e03 [FLINK-35193][table] Support drop materialized table syntax
 new fd333941553 [FLINK-35193][table] Support convert drop materialized 
table node to operation
 new 51b744bca1b [FLINK-35193][table] Support execution of drop 
materialized table
 new 94d861b08fe [FLINK-35342][table] Fix MaterializedTableStatementITCase 
test can check for wrong status

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../MaterializedTableManager.java  | 115 -
 .../service/operation/OperationExecutor.java   |   9 +
 .../service/MaterializedTableStatementITCase.java  | 261 ++---
 .../src/main/codegen/data/Parser.tdd   |   1 +
 .../src/main/codegen/includes/parserImpls.ftl  |  30 +++
 ...pCatalog.java => SqlDropMaterializedTable.java} |  40 ++--
 .../flink/sql/parser/utils/ParserResource.java |   3 +
 .../MaterializedTableStatementParserTest.java  |  25 ++
 .../apache/flink/table/catalog/CatalogManager.java |   4 +-
 .../DropMaterializedTableOperation.java|   6 +-
 ...java => SqlDropMaterializedTableConverter.java} |  20 +-
 .../operations/converters/SqlNodeConverters.java   |   1 +
 ...erializedTableNodeToOperationConverterTest.java |  21 ++
 13 files changed, 455 insertions(+), 81 deletions(-)
 copy 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/{SqlDropCatalog.java
 => SqlDropMaterializedTable.java} (68%)
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/{SqlAlterMaterializedTableSuspendConverter.java
 => SqlDropMaterializedTableConverter.java} (59%)



[RESULT][VOTE] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table

2024-05-13 Thread Ron Liu
Hi, Dev

I'm happy to announce that FLIP-448: Introduce Pluggable Workflow Scheduler
Interface for Materialized Table[1] has been accepted with 8 approving
votes (4 binding) [2].

- Xuyang
- Feng Jin
- Lincoln Lee(binding)
- Jark Wu(binding)
- Ron Liu(binding)
- Shengkai Fang(binding)
- Keith Lee
- Ahmed Hamdy

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table
[2] https://lists.apache.org/thread/8qvh3brgvo46xprv4mxq4kyhyy0tsvny

Best,
Ron


Re: [9fans] Balancing Progress and Accessibility in the Plan 9 Community. (Was: [9fans] Interoperating between 9legacy and 9front)

2024-05-13 Thread ron minnich
On Sun, May 12, 2024 at 10:55 PM ibrahim via 9fans <9fans@9fans.net> wrote:

>
>
> Please correct me if I'm wrong.
> Permalink
> 
>

In my opinion? you are wrong. And that's as far as I will stay involved in
this discussion.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tcf128fa955b8aafc-M918765fe95c422bafdedbbf1
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Balancing Progress and Accessibility in the Plan 9 Community. (Was: [9fans] Interoperating between 9legacy and 9front)

2024-05-12 Thread ron minnich
On Sun, May 12, 2024 at 8:53 PM ibrahim via 9fans <9fans@9fans.net> wrote:

> Not a single developer who uses plan9 for distributed systems, commercial
> products will dare to use a system like 9front as the sources. The reason
> is quite simple :
>
> You ignore copyrights as you please and distributed 9front under an MIT
> license long before Nokia as the owner of it decided to do so. You did that
> at a time when plan9 was placed under GPL
>

I do not agree with what you are saying here. I was involved in the license
discussions starting in 2003, and was involved in both the GPL release and
the more recent MIT license release. The choice of license, both times, was
made by the same person in Bell Labs, even as the Bell Labs corporate
parent changed. In fact, in 2013, we were *required* to use the GPL,
whereas in the later release, the GPL was specifically mentioned as a
license we could *not* use. I won't pretend to understand why.

At no time in all this was there any evidence of incorrect behavior on the
part of 9front. None. Zip. Zero. Zed. They have always been careful to
follow the rules.

Further, when people in 9front wrote new code, they released it under MIT,
and Cinap among others was very kind in letting Harvey use it.

So, Ibrahim,  I can not agree with your statement here.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tcf128fa955b8aafc-M3d0b948ec892b2d0de94a895
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


(flink) 01/01: [FLINK-35197][table] Support the execution of suspend, resume materialized table in continuous refresh mode

2024-05-12 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit e4972c003f68da6dc4066459d4c6e5d981f07e96
Author: Feng Jin 
AuthorDate: Thu May 9 16:26:12 2024 +0800

[FLINK-35197][table] Support the execution of suspend, resume materialized 
table in continuous refresh mode

This closes #24765
---
 .../MaterializedTableManager.java  | 215 ++-
 .../service/MaterializedTableStatementITCase.java  | 302 -
 .../MaterializedTableManagerTest.java  |  39 +++
 .../table/refresh/ContinuousRefreshHandler.java|  22 +-
 4 files changed, 561 insertions(+), 17 deletions(-)

diff --git 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
index ff0670462e0..b4ba12b8755 100644
--- 
a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
+++ 
b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java
@@ -34,8 +34,11 @@ import org.apache.flink.table.gateway.api.results.ResultSet;
 import org.apache.flink.table.gateway.service.operation.OperationExecutor;
 import org.apache.flink.table.gateway.service.result.ResultFetcher;
 import org.apache.flink.table.gateway.service.utils.SqlExecutionException;
+import org.apache.flink.table.operations.command.StopJobOperation;
 import 
org.apache.flink.table.operations.materializedtable.AlterMaterializedTableChangeOperation;
 import 
org.apache.flink.table.operations.materializedtable.AlterMaterializedTableRefreshOperation;
+import 
org.apache.flink.table.operations.materializedtable.AlterMaterializedTableResumeOperation;
+import 
org.apache.flink.table.operations.materializedtable.AlterMaterializedTableSuspendOperation;
 import 
org.apache.flink.table.operations.materializedtable.CreateMaterializedTableOperation;
 import 
org.apache.flink.table.operations.materializedtable.DropMaterializedTableOperation;
 import 
org.apache.flink.table.operations.materializedtable.MaterializedTableOperation;
@@ -46,17 +49,23 @@ import 
org.apache.flink.table.types.logical.LogicalTypeFamily;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
+import java.util.Optional;
 import java.util.Set;
+import java.util.stream.Collectors;
 
 import static org.apache.flink.api.common.RuntimeExecutionMode.BATCH;
 import static org.apache.flink.api.common.RuntimeExecutionMode.STREAMING;
+import static 
org.apache.flink.configuration.CheckpointingOptions.SAVEPOINT_DIRECTORY;
 import static org.apache.flink.configuration.DeploymentOptions.TARGET;
 import static org.apache.flink.configuration.ExecutionOptions.RUNTIME_MODE;
 import static org.apache.flink.configuration.PipelineOptions.NAME;
+import static 
org.apache.flink.configuration.StateRecoveryOptions.SAVEPOINT_PATH;
 import static 
org.apache.flink.streaming.api.environment.ExecutionCheckpointingOptions.CHECKPOINTING_INTERVAL;
 import static 
org.apache.flink.table.api.internal.TableResultInternal.TABLE_RESULT_OK;
 import static 
org.apache.flink.table.catalog.CatalogBaseTable.TableKind.MATERIALIZED_TABLE;
@@ -78,6 +87,12 @@ public class MaterializedTableManager {
 } else if (op instanceof AlterMaterializedTableRefreshOperation) {
 return callAlterMaterializedTableRefreshOperation(
 operationExecutor, handle, 
(AlterMaterializedTableRefreshOperation) op);
+} else if (op instanceof AlterMaterializedTableSuspendOperation) {
+return callAlterMaterializedTableSuspend(
+operationExecutor, handle, 
(AlterMaterializedTableSuspendOperation) op);
+} else if (op instanceof AlterMaterializedTableResumeOperation) {
+return callAlterMaterializedTableResume(
+operationExecutor, handle, 
(AlterMaterializedTableResumeOperation) op);
 }
 
 throw new SqlExecutionException(
@@ -115,6 +130,105 @@ public class MaterializedTableManager {
 CatalogMaterializedTable catalogMaterializedTable =
 createMaterializedTableOperation.getCatalogMaterializedTable();
 
+try {
+executeContinuousRefreshJob(
+operationExecutor,
+handle,
+catalogMaterializedTable,
+materializedTableIdentifier,
+Collections.emptyMap(),
+Optional.empty());
+} catch (Exception e) {
+// drop

(flink) branch master updated (9fe8d7bf870 -> e4972c003f6)

2024-05-12 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 9fe8d7bf870 [FLINK-35198][table] Support manual refresh materialized 
table
 add e80c2864db5 [FLINK-35197][table] Fix incomplete serialization and 
deserialization of materialized tables
 add 3b6e8db11fe [FLINK-35197][table] Support convert alter materialized 
table suspend/resume nodes to operations
 new e4972c003f6 [FLINK-35197][table] Support the execution of suspend, 
resume materialized table in continuous refresh mode

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../MaterializedTableManager.java  | 215 ++-
 .../service/MaterializedTableStatementITCase.java  | 302 -
 .../MaterializedTableManagerTest.java  |  39 +++
 ... => AlterMaterializedTableResumeOperation.java} |  43 ++-
 .../AlterMaterializedTableSuspendOperation.java}   |  23 +-
 .../catalog/CatalogBaseTableResolutionTest.java|  73 -
 .../flink/table/catalog/CatalogPropertiesUtil.java |  10 +-
 .../table/refresh/ContinuousRefreshHandler.java|  22 +-
 ... SqlAlterMaterializedTableResumeConverter.java} |  36 ++-
 ...SqlAlterMaterializedTableSuspendConverter.java} |  22 +-
 .../operations/converters/SqlNodeConverters.java   |   2 +
 ...erializedTableNodeToOperationConverterTest.java |  40 ++-
 12 files changed, 735 insertions(+), 92 deletions(-)
 copy 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/{AlterMaterializedTableRefreshOperation.java
 => AlterMaterializedTableResumeOperation.java} (56%)
 copy 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/{command/ShowJobsOperation.java
 => materializedtable/AlterMaterializedTableSuspendOperation.java} (63%)
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/{SqlAlterMaterializedTableRefreshConverter.java
 => SqlAlterMaterializedTableResumeConverter.java} (54%)
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/{SqlAlterMaterializedTableRefreshConverter.java
 => SqlAlterMaterializedTableSuspendConverter.java} (69%)
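
As a usage sketch of what the suspend/resume operations above correspond to
at the SQL level (statement shapes follow the converter classes listed in
the summary; the table name is a placeholder and optional clauses may
differ):

```sql
-- Suspend the continuous refresh job behind the materialized table;
-- per the diffs above, the job is stopped with a savepoint.
ALTER MATERIALIZED TABLE my_materialized_table SUSPEND;

-- Resume it later; the refresh job is restored from that savepoint.
ALTER MATERIALIZED TABLE my_materialized_table RESUME;
```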



(flink) branch master updated (86c8304d735 -> 9fe8d7bf870)

2024-05-11 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 86c8304d735 [FLINK-35041][test] Fix the 
IncrementalRemoteKeyedStateHandleTest.testSharedStateReRegistration failed
 add 9fe8d7bf870 [FLINK-35198][table] Support manual refresh materialized 
table

No new revisions were added by this update.

Summary of changes:
 .../MaterializedTableManager.java  | 144 -
 .../service/MaterializedTableStatementITCase.java  | 238 +
 .../gateway/service/SqlGatewayServiceITCase.java   |  30 +--
 .../MaterializedTableManagerTest.java  |  54 +
 .../service/utils/SqlGatewayServiceTestUtil.java   |  19 ++
 .../sql/parser/ddl/SqlAlterMaterializedTable.java  |   4 +
 .../ddl/SqlAlterMaterializedTableRefresh.java  |  10 +-
 .../flink/table/operations/OperationUtils.java |   6 +-
 .../AlterMaterializedTableRefreshOperation.java|  68 ++
 ...SqlAlterMaterializedTableRefreshConverter.java} |  31 ++-
 .../operations/converters/SqlNodeConverters.java   |   1 +
 ...erializedTableNodeToOperationConverterTest.java |  30 +++
 12 files changed, 590 insertions(+), 45 deletions(-)
 create mode 100644 
flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManagerTest.java
 create mode 100644 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/materializedtable/AlterMaterializedTableRefreshOperation.java
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/{SqlAlterTableDropPartitionConverter.java
 => SqlAlterMaterializedTableRefreshConverter.java} (53%)
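
The manual refresh path added here corresponds to a statement along these
lines (a sketch based on the SqlAlterMaterializedTableRefresh parser node
listed above; the table name and partition keys are hypothetical, and the
partition clause is an assumed form):

```sql
-- Trigger a one-off refresh of the materialized table.
ALTER MATERIALIZED TABLE my_materialized_table REFRESH;

-- Assumed form with a partition spec limiting the refresh scope.
ALTER MATERIALIZED TABLE my_materialized_table REFRESH PARTITION (ds = '2024-05-10');
```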



Re: Unnecessary buffer usage with multicolumn index, row comparison, and equility constraint

2024-05-10 Thread Ron Johnson
On Fri, May 10, 2024 at 11:28 PM WU Yan <4wu...@gmail.com> wrote:

> Hi everyone, first time here. Please kindly let me know if this is not the
> right place to ask.
>
> I notice a simple query can read a lot of buffer blocks in a meaningless
> way, when
> 1. there is an index scan on a multicolumn index
> 2. there is row constructor comparison in the Index Cond
> 3. there is also an equality constraint on the leftmost column of the
> multicolumn index
>
>
> ## How to reproduce
>
> I initially noticed it on AWS Aurora RDS, but it can be reproduced in
> docker container as well.
> ```bash
> docker run --name test-postgres -e POSTGRES_PASSWORD=mysecretpassword -d
> -p 5432:5432 postgres:16.3
> ```
>
> Create a table with a multicolumn index. Populate 12 million rows with
> random integers.
> ```sql
> CREATE TABLE t(a int, b int);
> CREATE INDEX my_idx ON t USING BTREE (a, b);
>
> INSERT INTO t(a, b)
> SELECT
> (random() * 123456)::int AS a,
> (random() * 123456)::int AS b
> FROM
> generate_series(1, 12345678);
>
> ANALYZE t;
> ```
>
> Simple query that uses the multicolumn index.
> ```
> postgres=# explain (analyze, buffers) select * from t where row(a, b) >
> row(123450, 123450) and a = 0 order by a, b;
>

Out of curiosity, why "where row(a, b) > row(123450, 123450)" instead of "where
a > 123450 and b > 123450"?
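
Worth noting: the two forms are not equivalent. The row constructor
comparison is lexicographic (true when a > 123450, or a = 123450 and
b > 123450), while the ANDed form constrains each column independently.
A small sketch against the same table t from the original post:

```sql
-- Row comparison is lexicographic: (123450, 123451) and (123451, 0)
-- both qualify.
SELECT * FROM t WHERE row(a, b) > row(123450, 123450) ORDER BY a, b;

-- Column-wise AND: only rows with both a > 123450 and b > 123450
-- qualify, so the two rows above are excluded.
SELECT * FROM t WHERE a > 123450 AND b > 123450 ORDER BY a, b;
```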


Re: Avoid inserting \beginL

2024-05-10 Thread Ron Yutkin
>
> You can specify in the Language package "always babel" and pass the option
> in the document class options.
>
>
How do I specify options in the document class options?
My usepackage line is as follows:

\usepackage[bidi=basic, layout=tabular, provide=*]{babel}

Which I assume I should comment out, I then have:

\babelprovide[main, import]{hebrew}

\babelprovide{rl}

Which I assume I shouldn't comment out.


Thanks!
-- 
lyx-users mailing list
lyx-users@lists.lyx.org
http://lists.lyx.org/mailman/listinfo/lyx-users


Avoid inserting \beginL

2024-05-10 Thread Ron Yutkin
Hi,

I'm using LyX for university and I'm trying to write a document in Hebrew
but I had to switch to using babel and LuaLaTeX due to some annoying
packages like so:
1. Document Settings > Language > Language package > none
2. Latex preamble:

\usepackage[bidi=basic, layout=tabular, provide=*]{babel}

\babelprovide[main, import]{hebrew}

\babelprovide{rl}

3. File > Export > PDF (LuaTeX)


Once I try to export, I get "undefined control sequence" errors on the
following commands: \beginL, \endL, \beginR, \endR, \R, \L

It seems like LyX adds these around numbers and words in English.


To mitigate this issue I tried to change the document language to English
under Document settings > Language, and also Ctrl+A > right click > Language >
English, which then compiles successfully and looks correct thanks to babel.

But when I do that, the text in LyX is backwards and I can't work like
that, so I have to switch the language to English every time I want to
export and then switch back to Hebrew when I want to edit my document.

Another problem is that when I switch to English, LyX swaps the parentheses,
so they come out swapped in the final PDF.

It's not a very fun way to use LyX.


Is there a way to tell LyX not to insert those commands? Or somehow
dissociate the document display language from the compile language? (And
set the compile language to English and the display language to Hebrew.)


Thanks.
-- 
lyx-users mailing list
lyx-users@lists.lyx.org
http://lists.lyx.org/mailman/listinfo/lyx-users


Topband: T32JV on 160 and 80 morning of 5/10

2024-05-10 Thread Ron Spencer via Topband
George, T32JV, was quite loud this morning on 160. I worked him from here in NC 
at 1000Z. Pretty easy copy with the array listening NW to escape most of the 
QRN from the ongoing storms in Georgia. George was not nearly as loud on 3523 
when I worked him a few minutes later. 



As Dave, W0FLS, posted, nice to see some life left in the band. George, your 
RIB certainly works very, very well. Always a strong signal. Thanks for getting 
on!



Ron

N4XD


Sent using https://www.zoho.com/mail/
_
Searchable Archives: http://www.contesting.com/_topband - Topband Reflector


Re: Re: [VOTE] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table

2024-05-09 Thread Ron Liu
+1(binding)

Best,
Ron

Jark Wu  于2024年5月10日周五 09:51写道:

> +1 (binding)
>
> Best,
> Jark
>
> On Thu, 9 May 2024 at 21:27, Lincoln Lee  wrote:
>
> > +1 (binding)
> >
> > Best,
> > Lincoln Lee
> >
> >
> > Feng Jin  于2024年5月9日周四 19:45写道:
> >
> > > +1 (non-binding)
> > >
> > >
> > > Best,
> > > Feng
> > >
> > >
> > > On Thu, May 9, 2024 at 7:37 PM Xuyang  wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > >
> > > > --
> > > >
> > > > Best!
> > > > Xuyang
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > At 2024-05-09 13:57:07, "Ron Liu"  wrote:
> > > > >Sorry for the re-post, just to format this email content.
> > > > >
> > > > >Hi Dev
> > > > >
> > > > >Thank you to everyone for the feedback on FLIP-448: Introduce
> > Pluggable
> > > > >Workflow Scheduler Interface for Materialized Table[1][2].
> > > > >I'd like to start a vote for it. The vote will be open for at least
> 72
> > > > >hours unless there is an objection or not enough votes.
> > > > >
> > > > >[1]
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table
> > > > >
> > > > >[2]
> https://lists.apache.org/thread/57xfo6p25rbrhcg01dhyok46zt6jc5q1
> > > > >
> > > > >Best,
> > > > >Ron
> > > > >
> > > > >Ron Liu  于2024年5月9日周四 13:52写道:
> > > > >
> > > > >> Hi Dev, Thank you to everyone for the feedback on FLIP-448:
> > Introduce
> > > > >> Pluggable Workflow Scheduler Interface for Materialized
> Table[1][2].
> > > I'd
> > > > >> like to start a vote for it. The vote will be open for at least 72
> > > hours
> > > > >> unless there is an objection or not enough votes. [1]
> > > > >>
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table
> > > > >>
> > > > >> [2]
> > https://lists.apache.org/thread/57xfo6p25rbrhcg01dhyok46zt6jc5q1
> > > > >> Best, Ron
> > > > >>
> > > >
> > >
> >
>


(flink) branch release-1.19 updated: [FLINK-35184][table-runtime] Fix mini-batch join hash collision when use InputSideHasNoUniqueKeyBundle (#24749)

2024-05-09 Thread ron
This is an automated email from the ASF dual-hosted git repository.

ron pushed a commit to branch release-1.19
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.19 by this push:
 new 17e7c3eaf14 [FLINK-35184][table-runtime] Fix mini-batch join hash 
collision when use InputSideHasNoUniqueKeyBundle (#24749)
17e7c3eaf14 is described below

commit 17e7c3eaf14b6c63f55d28a308e30ad6a3a80c95
Author: Roman Boyko 
AuthorDate: Fri May 10 10:57:45 2024 +0700

[FLINK-35184][table-runtime] Fix mini-batch join hash collision when use 
InputSideHasNoUniqueKeyBundle (#24749)
---
 .../bundle/InputSideHasNoUniqueKeyBundle.java  | 25 --
 .../join/stream/StreamingJoinOperatorTestBase.java |  4 +-
 .../stream/StreamingMiniBatchJoinOperatorTest.java | 95 +-
 3 files changed, 93 insertions(+), 31 deletions(-)

diff --git 
a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/bundle/InputSideHasNoUniqueKeyBundle.java
 
b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/bundle/InputSideHasNoUniqueKeyBundle.java
index b5738835b95..fdc9e1d5193 100644
--- 
a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/bundle/InputSideHasNoUniqueKeyBundle.java
+++ 
b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/bundle/InputSideHasNoUniqueKeyBundle.java
@@ -96,15 +96,26 @@ public class InputSideHasNoUniqueKeyBundle extends 
BufferBundle leftTypeInfo =
+protected InternalTypeInfo leftTypeInfo =
 InternalTypeInfo.of(
 RowType.of(
 new LogicalType[] {
@@ -57,7 +57,7 @@ public abstract class StreamingJoinOperatorTestBase {
 new LogicalType[] {new CharType(false, 20), new 
CharType(true, 10)},
 new String[] {"line_order_id0", 
"line_order_ship_mode"}));
 
-protected final RowDataKeySelector leftKeySelector =
+protected RowDataKeySelector leftKeySelector =
 HandwrittenSelectorUtil.getRowDataSelector(
 new int[] {1},
 leftTypeInfo.toRowType().getChildren().toArray(new 
LogicalType[0]));
diff --git 
a/flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/runtime/operators/join/stream/StreamingMiniBatchJoinOperatorTest.java
 
b/flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/runtime/operators/join/stream/StreamingMiniBatchJoinOperatorTest.java
index 62b8116a0b0..7e92f72cf5e 100644
--- 
a/flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/runtime/operators/join/stream/StreamingMiniBatchJoinOperatorTest.java
+++ 
b/flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/runtime/operators/join/stream/StreamingMiniBatchJoinOperatorTest.java
@@ -25,13 +25,13 @@ import 
org.apache.flink.table.runtime.operators.bundle.trigger.CountCoBundleTrig
 import org.apache.flink.table.runtime.operators.join.FlinkJoinType;
 import 
org.apache.flink.table.runtime.operators.join.stream.state.JoinInputSideSpec;
 import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
+import org.apache.flink.table.types.logical.BigIntType;
 import org.apache.flink.table.types.logical.CharType;
 import org.apache.flink.table.types.logical.LogicalType;
 import org.apache.flink.table.types.logical.RowType;
 import org.apache.flink.table.utils.HandwrittenSelectorUtil;
 import org.apache.flink.types.RowKind;
 
-import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.Tag;
 import org.junit.jupiter.api.Test;
 import org.junit.jupiter.api.TestInfo;
@@ -55,27 +55,6 @@ public final class StreamingMiniBatchJoinOperatorTest 
extends StreamingJoinOpera
 private RowDataKeySelector leftUniqueKeySelector;
 private RowDataKeySelector rightUniqueKeySelector;
 
-@BeforeEach
-public void beforeEach(TestInfo testInfo) throws Exception {
-rightTypeInfo =
-InternalTypeInfo.of(
-RowType.of(
-new LogicalType[] {
-new CharType(false, 20),
-new CharType(false, 20),
-new CharType(true, 10)
-},
-new String[] {
-"order_id#", "line_order_id0", 
"line_order_ship_mode"
-}));
-
-rightKeySelector =
-HandwrittenSelectorUtil.getRowDataSelector(
-new int[] {1},
-rightTypeInfo.toRowType().getChildren().toArray(new 
LogicalType[0]));
-super.beforeEach(testInfo);
-}
-
 @

Re: [VOTE] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table

2024-05-08 Thread Ron Liu
Sorry for the re-post, just to format this email content.

Hi Dev

Thank you to everyone for the feedback on FLIP-448: Introduce Pluggable
Workflow Scheduler Interface for Materialized Table[1][2].
I'd like to start a vote for it. The vote will be open for at least 72
hours unless there is an objection or not enough votes.

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table

[2] https://lists.apache.org/thread/57xfo6p25rbrhcg01dhyok46zt6jc5q1

Best,
Ron

Ron Liu  于2024年5月9日周四 13:52写道:

> Hi Dev, Thank you to everyone for the feedback on FLIP-448: Introduce
> Pluggable Workflow Scheduler Interface for Materialized Table[1][2]. I'd
> like to start a vote for it. The vote will be open for at least 72 hours
> unless there is an objection or not enough votes. [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table
>
> [2] https://lists.apache.org/thread/57xfo6p25rbrhcg01dhyok46zt6jc5q1
> Best, Ron
>


[VOTE] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table

2024-05-08 Thread Ron Liu
Hi Dev,

Thank you to everyone for the feedback on FLIP-448: Introduce Pluggable
Workflow Scheduler Interface for Materialized Table[1][2]. I'd like to start
a vote for it. The vote will be open for at least 72 hours unless there is
an objection or not enough votes.

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table

[2] https://lists.apache.org/thread/57xfo6p25rbrhcg01dhyok46zt6jc5q1

Best,
Ron


Attempting to Activate Account with Gmail Address

2024-05-08 Thread Ron Gordon
Per Post Edit Apr-2024 by TerryE, I am warning the forum through the “AOO dev 
mailing list" that I have applied for an account.
Unfortunately, I have a gmail address.

My proposed User ID is RGordon3503

AOO Ver 4.1.15 on MacOS 14.4.1

Thank you,

Ron Gordon
rucanoe...@gmail.com





