Re: postgres table statistics
On Wed, Jun 12, 2024 at 3:48 AM Chandy G wrote:
> Hi,
> We have postgres 13.9 running with tables that have billions of records
> of varying sizes. Even though the pg JDBC driver provides a way to set the
> fetch size to tune the driver for better throughput, the JVM fails at the
> driver level when records of large size (say 200 MB each) flow through.
> This forces us to reduce the fetch size (if we are to operate at a fixed
> Xmx setting on the client JVM).
>
> It gets a bit trickier when hundreds of such tables exist with varying
> record sizes. Trying to see if the fetch size can be set dynamically based
> on the row count and the record size distribution for a table.
> Unfortunately, trying to get this data by a query run against each table
> (for row size: max(length(t::text))) seems to be quite time consuming too.

Maybe create your own table with three columns:
  table_name (PK; taken from pg_class.relname)
  average_rec_size (taken from sum(pg_stats.avg_width))
  max_rec_size (calculated yourself)

Periodically refresh it. (How periodic depends on how often the average and
max change substantively.)

> Does postgres maintain metadata about tables for the following?
> 1. row count

https://www.postgresql.org/docs/13/catalog-pg-class.html
pg_class.reltuples. This is an estimate, so make sure your tables are
regularly analyzed.

> 2. max row size.

https://www.postgresql.org/docs/13/view-pg-stats.html
pg_stats.avg_width

> or is there some other pg metadata that can help get this data quicker.
>
> TIA.
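A sketch of the kind of query that could populate such a side table from the
catalogs (the 'public' schema is an assumption, and sum(avg_width) is only an
approximation of average row size; the tables must be freshly ANALYZEd for
either number to mean much):

SELECT c.relname            AS table_name,
       c.reltuples::bigint  AS estimated_row_count,
       sum(s.avg_width)     AS average_rec_size   -- bytes, per pg_stats
FROM   pg_class c
JOIN   pg_stats s ON s.schemaname = 'public'
                 AND s.tablename  = c.relname
WHERE  c.relkind = 'r'
  AND  c.relnamespace = 'public'::regnamespace
GROUP  BY c.relname, c.reltuples
ORDER  BY c.relname;

max_rec_size still has to be computed yourself (e.g. the max(length(t::text))
scan), which is why caching it in a side table and refreshing it on a schedule
is attractive.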
Re: Does trigger only accept functions?
On Tue, Jun 11, 2024 at 2:53 PM veem v wrote:
>
> On Tue, 11 Jun 2024 at 17:03, hubert depesz lubaczewski wrote:
>
>> On Tue, Jun 11, 2024 at 12:47:14AM +0530, veem v wrote:
>> > to be called from ~50 triggers? or any other better approach exists to
>> > handle this?
>>
>> pgaudit extension?
>>
>> Or just write all the changes to single table?
>>
>> Or use dynamic queries that will build the insert based on the name of
>> table the event happened on?
>>
>> Or pass arguments?
>>
>> Best regards,
>>
>> depesz
>
> Thank you so much. I hope you mean something as below when you say making
> it dynamic. Because we have the audit tables having more number of columns
> as compared to the source table and for a few the column name is a bit
> different.
>
> -- Trigger for deletes
> CREATE TRIGGER before_delete
> BEFORE DELETE ON source_table
> FOR EACH ROW EXECUTE FUNCTION log_deletes();
>
> -- Trigger for source_table1
> CREATE TRIGGER before_delete_source_table1
> BEFORE DELETE ON source_table1
> FOR EACH ROW EXECUTE FUNCTION log_deletes();
>
> -- Trigger for source_table2
> CREATE TRIGGER before_delete_source_table2
> BEFORE DELETE ON source_table2
> FOR EACH ROW EXECUTE FUNCTION log_deletes();
>
> CREATE OR REPLACE FUNCTION log_deletes()
> RETURNS TRIGGER AS $$
> BEGIN
>     IF TG_TABLE_NAME = 'source_table1' THEN
>         INSERT INTO delete_audit1 (col1, col2, col3)
>         VALUES (OLD.col1, OLD.col2, OLD.col3);
>     ELSIF TG_TABLE_NAME = 'source_table2' THEN
>         INSERT INTO delete_audit2 (col4, col5, col6)
>         VALUES (OLD.col4, OLD.col5, OLD.col6);
>     -- Add more conditions for other tables

Dear god, no.

Since all the functions are going to be similar, I'd write a shell script to
generate all the triggers, one per relevant table. If you're going to record
every field, then save effort and don't bother enumerating them. You'll need
to dig into the PG catalog's guts to list columns in the correct order, but
Google and Stack Exchange make that easy enough.

(And, of course, that single trigger would be SLOW.)

This is essentially what we did 25 years ago to "logically replicate" data
from our OLTP system to the OLAP system. There were two log tables for every
table to be replicated: foo_LOG1 and foo_LOG2. The trigger wrote to foo_LOG1
on even days, and foo_LOG2 on odd days. It even added a current_timestamp
column, and an action_code ("I" for insert, "D" for delete, and "U" for
update). At around 01:00, a batch job copied out all of "yesterday's" log data
(there were 80-90 tables), and then truncated the table.
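A minimal generator sketch along those lines; every name in it (the table
list, the <table>_audit naming, the trailing timestamp/action columns) is an
assumption to adapt, and it only covers DELETE:

#!/bin/bash
# Emit one DELETE-audit function + trigger per table; review the output
# before feeding it to psql.
TABLES="source_table1 source_table2"
for tbl in $TABLES; do
  cat <<SQL
CREATE OR REPLACE FUNCTION log_deletes_${tbl}()
RETURNS trigger AS \$\$
BEGIN
    -- Assumes ${tbl}_audit starts with the same columns, in the same order,
    -- as ${tbl}, followed by a timestamp and an action code.
    INSERT INTO ${tbl}_audit SELECT (OLD).*, current_timestamp, 'D';
    RETURN OLD;
END;
\$\$ LANGUAGE plpgsql;

CREATE TRIGGER before_delete_${tbl}
BEFORE DELETE ON ${tbl}
FOR EACH ROW EXECUTE FUNCTION log_deletes_${tbl}();
SQL
done > generated_audit_triggers.sql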
Re: [External] : Re: New candidate JEP: 471: Deprecate the Memory-Access Methods in sun.misc.Unsafe for Removal
> On 11 Jun 2024, at 18:19, David Lloyd wrote: > > I would support this solution; it would solve the problem for conformant > serialization libraries. If a class has a `readObject`/etc. then we use it - > we wouldn't care if it was "natural" or generated. This also gives us the > option to allow the user to use `opens` selectively to opt-in to special > optimizations, without a major penalty if they do not. Right. Excellent. > > Is there already someone assigned for this task, or is it a > hot-off-the-presses new idea? It’s a new idea. > It would be great to see this. I've prototyped a few ideas for > constructor-based deserialization in the past (essentially, using > caller-sensitivity to control stream field access), but the issue always > comes down to needing the entire class hierarchy to participate and play > nicely, hence my suspicion that some amount of language/JDK changes would be > needed. I would support any new work in this direction either way though; it > gets rid of a lot of hackiness on the deserialization end. There’s no need to examine the class hierarchy. We can take this apart into two stages: Any class designed for serialization (and the axiom of “Serialization 2.0” is that only classes designed for serialization can be serialized) should offer a constructor that can recreate an object at an arbitrary (serializable) state, as well as access to whichever components make up that state. This should, ideally, be the situation today, but using this property today requires writing a custom serializer — code that knows how to access the components of the state of a particular class when serializing and knows how to pass them to an appropriate constructor when deserializing. What’s missing is a mechanism to automate this process. Some way to *generally* tell a class designed for serialization, “please give me the components of your state” when serializing, and a way to *generally* find an appropriate public constructor to which you can say, “here are your state components, now construct an object”. That is the broad idea outlined in Brian’s “Toward Better Serialization”[1] (if such a mechanism is ultimately introduced, the concrete details will likely be different). — Ron [1]: https://openjdk.org/projects/amber/design-notes/towards-better-serialization
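As a purely illustrative sketch (the class and its members are invented here,
not a JDK or proposal API): a class "designed for serialization" in this sense
exposes its state components and a public constructor able to rebuild any
serializable state, which is exactly what a hand-written custom serializer
wires together today:

import java.io.Serializable;

// Invented example type; nothing here is a proposed JDK API.
public final class Interval implements Serializable {
    private final long start;
    private final long end;

    // State-recreating constructor: can rebuild any valid serializable state,
    // and enforces the class invariant while doing so.
    public Interval(long start, long end) {
        if (end < start) throw new IllegalArgumentException("end < start");
        this.start = start;
        this.end = end;
    }

    // State components a serializer would read when writing the object out.
    public long start() { return start; }
    public long end()   { return end; }
}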
Re: [External] : Re: New candidate JEP: 471: Deprecate the Memory-Access Methods in sun.misc.Unsafe for Removal
> On 11 Jun 2024, at 17:27, David Lloyd wrote: > > > > On Tue, Jun 11, 2024 at 10:17 AM Alan Bateman wrote: > On 06/06/2024 18:37, David Lloyd wrote: >> Just bumping this one more time. I intend to start by opening a JIRA to add >> the two proposed methods to `ReflectionFactory`, and go from there. I guess >> that we might need a JEP for the proposed serialization restrictions, which >> is going to be considerably more involved, so I'm putting that off as a >> second step for now, pending further discussion. >> > > I don't think the JDK should be adding another backdoor for serialization > libs to do deep reflection. > > I'm curious, does your serialization library uses the ReflectionFactory to > get method handles to the readObject/writeObject methods (if they are > defined)? > > Yes, all of the method-access methods on ReflectionFactory are used, not just > for readObject/writeObject but also readObjectNoData, readResolve, and > writeReplace, the constructor accessors, and the factory methods for > OptionalDataException. We don't use the static initializer one though (maybe > the ORB does, I'm not sure). > > -- > - DML • he/him Ok, good. So the way the JDK’s built-in serialization mechanism allows custom implementation is by extending ObjectInputStream/ObjectOutputStream (and to that purpose it offers protected constructors of those two classes). ReflectionFactory offers access to non-public readObject/writeObject methods, but it does not currently offer a way to exploit the default field-based reflection when such methods are not explicitly declared. Given that, I think that an appropriate mechanism to consider is having ReflectionFactory offer access to MethodHandles that can perform de/serialization using the default JDK field-based serialization, but still requiring the serialization library to extend ObjectInputStream/ObjectOutputStream — i.e. those MethodHandles will invoke the appropriate OIS/OOS methods for writing fields — but not offering any direct access to non-public fields. This would still be an interim mechanism, and it may still require significant work by serialization libraries that don’t wish to offer custom serializers for JDK classes and don’t wish to use --add-opens. Looking further ahead, a future JDK mechanism to support custom serialization would still be based on invoking public constructors. I.e., the same effect could be achieved today — albeit with some more work — by the serialization library offering custom serializers for JDK classes that invoke their public constructors (the difference being that such a future mechanism would make it easier to automatically identify the public constructor to be invoked, whereas today it would need to be individually determined on a per-class basis). Serialization libraries that want to be better prepared for that future and not do further significant work when it arrives, should use custom serializers for JDK classes that invoke public constructors rather than rely on the mechanism I proposed above or on --add-opens. Whatever they choose to do, to assist in forming some future JDK mechanism that could support custom serialization, serialization libraries should report which JDK classes they commonly serialize and what public constructors they are missing, if any. — Ron
Re: Multiple tables row insertions from single psql input file
On Mon, Jun 10, 2024 at 5:16 PM David G. Johnston < david.g.johns...@gmail.com> wrote: > On Mon, Jun 10, 2024 at 12:43 PM Ron Johnson >> wrote: > > >> Most useful to you will be some number of "ALTER TABLE DISABLE >> TRIGGER ALL;" statements near the beginning of the file, and their "ALTER >> TABLE ... ENABLE TRIGGER ALL;" counterparts near the end of the file. >> >> > Have you just not heard of deferred constraints or is there some reason > besides deferring constraints that you'd want to use alter table in > transactional production code? > I mentioned bulk loading of data. Occasionally that's useful, even in a prod database.
Re: Multiple tables row insertions from single psql input file
On Mon, Jun 10, 2024 at 4:06 PM Rich Shepard wrote: > On Mon, 10 Jun 2024, Ron Johnson wrote: > > > With enough clever scripting you can create a .sql file that does almost > > anything. > > Ron, > > My projects don't all use SQL so I'm far from a clever scripter. :-) > No one is born a scripter, much less a clever scripter. > > Most useful to you will be some number of "ALTER TABLE DISABLE > > TRIGGER ALL;" statements near the beginning of the file, and their "ALTER > > TABLE ... ENABLE TRIGGER ALL;" counterparts near the end of the file. > > Doesn't alter table primarily apply to existing row values for specific > columns rather than inserting new rows and their column values? > I don't think so. For example, pg_dump has an option to add those DISABLE/ENABLE TRIGGER statements. It makes bulk loading of records much simpler.
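That option is --disable-triggers, and it applies to data-only dumps; a small
example with placeholder names:

pg_dump --data-only --disable-triggers --table=contact mydb > contact_data.sql

The emitted file wraps the data in ALTER TABLE ... DISABLE TRIGGER ALL /
ENABLE TRIGGER ALL statements, the same pattern suggested for a hand-written
load script; restoring it requires privileges sufficient to disable triggers
(table owner or superuser).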
Re: Multiple tables row insertions from single psql input file
On Mon, Jun 10, 2024 at 2:50 PM Rich Shepard wrote: > My business tracking database has three main tables: company, location, > contact. The company and contact primary keys are sequences. > > I've been adding new rows using INSERT INTO files separately for each table > after manually finding the last PK for the company and contact tables. The > location table has the company PK as a FK; the contact table has both > company PK and location PK as foreign keys. > > Now I will use next_val 'PK' to assign the value for each new table row. > > My question is whether I can create new rows for all three tables in the > same sql source file. Since the location and contact tables require > sequence > numbers from the company and location tables is there a way to specify, > e.g., current_val 'tablename PK' for the related tables? Or, do I still > need > to enter all new companies before their locations and contact? > With enough clever scripting you can create a .sql file that does almost anything. Most useful to you will be some number of "ALTER TABLE DISABLE TRIGGER ALL;" statements near the beginning of the file, and their "ALTER TABLE ... ENABLE TRIGGER ALL;" counterparts near the end of the file.
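On the ordering question in the quoted message: within a single session (and
therefore a single psql input file), currval() returns the value most recently
handed out by nextval() for that sequence, so all three tables can be loaded
from one file as long as each parent row is inserted before the rows that
reference it. A sketch with assumed sequence and column names:

BEGIN;

INSERT INTO company  (company_id, company_name)
VALUES (nextval('company_company_id_seq'), 'Acme Corp');

INSERT INTO location (location_id, company_id, city)
VALUES (nextval('location_location_id_seq'),
        currval('company_company_id_seq'),
        'Portland');

INSERT INTO contact  (contact_id, company_id, location_id, full_name)
VALUES (nextval('contact_contact_id_seq'),
        currval('company_company_id_seq'),
        currval('location_location_id_seq'),
        'Jane Doe');

COMMIT;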
Re: Escaping single quotes with backslash seems not to work
On Mon, Jun 10, 2024 at 11:42 AM David G. Johnston < david.g.johns...@gmail.com> wrote: > On Mon, Jun 10, 2024 at 8:19 AM Ron Johnson > wrote: > >> >> "set standard_encoding_strings = on" is at the top, and there's no other >> reference to it. >> >> > Well, if they are not using E-strings for escapes then you have the answer > why v14 is broken. Does it really matter why v9.6 apparently worked even > though it should not have if that setting was also set to on? > It matters that *something broke* either between PG 9.6 and 14 *OR* the old JDBC driver and the new JDBC driver, because the client end users are HOPPING MAD. (Don't ask why it wasn't caught in testing; that's beyond my control.)
Re: Escaping single quotes with backslash seems not to work
On Mon, Jun 10, 2024 at 11:08 AM Tom Lane wrote:
> Ron Johnson writes:
> > On Mon, Jun 10, 2024 at 10:56 AM David G. Johnston
> > <david.g.johns...@gmail.com> wrote:
> >> As the caution on that page says the default for standard conforming
> >> strings changed in 9.1. But maybe your 9.6 had the old value configured
> >> but when you upgraded to 14 you decided to go with the new default.
>
> > That was the first thing I checked... It's the same on both the 9.6 and
> > 14 systems.
>
> Did you check that as the user that runs the Java app (I sure hope
> it's not the superuser you evidently used here), in the DB the Java
> app uses? I'm wondering about per-user or per-DB settings of
> standard_conforming_strings.

It's a remote Java app which runs as a non-superuser. I don't know what it's
doing.

I ran "pg_dumpall -g" on the old systems, and applied the SQL to the
corresponding new servers. "set standard_encoding_strings = on" is at the top,
and there's no other reference to it.
Re: Escaping single quotes with backslash seems not to work
On Mon, Jun 10, 2024 at 10:56 AM David G. Johnston < david.g.johns...@gmail.com> wrote: > On Monday, June 10, 2024, Ron Johnson wrote: > >> On Mon, Jun 10, 2024 at 10:08 AM David G. Johnston < >> david.g.johns...@gmail.com> wrote: >> >>> On Mon, Jun 10, 2024 at 7:02 AM Ron Johnson >>> wrote: >>> >>>> PG 9.6 and PG 14 >>>> >>>> >>>> https://www.postgresql.org/docs/14/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS >>>> >>>> [quote] >>>> Any other character following a backslash is taken literally. Thus, to >>>> include a backslash character, write two backslashes (\\). Also, a >>>> single quote can be included in an escape string by writing \', in >>>> addition to the normal way of ''. >>>> [/quote] >>>> >>>> >>> The link you provided goes to the wrong subsection. The following >>> subsection, which discusses, String Constants With C-Style Escapes, >>> requires that you write the literal as E'abc\'def' >>> >>> Note the E prefix on the literal, which is the thing that enables >>> considering backslash as an escape. >>> >> >> This hasn't changed from 9.6, has it? >> >> A Java app that uses backslash escapes broke this morning on fields with >> single quotes, after the weekend migration from PG 9.6.24 to 14.12, and I >> don't know why. I'm not a Java programmer, though. >> >> > As the caution on that page says the default for standard conforming > strings changed in 9.1. But maybe your 9.6 had the old value configured but > when you upgraded to 14 you decided to go with the new default. > That was the first thing I checked... It's the same on both the 9.6 and 14 systems:. TAP=# show standard_conforming_strings; standard_conforming_strings - on (1 row) TAP=# TAP=# show backslash_quote; backslash_quote - safe_encoding (1 row)
Re: Escaping single quotes with backslash seems not to work
On Mon, Jun 10, 2024 at 10:08 AM David G. Johnston < david.g.johns...@gmail.com> wrote: > On Mon, Jun 10, 2024 at 7:02 AM Ron Johnson > wrote: > >> PG 9.6 and PG 14 >> >> >> https://www.postgresql.org/docs/14/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS >> >> [quote] >> Any other character following a backslash is taken literally. Thus, to >> include a backslash character, write two backslashes (\\). Also, a >> single quote can be included in an escape string by writing \', in >> addition to the normal way of ''. >> [/quote] >> >> > The link you provided goes to the wrong subsection. The following > subsection, which discusses, String Constants With C-Style Escapes, > requires that you write the literal as E'abc\'def' > > Note the E prefix on the literal, which is the thing that enables > considering backslash as an escape. > This hasn't changed from 9.6, has it? A Java app that uses backslash escapes broke this morning on fields with single quotes, after the weekend migration from PG 9.6.24 to 14.12, and I don't know why. I'm not a Java programmer, though.
Escaping single quotes with backslash seems not to work
PG 9.6 and PG 14

https://www.postgresql.org/docs/14/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS

[quote]
Any other character following a backslash is taken literally. Thus, to
include a backslash character, write two backslashes (\\). Also, a
single quote can be included in an escape string by writing \', in
addition to the normal way of ''.
[/quote]

But it doesn't seem to work. Obviously there's some misconfiguration or user
error, but I don't see what I did wrong.

TAP=# insert into foo (name, description) values ('XYZ_Name ', '''XYZ ''');
INSERT 0 1
TAP=# insert into foo (name, description) values ('XYZ_Name ', '\'XYZ ');
TAP'#
TAP'# ');
ERROR:  syntax error at or near "XYZ"
LINE 1: ...into foo (name, description) values ('XYZ_Name ', '\'XYZ ');

TAP=# show standard_conforming_strings;
 standard_conforming_strings
-----------------------------
 on
(1 row)

TAP=# show backslash_quote;
 backslash_quote
-----------------
 safe_encoding
(1 row)
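For completeness, with standard_conforming_strings = on (the default since
9.1) a backslash escape only works inside an E'' string, so either of these
forms succeeds against the same table:

-- Doubled single quote: always works.
INSERT INTO foo (name, description) VALUES ('XYZ_Name ', '''XYZ ''');

-- Backslash escape: requires the E prefix when standard_conforming_strings is on.
INSERT INTO foo (name, description) VALUES ('XYZ_Name ', E'\'XYZ ');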
Re: Question on pg_cron
On Sat, Jun 8, 2024 at 5:31 AM yudhi s wrote:
> Hello All,
>
> We have around 10 different partitioned tables for which the partition
> maintenance is done using the pg_partman extension. These tables have
> foreign key dependencies between them. We just called
> partman.run_maintenance_proc() through pg_cron without any parameters and
> it was working fine. So we can see only one entry in the cron.job table.
> And it runs once daily.
>
> It was all working fine and we were seeing the historical partitions being
> dropped and new partitions being created without any issue. But suddenly we
> started seeing it fail with the error "ERROR: cannot drop
> schema1.tab1_part_p2023_12_01 because other objects depend on it"

Have you changed the version of PG, pg_cron, or pg_partman lately? Or of
anything that pg_cron or pg_partman depends on?
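A quick way to see whether anything moved, using only standard catalogs:

SELECT version();                      -- PostgreSQL server version
SELECT extname, extversion
FROM   pg_extension
WHERE  extname IN ('pg_cron', 'pg_partman');

Comparing those against the last known-good values should show whether an
upgrade of the server or of either extension coincides with the new drop
failures.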
Re: [External] : Re: New candidate JEP: 471: Deprecate the Memory-Access Methods in sun.misc.Unsafe for Removal
> On 6 Jun 2024, at 18:37, David Lloyd wrote: > > Just bumping this one more time. I intend to start by opening a JIRA to add > the two proposed methods to `ReflectionFactory`, and go from there. I guess > that we might need a JEP for the proposed serialization restrictions, which > is going to be considerably more involved, so I'm putting that off as a > second step for now, pending further discussion. Hi. Seven years before the upcoming delivery of JEP 471 we [announced that the world of accessing JDK internals would be coming to an end](https://openjdk.org/jeps/260). Four years later, the JDK [started enforcing that](https://openjdk.org/jeps/403), but to give extra time to laggards who have not yet adapted to either refraining from accessing internals or instructing their users on the proper configuration of the JDK to allow doing so, we left some unsupported mechanisms in place to temporarily allow hacking around the restrictions until they can properly adapt. Now, three years later, we're starting the process of removing the temporary hacks, although we don't yet know how long the process should last. My first question is, how many more years do you think we should wait for libraries to finish the process by which they either refrain from accessing internals or instruct their users on the proper configuration that allows them to do so? Knowing how long we should wait for the libraries that have not yet finished their adaptation in the past seven years, and how far along they are in the process, could help inform how long we should wait until the actual removal of Unsafe. My second question has to do with the consideration that any serialization procedure should no longer bypass constructors (at least not without `--add-opens`, that is). I'd be interested to know about the difficulties serialization libraries have encountered in their process of migrating away from accessing internals and toward custom serializers for JDK classes, such as what public constructors are missing. This would help us identify what constructors we should add to support serialization that doesn't violate integrity. — Ron
[Int-area] Re: Reverse Traceroute Alternative
Rolf,

I don't believe that this is true. According to Section 3.4 of RFC 2151,
Traceroute is always directed towards an invalid port on the destination node.
TCP and UDP port 33434 is reserved for this purpose. I don't see any such
restriction in your draft. Did I miss something?

Ron

Hi Ron,

you raise a valid point, which however is not specific to reverse traceroute
but applies to regular traceroute just as well, since we perform the exact
same operation. One way to deal with this is to assign a port for this
purpose, just as for regular traceroute:
https://urldefense.com/v3/__https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?=131__;!!NEt6yMaO-gk!E-ykEBNpR2rNjoOUkPjIxU5n8mSBgySkexROpP52eUYXDgXVGu88eOOvzuXvb6MivXawy_Ckbv_6ea_QpuQg0GOaqLQ$
which is what we suggest in the document. For ICMP probes, this issue does not
apply.

Best,
Rolf

On 06.06.24 at 16:32, Ron Bonica wrote:
>
> Authors,
>
> Just a reminder regarding the one significant issue that was raised
> during our phone call:
>
> When a reverse traceroute message reaches its destination (i.e., the
> initiating node), what prevents it from being delivered to an application?
>
> Ron
Re: Poor performance after restoring database from snapshot on AWS RDS
On Fri, Jun 7, 2024 at 4:36 AM Sam Kidman wrote: > > This is due to the way that RDS restores snapshots. > > Thanks, I never would have guessed. Would vacuum analyze be sufficient > to defeat the lazy loading or would we need to do something more > specific to our application? (for example. select(*) on some commonly > used tables) > https://www.postgresql.org/docs/14/pgprewarm.html pg_prewarm is probably what you want. Don't know if RDS Postgresql supports it or not, though. > > I think vacuum full would certainly defeat the lazy loading since it > would copy all of the table data, but that may take a very long time > to run. I think vacuum analyze only scans a subset of rows but I might > be wrong about that. > > Best, Sam > > On Wed, Jun 5, 2024 at 10:09 PM Jeremy Smith > wrote: > > > > On Wed, Jun 5, 2024 at 4:23 AM Sam Kidman wrote: > > > > > We get very poor performance in the staging environment after this > > > restore takes place - after some usage it seems to get better perhaps > > > because of caching. > > > > > > > This is due to the way that RDS restores snapshots. > > > > From the docs ( > https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html > ): > > > > You can use the restored DB instance as soon as its status is > > available. The DB instance continues to load data in the background. > > This is known as lazy loading. > > > > If you access data that hasn't been loaded yet, the DB instance > > immediately downloads the requested data from Amazon S3, and then > > continues loading the rest of the data in the background. > > > > > > > > -Jeremy > > >
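If the pg_prewarm extension is available on the RDS instance (it is a contrib
module; availability there is an assumption to verify), the basic usage is
short. It walks every block of the relation, which pulls the lazily loaded
data down from S3 in one pass instead of on first query:

CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Placeholder relation names: prewarm each hot table and its main indexes.
SELECT pg_prewarm('orders');
SELECT pg_prewarm('orders_pkey');

Failing that, a plain SELECT count(*) FROM orders; reads the whole heap
(unless it is satisfied by an index-only scan) and has a similar
touch-every-block effect.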
Re: PG 14 pg_basebackup accepts --compress=server-zst option
On Fri, Jun 7, 2024 at 12:32 AM David G. Johnston < david.g.johns...@gmail.com> wrote: > On Thursday, June 6, 2024, Kashif Zeeshan wrote: > >> Hi >> >> On Fri, Jun 7, 2024 at 6:54 AM Ron Johnson >> wrote: >> >>> >>> https://www.postgresql.org/docs/14/app-pgbasebackup.html doesn't >>> mention "--compress=[{client|server}-]method". That first appears in the >>> v15 docs. >>> >>> And yet pg_basebackup doesn't complain about an invalid option. >>> (Technically, this is a bug; I first noticed it a week after copying a >>> script from a PG 15 server to five PG 14 servers, and running it quite a >>> few times without fail.) >>> >> > Seems a bit suspect, but as your script doesn’t mention tar the option > itself is apparently ignored, I guess silently. > Does this mean that "--compress=server-zst" is only relevant with --format=tar? > Assuming this isn’t an actual regression in behavior in a patch-released > older version > My apologies for not mentioning the version: 14.12-1PGDG-rhel8. > I don’t see us adding an error message at this point. > Me neither. It just seemed odd.
PG 14 pg_basebackup accepts --compress=server-zst option
https://www.postgresql.org/docs/14/app-pgbasebackup.html doesn't mention "--compress=[{client|server}-]method". That first appears in the v15 docs. And yet pg_basebackup doesn't complain about an invalid option. (Technically, this is a bug; I first noticed it a week after copying a script from a PG 15 server to five PG 14 servers, and running it quite a few times without fail.) $ pg_basebackup \ > --pgdata=$PGDATA \ > --dbname=service=basebackup \ > --verbose --progress \ > --checkpoint=fast \ > --write-recovery-conf \ > --wal-method=stream \ > --create-slot --slot=pgstandby1 \ > --compress=server-zst ; echo $? pg_basebackup: initiating base backup, waiting for checkpoint to complete pg_basebackup: checkpoint completed pg_basebackup: write-ahead log start point: 256/BC28 on timeline 1 pg_basebackup: starting background WAL receiver pg_basebackup: created replication slot "pgstandby1" 42567083/42567083 kB (100%), 1/1 tablespace pg_basebackup: write-ahead log end point: 256/BC000138 pg_basebackup: waiting for background process to finish streaming ... pg_basebackup: syncing data to disk ... pg_basebackup: renaming backup_manifest.tmp to backup_manifest pg_basebackup: base backup completed 0
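For reference, the PG 15+ documented spelling (where this option family first
officially appears) looks like the following; the zstd method and tar format
here are illustrative choices, not a claim about what 14.12 actually did with
the option:

pg_basebackup --pgdata=/backups/base \
              --format=tar \
              --wal-method=stream \
              --checkpoint=fast \
              --compress=server-zstd \
              --verbose --progress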
[jira] [Updated] (SPARK-48555) Support Column type for several SQL functions in scala and python
[ https://issues.apache.org/jira/browse/SPARK-48555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ron Serruya updated SPARK-48555: Priority: Major (was: Minor) > Support Column type for several SQL functions in scala and python > - > > Key: SPARK-48555 > URL: https://issues.apache.org/jira/browse/SPARK-48555 > Project: Spark > Issue Type: New Feature > Components: Connect, PySpark, Spark Core >Affects Versions: 3.5.1 >Reporter: Ron Serruya >Priority: Major > > Currently, several SQL functions accept both native types and Columns, but > only accept native types in their scala/python APIs: > * array_remove (works in SQL, scala, not in python) > * array_position(works in SQL, scala, not in python) > * map_contains_key (works in SQL, scala, not in python) > * substring (works only in SQL) > For example, this is possible in SQL: > {code:python} > spark.sql("select array_remove(col1, col2) from values(array(1,2,3), 2)") > {code} > But not in python: > {code:python} > df.select(F.array_remove(F.col("col1"), F.col("col2")) > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
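Until the Python signatures accept Columns, one workaround that already works
(just a sketch, not part of the proposed change) is to go through F.expr,
since the SQL form takes column arguments:

from pyspark.sql import functions as F

df = spark.createDataFrame([([1, 2, 3], 2)], "col1 array<int>, col2 int")

# array_remove with a Column second argument, via the SQL expression path.
df.select(F.expr("array_remove(col1, col2)").alias("removed")).show()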
Re: Logical replication type- WAL recovery fails and changes the size of wal segment in archivedir
On Wed, Jun 5, 2024 at 6:26 AM Laurenz Albe wrote: > On Wed, 2024-06-05 at 06:36 +, Meera Nair wrote: > > 2024-06-05 11:41:32.369 IST [54369] LOG: restored log file > "00050001006A" from archive > > 2024-06-05 11:41:33.112 IST [54369] LOG: restored log file > "00050001006B" from archive > > cp: cannot stat ‘/home/pgsql/wmaster/00050001006C’: No such > file or directory > > 2024-06-05 11:41:33.167 IST [54369] LOG: redo done at 1/6B000100 > > 2024-06-05 11:41:33.172 IST [54369] FATAL: archive file > "00050001006B" has wrong size: 0 instead of 16777216 > > 2024-06-05 11:41:33.173 IST [54367] LOG: startup process (PID 54369) > exited with exit code 1 > > 2024-06-05 11:41:33.173 IST [54367] LOG: terminating any other active > server processes > > 2024-06-05 11:41:33.174 IST [54375] FATAL: archive command was > terminated by signal 3: Quit > > 2024-06-05 11:41:33.174 IST [54375] DETAIL: The failed archive command > was: cp pg_wal/00050001006B > /home/pgsql/wmaster/00050001006B > > 2024-06-05 11:41:33.175 IST [54367] LOG: archiver process (PID 54375) > exited with exit code 1 > > 2024-06-05 11:41:33.177 IST [54367] LOG: database system is shut down > > > > Here ‘/home/pgsql/wmaster’ is my archivedir (the folder where WAL > segments are restored from) > > > > Before attempting start, size of > > 00050001006B file was 16 MB. > > After failing to detect 00050001006C, there is a FATAL error > saying wrong size for 00050001006B > > Now the size of 00050001006B is observed as 2 MB. Size of > all other WAL segments remain 16 MB. > > > > -rw--- 1 postgres postgres 2359296 Jun 5 11:34 > 00050001006B > > That looks like you have "archive_mode = always", and "archive_command" > writes > back to the archive. Don't do that. > In fact, don't write your own PITR backup process. Use something like PgBackRest or BarMan.
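If a hand-rolled archive_command stays in place while migrating to
PgBackRest/Barman, the pattern from the PostgreSQL docs at least refuses to
overwrite an already-archived segment (paths are placeholders):

# postgresql.conf sketch; the "test ! -f" guard keeps archive_command from
# overwriting or truncating a segment that already exists in the archive.
archive_mode = on
archive_command = 'test ! -f /path/to/archive/%f && cp %p /path/to/archive/%f'

# On the node doing the recovery:
restore_command = 'cp /path/to/archive/%f %p'
# ...and do NOT point that node's archive_command back at the same directory.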
Re: Can't Remote connection by IpV6
On Thu, Jun 6, 2024 at 11:03 AM Adrian Klaver wrote:
> On 6/6/24 07:46, Marcelo Marloch wrote:
> > Hi everyone, is it possible to remote connect through IPv6? IPv4 works
> > fine but I can't connect through v6.
> >
> > postgresql.conf is set to listen on all addresses and pg_hba.conf is set
> > with host all all :: md5; I've tried ::/0 and ::0/0 but had no success.
>
> Is the firewall open for IPv6 connections to the Postgres port?

ncat (the netcat implementation that comes with nmap) is great for checking
this. There's a Windows client, too.
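Two quick things to check, with a placeholder hostname and the default port
assumed:

# Is port 5432 reachable over IPv6 at all? (ncat ships with nmap)
ncat -6 -v db.example.com 5432

# pg_hba.conf: an IPv6 entry needs a mask; to accept any IPv6 client:
host    all    all    ::/0    md5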
[jira] [Updated] (SPARK-48091) Using `explode` together with `transform` in the same select statement causes aliases in the transformed column to be ignored
[ https://issues.apache.org/jira/browse/SPARK-48091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ron Serruya updated SPARK-48091: Description: When using an `explode` function, and `transform` function in the same select statement, aliases used inside the transformed column are ignored. This behavior only happens using the pyspark API and the scala API, but not when using the SQL API {code:java} from pyspark.sql import functions as F # Create the df df = spark.createDataFrame([ {"id": 1, "array1": ['a', 'b'], 'array2': [2,3,4]} ]){code} Good case, where all aliases are used {code:java} df.select( F.transform( 'array2', lambda x: F.struct(x.alias("some_alias"), F.col("id").alias("second_alias")) ).alias("new_array2") ).printSchema() root |-- new_array2: array (nullable = true) ||-- element: struct (containsNull = false) |||-- some_alias: long (nullable = true) |||-- second_alias: long (nullable = true){code} Bad case, when using explode, the alises inside the transformed column is ignored, and `id` is kept instead of `second_alias`, and `x_17` is used instead of `some_alias` {code:java} df.select( F.explode("array1").alias("exploded"), F.transform( 'array2', lambda x: F.struct(x.alias("some_alias"), F.col("id").alias("second_alias")) ).alias("new_array2") ).printSchema() root |-- exploded: string (nullable = true) |-- new_array2: array (nullable = true) ||-- element: struct (containsNull = false) |||-- x_17: long (nullable = true) |||-- id: long (nullable = true) {code} {code:scala} import org.apache.spark.sql.functions._ var df2 = df.select(array(lit(1), lit(2), lit(3)).as("my_array"), array(lit(1), lit(2), lit(3)).as("my_array2")) df2.select( explode($"my_array").as("exploded"), transform($"my_array2", (x) => struct(x.as("data"))).as("my_struct") ).printSchema {code} {noformat} root |-- exploded: integer (nullable = false) |-- my_struct: array (nullable = false) ||-- element: struct (containsNull = false) |||-- x_2: integer (nullable = false) {noformat} When using the SQL API instead, it works fine {code:java} spark.sql( """ select explode(array1) as exploded, transform(array2, x-> struct(x as some_alias, id as second_alias)) as array2 from {df} """, df=df ).printSchema() root |-- exploded: string (nullable = true) |-- array2: array (nullable = true) ||-- element: struct (containsNull = false) |||-- some_alias: long (nullable = true) |||-- second_alias: long (nullable = true) {code} Workaround: for now, using F.named_struct can be used as a workaround was: When using an `explode` function, and `transform` function in the same select statement, aliases used inside the transformed column are ignored. 
This behaviour only happens using the pyspark API, and not when using the SQL API {code:java} from pyspark.sql import functions as F # Create the df df = spark.createDataFrame([ {"id": 1, "array1": ['a', 'b'], 'array2': [2,3,4]} ]){code} Good case, where all aliases are used {code:java} df.select( F.transform( 'array2', lambda x: F.struct(x.alias("some_alias"), F.col("id").alias("second_alias")) ).alias("new_array2") ).printSchema() root |-- new_array2: array (nullable = true) ||-- element: struct (containsNull = false) |||-- some_alias: long (nullable = true) |||-- second_alias: long (nullable = true){code} Bad case, when using explode, the alises inside the transformed column is ignored, and `id` is kept instead of `second_alias`, and `x_17` is used instead of `some_alias` {code:java} df.select( F.explode("array1").alias("exploded"), F.transform( 'array2', lambda x: F.struct(x.alias("some_alias"), F.col("id").alias("second_alias")) ).alias("new_array2") ).printSchema() root |-- exploded: string (nullable = true) |-- new_array2: array (nullable = true) ||-- element: struct (containsNull = false) |||-- x_17: long (nullable = true) |||-- id: long (nullable = true) {code} When using the SQL API instead, it works fine {code:java} spark.sql( """ select explode(array1) as exploded, transform(array2, x-> struct(x as some_alias, id as second_alias)) as array2 from {df} """, df=df ).printSchema() root |-- exploded: string (nullable = true) |-- array2: array (n
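A sketch of the named_struct workaround mentioned in the description
(F.named_struct is available in recent PySpark releases; older versions can
spell the same thing with F.expr). Naming the fields explicitly keeps the
aliases from being discarded:

from pyspark.sql import functions as F

df.select(
    F.explode("array1").alias("exploded"),
    F.transform(
        "array2",
        lambda x: F.named_struct(
            F.lit("some_alias"), x,
            F.lit("second_alias"), F.col("id"),
        ),
    ).alias("new_array2"),
).printSchema()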
[jira] [Updated] (SPARK-48091) Using `explode` together with `transform` in the same select statement causes aliases in the transformed column to be ignored
[ https://issues.apache.org/jira/browse/SPARK-48091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ron Serruya updated SPARK-48091: Environment: Scala 2.12.15, Python 3.10, 3.12, OSX 14.4 and Databricks DBR 13.3, 14.3, Pyspark 3.4.0, 3.5.0, 3.5.1 (was: Python 3.10, 3.12, OSX 14.4 and Databricks DBR 13.3, 14.3, Pyspark 3.4.0, 3.5.0, 3.5.1) > Using `explode` together with `transform` in the same select statement causes > aliases in the transformed column to be ignored > - > > Key: SPARK-48091 > URL: https://issues.apache.org/jira/browse/SPARK-48091 > Project: Spark > Issue Type: Bug > Components: Spark Core >Affects Versions: 3.4.0, 3.5.0, 3.5.1 > Environment: Scala 2.12.15, Python 3.10, 3.12, OSX 14.4 and > Databricks DBR 13.3, 14.3, Pyspark 3.4.0, 3.5.0, 3.5.1 >Reporter: Ron Serruya >Priority: Minor > Labels: alias > > When using an `explode` function, and `transform` function in the same select > statement, aliases used inside the transformed column are ignored. > This behavior only happens using the pyspark API and the scala API, but not > when using the SQL API > > {code:java} > from pyspark.sql import functions as F > # Create the df > df = spark.createDataFrame([ > {"id": 1, "array1": ['a', 'b'], 'array2': [2,3,4]} > ]){code} > Good case, where all aliases are used > > {code:java} > df.select( > F.transform( > 'array2', > lambda x: F.struct(x.alias("some_alias"), > F.col("id").alias("second_alias")) > ).alias("new_array2") > ).printSchema() > root > |-- new_array2: array (nullable = true) > ||-- element: struct (containsNull = false) > |||-- some_alias: long (nullable = true) > |||-- second_alias: long (nullable = true){code} > Bad case, when using explode, the alises inside the transformed column is > ignored, and `id` is kept instead of `second_alias`, and `x_17` is used > instead of `some_alias` > > > {code:java} > df.select( > F.explode("array1").alias("exploded"), > F.transform( > 'array2', > lambda x: F.struct(x.alias("some_alias"), > F.col("id").alias("second_alias")) > ).alias("new_array2") > ).printSchema() > root > |-- exploded: string (nullable = true) > |-- new_array2: array (nullable = true) > ||-- element: struct (containsNull = false) > |||-- x_17: long (nullable = true) > |||-- id: long (nullable = true) {code} > > {code:scala} > import org.apache.spark.sql.functions._ > var df2 = df.select(array(lit(1), lit(2), lit(3)).as("my_array"), > array(lit(1), lit(2), lit(3)).as("my_array2")) > df2.select( > explode($"my_array").as("exploded"), > transform($"my_array2", (x) => struct(x.as("data"))).as("my_struct") > ).printSchema > {code} > {noformat} > root > |-- exploded: integer (nullable = false) > |-- my_struct: array (nullable = false) > ||-- element: struct (containsNull = false) > |||-- x_2: integer (nullable = false) > {noformat} > > When using the SQL API instead, it works fine > {code:java} > spark.sql( > """ > select explode(array1) as exploded, transform(array2, x-> struct(x as > some_alias, id as second_alias)) as array2 from {df} > """, df=df > ).printSchema() > root > |-- exploded: string (nullable = true) > |-- array2: array (nullable = true) > ||-- element: struct (containsNull = false) > |||-- some_alias: long (nullable = true) > |||-- second_alias: long (nullable = true) {code} > > Workaround: for now, using F.named_struct can be used as a workaround -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-48091) Using `explode` together with `transform` in the same select statement causes aliases in the transformed column to be ignored
[ https://issues.apache.org/jira/browse/SPARK-48091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ron Serruya updated SPARK-48091: Component/s: Spark Core (was: PySpark) > Using `explode` together with `transform` in the same select statement causes > aliases in the transformed column to be ignored > - > > Key: SPARK-48091 > URL: https://issues.apache.org/jira/browse/SPARK-48091 > Project: Spark > Issue Type: Bug > Components: Spark Core >Affects Versions: 3.4.0, 3.5.0, 3.5.1 > Environment: Python 3.10, 3.12, OSX 14.4 and Databricks DBR 13.3, > 14.3, Pyspark 3.4.0, 3.5.0, 3.5.1 >Reporter: Ron Serruya >Priority: Minor > Labels: alias > > When using an `explode` function, and `transform` function in the same select > statement, aliases used inside the transformed column are ignored. > This behaviour only happens using the pyspark API, and not when using the SQL > API > > {code:java} > from pyspark.sql import functions as F > # Create the df > df = spark.createDataFrame([ > {"id": 1, "array1": ['a', 'b'], 'array2': [2,3,4]} > ]){code} > Good case, where all aliases are used > > {code:java} > df.select( > F.transform( > 'array2', > lambda x: F.struct(x.alias("some_alias"), > F.col("id").alias("second_alias")) > ).alias("new_array2") > ).printSchema() > root > |-- new_array2: array (nullable = true) > ||-- element: struct (containsNull = false) > |||-- some_alias: long (nullable = true) > |||-- second_alias: long (nullable = true){code} > Bad case, when using explode, the alises inside the transformed column is > ignored, and `id` is kept instead of `second_alias`, and `x_17` is used > instead of `some_alias` > > > {code:java} > df.select( > F.explode("array1").alias("exploded"), > F.transform( > 'array2', > lambda x: F.struct(x.alias("some_alias"), > F.col("id").alias("second_alias")) > ).alias("new_array2") > ).printSchema() > root > |-- exploded: string (nullable = true) > |-- new_array2: array (nullable = true) > ||-- element: struct (containsNull = false) > |||-- x_17: long (nullable = true) > |||-- id: long (nullable = true) {code} > > > > When using the SQL API instead, it works fine > {code:java} > spark.sql( > """ > select explode(array1) as exploded, transform(array2, x-> struct(x as > some_alias, id as second_alias)) as array2 from {df} > """, df=df > ).printSchema() > root > |-- exploded: string (nullable = true) > |-- array2: array (nullable = true) > ||-- element: struct (containsNull = false) > |||-- some_alias: long (nullable = true) > |||-- second_alias: long (nullable = true) {code} > > Workaround: for now, using F.named_struct can be used as a workaround -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-48091) Using `explode` together with `transform` in the same select statement causes aliases in the transformed column to be ignored
[ https://issues.apache.org/jira/browse/SPARK-48091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ron Serruya updated SPARK-48091: Labels: alias (was: PySpark alias) > Using `explode` together with `transform` in the same select statement causes > aliases in the transformed column to be ignored > - > > Key: SPARK-48091 > URL: https://issues.apache.org/jira/browse/SPARK-48091 > Project: Spark > Issue Type: Bug > Components: PySpark >Affects Versions: 3.4.0, 3.5.0, 3.5.1 > Environment: Python 3.10, 3.12, OSX 14.4 and Databricks DBR 13.3, > 14.3, Pyspark 3.4.0, 3.5.0, 3.5.1 >Reporter: Ron Serruya >Priority: Minor > Labels: alias > > When using an `explode` function, and `transform` function in the same select > statement, aliases used inside the transformed column are ignored. > This behaviour only happens using the pyspark API, and not when using the SQL > API > > {code:java} > from pyspark.sql import functions as F > # Create the df > df = spark.createDataFrame([ > {"id": 1, "array1": ['a', 'b'], 'array2': [2,3,4]} > ]){code} > Good case, where all aliases are used > > {code:java} > df.select( > F.transform( > 'array2', > lambda x: F.struct(x.alias("some_alias"), > F.col("id").alias("second_alias")) > ).alias("new_array2") > ).printSchema() > root > |-- new_array2: array (nullable = true) > ||-- element: struct (containsNull = false) > |||-- some_alias: long (nullable = true) > |||-- second_alias: long (nullable = true){code} > Bad case, when using explode, the alises inside the transformed column is > ignored, and `id` is kept instead of `second_alias`, and `x_17` is used > instead of `some_alias` > > > {code:java} > df.select( > F.explode("array1").alias("exploded"), > F.transform( > 'array2', > lambda x: F.struct(x.alias("some_alias"), > F.col("id").alias("second_alias")) > ).alias("new_array2") > ).printSchema() > root > |-- exploded: string (nullable = true) > |-- new_array2: array (nullable = true) > ||-- element: struct (containsNull = false) > |||-- x_17: long (nullable = true) > |||-- id: long (nullable = true) {code} > > > > When using the SQL API instead, it works fine > {code:java} > spark.sql( > """ > select explode(array1) as exploded, transform(array2, x-> struct(x as > some_alias, id as second_alias)) as array2 from {df} > """, df=df > ).printSchema() > root > |-- exploded: string (nullable = true) > |-- array2: array (nullable = true) > ||-- element: struct (containsNull = false) > |||-- some_alias: long (nullable = true) > |||-- second_alias: long (nullable = true) {code} > > Workaround: for now, using F.named_struct can be used as a workaround -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[Int-area] Re: Reverse Traceroute Alternative
Authors,

Just a reminder regarding the one significant issue that was raised during our
phone call:

When a reverse traceroute message reaches its destination (i.e., the
initiating node), what prevents it from being delivered to an application?

Ron
[jira] [Created] (SPARK-48555) Support Column type for several SQL functions in scala and python
Ron Serruya created SPARK-48555: --- Summary: Support Column type for several SQL functions in scala and python Key: SPARK-48555 URL: https://issues.apache.org/jira/browse/SPARK-48555 Project: Spark Issue Type: New Feature Components: Connect, PySpark, Spark Core Affects Versions: 3.5.1 Reporter: Ron Serruya Currently, several SQL functions accept both native types and Columns, but only accept native types in their scala/python APIs: * array_remove (works in SQL, scala, not in python) * array_position(works in SQL, scala, not in python) * map_contains_key (works in SQL, scala, not in python) * substring (works only in SQL) For example, this is possible in SQL: {code:python} spark.sql("select array_remove(col1, col2) from values(array(1,2,3), 2)") {code} {code:python} df.select(F.array_remove(F.col("col1"), F.col("col2")) {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-48555) Support Column type for several SQL functions in scala and python
[ https://issues.apache.org/jira/browse/SPARK-48555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ron Serruya updated SPARK-48555: Description: Currently, several SQL functions accept both native types and Columns, but only accept native types in their scala/python APIs: * array_remove (works in SQL, scala, not in python) * array_position(works in SQL, scala, not in python) * map_contains_key (works in SQL, scala, not in python) * substring (works only in SQL) For example, this is possible in SQL: {code:python} spark.sql("select array_remove(col1, col2) from values(array(1,2,3), 2)") {code} But not in python: {code:python} df.select(F.array_remove(F.col("col1"), F.col("col2")) {code} was: Currently, several SQL functions accept both native types and Columns, but only accept native types in their scala/python APIs: * array_remove (works in SQL, scala, not in python) * array_position(works in SQL, scala, not in python) * map_contains_key (works in SQL, scala, not in python) * substring (works only in SQL) For example, this is possible in SQL: {code:python} spark.sql("select array_remove(col1, col2) from values(array(1,2,3), 2)") {code} {code:python} df.select(F.array_remove(F.col("col1"), F.col("col2")) {code} > Support Column type for several SQL functions in scala and python > - > > Key: SPARK-48555 > URL: https://issues.apache.org/jira/browse/SPARK-48555 > Project: Spark > Issue Type: New Feature > Components: Connect, PySpark, Spark Core >Affects Versions: 3.5.1 >Reporter: Ron Serruya >Priority: Minor > > Currently, several SQL functions accept both native types and Columns, but > only accept native types in their scala/python APIs: > * array_remove (works in SQL, scala, not in python) > * array_position(works in SQL, scala, not in python) > * map_contains_key (works in SQL, scala, not in python) > * substring (works only in SQL) > For example, this is possible in SQL: > {code:python} > spark.sql("select array_remove(col1, col2) from values(array(1,2,3), 2)") > {code} > But not in python: > {code:python} > df.select(F.array_remove(F.col("col1"), F.col("col2")) > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
(flink) branch master updated (9708f9fd657 -> f462926ad9c)
This is an automated email from the ASF dual-hosted git repository. ron pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git from 9708f9fd657 [FLINK-35501] Use common IO thread pool for RocksDB data transfer add 42289bd2c69 [FLINK-35201][table] Support the execution of drop materialized table in full refresh mode add c862fa60119 [FLINK-35201][table] Enhance function names in MaterializedTableStatementITCase for better readability add f462926ad9c [FLINK-35201][table] Remove unnecessary logs in MaterializedTableManager No new revisions were added by this update. Summary of changes: .../MaterializedTableManager.java | 133 +++-- .../scheduler/EmbeddedQuartzScheduler.java | 5 - .../AbstractMaterializedTableStatementITCase.java | 23 ++-- .../service/MaterializedTableStatementITCase.java | 110 +++-- .../workflow/EmbeddedSchedulerRelatedITCase.java | 27 + 5 files changed, 182 insertions(+), 116 deletions(-)
Re: Heading Level Access In Safari Browser
Hi Richard, This is on the iPhone. I don't understand this arrows and shift key business. On 5/31/2024 12:57 PM, Richard Turner wrote: When on a web site or in your html file, turn on quicknav if it isn't on using left+right arrows together. Then, press VO+q to turn on single letter quicknav. I have the VO command as control+Options so control+Options+q. Then, you can use the singel numbers or h for the next heading, shift plus h for previous or even shift+1 for previous heading level 1, etc. HTH, Richard, USA “Grandma always told us, “Be careful when you pray for patience. God stores it on the other side of Hell and you will have to go through Hell to get it.” -- Cedrick Bridgeforth My web site: https://www.turner42.com/ On May 31, 2024, at 9:18 AM, Mario Eiland wrote: Use the rotor while in the Safari app and look for headings. Once you hear headings then flick down with one finger and that should take you from heading to heading. To go up flick up. If you can't find the heading option in the rotor then you must add it in the VoiceOver rotor settings. Good luck! -Original Message- From: viphone@googlegroups.com On Behalf Of Ron Canazzi Sent: Friday, May 31, 2024 8:42 AM To: ViPhone List Subject: Heading Level Access In Safari Browser Hi Group, I finally was able to change some settings in Safari Browser on the iPhone to get it to display HTML files that are stored locally on the iPhone. I created my modified Dice Football Game Play Sheet by using headings to more quickly navigate from play list to play list. I have the lists separated into running plays, kicking plays, passing plays and conversions at level one and the various list of items such as short pass, long pass and screen pass for the passing plays and the running plays such as left end run, right tackle play and reverse at heading level two. Is there any way to navigate by heading levels using a quick number scheme such as is done on Windows desktops with quick key number navigation such as number one for heading level one and number two for heading level two on the iPhone? Thanks for any help. -- Signature: For a nation to admit it has done grievous wrongs and will strive to correct them for the betterment of all is no vice; For a nation to claim it has always been great, needs no improvement and to cling to its past achievements is no virtue! -- The following information is important for all members of the V iPhone list. If you have any questions or concerns about the running of this list, or if you feel that a member's post is inappropriate, please contact the owners or moderators directly rather than posting on the list itself. Your V iPhone list moderator is Mark Taylor. Mark can be reached at: mk...@ucla.edu. Your list owner is Cara Quinn - you can reach Cara at caraqu...@caraquinn.com The archives for this list can be searched at: http://www.mail-archive.com/viphone@googlegroups.com/ --- You received this message because you are subscribed to the Google Groups "VIPhone" group. To unsubscribe from this group and stop receiving emails from it, send an email to viphone+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/viphone/7a3d2c9c-6deb-8621-6d2a-105199764add%40roadrunner.com. -- The following information is important for all members of the V iPhone list. If you have any questions or concerns about the running of this list, or if you feel that a member's post is inappropriate, please contact the owners or moderators directly rather than posting on the list itself. 
(flink) 02/03: [FLINK-35200][table] Support the execution of suspend, resume materialized table in full refresh mode
This is an automated email from the ASF dual-hosted git repository. ron pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit 9b51711d00a2e1bd93f5a474b9c99b542aaf27cf Author: Feng Jin AuthorDate: Sat Jun 1 23:43:54 2024 +0800 [FLINK-35200][table] Support the execution of suspend, resume materialized table in full refresh mode This closes #24877 --- .../MaterializedTableManager.java | 308 ++--- .../AbstractMaterializedTableStatementITCase.java | 12 +- ...GatewayRestEndpointMaterializedTableITCase.java | 10 +- .../service/MaterializedTableStatementITCase.java | 130 - 4 files changed, 337 insertions(+), 123 deletions(-) diff --git a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java index eeb6b5109e3..ea2a56e2010 100644 --- a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java +++ b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java @@ -56,6 +56,9 @@ import org.apache.flink.table.refresh.RefreshHandlerSerializer; import org.apache.flink.table.types.logical.LogicalTypeFamily; import org.apache.flink.table.workflow.CreatePeriodicRefreshWorkflow; import org.apache.flink.table.workflow.CreateRefreshWorkflow; +import org.apache.flink.table.workflow.ModifyRefreshWorkflow; +import org.apache.flink.table.workflow.ResumeRefreshWorkflow; +import org.apache.flink.table.workflow.SuspendRefreshWorkflow; import org.apache.flink.table.workflow.WorkflowScheduler; import org.slf4j.Logger; @@ -173,10 +176,10 @@ public class MaterializedTableManager { CatalogMaterializedTable materializedTable = createMaterializedTableOperation.getCatalogMaterializedTable(); if (CatalogMaterializedTable.RefreshMode.CONTINUOUS == materializedTable.getRefreshMode()) { -createMaterializedInContinuousMode( +createMaterializedTableInContinuousMode( operationExecutor, handle, createMaterializedTableOperation); } else { -createMaterializedInFullMode( +createMaterializedTableInFullMode( operationExecutor, handle, createMaterializedTableOperation); } // Just return ok for unify different refresh job info of continuous and full mode, user @@ -184,7 +187,7 @@ public class MaterializedTableManager { return ResultFetcher.fromTableResult(handle, TABLE_RESULT_OK, false); } -private void createMaterializedInContinuousMode( +private void createMaterializedTableInContinuousMode( OperationExecutor operationExecutor, OperationHandle handle, CreateMaterializedTableOperation createMaterializedTableOperation) { @@ -207,17 +210,21 @@ public class MaterializedTableManager { } catch (Exception e) { // drop materialized table while submit flink streaming job occur exception. 
Thus, weak // atomicity is guaranteed -LOG.warn( -"Submit continuous refresh job occur exception, drop materialized table {}.", -materializedTableIdentifier, -e); operationExecutor.callExecutableOperation( handle, new DropMaterializedTableOperation(materializedTableIdentifier, true)); -throw e; +LOG.error( +"Submit continuous refresh job for materialized table {} occur exception.", +materializedTableIdentifier, +e); +throw new SqlExecutionException( +String.format( +"Submit continuous refresh job for materialized table %s occur exception.", +materializedTableIdentifier), +e); } } -private void createMaterializedInFullMode( +private void createMaterializedTableInFullMode( OperationExecutor operationExecutor, OperationHandle handle, CreateMaterializedTableOperation createMaterializedTableOperation) { @@ -258,12 +265,13 @@ public class MaterializedTableManager { handle, materializedTableIdentifier, catalogMaterializedTable, +CatalogMaterializedTable.RefreshStatus.ACTIVATED, refreshHandler.asSummaryString(), serializedRefreshHandler); } catch (Exception e) { // drop materialized table while create refresh workflo
(flink) branch master updated (62f9de806ac -> 2e158fe300f)
This is an automated email from the ASF dual-hosted git repository. ron pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git from 62f9de806ac fixup! [FLINK-35351][checkpoint] Fix fail during restore from unaligned checkpoint with custom partitioner new 8d1e043b0c4 [FLINK-35200][table] Add dynamic options for ResumeRefreshWorkflow new 9b51711d00a [FLINK-35200][table] Support the execution of suspend, resume materialized table in full refresh mode new 2e158fe300f [FLINK-35200][table] Fix missing clusterInfo in materialized table refresh rest API return value The 3 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: .../ResumeEmbeddedSchedulerWorkflowHandler.java| 17 +- .../ResumeEmbeddedSchedulerWorkflowHeaders.java| 42 ++- ...esumeEmbeddedSchedulerWorkflowRequestBody.java} | 24 +- .../MaterializedTableManager.java | 341 +++-- .../table/gateway/service/utils/Constants.java | 1 + .../workflow/EmbeddedWorkflowScheduler.java| 17 +- .../scheduler/EmbeddedQuartzScheduler.java | 50 ++- .../AbstractMaterializedTableStatementITCase.java | 12 +- ...GatewayRestEndpointMaterializedTableITCase.java | 96 -- .../service/MaterializedTableStatementITCase.java | 130 +++- .../workflow/EmbeddedSchedulerRelatedITCase.java | 14 +- .../resources/sql_gateway_rest_api_v3.snapshot | 8 +- .../table/workflow/ResumeRefreshWorkflow.java | 11 +- 13 files changed, 601 insertions(+), 162 deletions(-) copy flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/scheduler/{EmbeddedSchedulerWorkflowRequestBody.java => ResumeEmbeddedSchedulerWorkflowRequestBody.java} (73%)
(flink) 01/03: [FLINK-35200][table] Add dynamic options for ResumeRefreshWorkflow
This is an automated email from the ASF dual-hosted git repository. ron pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit 8d1e043b0c4277582b8862c2bc3314631eec4a7b Author: Feng Jin AuthorDate: Sat Jun 1 23:43:07 2024 +0800 [FLINK-35200][table] Add dynamic options for ResumeRefreshWorkflow This closes #24877 --- .../ResumeEmbeddedSchedulerWorkflowHandler.java| 17 -- .../ResumeEmbeddedSchedulerWorkflowHeaders.java| 42 - ...ResumeEmbeddedSchedulerWorkflowRequestBody.java | 71 ++ .../workflow/EmbeddedWorkflowScheduler.java| 17 -- .../scheduler/EmbeddedQuartzScheduler.java | 50 ++- .../workflow/EmbeddedSchedulerRelatedITCase.java | 14 - .../resources/sql_gateway_rest_api_v3.snapshot | 8 ++- .../table/workflow/ResumeRefreshWorkflow.java | 11 +++- 8 files changed, 212 insertions(+), 18 deletions(-) diff --git a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHandler.java b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHandler.java index 4d0979946b8..d5030367839 100644 --- a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHandler.java +++ b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHandler.java @@ -25,7 +25,7 @@ import org.apache.flink.runtime.rest.messages.EmptyResponseBody; import org.apache.flink.runtime.rest.messages.MessageHeaders; import org.apache.flink.table.gateway.api.SqlGatewayService; import org.apache.flink.table.gateway.rest.handler.AbstractSqlGatewayRestHandler; -import org.apache.flink.table.gateway.rest.message.materializedtable.scheduler.EmbeddedSchedulerWorkflowRequestBody; +import org.apache.flink.table.gateway.rest.message.materializedtable.scheduler.ResumeEmbeddedSchedulerWorkflowRequestBody; import org.apache.flink.table.gateway.rest.util.SqlGatewayRestAPIVersion; import org.apache.flink.table.gateway.workflow.scheduler.EmbeddedQuartzScheduler; @@ -34,13 +34,16 @@ import org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpResponseSt import javax.annotation.Nonnull; import javax.annotation.Nullable; +import java.util.Collections; import java.util.Map; import java.util.concurrent.CompletableFuture; /** Handler to resume workflow in embedded scheduler. 
*/ public class ResumeEmbeddedSchedulerWorkflowHandler extends AbstractSqlGatewayRestHandler< -EmbeddedSchedulerWorkflowRequestBody, EmptyResponseBody, EmptyMessageParameters> { +ResumeEmbeddedSchedulerWorkflowRequestBody, +EmptyResponseBody, +EmptyMessageParameters> { private final EmbeddedQuartzScheduler quartzScheduler; @@ -49,7 +52,7 @@ public class ResumeEmbeddedSchedulerWorkflowHandler EmbeddedQuartzScheduler quartzScheduler, Map responseHeaders, MessageHeaders< -EmbeddedSchedulerWorkflowRequestBody, +ResumeEmbeddedSchedulerWorkflowRequestBody, EmptyResponseBody, EmptyMessageParameters> messageHeaders) { @@ -60,12 +63,16 @@ public class ResumeEmbeddedSchedulerWorkflowHandler @Override protected CompletableFuture handleRequest( @Nullable SqlGatewayRestAPIVersion version, -@Nonnull HandlerRequest request) +@Nonnull HandlerRequest request) throws RestHandlerException { String workflowName = request.getRequestBody().getWorkflowName(); String workflowGroup = request.getRequestBody().getWorkflowGroup(); +Map dynamicOptions = request.getRequestBody().getDynamicOptions(); try { -quartzScheduler.resumeScheduleWorkflow(workflowName, workflowGroup); +quartzScheduler.resumeScheduleWorkflow( +workflowName, +workflowGroup, +dynamicOptions == null ? Collections.emptyMap() : dynamicOptions); return CompletableFuture.completedFuture(EmptyResponseBody.getInstance()); } catch (Exception e) { throw new RestHandlerException( diff --git a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHeaders.java b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHeaders.java index dface14468c..cc5
(flink) 03/03: [FLINK-35200][table] Fix missing clusterInfo in materialized table refresh rest API return value
This is an automated email from the ASF dual-hosted git repository. ron pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit 2e158fe300f6e93ba9b3d600e0237237ac0b2131 Author: Feng Jin AuthorDate: Tue Jun 4 01:56:08 2024 +0800 [FLINK-35200][table] Fix missing clusterInfo in materialized table refresh rest API return value This closes #24877 --- .../MaterializedTableManager.java | 35 - .../table/gateway/service/utils/Constants.java | 1 + ...GatewayRestEndpointMaterializedTableITCase.java | 86 ++ 3 files changed, 104 insertions(+), 18 deletions(-) diff --git a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java index ea2a56e2010..4c35e211e0d 100644 --- a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java +++ b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java @@ -23,15 +23,20 @@ import org.apache.flink.annotation.VisibleForTesting; import org.apache.flink.api.common.JobStatus; import org.apache.flink.configuration.CheckpointingOptions; import org.apache.flink.configuration.Configuration; +import org.apache.flink.table.api.DataTypes; import org.apache.flink.table.api.ValidationException; import org.apache.flink.table.api.config.TableConfigOptions; import org.apache.flink.table.catalog.CatalogMaterializedTable; +import org.apache.flink.table.catalog.Column; import org.apache.flink.table.catalog.ObjectIdentifier; import org.apache.flink.table.catalog.ResolvedCatalogBaseTable; import org.apache.flink.table.catalog.ResolvedCatalogMaterializedTable; import org.apache.flink.table.catalog.ResolvedSchema; import org.apache.flink.table.catalog.TableChange; +import org.apache.flink.table.data.GenericMapData; +import org.apache.flink.table.data.GenericRowData; import org.apache.flink.table.data.RowData; +import org.apache.flink.table.data.StringData; import org.apache.flink.table.factories.WorkflowSchedulerFactoryUtil; import org.apache.flink.table.gateway.api.operation.OperationHandle; import org.apache.flink.table.gateway.api.results.ResultSet; @@ -94,6 +99,8 @@ import static org.apache.flink.table.api.internal.TableResultInternal.TABLE_RESU import static org.apache.flink.table.catalog.CatalogBaseTable.TableKind.MATERIALIZED_TABLE; import static org.apache.flink.table.factories.WorkflowSchedulerFactoryUtil.WORKFLOW_SCHEDULER_PREFIX; import static org.apache.flink.table.gateway.api.endpoint.SqlGatewayEndpointFactoryUtils.getEndpointConfig; +import static org.apache.flink.table.gateway.service.utils.Constants.CLUSTER_INFO; +import static org.apache.flink.table.gateway.service.utils.Constants.JOB_ID; import static org.apache.flink.table.utils.DateTimeUtils.formatTimestampString; import static org.apache.flink.table.utils.IntervalFreshnessUtils.convertFreshnessToCron; @@ -594,11 +601,33 @@ public class MaterializedTableManager { dynamicOptions); try { -LOG.debug( -"Begin to manually refreshing the materialization table {}, statement: {}", +LOG.info( +"Begin to manually refreshing the materialized table {}, statement: {}", materializedTableIdentifier, insertStatement); -return operationExecutor.executeStatement(handle, customConfig, insertStatement); +ResultFetcher resultFetcher = 
+operationExecutor.executeStatement(handle, customConfig, insertStatement); + +List results = fetchAllResults(resultFetcher); +String jobId = results.get(0).getString(0).toString(); +String executeTarget = + operationExecutor.getSessionContext().getSessionConf().get(TARGET); +Map clusterInfo = new HashMap<>(); +clusterInfo.put( +StringData.fromString(TARGET.key()), StringData.fromString(executeTarget)); +// TODO get clusterId + +return ResultFetcher.fromResults( +handle, +ResolvedSchema.of( +Column.physical(JOB_ID, DataTypes.STRING()), +Column.physical( +CLUSTER_INFO, +DataTypes.MAP(DataTypes.STRING(), DataTypes.STRING(, +Collections.singletonList( +GenericRowData.of( +StringData.fromString(jobId), +new
Re: Purpose of pg_dump tar archive format?
On Tue, Jun 4, 2024 at 3:47 PM Gavin Roy wrote: > > On Tue, Jun 4, 2024 at 3:15 PM Ron Johnson > wrote: > >> >> But why tar instead of custom? That was part of my original question. >> > > I've found it pretty useful for programmatically accessing data in a dump > for large databases outside of the normal pg_dump/pg_restore workflow. You > don't have to seek through one large binary file to get to the data section > to get at the data. > Interesting. Please explain, though, since a big tarball _is_ "one large binary file" that you have to sequentially scan. (I don't know the internal structure of custom format files, and whether they have file pointers to each table.) Is it because you need individual .dat "COPY" files for something other than loading into PG tables (since pg_restore --table= does that, too), and directory format archives can be inconvenient?
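For what it's worth, the practical advantage of the tar format for programmatic access is that every table's COPY data is a separate .dat member that any stock tar library can read, without involving pg_restore at all; the scan is still sequential, but the members are plain text that other tools can consume directly. A minimal sketch, assuming Apache Commons Compress is on the classpath and the usual toc.dat plus numbered *.dat layout (the member name below is illustrative):

import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Sketch: list the members of a pg_dump -Ft archive and dump the start of one
// table's COPY data using only a generic tar reader (no pg_restore involved).
public class DumpTarPeek {
    public static void main(String[] args) throws IOException {
        String archive = args[0];                              // e.g. test.tar
        String wanted = args.length > 1 ? args[1] : null;      // e.g. 3116.dat (illustrative)
        try (TarArchiveInputStream tar = new TarArchiveInputStream(
                new BufferedInputStream(new FileInputStream(archive)))) {
            TarArchiveEntry entry;
            while ((entry = tar.getNextTarEntry()) != null) {
                System.out.println(entry.getName() + "  " + entry.getSize() + " bytes");
                if (entry.getName().equals(wanted)) {
                    byte[] buf = new byte[8192];
                    int n = tar.read(buf);                     // reads from the current entry
                    if (n > 0) {
                        System.out.write(buf, 0, n);
                    }
                }
            }
        }
    }
}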
[jira] [Commented] (OLINGO-1624) Serialization performance regression in Olingo 5
[ https://issues.apache.org/jira/browse/OLINGO-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17852208#comment-17852208 ] Ron Passerini commented on OLINGO-1624: --- I've attached a patch for 5.0 that will # Still handle the fix for OLINGO-1167 # Remove the performance problem referenced in this Jira > Serialization performance regression in Olingo 5 > > > Key: OLINGO-1624 > URL: https://issues.apache.org/jira/browse/OLINGO-1624 > Project: Olingo > Issue Type: Bug > Components: odata4-commons >Affects Versions: (Java) V4 4.10.0, Version (Java) V4 5.0.0 >Reporter: Florent Albert >Priority: Major > Attachments: > 0001-OLINGO-1624-Fix-performance-issue-for-resolving-EdmP.patch > > > Olingo 4.10 (via OLINGO-1167) introduced a performance regression. Commit > [https://github.com/apache/olingo-odata4/commit/ce5028d24f220ad0f60b5ac023c10e7b88b7c806] > now makes resolution of EdmPrimitiveTypeKind create and suppress an > exception for any non primitive type. > Construction in EdmTypeInfo in 4.10 and 5.0 is very expensive and causes > severe performance degradation on large datasets. For the same dataset, > ODataJsonSerializer.getEdmProperty() spends <200 ms in Olingo 4.9 and ~3000 > ms in Olingo 5 (15x slower). > This same issue was originally reported in in Olingo 4.2 and fixed in 4.7 > (via OLINGO-1357 and > [https://github.com/apache/olingo-odata4/pull/51/files|https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fapache%2Folingo-odata4%2Fpull%2F51%2Ffiles=05%7C02%7Cfalbert%40ptc.com%7Cd24ae4d9097c4fcf037c08dc80c242c1%7Cb9921086ff774d0d828acb3381f678e2%7C0%7C0%7C638526819046368587%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C=Y5ae4MIeiqxXLXbwJICWVMy0vQgfOohocPVmDqo1vlo%3D=0]). -- This message was sent by Atlassian Jira (v8.20.10#820010)
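As an aside, the shape of the remedy here is to stop using a constructed-and-suppressed exception as the "is this a primitive type?" probe and to answer that question from a precomputed lookup instead. A minimal, self-contained sketch of that idea (the enum and type names below are hypothetical stand-ins, not the attached Olingo patch):

import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration: resolve a qualified type name to a primitive kind
// without building an exception for every complex or entity type encountered.
final class PrimitiveTypeIndex {

    enum Kind { STRING, INT32, DATETIMEOFFSET }      // stand-in for the real primitive kinds

    private static final Map<String, Kind> BY_NAME = new HashMap<>();
    static {
        BY_NAME.put("Edm.String", Kind.STRING);
        BY_NAME.put("Edm.Int32", Kind.INT32);
        BY_NAME.put("Edm.DateTimeOffset", Kind.DATETIMEOFFSET);
    }

    // Returns null for non-primitive types instead of creating and swallowing an exception.
    static Kind resolveOrNull(String fullQualifiedName) {
        return BY_NAME.get(fullQualifiedName);
    }

    public static void main(String[] args) {
        System.out.println(resolveOrNull("Edm.Int32"));        // INT32
        System.out.println(resolveOrNull("My.Complex.Type"));  // null, no exception constructed
    }
}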
[jira] [Updated] (OLINGO-1624) Serialization performance regression in Olingo 5
[ https://issues.apache.org/jira/browse/OLINGO-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ron Passerini updated OLINGO-1624: -- Attachment: 0001-OLINGO-1624-Fix-performance-issue-for-resolving-EdmP.patch > Serialization performance regression in Olingo 5 > > > Key: OLINGO-1624 > URL: https://issues.apache.org/jira/browse/OLINGO-1624 > Project: Olingo > Issue Type: Bug > Components: odata4-commons >Affects Versions: (Java) V4 4.10.0, Version (Java) V4 5.0.0 >Reporter: Florent Albert >Priority: Major > Attachments: > 0001-OLINGO-1624-Fix-performance-issue-for-resolving-EdmP.patch > > > Olingo 4.10 (via OLINGO-1167) introduced a performance regression. Commit > [https://github.com/apache/olingo-odata4/commit/ce5028d24f220ad0f60b5ac023c10e7b88b7c806] > now makes resolution of EdmPrimitiveTypeKind create and suppress an > exception for any non primitive type. > Construction in EdmTypeInfo in 4.10 and 5.0 is very expensive and causes > severe performance degradation on large datasets. For the same dataset, > ODataJsonSerializer.getEdmProperty() spends <200 ms in Olingo 4.9 and ~3000 > ms in Olingo 5 (15x slower). > This same issue was originally reported in in Olingo 4.2 and fixed in 4.7 > (via OLINGO-1357 and > [https://github.com/apache/olingo-odata4/pull/51/files|https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fapache%2Folingo-odata4%2Fpull%2F51%2Ffiles=05%7C02%7Cfalbert%40ptc.com%7Cd24ae4d9097c4fcf037c08dc80c242c1%7Cb9921086ff774d0d828acb3381f678e2%7C0%7C0%7C638526819046368587%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C=Y5ae4MIeiqxXLXbwJICWVMy0vQgfOohocPVmDqo1vlo%3D=0]). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (OLINGO-1625) The serializers have performance issues when Entities contain very large numbers of Properties
[ https://issues.apache.org/jira/browse/OLINGO-1625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ron Passerini updated OLINGO-1625: -- Flags: Patch Description: I've implemented an OData service that serves up some large datasets in a streaming fashion. Some of those datasets have large numbers of fields (over 1,000). When I requested one of them which was around 350M in size, it took way longer than expected. I profiled the request in IntelliJ's profiler and found that over 75% of the CPU cycles were spent in String.equals() comparing column names in the serializers. This is because there is an O(N^2) issue that for every column selected (in my case all of them) it will iterate across the entire list of entity properties looking for the one with the same name. I have already implemented a fix whereby before doing the property serialization, the serializer builds a hash map of property-name-to-property, making the resulting algorithm O(N) with the number of properties being serialized. After profiling the change, again in IntelliJ's profiler, the String.equals() which was over 75% before, is now under 1%. I will be creating a patch and attaching it momentarily. was: I've implemented an OData service that serves up some large datasets in a streaming fashion. Some of those datasets have large numbers of fields (over 1,000). When I requested one of them which was around 350M in size, it took way longer than expected. I profiled the request in IntelliJ's profiler and found that over 75% of the CPU cycles were spent in String.equals() comparing column names in the serializers. This is because there is an O(N^2) issue that for every column selected (in my case all of them) it will iterate across the entire list of entity properties looking for the one with the same name. I have already implemented a fix whereby before doing the property serialization, the serializer builds a hash map of property-name-to-property, making the resulting algorithm O(N) with the number of properties being serialized. After profiling the change, again in IntelliJ's profiler, the String.equals() which was over 75% before, is now under 1%. I will be creating a patch and attaching it momentarily. Patch now attached. > The serializers have performance issues when Entities contain very large > numbers of Properties > -- > > Key: OLINGO-1625 > URL: https://issues.apache.org/jira/browse/OLINGO-1625 > Project: Olingo > Issue Type: Bug > Components: odata4-server >Affects Versions: Version (Java) V4 5.0.0 >Reporter: Ron Passerini >Priority: Major > Labels: json, performance, serialization, xml > Fix For: (Java) V4 5.0.1 > > Attachments: > 0001-OLINGO-1625-Fix-performance-problem-in-serializers-f.patch > > > I've implemented an OData service that serves up some large datasets in a > streaming fashion. Some of those datasets have large numbers of fields (over > 1,000). When I requested one of them which was around 350M in size, it took > way longer than expected. > I profiled the request in IntelliJ's profiler and found that over 75% of the > CPU cycles were spent in String.equals() comparing column names in the > serializers. This is because there is an O(N^2) issue that for every column > selected (in my case all of them) it will iterate across the entire list of > entity properties looking for the one with the same name. 
> I have already implemented a fix whereby before doing the property > serialization, the serializer builds a hash map of property-name-to-property, > making the resulting algorithm O(N) with the number of properties being > serialized. > After profiling the change, again in IntelliJ's profiler, the String.equals() > which was over 75% before, is now under 1%. > I will be creating a patch and attaching it momentarily. -- This message was sent by Atlassian Jira (v8.20.10#820010)
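The fix described above boils down to replacing a per-column linear scan over the entity's property list with a one-time index of properties by name. A minimal sketch of that pattern (hypothetical names, not the attached patch itself):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: index properties by name once, then look each selected column up in
// O(1) instead of scanning the whole property list for every column.
final class PropertyIndexSketch {

    record Property(String name, Object value) {}

    // Old behaviour: O(selected columns * properties) String.equals() calls.
    static Property findLinear(List<Property> properties, String name) {
        for (Property p : properties) {
            if (p.name().equals(name)) {
                return p;
            }
        }
        return null;
    }

    // New behaviour: one pass to build the index, then constant-time lookups.
    static Map<String, Property> indexByName(List<Property> properties) {
        Map<String, Property> byName = new HashMap<>(properties.size());
        for (Property p : properties) {
            byName.put(p.name(), p);
        }
        return byName;
    }

    public static void main(String[] args) {
        List<Property> props = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) {
            props.add(new Property("col" + i, i));
        }
        Map<String, Property> byName = indexByName(props);
        System.out.println(byName.get("col999").value());   // 999, without 1,000 comparisons
    }
}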
[jira] [Updated] (OLINGO-1625) The serializers have performance issues when Entities contain very large numbers of Properties
[ https://issues.apache.org/jira/browse/OLINGO-1625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ron Passerini updated OLINGO-1625: -- Attachment: 0001-OLINGO-1625-Fix-performance-problem-in-serializers-f.patch > The serializers have performance issues when Entities contain very large > numbers of Properties > -- > > Key: OLINGO-1625 > URL: https://issues.apache.org/jira/browse/OLINGO-1625 > Project: Olingo > Issue Type: Bug > Components: odata4-server >Affects Versions: Version (Java) V4 5.0.0 >Reporter: Ron Passerini >Priority: Major > Labels: json, performance, serialization, xml > Fix For: (Java) V4 5.0.1 > > Attachments: > 0001-OLINGO-1625-Fix-performance-problem-in-serializers-f.patch > > > I've implemented an OData service that serves up some large datasets in a > streaming fashion. Some of those datasets have large numbers of fields (over > 1,000). When I requested one of them which was around 350M in size, it took > way longer than expected. > I profiled the request in IntelliJ's profiler and found that over 75% of the > CPU cycles were spent in String.equals() comparing column names in the > serializers. This is because there is an O(N^2) issue that for every column > selected (in my case all of them) it will iterate across the entire list of > entity properties looking for the one with the same name. > I have already implemented a fix whereby before doing the property > serialization, the serializer builds a hash map of property-name-to-property, > making the resulting algorithm O(N) with the number of properties being > serialized. > After profiling the change, again in IntelliJ's profiler, the String.equals() > which was over 75% before, is now under 1%. > I will be creating a patch and attaching it momentarily. > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: Purpose of pg_dump tar archive format?
On Tue, Jun 4, 2024 at 2:55 PM Rob Sargent wrote: > > > On 6/4/24 11:40, Shaheed Haque wrote: > > > > We use it. I bet lots of others do too. > > > > > > Of course. There are lots of small, real, useful databases in the wild. > But why tar instead of custom? That was part of my original question.
[jira] [Created] (OLINGO-1625) The serializers have performance issues when Entities contain very large numbers of Properties
Ron Passerini created OLINGO-1625: - Summary: The serializers have performance issues when Entities contain very large numbers of Properties Key: OLINGO-1625 URL: https://issues.apache.org/jira/browse/OLINGO-1625 Project: Olingo Issue Type: Bug Components: odata4-server Affects Versions: Version (Java) V4 5.0.0 Reporter: Ron Passerini Fix For: (Java) V4 5.0.1 I've implemented an OData service that serves up some large datasets in a streaming fashion. Some of those datasets have large numbers of fields (over 1,000). When I requested one of them which was around 350M in size, it took way longer than expected. I profiled the request in IntelliJ's profiler and found that over 75% of the CPU cycles were spent in String.equals() comparing column names in the serializers. This is because there is an O(N^2) issue that for every column selected (in my case all of them) it will iterate across the entire list of entity properties looking for the one with the same name. I have already implemented a fix whereby before doing the property serialization, the serializer builds a hash map of property-name-to-property, making the resulting algorithm O(N) with the number of properties being serialized. After profiling the change, again in IntelliJ's profiler, the String.equals() which was over 75% before, is now under 1%. I will be creating a patch and attaching it momentarily. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: Purpose of pg_dump tar archive format?
On Tue, Jun 4, 2024 at 10:43 AM Adrian Klaver wrote: > On 6/4/24 05:13, Ron Johnson wrote: > > It doesn't support compression nor restore reordering like the custom > > format, so I'm having trouble seeing why it still exists (at least > > without a doc warning that it's obsolete). > > pg_dump -d test -U postgres -Ft | gzip --stdout > test.tgz > Who's got meaningful databases that small anymore? And if you've got meaningfully sized databases, open port 5432 and move them using pg_dump.
Purpose of pg_dump tar archive format?
It doesn't support compression nor restore reordering like the custom format, so I'm having trouble seeing why it still exists (at least without a doc warning that it's obsolete).
Re: Postgresql 16.3 Out Of Memory
On Mon, Jun 3, 2024 at 9:12 AM Greg Sabino Mullane wrote: > On Mon, Jun 3, 2024 at 6:19 AM Radu Radutiu wrote: > >> Do you have any idea how to further debug the problem? >> > > Putting aside the issue of non-reclaimed memory for now, can you show us > the actual query? The explain analyze you provided shows it doing an awful > lot of joins and then returning 14+ million rows to the client. Surely the > client does not need that many rows? > And the query cost is really high. "Did you ANALYZE the instance after conversion?" was my first question.
Re: RFR: 8330846: Add stacks of mounted virtual threads to the HotSpot thread dump [v6]
On Mon, 3 Jun 2024 11:26:27 GMT, Inigo Mediavilla Saiz wrote: >> Print the stack traces of mounted virtual threads when calling `jcmd Thread.print`. > > Inigo Mediavilla Saiz has updated the pull request incrementally with one > additional commit since the last revision: > > Add indentation for virtual thread stack About the output format: 1. The text `Carrying virtual thread #N` should appear, as it does, in the header of the output for the platform thread. 2. The stack for the mounted virtual thread should appear, indented *after* the stack of the platform thread, with the header `Mounted virtual thread #N`. - PR Comment: https://git.openjdk.org/jdk/pull/19482#issuecomment-2145162783
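A rough sketch of the layout being requested, with header fields and stack frames elided (an illustration of the review comment, not actual HotSpot output):

    "ForkJoinPool-1-worker-1" #27 daemon ... Carrying virtual thread #42
       java.lang.Thread.State: RUNNABLE
            at ...carrier thread frames...
       Mounted virtual thread #42
            at ...virtual thread frames...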
[kdenlive] [Bug 487950] New: Title dialog "+X" and "+Y" fields max out at 5000
https://bugs.kde.org/show_bug.cgi?id=487950 Bug ID: 487950 Summary: Title dialog "+X" and "+Y" fields max out at 5000 Classification: Applications Product: kdenlive Version: 24.05.0 Platform: Appimage OS: Linux Status: REPORTED Severity: normal Priority: NOR Component: User Interface Assignee: j...@kdenlive.org Reporter: kdenlive-b...@contact.dot-oz.net Target Milestone: --- Hi, Occasionally I use animated titles to create scrolling credits and the like, and occasionally those lists are long. There doesn't seem to be any problem with making them arbitrarily large in any direction - but the displayed X,Y coordinate for content maxes out at 5000. You can place it beyond that by dragging and dropping, but you can't tweak or see the exact placement point using those input dialog fields, which makes precise placement harder than it needs to be. For the same reason it would be nice if the guide lines expanded to cover all the space used by title elements, not just the visible viewport area, and if the increments of the snap-to grid were configurable. Cheers, Ron -- You are receiving this mail because: You are watching all bug changes.
[kdenlive] [Bug 487947] New: Subtitle .srt files are not autosaved with the project file
https://bugs.kde.org/show_bug.cgi?id=487947 Bug ID: 487947 Summary: Subtitle .srt files are not autosaved with the project file Classification: Applications Product: kdenlive Version: 24.02.2 Platform: Appimage OS: Linux Status: REPORTED Severity: normal Priority: NOR Component: User Interface Assignee: j...@kdenlive.org Reporter: kdenlive-b...@contact.dot-oz.net Target Milestone: --- Hi! This might arguably be a feature request - but: On the (getting rarer! :) occasions when some action crashes kdenlive, being able to rely on it having autosaved all but the last few moments of work, and able to reliably restore them when you restart, makes those crashes *much* less frustrating than they otherwise might be. But I discovered recently that changes to the subtitles are *not* saved and do not get restored. Any changes to them since the last manual save will be lost. It would be nice if they get automatically backed up along with the project files. Thanks! Ron -- You are receiving this mail because: You are watching all bug changes.
(flink) branch master updated (f3a3f926c6c -> e4fa72d9e48)
This is an automated email from the ASF dual-hosted git repository. ron pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git from f3a3f926c6c [FLINK-35483][runtime] Fix unstable BatchJobRecoveryTest. new 309e3246e02 [FLINK-35199][table] Remove dynamic options and add initialization configuration to CreatePeriodicRefreshWorkflow new e4fa72d9e48 [FLINK-35199][table] Support the execution of create materialized table in full refresh mode The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: .../table/client/gateway/SingleSessionManager.java | 1 + .../table/gateway/rest/SqlGatewayRestEndpoint.java | 6 + .../CreateEmbeddedSchedulerWorkflowHandler.java| 5 +- ...CreateEmbeddedSchedulerWorkflowRequestBody.java | 15 +- .../gateway/service/context/SessionContext.java| 36 ++- .../MaterializedTableManager.java | 248 + .../service/operation/OperationExecutor.java | 27 ++- .../table/gateway/service/session/Session.java | 4 + .../service/session/SessionManagerImpl.java| 1 + .../workflow/EmbeddedWorkflowScheduler.java| 2 +- .../flink/table/gateway/workflow/WorkflowInfo.java | 12 +- .../scheduler/EmbeddedQuartzScheduler.java | 248 - .../AbstractMaterializedTableStatementITCase.java | 47 +++- ...GatewayRestEndpointMaterializedTableITCase.java | 8 - .../rest/util/SqlGatewayRestEndpointExtension.java | 4 + .../service/MaterializedTableStatementITCase.java | 106 +++-- .../gateway/workflow/QuartzSchedulerUtilsTest.java | 11 +- .../resources/sql_gateway_rest_api_v3.snapshot | 2 +- .../workflow/CreatePeriodicRefreshWorkflow.java| 10 +- 19 files changed, 679 insertions(+), 114 deletions(-)
(flink) 01/02: [FLINK-35199][table] Remove dynamic options and add initialization configuration to CreatePeriodicRefreshWorkflow
This is an automated email from the ASF dual-hosted git repository. ron pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit 309e3246e0232a0a363aa44ab6d5524133f8f548 Author: Feng Jin AuthorDate: Fri May 31 11:41:12 2024 +0800 [FLINK-35199][table] Remove dynamic options and add initialization configuration to CreatePeriodicRefreshWorkflow --- .../scheduler/CreateEmbeddedSchedulerWorkflowHandler.java | 5 +++-- .../CreateEmbeddedSchedulerWorkflowRequestBody.java | 15 +++ .../table/gateway/workflow/EmbeddedWorkflowScheduler.java | 2 +- .../apache/flink/table/gateway/workflow/WorkflowInfo.java | 12 +++- .../table/gateway/workflow/QuartzSchedulerUtilsTest.java | 11 --- .../src/test/resources/sql_gateway_rest_api_v3.snapshot | 2 +- .../table/workflow/CreatePeriodicRefreshWorkflow.java | 10 +- 7 files changed, 36 insertions(+), 21 deletions(-) diff --git a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHandler.java b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHandler.java index b52094a39e6..9a7071ab935 100644 --- a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHandler.java +++ b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHandler.java @@ -72,14 +72,15 @@ public class CreateEmbeddedSchedulerWorkflowHandler String materializedTableIdentifier = request.getRequestBody().getMaterializedTableIdentifier(); String cronExpression = request.getRequestBody().getCronExpression(); -Map dynamicOptions = request.getRequestBody().getDynamicOptions(); +Map initConfig = request.getRequestBody().getInitConfig(); Map executionConfig = request.getRequestBody().getExecutionConfig(); String customScheduleTime = request.getRequestBody().getCustomScheduleTime(); String restEndpointURL = request.getRequestBody().getRestEndpointUrl(); WorkflowInfo workflowInfo = new WorkflowInfo( materializedTableIdentifier, -dynamicOptions == null ? Collections.emptyMap() : dynamicOptions, +Collections.emptyMap(), +initConfig == null ? Collections.emptyMap() : initConfig, executionConfig == null ? 
Collections.emptyMap() : executionConfig, customScheduleTime, restEndpointURL); diff --git a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowRequestBody.java b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowRequestBody.java index e0628933560..d0ebf3201ba 100644 --- a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowRequestBody.java +++ b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowRequestBody.java @@ -34,7 +34,7 @@ public class CreateEmbeddedSchedulerWorkflowRequestBody implements RequestBody { private static final String FIELD_NAME_MATERIALIZED_TABLE = "materializedTableIdentifier"; private static final String FIELD_NAME_CRON_EXPRESSION = "cronExpression"; -private static final String FIELD_NAME_DYNAMIC_OPTIONS = "dynamicOptions"; +private static final String FIELD_NAME_INIT_CONFIG = "initConfig"; private static final String FIELD_NAME_EXECUTION_CONFIG = "executionConfig"; private static final String FIELD_NAME_SCHEDULE_TIME = "customScheduleTime"; private static final String FIELD_NAME_REST_ENDPOINT_URL = "restEndpointUrl"; @@ -45,9 +45,8 @@ public class CreateEmbeddedSchedulerWorkflowRequestBody implements RequestBody { @JsonProperty(FIELD_NAME_CRON_EXPRESSION) private final String cronExpression; -@JsonProperty(FIELD_NAME_DYNAMIC_OPTIONS) -@Nullable -private final Map dynamicOptions; +@JsonProperty(FIELD_NAME_INIT_CONFIG) +private final Map initConfig; @JsonProperty(FIELD_NAME_EXECUTION_CONFIG) @Nullable @@ -63,14 +62,14 @@ public class CreateEmbeddedSchedulerWorkflowRequestBody implements RequestBody { public CreateEmbeddedSchedulerWorkflowRequestBody( @JsonProperty(FIELD_NAME_MATERIALIZED_
(flink) 02/02: [FLINK-35199][table] Support the execution of create materialized table in full refresh mode
This is an automated email from the ASF dual-hosted git repository. ron pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit e4fa72d9e480664656818395741c37a9995f9334 Author: Feng Jin AuthorDate: Fri May 31 11:52:18 2024 +0800 [FLINK-35199][table] Support the execution of create materialized table in full refresh mode --- .../table/client/gateway/SingleSessionManager.java | 1 + .../table/gateway/rest/SqlGatewayRestEndpoint.java | 6 + .../gateway/service/context/SessionContext.java| 36 ++- .../MaterializedTableManager.java | 248 + .../service/operation/OperationExecutor.java | 27 ++- .../table/gateway/service/session/Session.java | 4 + .../service/session/SessionManagerImpl.java| 1 + .../scheduler/EmbeddedQuartzScheduler.java | 248 - .../AbstractMaterializedTableStatementITCase.java | 47 +++- ...GatewayRestEndpointMaterializedTableITCase.java | 8 - .../rest/util/SqlGatewayRestEndpointExtension.java | 4 + .../service/MaterializedTableStatementITCase.java | 106 +++-- 12 files changed, 643 insertions(+), 93 deletions(-) diff --git a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/SingleSessionManager.java b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/SingleSessionManager.java index 27b1ccaa484..9c7e7dee0bb 100644 --- a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/SingleSessionManager.java +++ b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/SingleSessionManager.java @@ -96,6 +96,7 @@ public class SingleSessionManager implements SessionManager { sessionHandle, environment, operationExecutorService)); +session.open(); return session; } diff --git a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/SqlGatewayRestEndpoint.java b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/SqlGatewayRestEndpoint.java index 2e24b967850..2fa462ade85 100644 --- a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/SqlGatewayRestEndpoint.java +++ b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/SqlGatewayRestEndpoint.java @@ -18,6 +18,7 @@ package org.apache.flink.table.gateway.rest; +import org.apache.flink.annotation.VisibleForTesting; import org.apache.flink.api.java.tuple.Tuple2; import org.apache.flink.configuration.Configuration; import org.apache.flink.runtime.rest.RestServerEndpoint; @@ -83,6 +84,11 @@ public class SqlGatewayRestEndpoint extends RestServerEndpoint implements SqlGat quartzScheduler = new EmbeddedQuartzScheduler(); } +@VisibleForTesting +public EmbeddedQuartzScheduler getQuartzScheduler() { +return quartzScheduler; +} + @Override protected List> initializeHandlers( CompletableFuture localAddressFuture) { diff --git a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/context/SessionContext.java b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/context/SessionContext.java index fa9ae05220f..cf1597ecea9 100644 --- a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/context/SessionContext.java +++ b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/context/SessionContext.java @@ -38,6 +38,7 @@ import org.apache.flink.table.gateway.api.endpoint.EndpointVersion; import org.apache.flink.table.gateway.api.session.SessionEnvironment; import 
org.apache.flink.table.gateway.api.session.SessionHandle; import org.apache.flink.table.gateway.api.utils.SqlGatewayException; +import org.apache.flink.table.gateway.service.materializedtable.MaterializedTableManager; import org.apache.flink.table.gateway.service.operation.OperationExecutor; import org.apache.flink.table.gateway.service.operation.OperationManager; import org.apache.flink.table.gateway.service.utils.SqlExecutionException; @@ -237,6 +238,18 @@ public class SessionContext { statementSetOperations.add(operation); } +public void open() { +try { +sessionState.materializedTableManager.open(); +} catch (Exception e) { +LOG.error( +String.format( +"Failed to open the materialized table manager for the session %s.", +sessionId), +e); +} +} + // /** Close resources, e.g. catalogs. */ @@ -268,6 +281,15 @@ public class S
Re: ERROR: found xmin from before relfrozenxid; MultiXactid does no longer exist -- apparent wraparound
On Fri, May 31, 2024 at 1:25 PM Alanoly Andrews wrote: > Yes, and I know that upgrading the Postgres version is the stock answer > for situations like this. The upgrade is in the works. > *Patching *was the solution. It takes *five minutes*. Here's how I did it (since our RHEL systems are blocked from the Internet, and I had to manually d/l the relevant RPMs): $ sudo -iu postgres pg_ctl stop -wt -mfast $ sudo yum install PG96.24_RHEL6/*rpm $ sudo -iu postgres pg_ctl start -wt You'll have a bit of effort finding the PG10 repository, since it's EOL, but it can be found.
[kdenlive] [Bug 485356] External proxy preset, error when setting multiple profiles
https://bugs.kde.org/show_bug.cgi?id=485356 Ron changed: What|Removed |Added Version|git-master |24.05.0 Platform|Debian stable |Appimage -- You are receiving this mail because: You are watching all bug changes.
Heading Level Access In Safari Browser
Hi Group, I finally was able to change some settings in the Safari browser on the iPhone to get it to display HTML files that are stored locally on the iPhone. I created my modified Dice Football Game Play Sheet using headings so I can navigate more quickly from play list to play list. The lists are separated into running plays, kicking plays, passing plays and conversions at heading level one, with the individual plays - such as short pass, long pass and screen pass under passing, or left end run, right tackle play and reverse under running - at heading level two. Is there any way on the iPhone to jump by heading level with a quick number scheme, the way Windows screen readers use the number one for heading level one and the number two for heading level two? Thanks for any help. -- Signature: For a nation to admit it has done grievous wrongs and will strive to correct them for the betterment of all is no vice; For a nation to claim it has always been great, needs no improvement and to cling to its past achievements is no virtue!
[kdenlive] [Bug 485356] External proxy preset, error when setting multiple profiles
https://bugs.kde.org/show_bug.cgi?id=485356 --- Comment #1 from Ron --- Hi, Just a followup to this now that the external proxy editing dialog has been enabled in 24.05.0. The bug I noted here is still present in the 24.05.0 appimage. I can see it manifest if I just open that dialog and then flick between the various preset options. If you select the GoPro or Insta option (or any with multiple profiles) then select a different profile, then flick back to the GoPro or Insta one (without closing the dialog), you'll see that each time you go back to the multiple profile preset, the options in it get shuffled around and appear in the wrong fields. Cheers, Ron -- You are receiving this mail because: You are watching all bug changes.
(flink) branch master updated (ce0b61f376b -> 2c35e48addf)
This is an automated email from the ASF dual-hosted git repository. ron pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git from ce0b61f376b [FLINK-35351][checkpoint] Clean up and unify code for the custom partitioner test case add bc14d551e04 [FLINK-35195][test/test-filesystem] test-filesystem support partition.fields option add 2c35e48addf [FLINK-35348][table] Introduce refresh materialized table rest api No new revisions were added by this update. Summary of changes: .../file/table/FileSystemTableFactory.java | 2 +- .../flink/table/gateway/api/SqlGatewayService.java | 28 ++ .../gateway/api/utils/MockedSqlGatewayService.java | 14 + .../table/gateway/rest/SqlGatewayRestEndpoint.java | 15 + .../RefreshMaterializedTableHandler.java | 95 .../RefreshMaterializedTableHeaders.java | 96 .../MaterializedTableIdentifierPathParameter.java | 46 ++ .../RefreshMaterializedTableParameters.java| 56 +++ .../RefreshMaterializedTableRequestBody.java | 99 .../RefreshMaterializedTableResponseBody.java | 43 ++ .../gateway/service/SqlGatewayServiceImpl.java | 31 ++ .../MaterializedTableManager.java | 127 - .../service/operation/OperationExecutor.java | 24 + .../AbstractMaterializedTableStatementITCase.java | 339 + ...GatewayRestEndpointMaterializedTableITCase.java | 187 +++ .../service/MaterializedTableStatementITCase.java | 535 +++-- .../MaterializedTableManagerTest.java | 77 ++- .../resources/sql_gateway_rest_api_v3.snapshot | 57 +++ .../api/config/MaterializedTableConfigOptions.java | 2 + .../file/testutils/TestFileSystemTableFactory.java | 16 + .../testutils/TestFileSystemTableFactoryTest.java | 3 + 21 files changed, 1602 insertions(+), 290 deletions(-) create mode 100644 flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/RefreshMaterializedTableHandler.java create mode 100644 flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/RefreshMaterializedTableHeaders.java create mode 100644 flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/MaterializedTableIdentifierPathParameter.java create mode 100644 flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/RefreshMaterializedTableParameters.java create mode 100644 flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/RefreshMaterializedTableRequestBody.java create mode 100644 flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/materializedtable/RefreshMaterializedTableResponseBody.java create mode 100644 flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/AbstractMaterializedTableStatementITCase.java create mode 100644 flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/rest/SqlGatewayRestEndpointMaterializedTableITCase.java
[TICTOC]Re: Enterprise Profile: Support for Non standard TCs
Hi Doug, This draft intends to be a standard track RFC. Can an enterprise-profile compliant TC modify the source IP address of event messages? Is an enterprise-profile compliant time-transmitter (i.e. Master or Boundary clocks) required to support configuration of clock-id to ip-address mappings? Thanks Ron From: Doug Arnold Sent: Tuesday, May 28, 2024 4:52 PM To: Ron Cohen ; tictoc@ietf.org Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs Prioritize security for external emails: Confirm sender and content safety before clicking links or opening attachments Hello Ron, The enterprise profile draft does not state that TCs MUST modify the source addresses of PTP event messages. Nor does it state that TCs MUST NOT modify the source addresses. It is merely pointing out that, in the field, a PTP instance can receive PTP event messages with either the source address of the parent clock or the source address of a TC in the communication path. I think that this is critically important information for implementors of PTP capable devices and should remain in the draft. I personally prefer TC implementations that do not modify the source address, as that is more helpful for people deploying and maintaining PTP networks. However, some TC vendors have told me that they don't do that because they believe that it violates the standards of the transport network (IP and/or Ethernet). From a layer model architecture point of view, they have a point: PTP UDP IP Ethernet Any packet payload sent up to the PTP layer, modified, sent back down the stack and retransmitted would be a new packet and a new frame. Regards, Doug From: Ron Cohen mailto:r...@marvell.com>> Sent: Sunday, May 26, 2024 7:44 AM To: Doug Arnold mailto:doug.arn...@meinberg-usa.com>>; tictoc@ietf.org<mailto:tictoc@ietf.org> mailto:tictoc@ietf.org>> Subject: RE: Enterprise Profile: Support for Non standard TCs Hi Doug, Thanks for the reference. This note was added in the 2019 version, and I believe requires further discussion/clarifications, but I would like to keep the focus on the UDP/IP encapsulation, which is the one required by the Enterprise profile. "All messages, including PTP messages, shall be transmitted and received in conformance with the standards governing the transport, network, and physical layers of the communication paths used." An IEEE-1588 compliant TC supporting UDP/IP encapsulation must either modify the source-IP address of event messages or must not modify the address. Annex E of 1588-2019 is the normative specification of this encapsulation. If an E2E TC changes the source IPv4 address of an event message, it must re-calculate the IPv4 header checksum as well. This is an important consideration in HW implementations. Update of the IPv4 header checksum is not mentioned in Annex E (or anywhere else in the spec). My point is that it is not specified in Annex-E because a TC must not modify the IP header fields protected by the IPv4 header checksum. AFAIK, the IEEE-1588-2019 standard does not specify the need for Clock-ID to delay-resp mapping to support UDP/IP encapsulation either, for the same reason; it is not required for standard E2E TC implementations. If we are not in agreement what is the mandatory behavior of Annex-E TC with regards to source IP address, I suggest to first ratify it with other members of the WG / with other established TC vendors before moving forward with the draft. 
Best, Ron From: Doug Arnold mailto:doug.arn...@meinberg-usa.com>> Sent: Friday, May 24, 2024 12:40 AM To: Ron Cohen mailto:r...@marvell.com>>; tictoc@ietf.org<mailto:tictoc@ietf.org> Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs Prioritize security for external emails: Confirm sender and content safety before clicking links or opening attachments Hi Ron, I excluded NATs because I don't think that they are common in networks where enterprise profile PTP is used. So I just didn't want to address them, I wouldn't say the same about TCs. Some TC implementations do change the source address, and some don't. I've seen both kinds at PTP plugfests. That is why the language in the draft says TCs might change the source. address. I think that this is important for network operators to know. That is why I want that statement in there. Technically speaking TCs do not forward frames/packets containing PTP event messages. Instead, they take them up the PTP layer, alter them, sed them back down to the data link or network layers and then transmit new frames/packets. That is officially true even in 1-step cut-through when the implementation combines all of these steps. At the PTP layer we call this retransmission, but that is not how it is viewed
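For concreteness, the recomputation a TC would need after rewriting the source address is the standard RFC 1071 one's-complement sum over the 20-byte header (an implementation could equally apply the incremental update of RFC 1624). A generic sketch, not tied to any particular TC implementation:

// Recompute the IPv4 header checksum after a header field such as the source
// address has been rewritten. Standard RFC 1071 one's-complement sum.
public final class Ipv4Checksum {

    static int headerChecksum(byte[] header) {
        header[10] = 0;                        // checksum field must be zero while summing
        header[11] = 0;
        long sum = 0;
        for (int i = 0; i < header.length; i += 2) {
            int hi = header[i] & 0xFF;
            int lo = (i + 1 < header.length) ? (header[i + 1] & 0xFF) : 0;
            sum += (hi << 8) | lo;
        }
        while ((sum >> 16) != 0) {             // fold carries back into the low 16 bits
            sum = (sum & 0xFFFF) + (sum >> 16);
        }
        return (int) (~sum) & 0xFFFF;          // one's complement of the folded sum
    }

    public static void main(String[] args) {
        // Well-known sample header with the checksum field zeroed; expected result 0xB1E6.
        byte[] header = {
            0x45, 0x00, 0x00, 0x3c, 0x1c, 0x46, 0x40, 0x00, 0x40, 0x06,
            0x00, 0x00, (byte) 0xac, 0x10, 0x0a, 0x63, (byte) 0xac, 0x10, 0x0a, 0x0c
        };
        int csum = headerChecksum(header);
        header[10] = (byte) (csum >> 8);       // write the new checksum back into the header
        header[11] = (byte) (csum & 0xFF);
        System.out.printf("0x%04X%n", csum);   // prints 0xB1E6
    }
}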
Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color
Ron / BCLUG wrote on 2024-05-27 18:10: you'll love both the runit and s6 init systems. That's great, I didn't know they ran startup stuff in parallel. Is it achieved through "script_name &" or something else? Answering myself, runit looks kinda nifty according to this: https://en.wikipedia.org/wiki/Runit It's actually init + services management, which is nice. And originally from daemontools, by DJB (Daniel Bernstein), who's quite a wizard and has written an impressive number of core utilities (qmail, djbdns, etc.). Kinda sounds like Lennart Poettering, come to think of it. So, yeah, it looks nice, for sure. rb ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
(flink) 02/02: [FLINK-35425][table-common] Support convert freshness to cron expression in full refresh mode
This is an automated email from the ASF dual-hosted git repository. ron pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit 49f22254a78d554ac49810058c209297331129cd Author: fengli AuthorDate: Mon May 27 20:54:39 2024 +0800 [FLINK-35425][table-common] Support convert freshness to cron expression in full refresh mode --- .../flink/table/utils/IntervalFreshnessUtils.java | 74 .../table/utils/IntervalFreshnessUtilsTest.java| 80 +- .../SqlCreateMaterializedTableConverter.java | 6 ++ ...erializedTableNodeToOperationConverterTest.java | 9 +++ 4 files changed, 168 insertions(+), 1 deletion(-) diff --git a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/IntervalFreshnessUtils.java b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/IntervalFreshnessUtils.java index 121200098ec..cd58bff4d91 100644 --- a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/IntervalFreshnessUtils.java +++ b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/IntervalFreshnessUtils.java @@ -31,6 +31,15 @@ import java.time.Duration; @Internal public class IntervalFreshnessUtils { +private static final String SECOND_CRON_EXPRESSION_TEMPLATE = "0/%s * * * * ? *"; +private static final String MINUTE_CRON_EXPRESSION_TEMPLATE = "0 0/%s * * * ? *"; +private static final String HOUR_CRON_EXPRESSION_TEMPLATE = "0 0 0/%s * * ? *"; +private static final String ONE_DAY_CRON_EXPRESSION_TEMPLATE = "0 0 0 * * ? *"; + +private static final long SECOND_CRON_UPPER_BOUND = 60; +private static final long MINUTE_CRON_UPPER_BOUND = 60; +private static final long HOUR_CRON_UPPER_BOUND = 24; + private IntervalFreshnessUtils() {} @VisibleForTesting @@ -69,4 +78,69 @@ public class IntervalFreshnessUtils { intervalFreshness.getTimeUnit())); } } + +/** + * This is an util method that is used to convert the freshness of materialized table to cron + * expression in full refresh mode. Since freshness and cron expression cannot be converted + * equivalently, there are currently only a limited patterns of freshness that can be converted + * to cron expression. 
+ */ +public static String convertFreshnessToCron(IntervalFreshness intervalFreshness) { +switch (intervalFreshness.getTimeUnit()) { +case SECOND: +return validateAndConvertCron( +intervalFreshness, +SECOND_CRON_UPPER_BOUND, +SECOND_CRON_EXPRESSION_TEMPLATE); +case MINUTE: +return validateAndConvertCron( +intervalFreshness, +MINUTE_CRON_UPPER_BOUND, +MINUTE_CRON_EXPRESSION_TEMPLATE); +case HOUR: +return validateAndConvertCron( +intervalFreshness, HOUR_CRON_UPPER_BOUND, HOUR_CRON_EXPRESSION_TEMPLATE); +case DAY: +return validateAndConvertDayCron(intervalFreshness); +default: +throw new ValidationException( +String.format( +"Unknown freshness time unit: %s.", +intervalFreshness.getTimeUnit())); +} +} + +private static String validateAndConvertCron( +IntervalFreshness intervalFreshness, long cronUpperBound, String cronTemplate) { +long interval = Long.parseLong(intervalFreshness.getInterval()); +IntervalFreshness.TimeUnit timeUnit = intervalFreshness.getTimeUnit(); +// Freshness must be less than cronUpperBound for corresponding time unit when convert it +// to cron expression +if (interval >= cronUpperBound) { +throw new ValidationException( +String.format( +"In full refresh mode, freshness must be less than %s when the time unit is %s.", +cronUpperBound, timeUnit)); +} +// Freshness must be factors of cronUpperBound for corresponding time unit +if (cronUpperBound % interval != 0) { +throw new ValidationException( +String.format( +"In full refresh mode, only freshness that are factors of %s are currently supported when the time unit is %s.", +cronUpperBound, timeUnit)); +} + +return String.format(cronTemplate, interval); +} + +private static String validateAndConvertDayCron(IntervalFreshness intervalFreshness) { +// Since the number of days in each month is different, only one day of freshness is +
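For orientation, the mapping implied by the templates above works out as follows. IntervalFreshnessUtils and convertFreshnessToCron are taken from the diff, while IntervalFreshness.ofMinute is assumed by analogy with the ofSecond factory shown in the companion commit, so treat this as a sketch rather than documentation:

// freshness = INTERVAL '10' MINUTE  ->  "0 0/10 * * * ? *"   (fires every 10 minutes)
// freshness = INTERVAL '1' DAY      ->  "0 0 0 * * ? *"      (fires once a day at midnight)
String cron = IntervalFreshnessUtils.convertFreshnessToCron(IntervalFreshness.ofMinute("10"));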
(flink) 01/02: [FLINK-35425][table-common] Introduce IntervalFreshness to support materialized table full refresh mode
This is an automated email from the ASF dual-hosted git repository. ron pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit 61a68bc9dc74926775dd546af64fe176782f70ba Author: fengli AuthorDate: Fri May 24 12:24:49 2024 +0800 [FLINK-35425][table-common] Introduce IntervalFreshness to support materialized table full refresh mode --- .../catalog/CatalogBaseTableResolutionTest.java| 10 +- .../table/catalog/CatalogMaterializedTable.java| 19 +++- .../flink/table/catalog/CatalogPropertiesUtil.java | 20 +++- .../catalog/DefaultCatalogMaterializedTable.java | 7 +- .../flink/table/catalog/IntervalFreshness.java | 104 + .../catalog/ResolvedCatalogMaterializedTable.java | 5 +- .../flink/table/utils/IntervalFreshnessUtils.java | 72 ++ .../table/utils/IntervalFreshnessUtilsTest.java| 67 + .../SqlCreateMaterializedTableConverter.java | 9 +- .../planner/utils/MaterializedTableUtils.java | 16 ++-- ...erializedTableNodeToOperationConverterTest.java | 4 +- .../catalog/TestFileSystemCatalogTest.java | 6 +- 12 files changed, 302 insertions(+), 37 deletions(-) diff --git a/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogBaseTableResolutionTest.java b/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogBaseTableResolutionTest.java index 72a22c22935..a9436ac21df 100644 --- a/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogBaseTableResolutionTest.java +++ b/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogBaseTableResolutionTest.java @@ -38,7 +38,6 @@ import org.junit.jupiter.api.Test; import javax.annotation.Nullable; -import java.time.Duration; import java.util.Arrays; import java.util.Collections; import java.util.HashMap; @@ -235,8 +234,8 @@ class CatalogBaseTableResolutionTest { assertThat(resolvedCatalogMaterializedTable.getResolvedSchema()) .isEqualTo(RESOLVED_MATERIALIZED_TABLE_SCHEMA); -assertThat(resolvedCatalogMaterializedTable.getFreshness()) -.isEqualTo(Duration.ofSeconds(30)); +assertThat(resolvedCatalogMaterializedTable.getDefinitionFreshness()) +.isEqualTo(IntervalFreshness.ofSecond("30")); assertThat(resolvedCatalogMaterializedTable.getDefinitionQuery()) .isEqualTo(DEFINITION_QUERY); assertThat(resolvedCatalogMaterializedTable.getLogicalRefreshMode()) @@ -424,7 +423,8 @@ class CatalogBaseTableResolutionTest { properties.put("schema.3.comment", ""); properties.put("schema.primary-key.name", "primary_constraint"); properties.put("schema.primary-key.columns", "id"); -properties.put("freshness", "PT30S"); +properties.put("freshness-interval", "30"); +properties.put("freshness-unit", "SECOND"); properties.put("logical-refresh-mode", "CONTINUOUS"); properties.put("refresh-mode", "CONTINUOUS"); properties.put("refresh-status", "INITIALIZING"); @@ -454,7 +454,7 @@ class CatalogBaseTableResolutionTest { .partitionKeys(partitionKeys) .options(Collections.emptyMap()) .definitionQuery(definitionQuery) -.freshness(Duration.ofSeconds(30)) +.freshness(IntervalFreshness.ofSecond("30")) .logicalRefreshMode(CatalogMaterializedTable.LogicalRefreshMode.AUTOMATIC) .refreshMode(CatalogMaterializedTable.RefreshMode.CONTINUOUS) .refreshStatus(CatalogMaterializedTable.RefreshStatus.INITIALIZING) diff --git a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogMaterializedTable.java b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogMaterializedTable.java index 
51856cc859e..1b41ed0ddb9 100644 --- a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogMaterializedTable.java +++ b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogMaterializedTable.java @@ -30,6 +30,8 @@ import java.util.List; import java.util.Map; import java.util.Optional; +import static org.apache.flink.table.utils.IntervalFreshnessUtils.convertFreshnessToDuration; + /** * Represents the unresolved metadata of a materialized table in a {@link Catalog}. * @@ -113,9 +115,18 @@ public interface CatalogMaterializedTable extends CatalogBaseTable { String getDefinitionQuery(); /** - * Get the freshness of materialized table which is used to determine the physical refresh mode. +
(flink) branch master updated (6c417719972 -> 49f22254a78)
This is an automated email from the ASF dual-hosted git repository. ron pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git from 6c417719972 [hotfix] Fix modification conflict between FLINK-35465 and FLINK-35359 new 61a68bc9dc7 [FLINK-35425][table-common] Introduce IntervalFreshness to support materialized table full refresh mode new 49f22254a78 [FLINK-35425][table-common] Support convert freshness to cron expression in full refresh mode The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: .../catalog/CatalogBaseTableResolutionTest.java| 10 +- .../table/catalog/CatalogMaterializedTable.java| 19 ++- .../flink/table/catalog/CatalogPropertiesUtil.java | 20 ++- .../catalog/DefaultCatalogMaterializedTable.java | 7 +- .../flink/table/catalog/IntervalFreshness.java | 104 +++ .../catalog/ResolvedCatalogMaterializedTable.java | 5 +- .../flink/table/utils/IntervalFreshnessUtils.java | 146 + .../table/utils/IntervalFreshnessUtilsTest.java| 145 .../SqlCreateMaterializedTableConverter.java | 15 ++- .../planner/utils/MaterializedTableUtils.java | 16 ++- ...erializedTableNodeToOperationConverterTest.java | 13 +- .../catalog/TestFileSystemCatalogTest.java | 6 +- 12 files changed, 469 insertions(+), 37 deletions(-) create mode 100644 flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/IntervalFreshness.java create mode 100644 flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/IntervalFreshnessUtils.java create mode 100644 flink-table/flink-table-common/src/test/java/org/apache/flink/table/utils/IntervalFreshnessUtilsTest.java
Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color
Steve Litt wrote on 2024-05-27 05:24: If you like parallelism, It is a compelling idea... you'll love both the runit and s6 init systems. That's great, I didn't know they ran startup stuff in parallel. Is it achieved through "script_name &" or something else? Try em, you'll like em. Not really keen on swapping out everything just for an init system, when I already have parallelism and a whole bunch more. Plus, as mentioned elsewhere, having suffered through OS/2 vs Windows, and early days of Linux, I prefer to stick to more mainstream stuff these days. But again, I think it's great those init systems use parallelism and am curious how they do it. Thanks! rb ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color
Jonathan Drews wrote on 2024-05-27 14:43: There's a lot of cross-over with servers and software between the FLOSS families. >> How would you know if you don't run FreeBSD or OpenBSD? Because I'm not stupid? I mean, that's a really dumb question; are you disputing the overlap between Linux and BSD systems? Things like Nick's shell scripting presentations have lots of overlap between FLOSS systems and are immensely enjoyable and informative. When the questions are BSD specific, I don't say anything. You just told me how OpenBSD didn't have tools similar to systemd when you have no working experience of OpenBSD tools such as hostctl, smtpctl, sysctl, rcctl etc. I *asked* if it was possible to get a list like the example I gave. You answered "I have logs". Without specific tools to parse the logs, that means "no". Are you intentionally misunderstanding that? Are you not disclosing such tools for some reason? Do you have reading comprehension difficulties? It was a simple question that wasn't inflammatory, just a "how to ...?" question. Maybe there is a tool that generates just such a listing in the BSD world. That'd be great, I'd like to hear about it. But you're clearly the wrong person to engage with on such things. rb ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color
Jonathan Drews wrote on 2024-05-27 12:09: This is a list devoted to helping people with *BSD systems. If you have no intention of using it, why are you even here? There's a lot of cross-over with servers and software between the FLOSS families. I try to contribute answers to questions (like your inventory management one, for example) when no one else chimes in. When the questions are BSD specific, I don't say anything. But I don't disparage BSD, I have no problem with it (them?). Also, someone needs to challenge the "Linux is becoming a poor implementation of Window!!1!" comments. Hope that helps. rb ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color
Jonathan Drews wrote on 2024-05-27 10:59: The boot time was so slow that it was obvious. If different processes were involved in the boot sequence, that just may have an effect on time-to-desktop, but since it's left unaddressed, I guess we'll never know. Is something like that even possible on non-systemd machines? I have log files in /var/log on OpenBSD. So the answer is "no". Having log files and processing them to extract startup times, sequencing, etc. is quite different. Have you even installed OpenBSD or FreeBSD? Have you ever used *BSD for longer than one day? Installed, yes. Used, no. I used OS/2 back in the day and have experienced the hassle of using niche software that isn't well supported and do not wish to subject myself to that again unless there's a compelling reason. It was a bit of a hassle initially using Linux as a daily driver, but it's gotten so much better in the past 10-ish years. rb ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color
Jonathan Drews wrote on 2024-05-23 19:54: I don't know what the cause was but I could never get scanning (xsane) to work on either Linux Mint or Kubuntu. Scanning has been a solved problem in Linux for a decade or two, so it's hard to know what went wrong, nor what purpose is served bringing it up really. One of the claims about systemd is that it would provide faster boot up. However, my Devuan Linux boots faster than either KdeNeon or Kubuntu or Linux Mint. All three were installed on the same T480 laptop, which now runs Devuan. All running KDE? With the exact same packages installed? Seems like apples to oranges without that info. What times did you measure for them? Parallelism is pretty much always going to be faster, all else being equal. General question for the list: how does one diagnose which process(es) slow down booting up on non-systemd hosts? I run `systemd-analyze blame` and get a nice list like this: 1min 10.758s plocate-updatedb.service 31.549s apt-daily.service 31.283s apt-daily-upgrade.service 15.423s fstrim.service 7.902s dev-loop2.device 6.296s snapd.service 4.059s systemd-networkd-wait-online.service 3.830s systemd-udev-settle.service 3.819s smartmontools.service 3.433s zfs-import-cache.service 2.432s postfix@-.service I can see at a glance exactly what is going on with my boot sequence timing. Is something like that even possible on non-systemd machines? Finally there is the xz exploit, which has a writeup: https://marc.info/?l=openbsd-misc=171179460913574=2 it leads in with a quote to remember - "This dependency existed not because of a deliberate design decision by the developers of OpenSSH, but because of a kludge added by some Linux distributions to integrate the tool with the operating system's newfangled orchestration service, systemd." "kludge". "newfangled". That's quite the biased take on it, not worth the time it took to read it. The xz exploit was a nation-state attack targeting sshd via xz-utils as a vector, then pivoting via systemd's dynamic linking of xz. Everyone knows that if one is targeted by nation-state actors, it's pretty much game over. Defenders need 100% success, attackers only need 1 success. As for systemd linking to xz-utils, everyone realizes that log files get compressed, I hope? When software statically links libraries, people complain because: * multiple versions statically linked "waste disk space" * with dynamic linking, a vulnerability only needs one library to be patched for all apps to be patched The flip side is, one compromised library and lots of apps are vulnerable, I guess. There isn't really a Right Answer™ to statically vs dynamically linking. Anyway, systemd had a patch committed that would statically link xz-utils, just waiting for distributions to bundle it, when the xz-utils hack happened. FWIW. rb ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
(flink) branch master updated (4b342da6d14 -> 90e2d6cfeea)
This is an automated email from the ASF dual-hosted git repository. ron pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git from 4b342da6d14 [FLINK-35426][table-planner] Change the distribution of DynamicFilteringDataCollector to Broadcast add 90e2d6cfeea [FLINK-35342][table] Fix the unstable MaterializedTableStatementITCase test due to wrong job status check logic No new revisions were added by this update. Summary of changes: .../gateway/service/MaterializedTableStatementITCase.java | 10 ++ 1 file changed, 10 insertions(+)
[TICTOC]Re: Enterprise Profile: Support for Non standard TCs
Hi Doug, Thanks for the reference. This note was added in the 2019 version, and I believe requires further discussion/clarifications, but I would like to keep the focus on the UDP/IP encapsulation, which is the one required by the Enterprise profile. "All messages, including PTP messages, shall be transmitted and received in conformance with the standards governing the transport, network, and physical layers of the communication paths used." An IEEE-1588 compliant TC supporting UDP/IP encapsulation must either modify the source-IP address of event messages or must not modify the address. Annex E of 1588-2019 is the normative specification of this encapsulation. If an E2E TC changes the source IPv4 address of an event message, it must re-calculate the IPv4 header checksum as well. This is an important consideration in HW implementations. Update of the IPv4 header checksum is not mentioned in Annex E (or anywhere else in the spec). My point is that it is not specified in Annex-E because a TC must not modify the IP header fields protected by the IPv4 header checksum. AFAIK, the IEEE-1588-2019 standard does not specify the need for Clock-ID to delay-resp mapping to support UDP/IP encapsulation either, for the same reason; it is not required for standard E2E TC implementations. If we are not in agreement what is the mandatory behavior of Annex-E TC with regards to source IP address, I suggest to first ratify it with other members of the WG / with other established TC vendors before moving forward with the draft. Best, Ron From: Doug Arnold Sent: Friday, May 24, 2024 12:40 AM To: Ron Cohen ; tictoc@ietf.org Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs Prioritize security for external emails: Confirm sender and content safety before clicking links or opening attachments ________ Hi Ron, I excluded NATs because I don't think that they are common in networks where enterprise profile PTP is used. So I just didn't want to address them, I wouldn't say the same about TCs. Some TC implementations do change the source address, and some don't. I've seen both kinds at PTP plugfests. That is why the language in the draft says TCs might change the source. address. I think that this is important for network operators to know. That is why I want that statement in there. Technically speaking TCs do not forward frames/packets containing PTP event messages. Instead, they take them up the PTP layer, alter them, sed them back down to the data link or network layers and then transmit new frames/packets. That is officially true even in 1-step cut-through when the implementation combines all of these steps. At the PTP layer we call this retransmission, but that is not how it is viewed by the layers below. IEEE 802.1Q is explicit about this, and the IEEE 802.1 working group sent a message to the 1588 WG asking us to point this out in the 2019 edition of 1588. IEEE 1588-2019 subclause 7.3.1 starts with these two paragraphs: "All messages, including PTP messages, shall be transmitted and received in conformance with the standards governing the transport, network, and physical layers of the communication paths used. NOTE-As an example, consider IEEE 1588 PTP Instances, specifically including Transparent Clocks, running on IEEE 802.1Q communication paths. Suppose we have two Boundary Clocks separated by a Transparent Clock. 
The Transparent Clock entity (the PTP stack running above the MAC layer) is required to insert the appropriate MAC address of the Transparent Clock into the sourceAddress field of the Ethernet header for ALL messages it transmits. Other communication protocols can have similar requirements." Regards, Doug ________ From: Ron Cohen mailto:r...@marvell.com>> Sent: Wednesday, May 22, 2024 11:57 PM To: Doug Arnold mailto:doug.arn...@meinberg-usa.com>>; tictoc@ietf.org<mailto:tictoc@ietf.org> mailto:tictoc@ietf.org>> Subject: RE: Enterprise Profile: Support for Non standard TCs Hi Doug, The draft states that deployments with NAT are out of scope of the document. "In IPv4 networks some clocks might be hidden behind a NAT, which hides their IP addresses from the rest of the network. Note also that the use of NATs may place limitations on the topology of PTP networks, depending on the port forwarding scheme employed. Details of implementing PTP with NATs are out of cope of this document." A PTP TC that is a bridge per 802.1q or an IPv4/6 router must not change the source IP address of PTP delay requests. I've been working with TC solutions for more than 10 years. Both 1-step PTP TCs in HW (as well as 2-step in HW+SW) and none modified the source IP address of E2E delay requests, when working as either a bridge or router. This is the case for the products of the company I currently work for as well. My input
Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color
Steve Litt wrote on 2024-05-25 01:25: That being said I don't think it calls for a full boycott of Linux, > Thanks Kyle. Like you, I don't think systemd calls for a boycott on Linux, and I hadn't intended to imply it. Cheers Steve, Kyle, et al. I just wanted to say, despite the spirited debates, it's absolutely wonderful that there's top-notch software available for free that we all get to use in the way we choose to use it. Thanks to all the contributors, and thanks to the list for giving us all a place to chat, rant, rave, and in the end, discuss our topics of interest! rb ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color
Kyle Willett wrote on 2024-05-23 21:31: One piece of software can't be that good at so many different tasks! I'm not sure that logic holds up: "Fedora can't be that good at so many different tasks" "Linux kernel can't be that good at so many different tasks" GNU utilities - contains logging tools, mcron cron job implementation, grub, ... That's not really an apples-to-apples comparison, but packaging a bunch of tools under one moniker isn't uncommon. > sudo replacement with run0 What did you think about the discussion (was it on this list?) about suid and the inherent risks with the (allegedly) spotty implementations of that vis-á-vis sudo? It was over my head, but there were issues raised and sudo CVEs patches mentioned. Someone with very deep knowledge of the topic proposed the run0 vs sudo and had some valid-looking reasons for doing so. Now, granted systemd utils show up in a *lot* of places, giving valid reason to be curious about why. On the other hand, a services management system probably should handle a lot of different functionality. And, some of those new utilities have great features, i.e.: * show me all log messages from postfix from 2 boots ago *only* * show me all the "cron" jobs in order of when they next launch, the time elapsed since last launch,... * show me a list of all services that start at boot time and how long they took to become active (wow, I just noticed it took 30.566s for apt-daily-upgrade.service to come up) Admittedly, I'm not a fan of resolvectl and some other stuff, and more often than not use cron, not timers. Cheers, rb ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
(flink) branch master updated (0737220959f -> 71e6746727a)
This is an automated email from the ASF dual-hosted git repository. ron pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git from 0737220959f [FLINK-35216] Support for RETURNING clause of JSON_QUERY add 0ec6302cff4 [FLINK-35347][table-common] Introduce RefreshWorkflow related implementation to support full refresh mode for materialized table add 62b8fee5208 [FLINK-35347][table] Introduce embedded scheduler to support full refresh mode for materialized table add 71e6746727a [FLINK-35347][table] Introduce EmbeddedWorkflowScheduler plugin based on embedded scheduler No new revisions were added by this update. Summary of changes: flink-table/flink-sql-gateway/pom.xml | 26 ++ .../table/gateway/rest/SqlGatewayRestEndpoint.java | 60 ++- .../CreateEmbeddedSchedulerWorkflowHandler.java| 98 .../DeleteEmbeddedSchedulerWorkflowHandler.java| 75 +++ .../ResumeEmbeddedSchedulerWorkflowHandler.java| 75 +++ .../SuspendEmbeddedSchedulerWorkflowHandler.java | 75 +++ .../AbstractEmbeddedSchedulerWorkflowHeaders.java | 63 +++ .../CreateEmbeddedSchedulerWorkflowHeaders.java} | 65 ++- .../DeleteEmbeddedSchedulerWorkflowHeaders.java| 50 ++ .../ResumeEmbeddedSchedulerWorkflowHeaders.java| 50 ++ .../SuspendEmbeddedSchedulerWorkflowHeaders.java | 50 ++ .../header/session/ConfigureSessionHeaders.java| 4 +- .../header/statement/CompleteStatementHeaders.java | 4 +- ...CreateEmbeddedSchedulerWorkflowRequestBody.java | 105 + ...reateEmbeddedSchedulerWorkflowResponseBody.java | 53 +++ .../EmbeddedSchedulerWorkflowRequestBody.java | 55 +++ .../rest/util/SqlGatewayRestAPIVersion.java| 5 +- .../gateway/workflow/EmbeddedRefreshHandler.java | 84 .../workflow/EmbeddedRefreshHandlerSerializer.java | 45 ++ .../workflow/EmbeddedWorkflowScheduler.java| 235 ++ .../workflow/EmbeddedWorkflowSchedulerFactory.java | 67 +++ .../flink/table/gateway/workflow/WorkflowInfo.java | 125 + .../scheduler/EmbeddedQuartzScheduler.java | 229 + .../workflow/scheduler/QuartzSchedulerUtils.java | 125 + .../workflow/scheduler/SchedulerException.java}| 14 +- .../src/main/resources/META-INF/NOTICE | 9 + .../org.apache.flink.table.factories.Factory | 1 + .../table/gateway/rest/RestAPIITCaseBase.java | 6 +- .../rest/util/TestingSqlGatewayRestEndpoint.java | 4 +- .../workflow/EmbeddedRefreshHandlerTest.java} | 28 +- .../workflow/EmbeddedSchedulerRelatedITCase.java | 350 ++ .../gateway/workflow/QuartzSchedulerUtilsTest.java | 83 .../resources/sql_gateway_rest_api_v3.snapshot | 519 + .../table/refresh/ContinuousRefreshHandler.java| 2 + .../workflow/CreatePeriodicRefreshWorkflow.java| 85 ...owException.java => ResumeRefreshWorkflow.java} | 19 +- ...wException.java => SuspendRefreshWorkflow.java} | 19 +- .../flink/table/workflow/WorkflowException.java| 5 +- flink-table/pom.xml| 1 + 39 files changed, 2887 insertions(+), 81 deletions(-) create mode 100644 flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHandler.java create mode 100644 flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/DeleteEmbeddedSchedulerWorkflowHandler.java create mode 100644 flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHandler.java create mode 100644 
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/SuspendEmbeddedSchedulerWorkflowHandler.java create mode 100644 flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/AbstractEmbeddedSchedulerWorkflowHeaders.java copy flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/{statement/CompleteStatementHeaders.java => materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHeaders.java} (51%) create mode 100644 flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/DeleteEmbeddedSchedulerWorkflowHeaders.java create mode 100644 flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHeaders.java create mode 100644 flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/header/materializedtable/scheduler/SuspendEmbeddedSchedulerWorkflowHeaders.java create mode 100644 flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/message/material
Re: [Semibug] tragedy of systemd: was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color
Steve Litt wrote on 2024-05-23 18:06: I'll address his central point, which is that systemd has many benefits. My rebuttal is that nobody needs that kind of complexity. Computers are complex, imagine that. Most systemd features can and have been done better and simpler other ways. Asserts facts not in evidence; show your evidence. ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
Re: PG 12.2 ERROR: cannot freeze committed xmax
On Thu, May 23, 2024 at 9:41 AM bruno da silva wrote: > Hello, > I have a deployment with PG 12.2 reporting ERROR: cannot freeze committed > xmax > using Red Hat Enterprise Linux 8.9. > > What is the recommended way to find any bug fixes that the version 12.2 had > that could have caused this error. > https://www.postgresql.org/docs/release/ You're missing *four years* of bug fixes. Could this error be caused by OS/Hardware related issues? > Four years of bug fixes is more likely the answer.
Re: [Semibug] OT: is there any office package (especially spreadsheet) that lets me choose a PEN color
Steve Litt wrote on 2024-05-23 02:53: LibreOffice' reason for existence is to interact with MS Office documents. If it can't do that, why use it? The blame for poor interaction lies with Microsoft, 100%. Also, another major reason for LibreOffice is to have a full featured office suite that is *not* Microsoft Office, one that runs natively on all OSs. Which it has succeeded at nicely. From my perspective, LibreOffice suffers from the same problem now afflicting most Linux distributions: Trying to be easy for Windows people. Systemd But systemd has absolutely nothing to do with being easy for Windows people. It exists to provide a services lifecycle management system. Just because you dislike both of them does not mean the two are related somehow. Once again, I'll link "The Tragedy of systemd" by Benno Rice, FreeBSD developer. I'm still waiting for an anti-systemd person to address one single point he raised: The presentation at linux.conf.au: https://www.youtube.com/watch?v=o_AIw9bGogo Specifically, "The arguments against systemd that people tend to advance", starting with "it violates the Unix philosophy": https://youtu.be/o_AIw9bGogo?si=0xJ0-JpXGEBGpW0K=1040 The slide show (note its domain): > https://papers.freebsd.org/2018/bsdcan/rice-The_Tragedy_of_systemd.files/rice-The_Tragedy_of_systemd.pdf ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
Re: [Semibug] LO backups & OO [was OT: is there any office package (especially spreadsheet) that lets me choose a PEN color]
CAREY SCHUG wrote on 2024-05-23 08:07: LO vs OO (topic 1) I was pissed when I was told I "had" to convert from OO to LO. LO was buggy (see below) later found a friend stayed with OO and was happy. any opinions on which of LO and OO (or others) do the best job on reading in XL or other formats? LibreOffice is the project where all the devs forked Open Office (an Oracle product at the time). It's been refactored and received rapid development updates. OpenOffice was abandoned by Oracle to the Apache foundation and virtually no one works on it other than a few commits by IBM employees (IBM has distributed OO in the past so wanted it to survive in some form). LibreOffice beats Open Office by every metric available. I can't speak to "XL" (XLS?) format specifically, but assume nothing has changed in that regard in 10 years for OO. rb ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
Off line HTML Documents and Safari Browser
Hi Group, I have a document that I downloaded and sent to my iPhone via e-mail attachment. I saved it to files. When I try to open it, Voice Dream Reader grabs it and opens it. How can I allow Safari Browser to open it? -- Signature: For a nation to admit it has done grievous wrongs and will strive to correct them for the betterment of all is no vice; For a nation to claim it has always been great, needs no improvement and to cling to its past achievements is no virtue!
Re: [Semibug] internal number storage in libre office calc [was: LibreOffice is summing incorrectly]
CAREY SCHUG wrote on 2024-05-22 22:55: OK, my spreadsheet is only 1.3 MB 1.3MB is miniscule in relation to any disk in the past 10 (20?) years. What's your time worth? single precision calculation is faster too. How many microseconds could you save and how much time are you willing to invest into that? (3) perhaps I could have all integers (100 times bigger than the desired number), and shift the decimal while displaying, but if I have to learn to type in everything, that is a LOT more work (mostly i now enter numbers like 5 or 3 or 2.1 or .12 so typing 500, 300, 210 and 12 would be more typing) and an even longer learning curve... Yeah, it's going to cause errors in data entry, just to potentially avoid rounding errors of fractions of cents. Probably LibreOffice and GnuCash are suitable for your needs (I've never looked at GnuCash). Your file sizes without fiddling could still almost fit on a 3½" floppy. Or, eleventeen gazillion of them on a fingernail-sized media card. You're okay there too. Sounds like things are actually fine and trying to optimize away via single vs double precision just ain't worth it. Good luck, rb ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
Re: [Semibug] OT: is there any office package (especially spreadsheet) that lets me choose a PEN color
Steve Litt wrote on 2024-05-22 23:26: This command can be run in 3 seconds Ctrl+S == saved, 0.3 seconds. Haven't personally experienced much instability with LO. Certainly would *not* advise against using it. > LibreOffice is notorious for randomly, summarily and permanently > changing styles. I have had an issue with styles in the past, but that was in documents that were opened by other office suite apps too, so I never knew who to blame: a) me? b) LO? c) OnlyOffice? d) All of the above? e) Something else (10 years of upgrades between edits)? Maybe it was LO if it's a known issue. Everything was recoverable. rb ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
Re: [Semibug] re tracking changes in libre office spreadsheet [was: OT: is there any office package (especially spreadsheet) that lets me choose a PEN color]
CAREY SCHUG wrote on 2024-05-22 23:46: Perhaps version tracking would help with this? All changes can be tracked and reviewed. ok, found this page: https://itsfoss.com/libreoffice-version-control/ it says click on edit/track changes/record. --done click on view/toolbars/track changes That's wrong, at least for v7.3.7. Try Edit > Track Changes > Manage A dialogue pops up with a list of all changes since recording started. One can click through the list to highlight individual changes, and Accept or Reject them. It's pretty nice. I am on version 7.3.7.2, is that too old? It works in this version (since about version 4.0 in 2013), but the link you found has (currently) invalid info. when I do a general search for what is the current "libre office spreadsheet", I get libre office (overall) is 24.2, so clearly a different series of numbers. As of February, they're going with the year.month format, which is kinda nice, once one knows what's going on. rb ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
Re: [Semibug] LibreOffice is summing incorrectly
CAREY SCHUG wrote on 2024-05-22 14:27: I typed 4.73 into a cell, it was actually stored that way LibreOffice Calc (which uses 64-bit double-precision numbers internally) https://help.libreoffice.org/latest/en-US/text/scalc/01/calculation_accuracy.html never have more than 5 (actually 4.5, meaning 199.99 to .01) significant digits, so single precision could make the spreadsheet a LOT smaller. Since all numbers are stored as 64 bit double-precision, it doesn't look like there's a way to reduce spreadsheet size by fiddling with storage. If I define a cell as ONLY a date OR a time, will it insist on storing it the internal clock format, which requires double precision? Yes. From the link above: internally, any time is a fraction of a day, 12:00 (noon) being represented as 0.5. Or better yet, store (the non date/time) as 100x integers, meaning .02 would be stored as 2, and 5 would be stored as 500. If you input numbers as n*100 and display them back as n÷100, that might work nicely. I've heard of that technique used in financial transactions. For displaying, I would like to DISPLAY in non-scientific format, but with a limited number of significant digits Number formatting supports custom formats, so that's do-able. Inherent Accuracy Problem LibreOffice Calc, just like most other spreadsheet software, uses floating-point math capabilities available on hardware. Given that most contemporary hardware uses binary floating-point arithmetic with limited precision defined in IEEE 754 standard, many decimal numbers - including as simple as 0.1 - cannot be precisely represented in LibreOffice Calc (which uses 64-bit double-precision numbers internally). That link is pretty interesting, and I didn't realize time formats were susceptible to rounding issues; I expected them to be stored in Unix epoch format. rb ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
[TICTOC]Re: Enterprise Profile: Support for Non standard TCs
Hi Doug, The draft states that deployments with NAT are out of scope of the document. "In IPv4 networks some clocks might be hidden behind a NAT, which hides their IP addresses from the rest of the network. Note also that the use of NATs may place limitations on the topology of PTP networks, depending on the port forwarding scheme employed. Details of implementing PTP with NATs are out of cope of this document." A PTP TC that is a bridge per 802.1q or an IPv4/6 router must not change the source IP address of PTP delay requests. I've been working with TC solutions for more than 10 years. Both 1-step PTP TCs in HW (as well as 2-step in HW+SW) and none modified the source IP address of E2E delay requests, when working as either a bridge or router. This is the case for the products of the company I currently work for as well. My input is that per my understanding the following is not true for standard TCs: "This is important since Transparent Clocks will treat PTP messages that are altered at the PTP application layer as new IP packets and new Layer 2 frames when the PTP messages are retranmitted." And with NAT services out of scope, this part should be removed in my opinion too: "In PTP Networks that contain Transparent Clocks, timeTransmitters might receive Delay Request messages that no longer contains the IP Addresses of the timeReceivers. This is because Transparent Clocks might replace the IP address of Delay Requests with their own IP address after updating the Correction Fields. For this deployment scenario timeTransmitters will need to have configured tables of timeReceivers' IP addresses and associated Clock Identities in order to send Delay Responses to the correct PTP Nodes" I don't have further new input beyond that. Best, Ron From: Doug Arnold Sent: Thursday, May 23, 2024 12:05 AM To: Ron Cohen ; tictoc@ietf.org Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs Prioritize security for external emails: Confirm sender and content safety before clicking links or opening attachments ____ Hello Ron, For Ethernet - IEEE 802.1Q, I can't remember the RFCs for IPv4 and IPv6 but you can look them up. Here is the thing. I understand from a network layer model perspective a TC should not change the payload for a frame/packet and just forward it. However, there is no other way to do a cut-through 1-step TC. I pointed that out to the folks in IEEE 802.1 but they ignored me. I know for a fact that multiple companies' implementations of TCs do not replace the source address before retransmitting. I don't blame them. The standards are preventing a valuable use case just to preserve the purity of their layer model. I would be surprised if 1588 is the only technology that needs to change message fields on the fly in a cut through switch. Regards, Doug ____ From: Ron Cohen mailto:r...@marvell.com>> Sent: Wednesday, May 22, 2024 2:58 AM To: Doug Arnold mailto:doug.arn...@meinberg-usa.com>>; tictoc@ietf.org<mailto:tictoc@ietf.org> mailto:tictoc@ietf.org>> Subject: RE: Enterprise Profile: Support for Non standard TCs Hi Doug, TC are not supposed to change source IP address of delay requests. If the TC is a layer2 switch/bridge, it must not modify the source MAC address while forwarding and must never touch the layer3 addresses. If the TC is a layer3 IP router, it must not modify the source IP address while forwarding and must change the source MAC address to the MAC address of its egress port. 
If the TC is a layer4 device, e.g., a NAT device, it modifies the source IP address of messages as it is its functionality. It may be the case that such functionality is required in the enterprise. My point is that it is far from obvious and the draft needs to elaborate why it's needed. >> This is required by the standards that specify the transport networks. I would appreciate if you point to the relevant standards. The draft states that additional support is required for this deployment scenario: "For this deployment scenario timeTransmitters will need to have configured tables of timeReceivers' IP addresses and associated Clock Identities in order to send Delay Responses to the correct PTP Nodes" These tables would be part of the IEEE1588 spec if this TC behavior was standard. It is not trivial to add support for these tables in HW, if you want to support scale and speed. Best, Ron From: Doug Arnold mailto:doug.arn...@meinberg-usa.com>> Sent: Wednesday, May 22, 2024 12:36 AM To: Ron Cohen mailto:r...@marvell.com>>; tictoc@ietf.org<mailto:tictoc@ietf.org> Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs Prioritize security for external emails: Confirm sender and content safety before clicking links or opening attachments ___
Re: [Semibug] OT: is there any office package (especially spreadsheet) that lets me choose a PEN color
CAREY SCHUG wrote on 2024-05-22 15:34: I would like to choose a PEN color, e.g. red, no matter what I enter or change, no matter where, it will be in the pen color. As Carl mentioned, LibreOffice supports colourizing text. so I can make a group of changes, then go back and verify them, when confirmed, change everything to black. Perhaps version tracking would help with this? All changes can be tracked and reviewed. Also, there's the ability to add comments, which might help with the review process. rb ___ Semibug mailing list Semibug@lists.nycbug.org https://lists.nycbug.org:8443/mailman/listinfo/semibug
[Int-area] ICMP Considerations
Folks, Over the years, I have written several forwarding plane documents that mention ICMP. During the review of these documents, people have raised issues like the following: * shouldn't we mention that ICMP message delivery is not reliable? * shouldn't we mention that ICMP messages are rate limited? * how is the ICMP message processed at its destination? In each of these documents, I have added an ICMP considerations section to address these issues. Rather than repeating that text in every document we write in the future, I have abstracted it into a separate document. If anyone would like to contribute to this document, it can be found at https://github.com/ronbonica/ICMP Please send a private email if you are interested in contributing to the document. Ron Juniper Business Use Only ___ Int-area mailing list -- int-area@ietf.org To unsubscribe send an email to int-area-le...@ietf.org
Re: search_path and SET ROLE
On Wed, May 22, 2024 at 2:02 PM Isaac Morland wrote: > On Wed, 22 May 2024 at 13:48, Ron Johnson wrote: > > As a superuser administrator, I need to be able to see ALL tables in ALL >> schemas when running "\dt", not just the ones in "$user" and public. And I >> need it to act consistently across all the systems. >> > > \dt *.* > Also shows information_schema, pg_catalog, and pg_toast. I can adjust to that, though. > But I am skeptical how often you really want this in a real database with > more than a few tables. Surely \dn+ followed by \dt [schemaname].* for a > few strategically chosen [schemaname] would be more useful? > More than you'd think. I'm always looking up the definition of this table or that table (mostly for indices and keys), and I never remember which schema they're in.
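As an aside (an illustration of my own, not from the exchange above), when the goal is just "which schema does this table live in", a catalog query avoids relying on search_path entirely; 'mytable' is a placeholder name:

-- Minimal sketch: locate a table by name across all schemas.
SELECT n.nspname AS schema_name,
       c.relname AS table_name
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'mytable'          -- placeholder table name
  AND c.relkind IN ('r', 'p');       -- ordinary and partitioned tables

From there, \d on the schema-qualified name (or a query against pg_indexes) gives the index and key details.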
Re: search_path wildcard?
On Wed, May 22, 2024 at 1:58 PM Tom Lane wrote: > Ron Johnson writes: > > That would be a helpful feature for administrators, when there are > multiple > > schemas in multiple databases, on multiple servers: superusers get ALTER > > ROLE foo SET SEARCH_PATH = '*'; and they're done with it. > > ... and they're pwned within five minutes by any user with the wits > to create a trojan-horse function or operator. Generally speaking, > you want admins to run with a minimal search path not a maximal one. > Missing tables when running "\dt" is a bigger hassle.
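To make the "trojan-horse" warning concrete, here is a hedged illustration (my own sketch, not from the thread), along the lines of the PostgreSQL project's CVE-2018-1058 guidance; the schema and function names are assumptions:

-- Suppose an ordinary user can create objects in a schema that appears
-- in the admin's (maximal) search_path, e.g. "public".
CREATE FUNCTION public.lower(varchar) RETURNS text AS $$
  -- Masquerades as pg_catalog.lower(), but could run anything
  -- with the privileges of whoever calls it.
  SELECT pg_catalog.lower($1);
$$ LANGUAGE sql;
-- When a superuser later runs an unqualified lower(some_varchar_column),
-- the exact varchar match can be preferred over pg_catalog.lower(text),
-- and the attacker's code executes with the superuser's rights.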
Re: search_path wildcard?
On Wed, May 22, 2024 at 12:53 PM David G. Johnston < david.g.johns...@gmail.com> wrote: > On Wed, May 22, 2024, 10:36 Ron Johnson wrote: > >> This doesn't work, and I've found nothing similar: >> ALTER ROLE foo SET SEARCH_PATH = '*'; >> > > Correct, you cannot do that. > That would be a helpful feature for administrators, when there are multiple schemas in multiple databases, on multiple servers: superusers get ALTER ROLE foo SET SEARCH_PATH = '*'; and they're done with it.
Re: search_path and SET ROLE
On Wed, May 22, 2024 at 1:10 PM Tom Lane wrote: > Ron Johnson writes: > > It seems that the search_path of the role that you SET ROLE to does not > > become the new search_path. > > It does for me: > > regression=# create role r1; > CREATE ROLE > regression=# create schema r1 authorization r1; > CREATE SCHEMA > regression=# select current_schemas(true), current_user; >current_schemas | current_user > -+-- > {pg_catalog,public} | postgres > (1 row) > > regression=# set role r1; > SET > regression=> select current_schemas(true), current_user; > current_schemas | current_user > +-- > {pg_catalog,r1,public} | r1 > (1 row) > > regression=> show search_path ; >search_path > - > "$user", public > (1 row) > > The fine manual says that $user tracks the result of > CURRENT_USER, and at least in this example it's doing that. > (I hasten to add that I would not swear there are no > bugs in this area.) > > > Am I missing something, or is that PG's behavior? > > I bet what you missed is granting (at least) USAGE on the > schema to that role. PG will silently ignore unreadable > schemas when computing the effective search path. > There are multiple schemata in (sometimes) multiple databases on (many) multiple servers. As a superuser administrator, I need to be able to see ALL tables in ALL schemas when running "\dt", not just the ones in "$user" and public. And I need it to act consistently across all the systems. (Heck, none of our schemas are named the same as roles.) This would be useful for account maintenance: CREATE ROLE dbagrp SUPERUSER INHERIT NOLOGIN; ALTER ROLE dbagrp SET search_path = public, dba, sch1, sch2, sch3, sch4; CREATE USER joe IN GROUP dbagrp INHERIT PASSWORD = 'linenoise'; Then, as user joe: SHOW search_path; search_path - "$user", public (1 row) SET ROLE dbagrp RELOAD SESSION; -- note the new clause SHOW search_path; search_path --- public , dba, sch1, sch2, sch3, sch4 (1 row) When a new DBA comes on board, add him/her to dbagrp, and they automagically have everything that dbagrp has. Now, each dba must individually be given a search_path. If you forget, or forget to add some schemas, etc, mistakes ger made and time is wasted.
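As a hedged aside (my own sketch, not from the thread), the silent-dropping behaviour described above is easy to reproduce; the role and schema names here are made up:

CREATE ROLE r2;
CREATE SCHEMA r2 AUTHORIZATION postgres;   -- schema named after the role, but owned elsewhere
SET ROLE r2;
SELECT current_schemas(true);              -- r2 is silently omitted: no USAGE privilege
RESET ROLE;
GRANT USAGE ON SCHEMA r2 TO r2;
SET ROLE r2;
SELECT current_schemas(true);              -- now shows pg_catalog, r2, public

For the dbagrp pattern, granting USAGE to the group role once covers every inheriting member, though each member's own search_path still has to be set explicitly, as noted above.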
search_path wildcard?
This doesn't work, and I've found nothing similar: ALTER ROLE foo SET SEARCH_PATH = '*'; Is there a single SQL statement which will generate a search path based on information_schema.schemata, or do I have to write an anonymous DO procedure? SELECT schema_name FROM information_schema.schemata WHERE schema_name != 'information_schema' AND schema_name NOT LIKE 'pg_%';
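One way to get close to a single statement per role is to build the list with string_agg and apply it from a DO block; a minimal sketch, reusing the role name "foo" from the example above and the same schema filter:

DO $$
DECLARE
    path text;
BEGIN
    SELECT string_agg(quote_ident(schema_name), ', ' ORDER BY schema_name)
      INTO path
      FROM information_schema.schemata
     WHERE schema_name != 'information_schema'
       AND schema_name NOT LIKE 'pg_%';

    EXECUTE format('ALTER ROLE foo SET search_path = %s', path);
END
$$;

It still has to be re-run whenever schemas are added or dropped.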
search_path and SET ROLE
PG 9.6.24 (Soon, I swear!) It seems that the search_path of the role that you SET ROLE to does not become the new search_path. Am I missing something, or is that PG's behavior? AS USER postgres $ psql -h 10.143.170.52 -Xac "CREATE ROLE dbagrp SUPERUSER INHERIT NOLOGIN;" CREATE ROLE dbagrp SUPERUSER INHERIT NOLOGIN; CREATE ROLE $ psql -h 10.143.170.52 -Xac "CREATE USER rjohnson IN GROUP dbagrp INHERIT;" CREATE USER rjohnson IN GROUP dbagrp INHERIT; CREATE ROLE [postgres@FISPMONDB001 ~]$ psql -h 10.143.170.52 -Xac "CREATE USER \"11026270\" IN GROUP dbagrp INHERIT PASSWORD '${NewPass}' VALID UNTIL '2024-06-30 23:59:59';" CREATE USER "11026270" IN GROUP dbagrp INHERIT PASSWORD 'linenoise' VALID UNTIL '2024-06-30 23:59:59'; CREATE ROLE $ psql -h 10.143.170.52 -Xac "ALTER ROLE dbagrp set search_path = dbagrp, public, dba, cds, tms;" ALTER ROLE dbagrp set search_path = dbagrp, public, dba, cds, tms; ALTER ROLE AS USER rjohnson [rjohnson@fpslbxcdsdbppg1 ~]$ psql -dCDSLBXW psql (9.6.24) Type "help" for help. CDSLBXW=> SET ROLE dbagrp; SET CDSLBXW=# CDSLBXW=# SHOW SEARCH_PATH; search_path - "$user", public (1 row) Back to user postgres = $ psql -h 10.143.170.52 -Xac "ALTER ROLE rjohnson set search_path = dbagrp, public, dba, cds, tms;" ALTER ROLE rjohnson set search_path = dbagrp, public, dba, cds, tms; ALTER ROLE Back to user rjohnson = [rjohnson@fpslbxcdsdbppg1 ~]$ psql -dCDSLBXW psql (9.6.24) Type "help" for help. CDSLBXW=> CDSLBXW=> SET ROLE dbagrp; SET CDSLBXW=# SHOW SEARCH_PATH; search_path --- dbagrp, public, dba, cds, tms (1 row)
Re: DFSort query
My apologies, Kolusu, for the wrong details I provided. Thanks very much for the sample job; it is working well for my requirement. Regards Ron T -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
[TICTOC]Re: Enterprise Profile: Support for Non standard TCs
Hi Doug, TC are not supposed to change source IP address of delay requests. If the TC is a layer2 switch/bridge, it must not modify the source MAC address while forwarding and must never touch the layer3 addresses. If the TC is a layer3 IP router, it must not modify the source IP address while forwarding and must change the source MAC address to the MAC address of its egress port. If the TC is a layer4 device, e.g., a NAT device, it modifies the source IP address of messages as it is its functionality. It may be the case that such functionality is required in the enterprise. My point is that it is far from obvious and the draft needs to elaborate why it's needed. >> This is required by the standards that specify the transport networks. I would appreciate if you point to the relevant standards. The draft states that additional support is required for this deployment scenario: "For this deployment scenario timeTransmitters will need to have configured tables of timeReceivers' IP addresses and associated Clock Identities in order to send Delay Responses to the correct PTP Nodes" These tables would be part of the IEEE1588 spec if this TC behavior was standard. It is not trivial to add support for these tables in HW, if you want to support scale and speed. Best, Ron From: Doug Arnold Sent: Wednesday, May 22, 2024 12:36 AM To: Ron Cohen ; tictoc@ietf.org Subject: [EXTERNAL] Re: Enterprise Profile: Support for Non standard TCs Prioritize security for external emails: Confirm sender and content safety before clicking links or opening attachments ________ Hello Ron, Yes. A TC is required to change the source address of a message at least for Ethernet and IP mappings. This is not an IEEE 1588 decision. This is required by the standards that specify the transport networks. Ethernet (IEEE 802.1Q) IPv4 and IPv6. A TC effectively changes the payload of the messages from the point of view of L2 and L3, so it is a new frame and new packet to those layers. I think that IPv4 has an option to alter a message in-route, but the node is supposed to zero out the source address. Regards, Doug ________ From: Ron Cohen mailto:r...@marvell.com>> Sent: Tuesday, May 7, 2024 12:43 PM To: tictoc@ietf.org<mailto:tictoc@ietf.org> mailto:tictoc@ietf.org>> Subject: [TICTOC]Enterprise Profile: Support for Non standard TCs Hi, I'm late to the game here. I apologize in advance if this has already been discussed and decided: I can't figure out why the profile needs to support non-standard TCs, or what seems to be a strange combination of a NAT+TC devices: "In PTP Networks that contain Transparent Clocks, timeTransmitters might receive Delay Request messages that no longer contains the IP Addresses of the timeReceivers. This is because Transparent Clocks might replace the IP address of Delay Requests with their own IP address after updating the Correction Fields. For this deployment scenario timeTransmitters will need to have configured tables of timeReceivers' IP addresses and associated Clock Identities in order to send Delay Responses to the correct PTP Nodes." Is a standard TC allowed to change the source IP address of messages? There should be a strong reason to require support for such devices in a standard profile. Best, Ron /* * Ron Cohen * Email: r...@marvell.com<mailto:r...@marvell.com> * Mobile: +972.54.5751506 */ ___ TICTOC mailing list -- tictoc@ietf.org To unsubscribe send an email to tictoc-le...@ietf.org
[TICTOC]Enterprise Profile: Support for Non standard TCs
Hi, I'm late to the game here. I apologize in advance if this has already been discussed and decided: I can't figure out why the profile needs to support non-standard TCs, or what seems to be a strange combination of a NAT+TC devices: "In PTP Networks that contain Transparent Clocks, timeTransmitters might receive Delay Request messages that no longer contains the IP Addresses of the timeReceivers. This is because Transparent Clocks might replace the IP address of Delay Requests with their own IP address after updating the Correction Fields. For this deployment scenario timeTransmitters will need to have configured tables of timeReceivers' IP addresses and associated Clock Identities in order to send Delay Responses to the correct PTP Nodes." Is a standard TC allowed to change the source IP address of messages? There should be a strong reason to require support for such devices in a standard profile. Best, Ron /* * Ron Cohen * Email: r...@marvell.com<mailto:r...@marvell.com> * Mobile: +972.54.5751506 */ ___ TICTOC mailing list -- tictoc@ietf.org To unsubscribe send an email to tictoc-le...@ietf.org
DFSort query
Hi All- In the data below, within each cross ref nbr we need to take the Pacct_NBR from the seq nbr = 1 record and pair it with the acct nbrs of the related records in that set. In the dataset below, cross ref nbr = 24538 has 2 sets of data and 24531 has 1 set.

Acct_NBR   Pacct_NBR      LAST_CHANGE_TS              CROSS_REF_NBR  SEQ_NBR
600392811  1762220138659  2024-04-18-10.38.09.570030  24538          1
505756281  1500013748790  2024-04-18-10.38.09.570030  24538          2
59383061   1500013748790  2024-04-18-10.38.09.570030  24538          3
59267071   1500013748790  2024-04-18-10.38.09.570030  24538          4
505756281  1500013748790  2024-01-15-08.05.14.038792  24538          1
59383061   1500013748790  2024-01-15-08.05.14.038792  24538          2
59267071   1500013748790  2024-01-15-08.05.14.038792  24538          3
600392811  1762220138659  2024-01-15-08.05.14.038792  24538          4
600392561  1762220138631  2024-01-15-08.05.14.038792  24531          1

Output

Acct_NBR   Pacct_NBR
600392811  1762220138659
505756281  1762220138659
59383061   1762220138659
59267071   1762220138659
505756281  1500013748790
59383061   1500013748790
59267071   1500013748790
600392811  1500013748790
600392561  1762220138631

Data sizes: Acct_NBR 10 bytes, Pacct_NBR 15 bytes, LAST_CHANGE_TS 20 bytes, CROSS_REF_NBR 5 bytes, SEQ_NBR 2 bytes.

Could someone please let me know how we can build this data using DFSORT? Regards Ron T -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: pg_dump and not MVCC-safe commands
On Mon, May 20, 2024 at 11:54 AM Christophe Pettus wrote: > > > > On May 20, 2024, at 08:49, PetSerAl wrote: > > Basically, you need application cooperation to make > > consistent live database backup. > > If it is critical that you have a completely consistent backup as of a > particular point in time, and you are not concerned about restoring to a > different processor architecture, pg_basebackup is a superior solution to > pg_dump. > Single-threaded, and thus dreadfully slow. I'll stick with PgBackRest.
Re: [DISCUSSION] FLIP-457: Improve Table/SQL Configuration for Flink 2.0
Hi, Lincoln > 2. Regarding the options in HashAggCodeGenerator, since this new feature has gone through a couple of release cycles and could be considered for PublicEvolving now, cc @Ron Liu WDYT? Thanks for cc'ing me, +1 for public these options now. Best, Ron Benchao Li 于2024年5月20日周一 13:08写道: > I agree with Lincoln about the experimental features. > > Some of these configurations do not even have proper implementation, > take 'table.exec.range-sort.enabled' as an example, there was a > discussion[1] about it before. > > [1] https://lists.apache.org/thread/q5h3obx36pf9po28r0jzmwnmvtyjmwdr > > Lincoln Lee 于2024年5月20日周一 12:01写道: > > > > Hi Jane, > > > > Thanks for the proposal! > > > > +1 for the changes except for these annotated as experimental ones. > > > > For the options annotated as experimental, > > > > +1 for the moving of IncrementalAggregateRule & RelNodeBlock. > > > > For the rest of the options, there are some suggestions: > > > > 1. for the batch related parameters, it's recommended to either delete > > them (leaving the necessary defaults value in place) or leave them as > they > > are. Including: > > FlinkRelMdRowCount > > FlinkRexUtil > > BatchPhysicalSortRule > > JoinDeriveNullFilterRule > > BatchPhysicalJoinRuleBase > > BatchPhysicalSortMergeJoinRule > > > > What I understand about the history of these options is that they were > once > > used for fine > > tuning for tpc testing, and the current flink planner no longer relies on > > these internal > > options when testing tpc[1]. In addition, these options are too obscure > for > > SQL users, > > and some of them are actually magic numbers. > > > > 2. Regarding the options in HashAggCodeGenerator, since this new feature > > has gone > > through a couple of release cycles and could be considered for > > PublicEvolving now, > > cc @Ron Liu WDYT? > > > > 3. Regarding WindowEmitStrategy, IIUC it is currently unsupported on TVF > > window, so > > it's recommended to keep it untouched for now and follow up in > > FLINK-29692[2]. cc @Xuyang > > > > [1] > > > https://github.com/ververica/flink-sql-benchmark/blob/master/tools/common/flink-conf.yaml > > [2] https://issues.apache.org/jira/browse/FLINK-29692 > > > > > > Best, > > Lincoln Lee > > > > > > Yubin Li 于2024年5月17日周五 10:49写道: > > > > > Hi Jane, > > > > > > Thank Jane for driving this proposal ! > > > > > > This makes sense for users, +1 for that. > > > > > > Best, > > > Yubin > > > > > > On Thu, May 16, 2024 at 3:17 PM Jark Wu wrote: > > > > > > > > Hi Jane, > > > > > > > > Thanks for the proposal. +1 from my side. > > > > > > > > > > > > Best, > > > > Jark > > > > > > > > On Thu, 16 May 2024 at 10:28, Xuannan Su > wrote: > > > > > > > > > Hi Jane, > > > > > > > > > > Thanks for driving this effort! And +1 for the proposed changes. > > > > > > > > > > I have one comment on the migration plan. > > > > > > > > > > For options to be moved to another module/package, I think we have > to > > > > > mark the old option deprecated in 1.20 for it to be removed in 2.0, > > > > > according to the API compatibility guarantees[1]. We can introduce > the > > > > > new option in 1.20 with the same option key in the intended class. > > > > > WDYT? 
> > > > > > > > > > Best, > > > > > Xuannan > > > > > > > > > > [1] > > > > > > > > > https://nightlies.apache.org/flink/flink-docs-master/docs/ops/upgrading/#api-compatibility-guarantees > > > > > > > > > > > > > > > > > > > > On Wed, May 15, 2024 at 6:20 PM Jane Chan > > > wrote: > > > > > > > > > > > > Hi all, > > > > > > > > > > > > I'd like to start a discussion on FLIP-457: Improve Table/SQL > > > > > Configuration > > > > > > for Flink 2.0 [1]. This FLIP revisited all Table/SQL > configurations > > > to > > > > > > improve user-friendliness and maintainability as Flink moves > toward > > > 2.0. > > > > > > > > > > > > I am looking forward to your feedback. > > > > > > > > > > > > Best regards, > > > > > > Jane > > > > > > > > > > > > [1] > > > > > > > > > > > > > > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992 > > > > > > > > > > > > -- > > Best, > Benchao Li >
Which iPhone App Allows The Creation Of level 1 2 and so on of Headings that is accessible with VoiceOver?
Hi Group,

I want to make a document, as part of a game I am creating, that will quickly allow me to access aspects of the game using header navigation. I tried using Microsoft Word for PC and moving the document to the iPhone, but the header navigation seemed broken: only a few words would appear on each line for which MS Word had created a heading. Which app on the iPhone itself can be used to create various levels of headings that are accessible with VoiceOver?

--
Signature: For a nation to admit it has done grievous wrongs and will strive to correct them for the betterment of all is no vice; For a nation to claim it has always been great, needs no improvement and to cling to its past achievements is no virtue!
[cctalk] Re: Mylar/Sponge Keyboard Repair Kits
TexElec makes and sells replacement "foam and foil" discs for those keyboards. See https://texelec.com/product/foam-capacitive-pads-keytronic/ . They are usually shown as on backorder. The one time I ordered a set, they were on backorder and arrived a few weeks after I placed the order. I wouldn't recommend waiting for them to be in stock before ordering as that might require a VERY long wait.

-- Ron Pool

-----Original Message-----
From: Marvin Johnston via cctalk
Sent: Friday, May 17, 2024 5:49 AM
To: cctalk@classiccmp.org
Cc: Marvin Johnston
Subject: [cctalk] Mylar/Sponge Keyboard Repair Kits

I've got a couple of keyboards where the sponge has disintegrated to the point they no longer work. The latest one is a Vector 3 keyboard and I would love to get it fixed.

Can repair kits still be purchased and/or are the instructions for making those sponge/mylar pieces available?

Thanks!
Marvin
(flink) branch master updated: [FLINK-35346][table-common] Introduce workflow scheduler interface for materialized table
This is an automated email from the ASF dual-hosted git repository. ron pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git The following commit(s) were added to refs/heads/master by this push: new 1378979f02e [FLINK-35346][table-common] Introduce workflow scheduler interface for materialized table 1378979f02e is described below commit 1378979f02eed55bbf3f91b08ec166d55b2c42a6 Author: Ron AuthorDate: Thu May 16 19:41:54 2024 +0800 [FLINK-35346][table-common] Introduce workflow scheduler interface for materialized table [FLINK-35346][table-common] Introduce workflow scheduler interface for materialized table This closes #24767 --- .../apache/flink/table/factories/FactoryUtil.java | 9 +- .../table/factories/WorkflowSchedulerFactory.java | 56 +++ .../factories/WorkflowSchedulerFactoryUtil.java| 156 ++ .../table/workflow/CreateRefreshWorkflow.java | 29 .../table/workflow/DeleteRefreshWorkflow.java | 48 ++ .../table/workflow/ModifyRefreshWorkflow.java | 40 + .../flink/table/workflow/RefreshWorkflow.java | 34 .../flink/table/workflow/WorkflowException.java| 37 + .../flink/table/workflow/WorkflowScheduler.java| 91 +++ .../workflow/TestWorkflowSchedulerFactory.java | 175 + .../workflow/WorkflowSchedulerFactoryUtilTest.java | 107 + .../org.apache.flink.table.factories.Factory | 1 + 12 files changed, 782 insertions(+), 1 deletion(-) diff --git a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java index d8d6d7e9000..5d66b23c3d8 100644 --- a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java +++ b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java @@ -167,6 +167,13 @@ public final class FactoryUtil { + "tasks to advance their watermarks without the need to wait for " + "watermarks from this source while it is idle."); +public static final ConfigOption WORKFLOW_SCHEDULER_TYPE = +ConfigOptions.key("workflow-scheduler.type") +.stringType() +.noDefaultValue() +.withDescription( +"Specify the workflow scheduler type that is used for materialized table."); + /** * Suffix for keys of {@link ConfigOption} in case a connector requires multiple formats (e.g. * for both key and value). @@ -903,7 +910,7 @@ public final class FactoryUtil { return loadResults; } -private static String stringifyOption(String key, String value) { +public static String stringifyOption(String key, String value) { if (GlobalConfiguration.isSensitive(key)) { value = HIDDEN_CONTENT; } diff --git a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/WorkflowSchedulerFactory.java b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/WorkflowSchedulerFactory.java new file mode 100644 index 000..72e144f7d19 --- /dev/null +++ b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/WorkflowSchedulerFactory.java @@ -0,0 +1,56 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.factories; + +import org.apache.flink.annotation.PublicEvolving; +import org.apache.flink.configuration.ReadableConfig; +import org.apache.flink.table.workflow.WorkflowScheduler; + +import java.util.Map; + +/** + * A factory to create a {@link WorkflowScheduler} instance. + * + * See {@link Factory} for more information about the general design of a factory. + */ +@PublicEvolving +public interface WorkflowSchedulerFactory extends Factory { + +/** Create a workflow scheduler instance which interacts with external scheduler service. */ +
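As a usage sketch (not part of the commit itself), the new option is what a deployment sets to pick a scheduler implementation: the value must match the factoryIdentifier() of a WorkflowSchedulerFactory discovered via Java SPI, which is why the commit also adds an entry under META-INF/services/org.apache.flink.table.factories.Factory for its test factory. The identifier "my-scheduler" below is a placeholder, not a scheduler shipped by this change.

import org.apache.flink.configuration.Configuration;
import org.apache.flink.table.factories.FactoryUtil;

public class WorkflowSchedulerConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Selects which WorkflowSchedulerFactory should create the scheduler
        // used for materialized table refresh workflows.
        conf.set(FactoryUtil.WORKFLOW_SCHEDULER_TYPE, "my-scheduler");
        System.out.println(conf);
    }
}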
Re: [EVDL] 46 Pure EVs for sale, Teslas competition.
It seems that since 2017, Tesla has gone into reverse on their original master plan. So let China take the lead/heat for pushing out ICE cars. In five or ten years the ICE folks can adjust/catch up. No bailout needed. The pressure on Musk is reduced and Optimus can go to Mars and/or drive Robotaxis, a win-win except for the carbon problem.

Ron Solberg

> On May 14, 2024, at 7:22 PM, EV List Lackey via EV wrote:
>
> On 14 May 2024 at 10:35, Rush via EV wrote:
>
>> I think that anybody having any knowledge of how a business is conducted
>> would say that 'yes, profit is a good thing'.
>
> Let's restore the context:
>
>> AND still make a hefty profit on each car
>
> As I understood it, and someone correct me if this is wrong, the original
> Tesla "master plan" was to get to mass market EVs. They'd start with
> building luxury EVs for rich people, and use the presumably *hefty* profits
> from that venture to design and build EVs for the rest of us.
>
> That plan was written a long time ago - maybe 2008? Again, someone please
> help me out here.
>
> The Model 3 was introduced 7 years ago, in 2017. That was real progress
> toward affordable EVs, 9 years on from the master plan's inception. Not bad.
>
> Is that master plan still their guide? If so, what progress have they made
> on it since?
>
> Not the Model Y (2020). It's more expensive.
>
> I'm pretty sure it's not the Cybertruck (2023), either.
>
> It seems that since 2017, Tesla has gone into reverse on their original
> master plan.
>
> Their recent investor call suggested pretty strongly that they're going to
> start using their EV profits less to develop EVs, and more to develop AI,
> autonomy software, and robotaxis.
>
> Their recent layoffs seem to confirm that direction.
>
> What do you think of this?
>
> Is it a good thing?
>
> Is it likely to be permanent, or is it just another Elon Musk shot-from-the-hip
> that he'll change next month or next year?
>
> David Roden, EVDL moderator & general lackey
>
> To reach me, don't reply to this message; I won't get it. Use my
> offlist address here : http://evdl.org/help/index.html#supt
>
> = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
>
> If economists wished to study the horse, they wouldn't go and look at
> horses. They'd sit in their studies and say to themselves, "What would
> I do if I were a horse?"
>
> -- Ely Devons
>
> = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
(flink) 01/04: [FLINK-35193][table] Support drop materialized table syntax
This is an automated email from the ASF dual-hosted git repository. ron pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit 8551ef39e0387f723a72299cc73aaaf827cf74bf Author: Feng Jin AuthorDate: Mon May 13 20:06:41 2024 +0800 [FLINK-35193][table] Support drop materialized table syntax --- .../src/main/codegen/data/Parser.tdd | 1 + .../src/main/codegen/includes/parserImpls.ftl | 30 ++ .../sql/parser/ddl/SqlDropMaterializedTable.java | 68 ++ .../flink/sql/parser/utils/ParserResource.java | 3 + .../MaterializedTableStatementParserTest.java | 25 5 files changed, 127 insertions(+) diff --git a/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd b/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd index 81b3412954c..883b6aec1b2 100644 --- a/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd +++ b/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd @@ -76,6 +76,7 @@ "org.apache.flink.sql.parser.ddl.SqlDropCatalog" "org.apache.flink.sql.parser.ddl.SqlDropDatabase" "org.apache.flink.sql.parser.ddl.SqlDropFunction" +"org.apache.flink.sql.parser.ddl.SqlDropMaterializedTable" "org.apache.flink.sql.parser.ddl.SqlDropPartitions" "org.apache.flink.sql.parser.ddl.SqlDropPartitions.AlterTableDropPartitionsContext" "org.apache.flink.sql.parser.ddl.SqlDropTable" diff --git a/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl b/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl index bdc97818914..b2a5ea02d0f 100644 --- a/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl +++ b/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl @@ -1801,6 +1801,34 @@ SqlCreate SqlCreateMaterializedTable(Span s, boolean replace, boolean isTemporar } } +/** +* Parses a DROP MATERIALIZED TABLE statement. +*/ +SqlDrop SqlDropMaterializedTable(Span s, boolean replace, boolean isTemporary) : +{ +SqlIdentifier tableName = null; +boolean ifExists = false; +} +{ + + { + if (isTemporary) { + throw SqlUtil.newContextException( + getPos(), + ParserResource.RESOURCE.dropTemporaryMaterializedTableUnsupported()); + } + } + + +ifExists = IfExistsOpt() + +tableName = CompoundIdentifier() + +{ +return new SqlDropMaterializedTable(s.pos(), tableName, ifExists); +} +} + /** * Parses alter materialized table. */ @@ -2427,6 +2455,8 @@ SqlDrop SqlDropExtended(Span s, boolean replace) : ( drop = SqlDropCatalog(s, replace) | +drop = SqlDropMaterializedTable(s, replace, isTemporary) +| drop = SqlDropTable(s, replace, isTemporary) | drop = SqlDropView(s, replace, isTemporary) diff --git a/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlDropMaterializedTable.java b/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlDropMaterializedTable.java new file mode 100644 index 000..ec9439fb13a --- /dev/null +++ b/flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlDropMaterializedTable.java @@ -0,0 +1,68 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser.ddl; + +import org.apache.calcite.sql.SqlDrop; +import org.apache.calcite.sql.SqlIdentifier; +import org.apache.calcite.sql.SqlKind; +import org.apache.calcite.sql.SqlNode; +import org.apache.calcite.sql.SqlOperator; +import org.apache.calcite.sql.SqlSpecialOperator; +import org.apache.calcite.sql.SqlWriter; +import org.apache.calcite.sql.parser.SqlParserPos; +import org.apache.calcite.util.ImmutableNullableList; + +import java.util.List; + +/** DROP MATERIALIZED TABLE DDL sql call. */ +public class SqlDropMaterializedTable extends SqlDrop { + +private static final SqlOperator OPERATOR = +new SqlSpecialOperator("DROP MATERIALIZED TABLE", SqlKind.DRO
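For a concrete feel of what the new grammar accepts, here is a test-fixture-style sketch (assuming the sql()/ok() helpers used elsewhere in the flink-sql-parser tests, such as MaterializedTableStatementParserTest; the expected unparse strings are illustrative rather than copied from the commit):

// Plain DROP and the optional IF EXISTS form, with simple or compound names:
sql("drop materialized table mtbl1")
        .ok("DROP MATERIALIZED TABLE `MTBL1`");
sql("drop materialized table if exists cat1.db1.mtbl1")
        .ok("DROP MATERIALIZED TABLE IF EXISTS `CAT1`.`DB1`.`MTBL1`");

// DROP TEMPORARY MATERIALIZED TABLE is rejected by the rule above, which
// throws ParserResource.dropTemporaryMaterializedTableUnsupported() when the
// TEMPORARY flag is set.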
(flink) 03/04: [FLINK-35193][table] Support execution of drop materialized table
This is an automated email from the ASF dual-hosted git repository. ron pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit 51b744bca1bdf53385152ed237f2950525046488 Author: Feng Jin AuthorDate: Mon May 13 20:08:38 2024 +0800 [FLINK-35193][table] Support execution of drop materialized table --- .../MaterializedTableManager.java | 115 +- .../service/operation/OperationExecutor.java | 9 + .../service/MaterializedTableStatementITCase.java | 241 ++--- .../apache/flink/table/catalog/CatalogManager.java | 4 +- 4 files changed, 328 insertions(+), 41 deletions(-) diff --git a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java index b4ba12b8755..a51b1885c98 100644 --- a/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java +++ b/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/materializedtable/MaterializedTableManager.java @@ -20,6 +20,7 @@ package org.apache.flink.table.gateway.service.materializedtable; import org.apache.flink.annotation.Internal; import org.apache.flink.annotation.VisibleForTesting; +import org.apache.flink.api.common.JobStatus; import org.apache.flink.configuration.Configuration; import org.apache.flink.table.api.ValidationException; import org.apache.flink.table.catalog.CatalogMaterializedTable; @@ -34,6 +35,7 @@ import org.apache.flink.table.gateway.api.results.ResultSet; import org.apache.flink.table.gateway.service.operation.OperationExecutor; import org.apache.flink.table.gateway.service.result.ResultFetcher; import org.apache.flink.table.gateway.service.utils.SqlExecutionException; +import org.apache.flink.table.operations.command.DescribeJobOperation; import org.apache.flink.table.operations.command.StopJobOperation; import org.apache.flink.table.operations.materializedtable.AlterMaterializedTableChangeOperation; import org.apache.flink.table.operations.materializedtable.AlterMaterializedTableRefreshOperation; @@ -93,6 +95,9 @@ public class MaterializedTableManager { } else if (op instanceof AlterMaterializedTableResumeOperation) { return callAlterMaterializedTableResume( operationExecutor, handle, (AlterMaterializedTableResumeOperation) op); +} else if (op instanceof DropMaterializedTableOperation) { +return callDropMaterializedTableOperation( +operationExecutor, handle, (DropMaterializedTableOperation) op); } throw new SqlExecutionException( @@ -146,8 +151,7 @@ public class MaterializedTableManager { materializedTableIdentifier, e); operationExecutor.callExecutableOperation( -handle, -new DropMaterializedTableOperation(materializedTableIdentifier, true, false)); +handle, new DropMaterializedTableOperation(materializedTableIdentifier, true)); throw e; } } @@ -170,7 +174,8 @@ public class MaterializedTableManager { materializedTable.getSerializedRefreshHandler(), operationExecutor.getSessionContext().getUserClassloader()); -String savepointPath = stopJobWithSavepoint(operationExecutor, handle, refreshHandler); +String savepointPath = +stopJobWithSavepoint(operationExecutor, handle, refreshHandler.getJobId()); ContinuousRefreshHandler updateRefreshHandler = new ContinuousRefreshHandler( @@ -183,9 +188,12 @@ public class MaterializedTableManager { CatalogMaterializedTable.RefreshStatus.SUSPENDED, 
materializedTable.getRefreshHandlerDescription().orElse(null), serializeContinuousHandler(updateRefreshHandler)); +List tableChanges = new ArrayList<>(); +tableChanges.add( + TableChange.modifyRefreshStatus(CatalogMaterializedTable.RefreshStatus.ACTIVATED)); AlterMaterializedTableChangeOperation alterMaterializedTableChangeOperation = new AlterMaterializedTableChangeOperation( -tableIdentifier, Collections.emptyList(), updatedMaterializedTable); +tableIdentifier, tableChanges, updatedMaterializedTable); operationExecutor.callExecutableOperation(handle, alterMaterializedTableChangeOperation); @@ -284,8 +292,7 @@ public class MaterializedTableManager { // drop materialized table while submit flink streaming job occur exception. Thus, weak // atomicity is guar