[Bug 1955336] [NEW] mate-volume-control: Speaker testing dialog changes size on pressing "Test" in Russian localization
Public bug reported:

When the "Test" button (RU: "Проверить") is pressed, the "Speaker Testing" window (RU: "Проверка динамиков") becomes wider.

Steps to reproduce:
1. Use the Russian localization ("ru_RU").
2. Start mate-volume-control and open the "Hardware" tab (RU: "Оборудование").
3. Click the "Test Speakers" button (RU: "Проверить динамики") below.
4. Click the "Test" buttons in the new dialog to observe the symptom.

This is not reproducible in the English localization.

ProblemType: Bug
DistroRelease: Ubuntu 21.10
Package: mate-media 1.26.0-0ubuntu1
ProcVersionSignature: Ubuntu 5.13.0-22.22-generic 5.13.19
Uname: Linux 5.13.0-22-generic x86_64
ApportVersion: 2.20.11-0ubuntu71
Architecture: amd64
CasperMD5CheckResult: pass
CurrentDesktop: MATE
Date: Sun Dec 19 15:45:10 2021
InstallationDate: Installed on 2021-12-15 (3 days ago)
InstallationMedia: Ubuntu-MATE 21.10 "Impish Indri" - Release amd64 (20211012)
SourcePackage: mate-media
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: mate-media (Ubuntu)
   Importance: Undecided
       Status: New

** Tags: amd64 apport-bug impish

** Bug watch added: github.com/mate-desktop/mate-control-center/issues #685
   https://github.com/mate-desktop/mate-control-center/issues/685

--
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1955336

Title:
  mate-volume-control: Speaker testing dialog changes size on pressing "Test" in Russian localization

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mate-media/+bug/1955336/+subscriptions

--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1955335] [NEW] clock-applet doesn't contain majority of Russian cities
Public bug reported:

"Clock Preferences" -> "Location" -> "Choose Location"

All regional capitals and other major cities should be included in the list of Russian locations. Here is the list of missing locations:

Arzamas (RU: "Арзамас")
Armavir (RU: "Армавир")
Achinsk (RU: "Ачинск")
Belgorod (RU: "Белгород")
Biysk (RU: "Бийск")
Blagoveschensk (RU: "Благовещенск")
Birobidzhan (RU: "Биробиджан")
Vanino (RU: "Ванино")
Velikiy Novgorod (RU: "Великий Новгород")
Velikiy Ustyug (RU: "Великий Устюг")
Vladimir (RU: "Владимир")
Vladikavkaz (RU: "Владикавказ")
Volgodonsk (RU: "Волгодонск")
Vorkuta (RU: "Воркута")
Vyborg (RU: "Выборг")
Vyksa (RU: "Выкса")
Vyshny Volochyok (RU: "Вышний Волочёк")
Vyazma (RU: "Вязьма")
Gelendzhik (RU: "Геленджик")
Gorno-Altaysk (RU: "Горно-Алтайск")
Gorodets (RU: "Городец")
Grozny (RU: "Грозный")
Gus-Khrustalny (RU: "Гусь-Хрустальный")
Derbent (RU: "Дербент")
Dimitrovgrad (RU: "Димитровград")
Dubna (RU: "Дубна")
Yelabuga (RU: "Елабуга")
Yelets (RU: "Елец")
Yessentuki (RU: "Ессентуки")
Zheleznogorsk (RU: "Железногорск")
Zavolzhye (RU: "Заволжье")
Ivanovo (RU: "Иваново")
Igarka (RU: "Игарка")
Izhevsk (RU: "Ижевск")
Yoshkar-Ola (RU: "Йошкар-Ола")
Kamyshin (RU: "Камышин")
Kasimov (RU: "Касимов")
Kaluga (RU: "Калуга")
Kizlyar (RU: "Кизляр")
Kineshma (RU: "Кинешма")
Kirov (RU: "Киров")
Kirovo-Chepetsk (RU: "Кирово-Чепецк")
Kislovodsk (RU: "Кисловодск")
Kolomna (RU: "Коломна")
Kovrov (RU: "Ковров")
Kogalym (RU: "Когалым")
Kostroma (RU: "Кострома")
Kotlas (RU: "Котлас")
Komsomolsk-on-Amur (RU: "Комсомольск-на-Амуре")
Kurgan (RU: "Курган")
Kyzyl (RU: "Кызыл")
Lipetsk (RU: "Липецк")
Magnitogorsk (RU: "Магнитогорск")
Maykop (RU: "Майкоп")
Makhachkala (RU: "Махачкала")
Miass (RU: "Миасс")
Minusinsk (RU: "Минусинск")
Magas (RU: "Магас")
Mozdok (RU: "Моздок")
Murom (RU: "Муром")
Nadym (RU: "Надым")
Nakhodka (RU: "Находка")
Naryan-Mar (RU: "Нарьян-Мар")
Naberezhnye Chelny (RU: "Набережные Челны")
Nerekhta (RU: "Нерехта")
Nefteyugansk (RU: "Нефтеюганск")
Nizhnekamsk (RU: "Нижнекамск")
Nizhny Novgorod (RU: "Нижний Новгород")
Nizhniy Tagil (RU: "Нижний Тагил")
Novorossiysk (RU: "Новороссийск")
Noviy Urengoy (RU: "Новый Уренгой")
Norilsk (RU: "Норильск")
Oryol (RU: "Орёл")
Orsk (RU: "Орск")
Okha (RU: "Оха")
Pereslavl-Zalessky (RU: "Переславль-Залесский")
Petrozavodsk (RU: "Петрозаводск")
Pechory (RU: "Печоры")
Pskov (RU: "Псков")
Pyatigorsk (RU: "Пятигорск")
Rubtsovsk (RU: "Рубцовск")
Rybinsk (RU: "Рыбинск")
Salekhard (RU: "Салехард")
Sarov (RU: "Саров")
Saransk (RU: "Саранск")
Severodvinsk (RU: "Северодвинск")
Smolensk (RU: "Смоленск")
Solikamsk (RU: "Соликамск")
Sovetskaya Gavan (RU: "Советская Гавань")
Sochi (RU: "Сочи")
Staryy Oskol (RU: "Старый Оскол")
Sterlitamak (RU: "Стерлитамак")
Syzran (RU: "Сызрань")
Taganrog (RU: "Таганрог")
Tambov (RU: "Тамбов")
Tobolsk (RU: "Тобольск")
Tolyatti (RU: "Тольятти")
Tver (RU: "Тверь")
Tuapse (RU: "Туапсе")
Tula (RU: "Тула")
Tynda (RU: "Тында")
Ussuriysk (RU: "Уссурийск")
Ukhta (RU: "Ухта")
Cheboksary (RU: "Чебоксары")
Cherepovets (RU: "Череповец")
Cherkessk (RU: "Черкесск")
Chusovoy (RU: "Чусовой")
Shakhty (RU: "Шахты")
Shuya (RU: "Шуя")
Elista (RU: "Элиста")
Yuzhno-Kurilsk (RU: "Южно-Курильск")
Yaroslavl (RU: "Ярославль")

ProblemType: Bug
DistroRelease: Ubuntu 21.10
Package: mate-panel 1.26.0-0ubuntu3
ProcVersionSignature: Ubuntu 5.13.0-22.22-generic 5.13.19
Uname: Linux 5.13.0-22-generic x86_64
ApportVersion: 2.20.11-0ubuntu71
Architecture: amd64
CasperMD5CheckResult: pass
CurrentDesktop: MATE
Date: Sun Dec 19 15:39:24 2021
InstallationDate: Installed on 2021-12-15 (3 days ago)
InstallationMedia: Ubuntu-MATE 21.10 "Impish Indri" - Release amd64 (20211012)
SourcePackage: mate-panel
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: mate-panel (Ubuntu)
   Importance: Undecided
       Status: New

** Tags: amd64 apport-bug impish

--
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1955335

Title:
  clock-applet doesn't contain majority of Russian cities

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mate-panel/+bug/1955335/+subscriptions
[jenkinsci/nexus-platform-plugin]
Branch: refs/heads/INT-5615-updating-release-notes-for-nuget-bug
Home: https://github.com/jenkinsci/nexus-platform-plugin

--
You received this message because you are subscribed to the Google Groups "Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-5615-updating-release-notes-for-nuget-bug/3e60f3-00%40github.com.
[jenkinsci/nexus-platform-plugin] 3e60f3: Updating release notes
Branch: refs/heads/main
Home: https://github.com/jenkinsci/nexus-platform-plugin

Commit: 3e60f3fd7db705a78ebdbc1de17ad6be0e1c3ec6
    https://github.com/jenkinsci/nexus-platform-plugin/commit/3e60f3fd7db705a78ebdbc1de17ad6be0e1c3ec6
Author: Hector Hurtado
Date: 2021-12-16 (Thu, 16 Dec 2021)
Changed paths:
  M README.md
Log Message:
-----------
Updating release notes

Commit: 3c2aa7a13c77139e7a6bda5c3d0276e78c284002
    https://github.com/jenkinsci/nexus-platform-plugin/commit/3c2aa7a13c77139e7a6bda5c3d0276e78c284002
Author: Hector Danilo Hurtado Olaya
Date: 2021-12-16 (Thu, 16 Dec 2021)
Changed paths:
  M README.md
Log Message:
-----------
Merge pull request #171 from jenkinsci/INT-5615-updating-release-notes-for-nuget-bug

INT-5615 Updating release notes

Compare: https://github.com/jenkinsci/nexus-platform-plugin/compare/f1ef3f90738c...3c2aa7a13c77

To view this discussion on the web visit https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/main/f1ef3f-3c2aa7%40github.com.
[jenkinsci/nexus-platform-plugin] 3e60f3: Updating release notes
Branch: refs/heads/INT-5615-updating-release-notes-for-nuget-bug
Home: https://github.com/jenkinsci/nexus-platform-plugin

Commit: 3e60f3fd7db705a78ebdbc1de17ad6be0e1c3ec6
    https://github.com/jenkinsci/nexus-platform-plugin/commit/3e60f3fd7db705a78ebdbc1de17ad6be0e1c3ec6
Author: Hector Hurtado
Date: 2021-12-16 (Thu, 16 Dec 2021)
Changed paths:
  M README.md
Log Message:
-----------
Updating release notes

To view this discussion on the web visit https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-5615-updating-release-notes-for-nuget-bug/00-3e60f3%40github.com.
Re: [Koha] difficulties authenticating after samba/openldap -> samba4 AD migration with koha
Try removing id="dc1" from the ldapserver line.

On 12/12/21 10:53 AM, Web Developer wrote:
> Help me to solve this.
>
> On Sat, 11 Dec 2021 at 22:23, Web Developer wrote:
>> Hello Team,
>>
>> I am trying to authenticate Koha against samba4/AD via LDAP, but I am
>> getting the following error. I can't log in, and the Koha OPAC log says:
>>
>> LDAP search failed to return object: 2020: Operation unavailable without
>> authentication at /usr/share/test_koha/lib/C4/Auth_with_ldap.pm line 98.
>>
>> So, before I start doing bigger things (like updating Koha, which has
>> always been running fine) I'd like to know if I'm missing something
>> obvious. I'm sure many people here authenticate against (native)
>> Active Directory? Any tips..?
>>
>> Here is my AD samba4 config:
>> dc1.my.domain CN=Users,DC=samba,DC=my,DC=domain username password 1 1 1 CN=%s,CN=Users,DC=samba,DC=my,DC=domain
>>
>> Regards,
>> Amar
>
> ___
> Koha mailing list http://koha-community.org
> Koha@lists.katipo.co.nz
> Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha

--
Hector Gonzalez
ca...@genac.org
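The quoted config lost its XML markup in transit; only the element values survived. For orientation, a `ldapserver` block in koha-conf.xml with those values would look roughly like the sketch below. This is a hedged reconstruction: the element names are taken from typical Koha LDAP setups (C4::Auth_with_ldap), not from the original message, and the mapping of the three `1` values to specific flags is an assumption.

```xml
<!-- Hypothetical reconstruction; element names assumed, values from the message -->
<useldapserver>1</useldapserver>
<ldapserver id="ldapserver"> <!-- per the reply above, the id should not be "dc1" -->
    <hostname>dc1.my.domain</hostname>
    <base>CN=Users,DC=samba,DC=my,DC=domain</base>
    <user>username</user>
    <pass>password</pass>
    <!-- The three "1" values in the flattened config presumably map to flags
         like these, but which three is a guess: -->
    <replicate>1</replicate>
    <update>1</update>
    <auth_by_bind>1</auth_by_bind>
    <principal_name>CN=%s,CN=Users,DC=samba,DC=my,DC=domain</principal_name>
</ldapserver>
```

With `auth_by_bind` enabled, Koha binds as the user via `principal_name` instead of searching anonymously, which is one common fix for "Operation unavailable without authentication" against AD.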
Re: [DISCUSS] KIP-795: Add public APIs for AbstractCoordinator
Hi kafka-devs,

I would like a second review of the proposed changes in KIP-795: Add public APIs for AbstractCoordinator [ https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator ]. I've amended the KIP to address Tom's feedback, and also opened a PR with the proposed changes [ https://github.com/apache/kafka/pull/11515 ]. There's also a PR for the kafka-monitor tool [ https://github.com/linkedin/kafka-monitor/pull/355 ] that demonstrates how we leveraged AbstractCoordinator to implement a High-Availability mode for it, with the intention of showing how this feature can be used by other services within the Kafka ecosystem and outside it.

Thanks for your time and consideration.
Hector

From: dev@kafka.apache.org At: 11/29/21 13:31:26 UTC-5:00 To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-795: Add public APIs for AbstractCoordinator

Hello again Tom, kafka devs,

First, congrats on becoming a PMC member! That's so cool.

Since your last reply I've updated the KIP to address some of your suggestions. A few more details have been added to the motivation section, and I also went ahead and opened a draft pull request with the changes I think are needed for this KIP, in the hope that it makes it easier to discuss the general approach and any other concerns the community may have.

KIP: https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator
PR: https://github.com/apache/kafka/pull/11515

Looking forward to community feedback.

Regards,
Hector

From: dev@kafka.apache.org At: 11/11/21 17:15:17 UTC-5:00 To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-795: Add public APIs for AbstractCoordinator

Hi Tom,

Thanks for taking the time to review the KIP. I think it's reasonable to ask whether Kafka's Group Coordination protocol should be used for use cases other than the distributed event log.
This was actually briefly addressed by Gwen Shapira during her presentation at the Strange Loop conference in '18 (a link to the video is included in the KIP), in which she explains the protocol internals in greater detail. We should also keep in mind that this protocol is already used for other purposes outside of core Kafka: Confluent Schema Registry uses it to determine leadership between members of a cluster, Kafka Connect uses it for task assignments, the same goes for Kafka Streams for partition and task distribution, and so on. So having a public, stable API, not just for new use cases (like ours) but for existing ones, is IMHO a good thing to have. I'll amend the KIP and add a bit more detail to the motivation and alternatives sections, so the usefulness of this KIP is better understood.

Now, for the first point of your technical observations (regarding protocolTypes()), I don't think it matters in this context, as the protocol name and subtype are only relevant within a consumer group and group rebalance. It really doesn't matter if two different libraries decide to name their protocols the same.

For item #2, I was under the impression that, because these classes all implement the org.apache.kafka.common.protocol.[Message, ApiMessage] interface, they are implicitly part of the Kafka protocol and the top-level API. Isn't that really the case?

And finally, for #3, the goal I had in mind when creating this KIP was a small one: to provide an interface that users can rely on when extending the AbstractCoordinator. So my thought was that, while the AbstractCoordinator itself uses some internal APIs (like ConsumerNetworkClient, ConsumerMetadata and so on), those can remain internal. But it probably makes sense to at least explore the possibility of moving the whole AbstractCoordinator class into the public API. I'll do that exercise, see what it entails, and update the KIP with my findings.

Thanks again!
Hector

From: dev@kafka.apache.org At: 11/10/21 06:43:59 UTC-5:00 To: Hector Geraldino (BLOOMBERG/ 919 3RD A ), dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-795: Add public APIs for AbstractCoordinator

Hi Hector,

Thanks for the KIP. At a high level, I think the question to be answered by the community is "Should Kafka really be providing this kind of cluster management API?". While Kafka clients need this to provide their functionality, it's a different thing to expose it as a public API of the project, which is otherwise about providing a distributed event log / data streaming platform. Having a public API brings a significant commitment to API compatibility, which could impair the ability of the project to change the API in order to make improvements to the Kafka clients. The current AbstractCoordinator not being a supported API means we don't currently have to reason about compatibility here. So I think it would help the motivation section of the KIP to describe in a bit more detail the use case(s) you have for implementing your own co
Re: [PATCH v3 1/3] of: Move simple-framebuffer device handling from simplefb to of
On 13/12/2021 20.30, Javier Martinez Canillas wrote:
> On Mon, Dec 13, 2021 at 11:46 AM Hector Martin wrote:
>> On 13/12/2021 17.44, Javier Martinez Canillas wrote:
>>> Hello Hector,
>>>
>>> On Sun, Dec 12, 2021 at 7:24 AM Hector Martin wrote:
>>>> This code is required for both simplefb and simpledrm, so let's move it
>>>> into the OF core instead of having it as an ad-hoc initcall in the
>>>> drivers.
>>>>
>>>> Acked-by: Thomas Zimmermann
>>>> Signed-off-by: Hector Martin
>>>> ---
>>>>  drivers/of/platform.c          |  4 ++++
>>>>  drivers/video/fbdev/simplefb.c | 21 +--------------------
>>>>  2 files changed, 5 insertions(+), 20 deletions(-)
>>>
>>> This is indeed a much better approach than what I suggested. I just
>>> have one comment.
>>>
>>>> diff --git a/drivers/of/platform.c b/drivers/of/platform.c
>>>> index b3faf89744aa..793350028906 100644
>>>> --- a/drivers/of/platform.c
>>>> +++ b/drivers/of/platform.c
>>>> @@ -540,6 +540,10 @@ static int __init of_platform_default_populate_init(void)
>>>>                 of_node_put(node);
>>>>         }
>>>>
>>>> +       node = of_get_compatible_child(of_chosen, "simple-framebuffer");
>>>
>>> You have to check if the node variable is NULL here.
>>>
>>>> +       of_platform_device_create(node, NULL, NULL);
>>>
>>> Otherwise this could lead to a NULL pointer dereference if debug
>>> output is enabled (the node->full_name is printed).
>>
>> Where is it printed? I thought I might need a NULL check, but this code
>
> Sorry, I misread of_amba_device_create() as of_platform_device_create(),
> which uses the "%pOF" printk format specifier [0] to print the node's
> full name as a debug output [1].
>
> [0]: https://elixir.bootlin.com/linux/v5.16-rc5/source/Documentation/core-api/printk-formats.rst#L462
> [1]: https://elixir.bootlin.com/linux/v5.16-rc5/source/drivers/of/platform.c#L233
>
>> was suggested verbatim by Rob in v2 without the NULL check and digging
>> through I found that the NULL codepath is safe.
>
> You are right that passing NULL is a safe code path for now due to the
> of_device_is_available(node) check, but that seems fragile to me since
> just adding a similar debug output to of_platform_device_create() could
> trigger the NULL pointer dereference.

Since Rob is the DT maintainer, I'm going to defer to his opinion on this one :-)

-- 
Hector Martin (mar...@marcan.st)
Public Key: https://mrcn.st/pub
Re: [PATCH v3 1/3] of: Move simple-framebuffer device handling from simplefb to of
On 13/12/2021 17.44, Javier Martinez Canillas wrote:
> Hello Hector,
>
> On Sun, Dec 12, 2021 at 7:24 AM Hector Martin wrote:
>> This code is required for both simplefb and simpledrm, so let's move it
>> into the OF core instead of having it as an ad-hoc initcall in the
>> drivers.
>>
>> Acked-by: Thomas Zimmermann
>> Signed-off-by: Hector Martin
>> ---
>>  drivers/of/platform.c          |  4 ++++
>>  drivers/video/fbdev/simplefb.c | 21 +--------------------
>>  2 files changed, 5 insertions(+), 20 deletions(-)
>
> This is indeed a much better approach than what I suggested. I just
> have one comment.
>
>> diff --git a/drivers/of/platform.c b/drivers/of/platform.c
>> index b3faf89744aa..793350028906 100644
>> --- a/drivers/of/platform.c
>> +++ b/drivers/of/platform.c
>> @@ -540,6 +540,10 @@ static int __init of_platform_default_populate_init(void)
>>                 of_node_put(node);
>>         }
>>
>> +       node = of_get_compatible_child(of_chosen, "simple-framebuffer");
>
> You have to check if the node variable is NULL here.
>
>> +       of_platform_device_create(node, NULL, NULL);
>
> Otherwise this could lead to a NULL pointer dereference if debug
> output is enabled (the node->full_name is printed).

Where is it printed? I thought I might need a NULL check, but this code
was suggested verbatim by Rob in v2 without the NULL check, and digging
through I found that the NULL codepath is safe.

of_platform_device_create() calls of_platform_device_create_pdata()
directly, and:

static struct platform_device *of_platform_device_create_pdata(
					struct device_node *np,
					const char *bus_id,
					void *platform_data,
					struct device *parent)
{
	struct platform_device *dev;

	if (!of_device_is_available(np) ||
	    of_node_test_and_set_flag(np, OF_POPULATED))
		return NULL;

of_device_is_available() takes a global spinlock and then calls
__of_device_is_available(), and that does:

static bool __of_device_is_available(const struct device_node *device)
{
	const char *status;
	int statlen;

	if (!device)
		return false;

	...

so I don't see how this can do anything but immediately return false if
node is NULL.

-- 
Hector Martin (mar...@marcan.st)
Public Key: https://mrcn.st/pub
[PATCH v3 3/3] drm/simpledrm: Add [AX]RGB2101010 formats
This is the format used by the bootloader framebuffer on Apple ARM64
platforms.

Reviewed-by: Thomas Zimmermann
Signed-off-by: Hector Martin
---
 drivers/gpu/drm/tiny/simpledrm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/tiny/simpledrm.c b/drivers/gpu/drm/tiny/simpledrm.c
index 2f15b9aa..b977f5c94562 100644
--- a/drivers/gpu/drm/tiny/simpledrm.c
+++ b/drivers/gpu/drm/tiny/simpledrm.c
@@ -571,8 +571,8 @@ static const uint32_t simpledrm_default_formats[] = {
 	//DRM_FORMAT_XRGB1555,
 	//DRM_FORMAT_ARGB1555,
 	DRM_FORMAT_RGB888,
-	//DRM_FORMAT_XRGB2101010,
-	//DRM_FORMAT_ARGB2101010,
+	DRM_FORMAT_XRGB2101010,
+	DRM_FORMAT_ARGB2101010,
 };
 
 static const uint64_t simpledrm_format_modifiers[] = {
-- 
2.33.0
[PATCH v3 2/3] drm/format-helper: Add drm_fb_xrgb8888_to_xrgb2101010_toio()
Add XRGB8888 emulation support for devices that can only do XRGB2101010.
This is chiefly useful for simpledrm on Apple devices where the
bootloader-provided framebuffer is 10-bit.

Signed-off-by: Hector Martin
---
 drivers/gpu/drm/drm_format_helper.c | 64 +++++++++++++++++++++++++++++
 include/drm/drm_format_helper.h     |  3 ++
 2 files changed, 67 insertions(+)

diff --git a/drivers/gpu/drm/drm_format_helper.c b/drivers/gpu/drm/drm_format_helper.c
index dbe3e830096e..0f28dd2bdd72 100644
--- a/drivers/gpu/drm/drm_format_helper.c
+++ b/drivers/gpu/drm/drm_format_helper.c
@@ -409,6 +409,61 @@ void drm_fb_xrgb8888_to_rgb888_toio(void __iomem *dst, unsigned int dst_pitch,
 }
 EXPORT_SYMBOL(drm_fb_xrgb8888_to_rgb888_toio);
 
+static void drm_fb_xrgb8888_to_xrgb2101010_line(u32 *dbuf, const u32 *sbuf,
+						unsigned int pixels)
+{
+	unsigned int x;
+	u32 val32;
+
+	for (x = 0; x < pixels; x++) {
+		val32 = ((sbuf[x] & 0x000000FF) << 2) |
+			((sbuf[x] & 0x0000FF00) << 4) |
+			((sbuf[x] & 0x00FF0000) << 6);
+		*dbuf++ = val32 | ((val32 >> 8) & 0x00300C03);
+	}
+}
+
+/**
+ * drm_fb_xrgb8888_to_xrgb2101010_toio - Convert XRGB8888 to XRGB2101010 clip
+ * buffer
+ * @dst: XRGB2101010 destination buffer (iomem)
+ * @dst_pitch: Number of bytes between two consecutive scanlines within dst
+ * @vaddr: XRGB8888 source buffer
+ * @fb: DRM framebuffer
+ * @clip: Clip rectangle area to copy
+ *
+ * Drivers can use this function for XRGB2101010 devices that don't natively
+ * support XRGB8888.
+ */
+void drm_fb_xrgb8888_to_xrgb2101010_toio(void __iomem *dst,
+					 unsigned int dst_pitch, const void *vaddr,
+					 const struct drm_framebuffer *fb,
+					 const struct drm_rect *clip)
+{
+	size_t linepixels = clip->x2 - clip->x1;
+	size_t dst_len = linepixels * sizeof(u32);
+	unsigned int y, lines = clip->y2 - clip->y1;
+	void *dbuf;
+
+	if (!dst_pitch)
+		dst_pitch = dst_len;
+
+	dbuf = kmalloc(dst_len, GFP_KERNEL);
+	if (!dbuf)
+		return;
+
+	vaddr += clip_offset(clip, fb->pitches[0], sizeof(u32));
+	for (y = 0; y < lines; y++) {
+		drm_fb_xrgb8888_to_xrgb2101010_line(dbuf, vaddr, linepixels);
+		memcpy_toio(dst, dbuf, dst_len);
+		vaddr += fb->pitches[0];
+		dst += dst_pitch;
+	}
+
+	kfree(dbuf);
+}
+EXPORT_SYMBOL(drm_fb_xrgb8888_to_xrgb2101010_toio);
+
 /**
  * drm_fb_xrgb8888_to_gray8 - Convert XRGB8888 to grayscale
  * @dst: 8-bit grayscale destination buffer
@@ -500,6 +555,10 @@ int drm_fb_blit_toio(void __iomem *dst, unsigned int dst_pitch, uint32_t dst_for
 		fb_format = DRM_FORMAT_XRGB8888;
 	if (dst_format == DRM_FORMAT_ARGB8888)
 		dst_format = DRM_FORMAT_XRGB8888;
+	if (fb_format == DRM_FORMAT_ARGB2101010)
+		fb_format = DRM_FORMAT_XRGB2101010;
+	if (dst_format == DRM_FORMAT_ARGB2101010)
+		dst_format = DRM_FORMAT_XRGB2101010;
 
 	if (dst_format == fb_format) {
 		drm_fb_memcpy_toio(dst, dst_pitch, vmap, fb, clip);
@@ -515,6 +574,11 @@ int drm_fb_blit_toio(void __iomem *dst, unsigned int dst_pitch, uint32_t dst_for
 			drm_fb_xrgb8888_to_rgb888_toio(dst, dst_pitch, vmap, fb, clip);
 			return 0;
 		}
+	} else if (dst_format == DRM_FORMAT_XRGB2101010) {
+		if (fb_format == DRM_FORMAT_XRGB8888) {
+			drm_fb_xrgb8888_to_xrgb2101010_toio(dst, dst_pitch, vmap, fb, clip);
+			return 0;
+		}
 	}
 
 	return -EINVAL;
diff --git a/include/drm/drm_format_helper.h b/include/drm/drm_format_helper.h
index 97e4c3223af3..b30ed5de0a33 100644
--- a/include/drm/drm_format_helper.h
+++ b/include/drm/drm_format_helper.h
@@ -33,6 +33,9 @@ void drm_fb_xrgb8888_to_rgb888(void *dst, unsigned int dst_pitch, const void *sr
 void drm_fb_xrgb8888_to_rgb888_toio(void __iomem *dst, unsigned int dst_pitch,
 				    const void *vaddr, const struct drm_framebuffer *fb,
 				    const struct drm_rect *clip);
+void drm_fb_xrgb8888_to_xrgb2101010_toio(void __iomem *dst, unsigned int dst_pitch,
+					 const void *vaddr, const struct drm_framebuffer *fb,
+					 const struct drm_rect *clip);
 void drm_fb_xrgb8888_to_gray8(void *dst, unsigned int dst_pitch, const void *vaddr,
 			      const struct drm_framebuffer *fb, const struct drm_rect *clip);
-- 
2.33.0
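The per-pixel channel expansion in the patch above ("fill the LSBs", per the cover letter's changelog) can be sanity-checked outside the kernel. This Python transcription mirrors the shifts and the LSB-fill mask; it is an illustrative check, not kernel code, and assumes the mask constants `0x000000FF`/`0x0000FF00`/`0x00FF0000`/`0x00300C03` from the conversion helper:

```python
def xrgb8888_to_xrgb2101010(pix: int) -> int:
    """Expand each 8-bit channel of an XRGB8888 pixel to 10 bits.

    Each channel is shifted into the top 8 bits of its 10-bit slot, then
    the mask 0x00300C03 replicates the channel's top 2 bits into the 2
    new LSBs, so 0xFF maps to full-scale 0x3FF rather than 0x3FC.
    """
    val32 = (((pix & 0x000000FF) << 2) |   # blue  -> bits 2..9
             ((pix & 0x0000FF00) << 4) |   # green -> bits 12..19
             ((pix & 0x00FF0000) << 6))    # red   -> bits 22..29
    return val32 | ((val32 >> 8) & 0x00300C03)

# 8-bit white must map to full-scale 10-bit white, and black to black.
assert xrgb8888_to_xrgb2101010(0x00FFFFFF) == 0x3FFFFFFF
assert xrgb8888_to_xrgb2101010(0x00000000) == 0x00000000
```

Without the LSB fill, white would come out as 0x3FCFF3FC (slightly dimmer than full scale in every channel), which is why the v2 review asked for the fill.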
[PATCH v3 1/3] of: Move simple-framebuffer device handling from simplefb to of
This code is required for both simplefb and simpledrm, so let's move it
into the OF core instead of having it as an ad-hoc initcall in the
drivers.

Acked-by: Thomas Zimmermann
Signed-off-by: Hector Martin
---
 drivers/of/platform.c          |  4 ++++
 drivers/video/fbdev/simplefb.c | 21 +--------------------
 2 files changed, 5 insertions(+), 20 deletions(-)

diff --git a/drivers/of/platform.c b/drivers/of/platform.c
index b3faf89744aa..793350028906 100644
--- a/drivers/of/platform.c
+++ b/drivers/of/platform.c
@@ -540,6 +540,10 @@ static int __init of_platform_default_populate_init(void)
 		of_node_put(node);
 	}
 
+	node = of_get_compatible_child(of_chosen, "simple-framebuffer");
+	of_platform_device_create(node, NULL, NULL);
+	of_node_put(node);
+
 	/* Populate everything else. */
 	of_platform_default_populate(NULL, NULL, NULL);
 
diff --git a/drivers/video/fbdev/simplefb.c b/drivers/video/fbdev/simplefb.c
index b63074fd892e..57541887188b 100644
--- a/drivers/video/fbdev/simplefb.c
+++ b/drivers/video/fbdev/simplefb.c
@@ -541,26 +541,7 @@ static struct platform_driver simplefb_driver = {
 	.remove = simplefb_remove,
 };
 
-static int __init simplefb_init(void)
-{
-	int ret;
-	struct device_node *np;
-
-	ret = platform_driver_register(&simplefb_driver);
-	if (ret)
-		return ret;
-
-	if (IS_ENABLED(CONFIG_OF_ADDRESS) && of_chosen) {
-		for_each_child_of_node(of_chosen, np) {
-			if (of_device_is_compatible(np, "simple-framebuffer"))
-				of_platform_device_create(np, NULL, NULL);
-		}
-	}
-
-	return 0;
-}
-
-fs_initcall(simplefb_init);
+module_platform_driver(simplefb_driver);
 
 MODULE_AUTHOR("Stephen Warren ");
 MODULE_DESCRIPTION("Simple framebuffer driver");
-- 
2.33.0
[PATCH v3 0/3] drm/simpledrm: Apple M1 / DT platform support fixes
Hi DRM folks,

This short series makes simpledrm work on Apple M1 (including Pro/Max)
platforms the way simplefb already does, by adding XRGB2101010 support
and making it bind to framebuffers in /chosen the same way simplefb
does. This avoids breaking the bootloader-provided framebuffer console
when simpledrm is selected to replace simplefb, as these FBs always
seem to be 10-bit (at least when a real screen is attached).

Changes since v2:
- Made 10-bit conversion code fill the LSBs
- Added ARGB2101010 to supported formats list
- Simplified OF core code per review feedback

Hector Martin (3):
  of: Move simple-framebuffer device handling from simplefb to of
  drm/format-helper: Add drm_fb_xrgb8888_to_xrgb2101010_toio()
  drm/simpledrm: Add [AX]RGB2101010 formats

 drivers/gpu/drm/drm_format_helper.c | 64 +++++++++++++++++++++++++++++
 drivers/gpu/drm/tiny/simpledrm.c    |  4 +-
 drivers/of/platform.c               |  4 ++
 drivers/video/fbdev/simplefb.c      | 21 +---------
 include/drm/drm_format_helper.h     |  3 ++
 5 files changed, 74 insertions(+), 22 deletions(-)

-- 
2.33.0
[jira] [Updated] (KAFKA-13521) Supress changelog schema version breaks migration
[ https://issues.apache.org/jira/browse/KAFKA-13521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hector Geraldino updated KAFKA-13521:
-------------------------------------
    Issue Type: Bug  (was: Improvement)

> Supress changelog schema version breaks migration
> -------------------------------------------------
>
>                 Key: KAFKA-13521
>                 URL: https://issues.apache.org/jira/browse/KAFKA-13521
>             Project: Kafka
>          Issue Type: Bug
>          Components: streams
>    Affects Versions: 2.4.0, 2.5.0, 2.5.1
>            Reporter: Hector Geraldino
>            Priority: Major
>
> Hi,
>
> We recently updated the kafka-streams library in one of our apps from v2.5.0 to v2.5.1. This upgrade changes the header format of the state store for suppress changelog topics (see https://issues.apache.org/jira/browse/KAFKA-10173 and [https://github.com/apache/kafka/pull/8905]).
>
> What we noticed was that introducing a new version of the binary schema header breaks older clients, i.e. applications running on v2.5.1 can parse the v3, v2, v1 and v0 headers, while the ones running on 2.5.0 (and, I assume, previous versions) cannot read headers in v3 format.
>
> The logged exception is:
>
> {code:java}
> java.lang.IllegalArgumentException: Restoring apparently invalid changelog record: ConsumerRecord(topic = msgequator-kfns-msgequator-msgequator-suppress-buffer-store-changelog, partition = 8, leaderEpoch = 405, offset = 711400430, CreateTime = 1638828473341, serialized key size = 32, serialized value size = 90, headers = RecordHeaders(headers = [RecordHeader(key = v, value = [3])], isReadOnly = false), key = [B@5cf0e540, value = [B@40abc004)
> 	at org.apache.kafka.streams.state.internals.InMemoryTimeOrderedKeyValueBuffer.restoreBatch(InMemoryTimeOrderedKeyValueBuffer.java:372) ~[msgequator-1.59.3.jar:1.59.3]
> 	at org.apache.kafka.streams.processor.internals.CompositeRestoreListener.restoreBatch(CompositeRestoreListener.java:89) ~[msgequator-1.59.3.jar:1.59.3]
> 	at org.apache.kafka.streams.processor.internals.StateRestorer.restore(StateRestorer.java:92) ~[msgequator-1.59.3.jar:1.59.3]
> 	at org.apache.kafka.streams.processor.internals.StoreChangelogReader.processNext(StoreChangelogReader.java:350) ~[msgequator-1.59.3.jar:1.59.3]
> 	at org.apache.kafka.streams.processor.internals.StoreChangelogReader.restore(StoreChangelogReader.java:94) ~[msgequator-1.59.3.jar:1.59.3]
> 	at org.apache.kafka.streams.processor.internals.TaskManager.updateNewAndRestoringTasks(TaskManager.java:401) ~[msgequator-1.59.3.jar:1.59.3]
> 	at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:779) ~[msgequator-1.59.3.jar:1.59.3]
> 	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:697) ~[msgequator-1.59.3.jar:1.59.3]
> 	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:670) ~[msgequator-1.59.3.jar:1.59.3]
> {code}
>
> There's obviously no clear solution for this other than stopping/starting all instances at once. A rolling bounce that takes some time to complete (in our case, days) will break instances that haven't been upgraded yet, after a rebalance causes older clients to pick up the newly encoded changelog partition(s).
>
> I don't know if the fix is adding a flag on the client side that lists the supported protocol versions (so it behaves like Kafka Consumers when picking the rebalance protocol - cooperative or eager), or if it just needs to be explicitly stated in the migration guide that a full stop/start migration is required in cases where the protocol version changes.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
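The failure mode described in the issue — an old reader rejecting a header version it does not know, during a rolling upgrade — can be illustrated with a minimal sketch. This is not Kafka Streams code; the names and version sets are illustrative stand-ins for the v2.5.0/v2.5.1 behavior described above:

```python
# Hypothetical version sets mirroring the report: v2.5.1 readers understand
# one more header version (v3) than v2.5.0 readers do.
OLD_READER_VERSIONS = frozenset({0, 1, 2})
NEW_READER_VERSIONS = frozenset({0, 1, 2, 3})

def restore_record(header_version: int, supported: frozenset) -> str:
    """Parse a changelog record header, rejecting unknown versions."""
    if header_version not in supported:
        # Mirrors the IllegalArgumentException in the stack trace above.
        raise ValueError(
            f"restoring apparently invalid changelog record: v{header_version}")
    return f"restored v{header_version} record"

# New instances can read everything the old ones wrote...
assert restore_record(2, NEW_READER_VERSIONS) == "restored v2 record"

# ...but an old instance handed a v3 header during the rolling bounce
# fails, which is exactly the reported crash.
try:
    restore_record(3, OLD_READER_VERSIONS)
except ValueError as err:
    print("old reader failed:", err)
```

The two remedies floated in the issue map directly onto this sketch: either writers negotiate down to the oldest version any live instance supports (the flag-based approach, analogous to the consumer rebalance protocol selection), or the upgrade procedure guarantees no old reader ever sees a v3 record (full stop/start).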
[jira] [Created] (KAFKA-13521) Supress changelog schema version breaks migration
Hector Geraldino created KAFKA-13521: Summary: Supress changelog schema version breaks migration Key: KAFKA-13521 URL: https://issues.apache.org/jira/browse/KAFKA-13521 Project: Kafka Issue Type: Improvement Components: streams Affects Versions: 2.5.1, 2.5.0, 2.4.0 Reporter: Hector Geraldino Hi, We recently updated the kafka-streams library in one of our apps from v2.5.0 to v2.5.1. This upgrade changes the header format of the state store for suppress changelog topics (see https://issues.apache.org/jira/browse/KAFKA-10173 and [https://github.com/apache/kafka/pull/8905)] What we noticed was that, introducing a new version on the binary schema header breaks older clients. I.e. applications running on v2.5.1 can parse the v3, v2, v1 and 0 headers, while the ones running on 2.5.0 (and, I assume, previous versions) cannot read headers in v3 format. The logged exception is: {code:java} java.lang.IllegalArgumentException: Restoring apparently invalid changelog record: ConsumerRecord(topic = msgequator-kfns-msgequator-msgequator-suppress-buffer-store-changelog, partition = 8, leaderEpoch = 405, offset = 711400430, CreateTime = 1638828473341, serialized key size = 32, serialized value size = 90, headers = RecordHeaders(headers = [RecordHeader(key = v, value = [3])], isReadOnly = false), key = [B@5cf0e540, value = [B@40abc004) at org.apache.kafka.streams.state.internals.InMemoryTimeOrderedKeyValueBuffer.restoreBatch(InMemoryTimeOrderedKeyValueBuffer.java:372) ~[msgequator-1.59.3.jar:1.59.3] at org.apache.kafka.streams.processor.internals.CompositeRestoreListener.restoreBatch(CompositeRestoreListener.java:89) ~[msgequator-1.59.3.jar:1.59.3] at org.apache.kafka.streams.processor.internals.StateRestorer.restore(StateRestorer.java:92) ~[msgequator-1.59.3.jar:1.59.3] at org.apache.kafka.streams.processor.internals.StoreChangelogReader.processNext(StoreChangelogReader.java:350) ~[msgequator-1.59.3.jar:1.59.3] at 
org.apache.kafka.streams.processor.internals.StoreChangelogReader.restore(StoreChangelogReader.java:94) ~[msgequator-1.59.3.jar:1.59.3] at org.apache.kafka.streams.processor.internals.TaskManager.updateNewAndRestoringTasks(TaskManager.java:401) ~[msgequator-1.59.3.jar:1.59.3] at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:779) ~[msgequator-1.59.3.jar:1.59.3] at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:697) ~[msgequator-1.59.3.jar:1.59.3] at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:670) ~[msgequator-1.59.3.jar:1.59.3] {code} There's no clean solution for this other than stopping and starting all instances at once. A rolling bounce that takes some time to complete (in our case, days) will break instances that haven't been upgraded yet, once a rebalance causes those older clients to pick up the newly encoded changelog partition(s). I don't know whether the fix is a client-side flag that lists the supported protocol versions (so it behaves like Kafka consumers do when picking the rebalance protocol, cooperative or eager), or whether the migration guide just needs to state explicitly that a full stop/start migration is required when the protocol version changes. -- This message was sent by Atlassian Jira (v8.20.1#820001)
Re: [PATCH v2 2/3] drm/format-helper: Add drm_fb_xrgb8888_to_xrgb2101010_toio()
Hi, thanks for the review! On 07/12/2021 18.40, Thomas Zimmermann wrote: Hi Am 07.12.21 um 08:29 schrieb Hector Martin: Add XRGB8888 emulation support for devices that can only do XRGB2101010. This is chiefly useful for simpledrm on Apple devices where the bootloader-provided framebuffer is 10-bit. Signed-off-by: Hector Martin --- drivers/gpu/drm/drm_format_helper.c | 62 + include/drm/drm_format_helper.h | 3 ++ 2 files changed, 65 insertions(+) diff --git a/drivers/gpu/drm/drm_format_helper.c b/drivers/gpu/drm/drm_format_helper.c index dbe3e830096e..edd611d3ab6a 100644 --- a/drivers/gpu/drm/drm_format_helper.c +++ b/drivers/gpu/drm/drm_format_helper.c @@ -409,6 +409,59 @@ void drm_fb_xrgb8888_to_rgb888_toio(void __iomem *dst, unsigned int dst_pitch, } EXPORT_SYMBOL(drm_fb_xrgb8888_to_rgb888_toio); +static void drm_fb_xrgb8888_to_xrgb2101010_line(u32 *dbuf, const u32 *sbuf, + unsigned int pixels) +{ + unsigned int x; + + for (x = 0; x < pixels; x++) { + *dbuf++ = ((sbuf[x] & 0x000000FF) << 2) | + ((sbuf[x] & 0x0000FF00) << 4) | + ((sbuf[x] & 0x00FF0000) << 6); This isn't quite right. The lowest two destination bits in each component will always be zero. You have to do the shifting as above, and for each component the two highest source bits have to be OR'ed into the two lowest destination bits. For example, if the source bits in a component are numbered 7 to 0 | 7 6 5 4 3 2 1 0 | then the destination bits should be | 7 6 5 4 3 2 1 0 7 6 | I think both approaches have pros and cons. Leaving the two LSBs always at 0 yields a fully linear transfer curve with no discontinuities, but means the maximum brightness is slightly less than full. Setting them fully maps the brightness range, but creates 4 double-wide steps in the transfer curve (also, it's potentially slightly slower CPU-wise). If you prefer the latter, I'll do that for v2. -- Hector Martin (mar...@marcan.st) Public Key: https://mrcn.st/pub
[PATCH v2 3/3] drm/simpledrm: Add XRGB2101010 format
This is the format used by the bootloader framebuffer on Apple ARM64 platforms. Reviewed-by: Thomas Zimmermann Signed-off-by: Hector Martin --- drivers/gpu/drm/tiny/simpledrm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/tiny/simpledrm.c b/drivers/gpu/drm/tiny/simpledrm.c index 2f15b9aa..edadfd9ee882 100644 --- a/drivers/gpu/drm/tiny/simpledrm.c +++ b/drivers/gpu/drm/tiny/simpledrm.c @@ -571,7 +571,7 @@ static const uint32_t simpledrm_default_formats[] = { //DRM_FORMAT_XRGB1555, //DRM_FORMAT_ARGB1555, DRM_FORMAT_RGB888, - //DRM_FORMAT_XRGB2101010, + DRM_FORMAT_XRGB2101010, //DRM_FORMAT_ARGB2101010, }; -- 2.33.0
[PATCH v2 2/3] drm/format-helper: Add drm_fb_xrgb8888_to_xrgb2101010_toio()
Add XRGB8888 emulation support for devices that can only do XRGB2101010. This is chiefly useful for simpledrm on Apple devices where the bootloader-provided framebuffer is 10-bit. Signed-off-by: Hector Martin --- drivers/gpu/drm/drm_format_helper.c | 62 + include/drm/drm_format_helper.h | 3 ++ 2 files changed, 65 insertions(+) diff --git a/drivers/gpu/drm/drm_format_helper.c b/drivers/gpu/drm/drm_format_helper.c index dbe3e830096e..edd611d3ab6a 100644 --- a/drivers/gpu/drm/drm_format_helper.c +++ b/drivers/gpu/drm/drm_format_helper.c @@ -409,6 +409,59 @@ void drm_fb_xrgb8888_to_rgb888_toio(void __iomem *dst, unsigned int dst_pitch, } EXPORT_SYMBOL(drm_fb_xrgb8888_to_rgb888_toio); +static void drm_fb_xrgb8888_to_xrgb2101010_line(u32 *dbuf, const u32 *sbuf, + unsigned int pixels) +{ + unsigned int x; + + for (x = 0; x < pixels; x++) { + *dbuf++ = ((sbuf[x] & 0x000000FF) << 2) | + ((sbuf[x] & 0x0000FF00) << 4) | + ((sbuf[x] & 0x00FF0000) << 6); + } +} + +/** + * drm_fb_xrgb8888_to_xrgb2101010_toio - Convert XRGB8888 to XRGB2101010 clip + * buffer + * @dst: XRGB2101010 destination buffer (iomem) + * @dst_pitch: Number of bytes between two consecutive scanlines within dst + * @vaddr: XRGB8888 source buffer + * @fb: DRM framebuffer + * @clip: Clip rectangle area to copy + * + * Drivers can use this function for XRGB2101010 devices that don't natively + * support XRGB8888. 
+ */ +void drm_fb_xrgb8888_to_xrgb2101010_toio(void __iomem *dst, +unsigned int dst_pitch, const void *vaddr, +const struct drm_framebuffer *fb, +const struct drm_rect *clip) +{ + size_t linepixels = clip->x2 - clip->x1; + size_t dst_len = linepixels * sizeof(u32); + unsigned y, lines = clip->y2 - clip->y1; + void *dbuf; + + if (!dst_pitch) + dst_pitch = dst_len; + + dbuf = kmalloc(dst_len, GFP_KERNEL); + if (!dbuf) + return; + + vaddr += clip_offset(clip, fb->pitches[0], sizeof(u32)); + for (y = 0; y < lines; y++) { + drm_fb_xrgb8888_to_xrgb2101010_line(dbuf, vaddr, linepixels); + memcpy_toio(dst, dbuf, dst_len); + vaddr += fb->pitches[0]; + dst += dst_pitch; + } + + kfree(dbuf); +} +EXPORT_SYMBOL(drm_fb_xrgb8888_to_xrgb2101010_toio); + /** * drm_fb_xrgb8888_to_gray8 - Convert XRGB8888 to grayscale * @dst: 8-bit grayscale destination buffer @@ -500,6 +553,10 @@ int drm_fb_blit_toio(void __iomem *dst, unsigned int dst_pitch, uint32_t dst_for if (fb_format == DRM_FORMAT_ARGB8888) fb_format = DRM_FORMAT_XRGB8888; if (dst_format == DRM_FORMAT_ARGB8888) dst_format = DRM_FORMAT_XRGB8888; + if (fb_format == DRM_FORMAT_ARGB2101010) + fb_format = DRM_FORMAT_XRGB2101010; + if (dst_format == DRM_FORMAT_ARGB2101010) + dst_format = DRM_FORMAT_XRGB2101010; if (dst_format == fb_format) { drm_fb_memcpy_toio(dst, dst_pitch, vmap, fb, clip); @@ -515,6 +572,11 @@ int drm_fb_blit_toio(void __iomem *dst, unsigned int dst_pitch, uint32_t dst_for drm_fb_xrgb8888_to_rgb888_toio(dst, dst_pitch, vmap, fb, clip); return 0; } + } else if (dst_format == DRM_FORMAT_XRGB2101010) { + if (fb_format == DRM_FORMAT_XRGB8888) { + drm_fb_xrgb8888_to_xrgb2101010_toio(dst, dst_pitch, vmap, fb, clip); + return 0; + } } return -EINVAL; diff --git a/include/drm/drm_format_helper.h b/include/drm/drm_format_helper.h index 97e4c3223af3..b30ed5de0a33 100644 --- a/include/drm/drm_format_helper.h +++ b/include/drm/drm_format_helper.h @@ -33,6 +33,9 @@ void drm_fb_xrgb8888_to_rgb888(void *dst, unsigned int dst_pitch, const void *sr void drm_fb_xrgb8888_to_rgb888_toio(void __iomem *dst, unsigned int 
dst_pitch, const void *vaddr, const struct drm_framebuffer *fb, const struct drm_rect *clip); +void drm_fb_xrgb8888_to_xrgb2101010_toio(void __iomem *dst, unsigned int dst_pitch, +const void *vaddr, const struct drm_framebuffer *fb, +const struct drm_rect *clip); void drm_fb_xrgb8888_to_gray8(void *dst, unsigned int dst_pitch, const void *vaddr, const struct drm_framebuffer *fb, const struct drm_rect *clip); -- 2.33.0
[PATCH v2 1/3] of: Move simple-framebuffer device handling from simplefb to of
This code is required for both simplefb and simpledrm, so let's move it into the OF core instead of having it as an ad-hoc initcall in the drivers. Signed-off-by: Hector Martin --- drivers/of/platform.c | 5 + drivers/video/fbdev/simplefb.c | 21 + 2 files changed, 6 insertions(+), 20 deletions(-) diff --git a/drivers/of/platform.c b/drivers/of/platform.c index b3faf89744aa..e097f40b03c0 100644 --- a/drivers/of/platform.c +++ b/drivers/of/platform.c @@ -540,6 +540,11 @@ static int __init of_platform_default_populate_init(void) of_node_put(node); } + for_each_child_of_node(of_chosen, node) { + if (of_device_is_compatible(node, "simple-framebuffer")) + of_platform_device_create(node, NULL, NULL); + } + /* Populate everything else. */ of_platform_default_populate(NULL, NULL, NULL); diff --git a/drivers/video/fbdev/simplefb.c b/drivers/video/fbdev/simplefb.c index b63074fd892e..57541887188b 100644 --- a/drivers/video/fbdev/simplefb.c +++ b/drivers/video/fbdev/simplefb.c @@ -541,26 +541,7 @@ static struct platform_driver simplefb_driver = { .remove = simplefb_remove, }; -static int __init simplefb_init(void) -{ - int ret; - struct device_node *np; - - ret = platform_driver_register(&simplefb_driver); - if (ret) - return ret; - - if (IS_ENABLED(CONFIG_OF_ADDRESS) && of_chosen) { - for_each_child_of_node(of_chosen, np) { - if (of_device_is_compatible(np, "simple-framebuffer")) - of_platform_device_create(np, NULL, NULL); - } - } - - return 0; -} - -fs_initcall(simplefb_init); +module_platform_driver(simplefb_driver); MODULE_AUTHOR("Stephen Warren "); MODULE_DESCRIPTION("Simple framebuffer driver"); -- 2.33.0
[PATCH v2 0/3] drm/simpledrm: Apple M1 / DT platform support fixes
Hi DRM folks, This short series makes simpledrm work on Apple M1 (including Pro/Max) platforms the way simplefb already does, by adding XRGB2101010 support and making it bind to framebuffers in /chosen the same way simplefb does. This avoids breaking the bootloader-provided framebuffer console when simpledrm is selected to replace simplefb, as these FBs always seem to be 10-bit (at least when a real screen is attached). Changes since v1: - Moved the OF platform device setup code from simplefb into common code, instead of duplicating it in simpledrm - Rebased on drm-tip Hector Martin (3): of: Move simple-framebuffer device handling from simplefb to of drm/format-helper: Add drm_fb_xrgb8888_to_xrgb2101010_toio() drm/simpledrm: Add XRGB2101010 format drivers/gpu/drm/drm_format_helper.c | 62 + drivers/gpu/drm/tiny/simpledrm.c | 2 +- drivers/of/platform.c | 5 +++ drivers/video/fbdev/simplefb.c | 21 +- include/drm/drm_format_helper.h | 3 ++ 5 files changed, 72 insertions(+), 21 deletions(-) -- 2.33.0
Re: [DISCUSS] KIP-795: Add public APIs for AbstractCoordinator
Hello again Tom, Kafka devs First, congrats on becoming a PMC member! That's so cool. Since your last reply I've updated the KIP to address some of your suggestions. A few more details have been added to the motivation section, and I also went ahead and opened a draft pull request with the changes I think are needed for this KIP, in the hope that it makes it easier to discuss the general approach and any other concerns the community may have. KIP: https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator PR: https://github.com/apache/kafka/pull/11515 Looking forward to some community feedback. Regards, Hector From: dev@kafka.apache.org At: 11/11/21 17:15:17 UTC-5:00 To: dev@kafka.apache.org Subject: Re: [DISCUSS] KIP-795: Add public APIs for AbstractCoordinator Hi Tom, Thanks for taking the time to review the KIP. I think it's reasonable to ask whether Kafka's Group Coordination protocol should be used for use cases other than the distributed event log. This was actually briefly addressed by Gwen Shapira during her presentation at the Strange Loop conference in '18 (a link to the video is included in the KIP), in which she explains the protocol internals in greater detail. We should also keep in mind that this protocol is already being used for other use cases outside of core Kafka: Confluent Schema Registry uses it to determine leadership between members of a cluster, Kafka Connect uses it for task assignments, as does Kafka Streams for partition and task distribution, and so on. So having a public, stable API not just for new use cases (like ours) but for existing ones is IMHO a good thing to have. I'll amend the KIP and add a bit more detail to the motivation and alternatives sections, so the usefulness of this KIP is better understood. 
Now, for the first point of your technical observations (regarding protocolTypes()), I don't think it matters in this context, as the protocol name and subtype are only relevant in the context of a consumer group and group rebalance. It really doesn't matter if two different libraries decide to name their protocols the same. For item #2, I was under the impression that, because these classes all implement the org.apache.kafka.common.protocol.[Message, ApiMessage] interface, they are implicitly part of the Kafka protocol and the top-level API. Isn't that really the case? And finally, for #3, the goal I had in mind when creating this KIP was a small one: to provide an interface that users can rely on when extending the AbstractCoordinator. So my thought was that, while the AbstractCoordinator itself uses some internal APIs (like ConsumerNetworkClient, ConsumerMetadata and so on), those can remain internal. But it probably makes sense to at least explore the possibility of moving the whole AbstractCoordinator class to be part of the public API. I'll do that exercise, see what it entails, and update the KIP with my findings. Thanks again! Hector From: dev@kafka.apache.org At: 11/10/21 06:43:59 UTC-5:00 To: Hector Geraldino (BLOOMBERG/ 919 3RD A ) , dev@kafka.apache.org Subject: Re: [DISCUSS] KIP-795: Add public APIs for AbstractCoordinator Hi Hector, Thanks for the KIP. At a high level, I think the question to be answered by the community is "Should Kafka really be providing this kind of cluster management API?". While Kafka clients need this to provide their functionality, it's a different thing to expose that as a public API of the project, which is otherwise about providing a distributed event log / data streaming platform. Having a public API brings a significant commitment for API compatibility, which could impair the ability of the project to change the API in order to make improvements to the Kafka clients. 
The current AbstractCoordinator not being a supported API means we don't currently have to reason about compatibility here. So I think it would help the motivation section of the KIP to describe in a bit more detail the use case(s) you have for implementing your own coordinators. For example, are these applications using Kafka otherwise, or just to leverage this API? And what alternatives to implementing your own coordinators did you consider, and why did you reject them? From a technical point of view, there are a number of issues I think would need addressing in order to do something like this: 1. There probably ought to be a way to ensure that protocolTypes() don't collide, or at least reduce the chances of a collision. While probably unlikely in practice the consequences of different protocols having the same name could be pretty confusing to debug. 2. JoinGroupRequestData and JoinGroupResponseData are not public classes (none of the *RequestData or *ResponseData classes are, intentionally), so there would have to be an abstraction for them. 3. It's all well and good having an interface that anyone can implement, but there is no supported Kafka API w
Re: [PATCH 2/3] drm/format-helper: Add drm_fb_xrgb8888_to_xrgb2101010_dstclip()
On 22/11/2021 18.52, Pekka Paalanen wrote: On Wed, 17 Nov 2021 23:58:28 +0900 Hector Martin wrote: Add XRGB8888 emulation support for devices that can only do XRGB2101010. This is chiefly useful for simpledrm on Apple devices where the bootloader-provided framebuffer is 10-bit, which already works fine with simplefb. This is required to make simpledrm support this too. Signed-off-by: Hector Martin --- drivers/gpu/drm/drm_format_helper.c | 64 + include/drm/drm_format_helper.h | 4 ++ 2 files changed, 68 insertions(+) Hi Hector, I'm curious, since the bootloader seems to always set up a 10-bit mode, is there a reason for it that you can guess? Is the monitor in WCG or even HDR mode? My guess is that Apple prefers to use 10-bit framebuffers for seamless handover with their graphics stack, which presumably uses 10-bit framebuffers these days. It seems to be unconditional; I've never seen anything but 10 bits across all Apple devices, both with the internal panels on laptops and with bog-standard external displays on the Mac Mini via HDMI. HDR is not necessary; even very dumb capture cards and old screens get a 10-bit framebuffer in memory. The only time I see an 8-bit framebuffer is with *no* monitor connected on the Mini, in which case you get an 8-bit 640x1136 dummy framebuffer (that's the iPhone 5 screen resolution... :-) ) -- Hector Martin (mar...@marcan.st) Public Key: https://mrcn.st/pub
Re: reportbug fail
On 21/11/21 3:04 am, Lee wrote: I wanted to create a bug report for meld but couldn't find any info on how to other than "use reportbug" :( I see your problem is solved, but for future reference, this page has info on reporting bugs via email: https://www.debian.org/Bugs/Reporting Cheers, Richard
Re: [PATCH 1/3] drm/simpledrm: Bind to OF framebuffers in /chosen
On 18/11/2021 18.19, Thomas Zimmermann wrote: Hi Am 17.11.21 um 15:58 schrieb Hector Martin: @@ -897,5 +898,21 @@ static struct platform_driver simpledrm_platform_driver = { module_platform_driver(simpledrm_platform_driver); +static int __init simpledrm_init(void) +{ + struct device_node *np; + + if (IS_ENABLED(CONFIG_OF_ADDRESS) && of_chosen) { + for_each_child_of_node(of_chosen, np) { + if (of_device_is_compatible(np, "simple-framebuffer")) + of_platform_device_create(np, NULL, NULL); + } + } + + return 0; +} + +fs_initcall(simpledrm_init); + Simpledrm is just a driver, but this is platform setup code. Why is this code located here and not under arch/ or drivers/firmware/? I know that other drivers do similar things, it doesn't seem to belong here. This definitely doesn't belong in either of those, since it is not arch- or firmware-specific. It is implementing support for the standard simple-framebuffer OF binding, which specifies that it must be located within the /chosen node (and thus the default OF setup code won't do the matching for you); this applies to all OF platforms [1] Adding Rob; do you think this should move from simplefb/simpledrm to common OF code? (where?) [1] Documentation/devicetree/bindings/display/simple-framebuffer.yaml -- Hector Martin (mar...@marcan.st) Public Key: https://mrcn.st/pub
[PATCH v2] iommu/io-pgtable-arm: Fix table descriptor paddr formatting
Table descriptors were being installed without properly formatting the address using paddr_to_iopte, which does not match up with the iopte_deref in __arm_lpae_map. This is incorrect for the LPAE pte format, as it does not handle the high bits properly. This was found on Apple T6000 DARTs, which require a new pte format (different shift); adding support for that to paddr_to_iopte/iopte_to_paddr caused it to break badly, as even <48-bit addresses would end up incorrect in that case. Fixes: 6c89928ff7a0 ("iommu/io-pgtable-arm: Support 52-bit physical address") Acked-by: Robin Murphy Signed-off-by: Hector Martin --- drivers/iommu/io-pgtable-arm.c | 9 + 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c index dd9e47189d0d..94ff319ae8ac 100644 --- a/drivers/iommu/io-pgtable-arm.c +++ b/drivers/iommu/io-pgtable-arm.c @@ -315,11 +315,12 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data, static arm_lpae_iopte arm_lpae_install_table(arm_lpae_iopte *table, arm_lpae_iopte *ptep, arm_lpae_iopte curr, -struct io_pgtable_cfg *cfg) +struct arm_lpae_io_pgtable *data) { arm_lpae_iopte old, new; + struct io_pgtable_cfg *cfg = &data->iop.cfg; - new = __pa(table) | ARM_LPAE_PTE_TYPE_TABLE; + new = paddr_to_iopte(__pa(table), data) | ARM_LPAE_PTE_TYPE_TABLE; if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS) new |= ARM_LPAE_PTE_NSTABLE; @@ -380,7 +381,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova, if (!cptep) return -ENOMEM; - pte = arm_lpae_install_table(cptep, ptep, 0, cfg); + pte = arm_lpae_install_table(cptep, ptep, 0, data); if (pte) __arm_lpae_free_pages(cptep, tblsz, cfg); } else if (!cfg->coherent_walk && !(pte & ARM_LPAE_PTE_SW_SYNC)) { @@ -592,7 +593,7 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data, __arm_lpae_init_pte(data, blk_paddr, pte, lvl, 1, &tablep[i]); } - pte = arm_lpae_install_table(tablep, ptep, blk_pte, cfg); + pte = 
arm_lpae_install_table(tablep, ptep, blk_pte, data); if (pte != blk_pte) { __arm_lpae_free_pages(tablep, tablesz, cfg); /* -- 2.33.0 ___ iommu mailing list iommu@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/iommu
Bug#1000139: lxc-templates: Bad security sources on bullseye container built on buster
Package: lxc-templates Version: 3.0.4-0+deb10u1 Severity: normal Dear Maintainer, Bug #970067 has been fixed, enabling the building of bullseye machines with a correct sources.list It is only available in bullseye+, however, so building a bullseye container on buster doesn't work correctly (the security sources line is wrong). Could this be fixed in buster as well? Thanks, Richard
[ansible-project] lineinfile in lxc_container?
Hi all, I'm using ansible to set up lxc containers, using delegation to the container host. One task looks like this: - name: add ansible user to sudoers lineinfile: dest: "/var/lib/lxc/{{ inventory_hostname }}/rootfs/etc/sudoers" state: present regexp: "^ansible" line: 'ansible ALL=(ALL) NOPASSWD: ALL' insertafter: '^root' validate: '/usr/sbin/visudo -cf %s' delegate_to: "{{ container_host }}" when: start_container|bool That has been working fine, until I tried to create a debian bullseye container on a buster host. Unfortunately, the sudoers format has changed slightly, so the buster visudo won't accept the bullseye sudoers file (#includedir is now @includedir). I tried giving the path to the bullseye visudo, but it's dynamically linked and doesn't work on the buster system. I could potentially use the lxc_container module to run a command in the container, but that means I lose lineinfile, and have to do more stuff manually. Or I could use my temporary workaround, and just assume my sudoers file is ok, and skip validation. Another option is to add an extra lineinfile task (before that one) to replace @includedir with #includedir, since it's backwards compatible, but that seems too hackish. Any other suggestions? Cheers, Richard -- You received this message because you are subscribed to the Google Groups "Ansible Project" group. To unsubscribe from this group and stop receiving emails from it, send an email to ansible-project+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/ansible-project/39771264-b079-ff6e-15a6-e018d95dd6fd%40walnut.gen.nz.
[PATCH] iommu/io-pgtable-arm: Fix table descriptor paddr formatting
Table descriptors were being installed without properly formatting the address using paddr_to_iopte, which does not match up with the iopte_deref in __arm_lpae_map. This is incorrect for the LPAE pte format, as it does not handle the high bits properly. This was found on Apple T6000 DARTs, which require a new pte format (different shift); adding support for that to paddr_to_iopte/iopte_to_paddr caused it to break badly, as even <48-bit addresses would end up incorrect in that case. Signed-off-by: Hector Martin --- drivers/iommu/io-pgtable-arm.c | 14 +++--- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c index dd9e47189d0d..b636e2737607 100644 --- a/drivers/iommu/io-pgtable-arm.c +++ b/drivers/iommu/io-pgtable-arm.c @@ -315,12 +315,12 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data, static arm_lpae_iopte arm_lpae_install_table(arm_lpae_iopte *table, arm_lpae_iopte *ptep, arm_lpae_iopte curr, -struct io_pgtable_cfg *cfg) +struct arm_lpae_io_pgtable *data) { arm_lpae_iopte old, new; - new = __pa(table) | ARM_LPAE_PTE_TYPE_TABLE; - if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS) + new = paddr_to_iopte(__pa(table), data) | ARM_LPAE_PTE_TYPE_TABLE; + if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS) new |= ARM_LPAE_PTE_NSTABLE; /* @@ -332,11 +332,11 @@ static arm_lpae_iopte arm_lpae_install_table(arm_lpae_iopte *table, old = cmpxchg64_relaxed(ptep, curr, new); - if (cfg->coherent_walk || (old & ARM_LPAE_PTE_SW_SYNC)) + if (data->iop.cfg.coherent_walk || (old & ARM_LPAE_PTE_SW_SYNC)) return old; /* Even if it's not ours, there's no point waiting; just kick it */ - __arm_lpae_sync_pte(ptep, 1, cfg); + __arm_lpae_sync_pte(ptep, 1, &data->iop.cfg); if (old == curr) WRITE_ONCE(*ptep, new | ARM_LPAE_PTE_SW_SYNC); @@ -380,7 +380,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova, if (!cptep) return -ENOMEM; - pte = arm_lpae_install_table(cptep, ptep, 0, cfg); + 
pte = arm_lpae_install_table(cptep, ptep, 0, data); if (pte) __arm_lpae_free_pages(cptep, tblsz, cfg); } else if (!cfg->coherent_walk && !(pte & ARM_LPAE_PTE_SW_SYNC)) { @@ -592,7 +592,7 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data, __arm_lpae_init_pte(data, blk_paddr, pte, lvl, 1, &tablep[i]); } - pte = arm_lpae_install_table(tablep, ptep, blk_pte, cfg); + pte = arm_lpae_install_table(tablep, ptep, blk_pte, data); if (pte != blk_pte) { __arm_lpae_free_pages(tablep, tablesz, cfg); /* -- 2.33.0
Re: [PATCH] drm/format-helper: Fix dst computation in drm_fb_xrgb8888_to_rgb888_dstclip()
On 17/11/2021 23.56, Thomas Zimmermann wrote: Hi Am 17.11.21 um 15:22 schrieb Hector Martin: The dst pointer was being advanced by the clip width, not the full line stride, resulting in corruption. The clip offset was also calculated incorrectly. Cc: sta...@vger.kernel.org Signed-off-by: Hector Martin Thanks for your patch, but you're probably on the wrong branch. We rewrote this code recently and fixed the issue in drm-misc-next. [1][2] Oops. I was on linux-next as of Nov 1. Looks like I missed it by a week! Sounds like I'm going to have to rebase/rewrite the other series I just sent too... -- Hector Martin (mar...@marcan.st) Public Key: https://mrcn.st/pub
[PATCH 3/3] drm/simpledrm: Enable XRGB2101010 format
This is the format used by the bootloader framebuffer on Apple ARM64 platforms, and is already supported by simplefb. This avoids regressing on these platforms when simpledrm is enabled and replaces simplefb. Signed-off-by: Hector Martin --- drivers/gpu/drm/tiny/simpledrm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/tiny/simpledrm.c b/drivers/gpu/drm/tiny/simpledrm.c index 2c84f2ea1fa2..b4b69f3a7e79 100644 --- a/drivers/gpu/drm/tiny/simpledrm.c +++ b/drivers/gpu/drm/tiny/simpledrm.c @@ -571,7 +571,7 @@ static const uint32_t simpledrm_default_formats[] = { //DRM_FORMAT_XRGB1555, //DRM_FORMAT_ARGB1555, DRM_FORMAT_RGB888, - //DRM_FORMAT_XRGB2101010, + DRM_FORMAT_XRGB2101010, //DRM_FORMAT_ARGB2101010, }; -- 2.33.0
[PATCH 2/3] drm/format-helper: Add drm_fb_xrgb8888_to_xrgb2101010_dstclip()
Add XRGB8888 emulation support for devices that can only do XRGB2101010. This is chiefly useful for simpledrm on Apple devices where the bootloader-provided framebuffer is 10-bit, which already works fine with simplefb. This is required to make simpledrm support this too. Signed-off-by: Hector Martin --- drivers/gpu/drm/drm_format_helper.c | 64 + include/drm/drm_format_helper.h | 4 ++ 2 files changed, 68 insertions(+) diff --git a/drivers/gpu/drm/drm_format_helper.c b/drivers/gpu/drm/drm_format_helper.c index 69fde60e36b3..5998e57d6ff2 100644 --- a/drivers/gpu/drm/drm_format_helper.c +++ b/drivers/gpu/drm/drm_format_helper.c @@ -378,6 +378,60 @@ void drm_fb_xrgb8888_to_rgb888_dstclip(void __iomem *dst, unsigned int dst_pitch } EXPORT_SYMBOL(drm_fb_xrgb8888_to_rgb888_dstclip); +static void drm_fb_xrgb8888_to_xrgb2101010_line(u32 *dbuf, u32 *sbuf, + unsigned int pixels) +{ + unsigned int x; + + for (x = 0; x < pixels; x++) { + *dbuf++ = ((sbuf[x] & 0x000000FF) << 2) | + ((sbuf[x] & 0x0000FF00) << 4) | + ((sbuf[x] & 0x00FF0000) << 6); + } +} + +/** + * drm_fb_xrgb8888_to_xrgb2101010_dstclip - Convert XRGB8888 to XRGB2101010 clip + * buffer + * @dst: XRGB2101010 destination buffer (iomem) + * @dst_pitch: destination buffer pitch + * @vaddr: XRGB8888 source buffer + * @fb: DRM framebuffer + * @clip: Clip rectangle area to copy + * + * Drivers can use this function for XRGB2101010 devices that don't natively + * support XRGB8888. + * + * This function applies clipping on dst, i.e. the destination is a + * full (iomem) framebuffer but only the clip rect content is copied over. 
+ */ +void drm_fb_xrgb_to_xrgb2101010_dstclip(void __iomem *dst, + unsigned int dst_pitch, void *vaddr, + struct drm_framebuffer *fb, + struct drm_rect *clip) +{ + size_t linepixels = clip->x2 - clip->x1; + size_t dst_len = linepixels * 4; + unsigned int y, lines = clip->y2 - clip->y1; + void *dbuf; + + dbuf = kmalloc(dst_len, GFP_KERNEL); + if (!dbuf) + return; + + vaddr += clip_offset(clip, fb->pitches[0], sizeof(u32)); + dst += clip_offset(clip, dst_pitch, sizeof(u32)); + for (y = 0; y < lines; y++) { + drm_fb_xrgb_to_xrgb2101010_line(dbuf, vaddr, linepixels); + memcpy_toio(dst, dbuf, dst_len); + vaddr += fb->pitches[0]; + dst += dst_pitch; + } + + kfree(dbuf); +} +EXPORT_SYMBOL(drm_fb_xrgb_to_xrgb2101010_dstclip); + /** * drm_fb_xrgb_to_gray8 - Convert XRGB to grayscale * @dst: 8-bit grayscale destination buffer @@ -464,6 +518,10 @@ int drm_fb_blit_rect_dstclip(void __iomem *dst, unsigned int dst_pitch, fb_format = DRM_FORMAT_XRGB; if (dst_format == DRM_FORMAT_ARGB) dst_format = DRM_FORMAT_XRGB; + if (fb_format == DRM_FORMAT_ARGB2101010) + fb_format = DRM_FORMAT_XRGB2101010; + if (dst_format == DRM_FORMAT_ARGB2101010) + dst_format = DRM_FORMAT_XRGB2101010; if (dst_format == fb_format) { drm_fb_memcpy_dstclip(dst, dst_pitch, vmap, fb, clip); @@ -482,6 +540,12 @@ int drm_fb_blit_rect_dstclip(void __iomem *dst, unsigned int dst_pitch, vmap, fb, clip); return 0; } + } else if (dst_format == DRM_FORMAT_XRGB2101010) { + if (fb_format == DRM_FORMAT_XRGB) { + drm_fb_xrgb_to_xrgb2101010_dstclip(dst, dst_pitch, + vmap, fb, clip); + return 0; + } } return -EINVAL; diff --git a/include/drm/drm_format_helper.h b/include/drm/drm_format_helper.h index e86925cf07b9..a0faa710878b 100644 --- a/include/drm/drm_format_helper.h +++ b/include/drm/drm_format_helper.h @@ -29,6 +29,10 @@ void drm_fb_xrgb_to_rgb888(void *dst, void *src, struct drm_framebuffer *fb, void drm_fb_xrgb_to_rgb888_dstclip(void __iomem *dst, unsigned int dst_pitch, void *vaddr, struct drm_framebuffer *fb, 
struct drm_rect *clip); +void drm_fb_xrgb_to_xrgb2101010_dstclip(void __iomem *dst, + unsigned int dst_pitch, void *vaddr, + struct drm_framebuffer *fb, + struct drm_rect *clip); void drm_fb_xrgb_to_gray8(u8 *dst, void *vaddr, struct drm_framebuffer *fb, struct drm_rect *clip); -- 2.33.0
[PATCH 1/3] drm/simpledrm: Bind to OF framebuffers in /chosen
This matches the simplefb behavior; these nodes are not matched by the standard OF machinery. This fixes a regression when simpledrm replaces simplefb.

Signed-off-by: Hector Martin
---
 drivers/gpu/drm/tiny/simpledrm.c | 17 +
 1 file changed, 17 insertions(+)

diff --git a/drivers/gpu/drm/tiny/simpledrm.c b/drivers/gpu/drm/tiny/simpledrm.c
index 481b48bde047..2c84f2ea1fa2 100644
--- a/drivers/gpu/drm/tiny/simpledrm.c
+++ b/drivers/gpu/drm/tiny/simpledrm.c
@@ -2,6 +2,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -897,5 +898,21 @@ static struct platform_driver simpledrm_platform_driver = {
 
 module_platform_driver(simpledrm_platform_driver);
 
+static int __init simpledrm_init(void)
+{
+	struct device_node *np;
+
+	if (IS_ENABLED(CONFIG_OF_ADDRESS) && of_chosen) {
+		for_each_child_of_node(of_chosen, np) {
+			if (of_device_is_compatible(np, "simple-framebuffer"))
+				of_platform_device_create(np, NULL, NULL);
+		}
+	}
+
+	return 0;
+}
+
+fs_initcall(simpledrm_init);
+
 MODULE_DESCRIPTION(DRIVER_DESC);
 MODULE_LICENSE("GPL v2");
--
2.33.0
[PATCH 0/3] drm/simpledrm: Apple M1 / DT platform support fixes
Hi DRM folks,

This short series makes simpledrm work on Apple M1 (including Pro/Max) platforms the way simplefb already does, by adding XRGB2101010 support and making it bind to framebuffers in /chosen the same way simplefb does.

This avoids breaking the bootloader-provided framebuffer console when simpledrm is selected to replace simplefb, as these FBs always seem to be 10-bit (at least when a real screen is attached).

Hector Martin (3):
  drm/simpledrm: Bind to OF framebuffers in /chosen
  drm/format-helper: Add drm_fb_xrgb8888_to_xrgb2101010_dstclip()
  drm/simpledrm: Enable XRGB2101010 format

 drivers/gpu/drm/drm_format_helper.c | 64 +
 drivers/gpu/drm/tiny/simpledrm.c    | 19 -
 include/drm/drm_format_helper.h     |  4 ++
 3 files changed, 86 insertions(+), 1 deletion(-)

--
2.33.0
[PATCH] drm/format-helper: Fix dst computation in drm_fb_xrgb8888_to_rgb888_dstclip()
The dst pointer was being advanced by the clip width, not the full line stride, resulting in corruption. The clip offset was also calculated incorrectly.

Cc: sta...@vger.kernel.org
Signed-off-by: Hector Martin
---
 drivers/gpu/drm/drm_format_helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_format_helper.c b/drivers/gpu/drm/drm_format_helper.c
index e676921422b8..12bc6b45e95b 100644
--- a/drivers/gpu/drm/drm_format_helper.c
+++ b/drivers/gpu/drm/drm_format_helper.c
@@ -366,12 +366,12 @@ void drm_fb_xrgb8888_to_rgb888_dstclip(void __iomem *dst, unsigned int dst_pitch
 		return;
 
 	vaddr += clip_offset(clip, fb->pitches[0], sizeof(u32));
-	dst += clip_offset(clip, dst_pitch, sizeof(u16));
+	dst += clip_offset(clip, dst_pitch, 3);
 	for (y = 0; y < lines; y++) {
 		drm_fb_xrgb8888_to_rgb888_line(dbuf, vaddr, linepixels);
 		memcpy_toio(dst, dbuf, dst_len);
 		vaddr += fb->pitches[0];
-		dst += dst_len;
+		dst += dst_pitch;
 	}
 
 	kfree(dbuf);
--
2.33.0
Re: [DISCUSS] KIP-795: Add public APIs for AbstractCoordinator
Hi Tom,

Thanks for taking the time to review the KIP. I think it's reasonable to ask whether Kafka's Group Coordination protocol should be used for use cases other than the distributed event log. This was actually briefly addressed by Gwen Shapira during her presentation at the Strange Loop conference in '18 (a link to the video is included in the KIP), in which she explains the protocol internals in greater detail. We should also keep in mind that this protocol is already being used for other use cases outside of core Kafka: Confluent Schema Registry uses it to determine leadership between members of a cluster, Kafka Connect uses it for task assignments, and Kafka Streams does the same for partition and task distribution, and so on. So having a public, stable API not just for new use cases (like ours) but for existing ones is IMHO a good thing to have. I'll amend the KIP and add a bit more detail to the motivation and alternatives sections, so the usefulness of this KIP is better understood.

Now, for the first point of your technical observations (regarding protocolTypes()), I don't think it matters in this context, as the protocol name and subtype are only relevant in the context of a consumer group and group rebalance. It really doesn't matter if two different libraries decide to name their protocols the same.

For item #2, I was under the impression that, because these classes all implement the org.apache.kafka.common.protocol.[Message, ApiMessage] interfaces, they are implicitly part of the Kafka protocol and the top-level API. Isn't that the case?

And finally, for #3, the goal I had in mind when creating this KIP was a small one: to provide an interface that users can rely on when extending the AbstractCoordinator. So my thought was that, while the AbstractCoordinator itself uses some internal APIs (like ConsumerNetworkClient, ConsumerMetadata and so on), those can remain internal.
But it probably makes sense to at least explore the possibility of moving the whole AbstractCoordinator class to be part of the public API. I'll do that exercise, see what it entails, and update the KIP with my findings. Thanks again! Hector From: dev@kafka.apache.org At: 11/10/21 06:43:59 UTC-5:00 To: Hector Geraldino (BLOOMBERG/ 919 3RD A ) , dev@kafka.apache.org Subject: Re: [DISCUSS] KIP-795: Add public APIs for AbstractCoordinator Hi Hector, Thanks for the KIP. At a high level, I think the question to be answered by the community is "Should Kafka really be providing this kind of cluster management API?". While Kafka clients need this to provide their functionality, it's a different thing to expose that as a public API of the project, which is otherwise about providing a distributed event log / data streaming platform. Having a public API brings a significant commitment to API compatibility, which could impair the ability of the project to change the API in order to make improvements to the Kafka clients. The current AbstractCoordinator not being a supported API means we don't currently have to reason about compatibility here. So I think it would help the motivation section of the KIP to describe in a bit more detail the use case(s) you have for implementing your own coordinators. For example, are these applications using Kafka otherwise, or just to leverage this API? And what alternatives to implementing your own coordinators did you consider, and why did you reject them? From a technical point of view, there are a number of issues I think would need addressing in order to do something like this: 1. There probably ought to be a way to ensure that protocolTypes() don't collide, or at least reduce the chances of a collision. While probably unlikely in practice, the consequences of different protocols having the same name could be pretty confusing to debug. 2.
JoinGroupRequestData and JoinGroupResponseData are not public classes (none of the *RequestData or *ResponseData classes are, intentionally), so there would have to be an abstraction for them. 3. It's all well and good having an interface that anyone can implement, but there is no supported Kafka API which takes an instance as a parameter (i.e. where do you plug your implementation in without having to use a bunch of other non-public Kafka APIs?) I assume in your current usage you're having to make use of other non-supported client APIs to make use of your coordinator. The KIP isn't really a complete piece of work without a way to use a custom implementation, in my opinion. It would be confusing if it looked like we were encouraging people to use those other non-supported APIs by making this coordinator public. Kind regards, Tom On Mon, Nov 8, 2021 at 2:01 PM Hector Geraldino (BLOOMBERG/ 919 3RD A) < hgerald...@bloomberg.net> wrote: > Hi Kafka devs, > > I would like to start the discussion of KIP-795: Add public APIs for > AbstractCoordinator > > > https://cwiki.apache.org/confluence/dis
[DISCUSS] KIP-795: Add public APIs for AbstractCoordinator
Hi Kafka devs, I would like to start the discussion of KIP-795: Add public APIs for AbstractCoordinator https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator Looking forward to feedback from the community. Regards, Hector
DISMISS - Re:[DISCUSS] KIP-784: Add public APIs for AbstractCoordinator
Please dismiss this message, as the KIP number is wrong. I'll send a new message with the correct KIP shortly. Apologies From: dev@kafka.apache.org At: 11/08/21 08:45:22 UTC-5:00To: dev@kafka.apache.org Subject: [DISCUSS] KIP-784: Add public APIs for AbstractCoordinator Hi Kafka devs, I would like to start the discussion of KIP-784: Add public APIs for AbstractCoordinator https://cwiki.apache.org/confluence/display/KAFKA/KIP-784%3A+Add+public+APIs+for +AbstractCoordinator Looking forward for some feedback from the community. Regards, Hector
[jira] [Updated] (KAFKA-13434) Add a public API for AbstractCoordinator
[ https://issues.apache.org/jira/browse/KAFKA-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hector Geraldino updated KAFKA-13434: - Description: KIP-795: [https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator|https://cwiki.apache.org/confluence/display/KAFKA/KIP-784%3A+Add+public+APIs+for+AbstractCoordinator] The AbstractCoordinator should have a companion public interface that is part of Kafka's public API, so backwards compatibility can be maintained in future versions of the client libraries was: KIP-784: [https://cwiki.apache.org/confluence/display/KAFKA/KIP-784%3A+Add+public+APIs+for+AbstractCoordinator] The AbstractCoordinator should have a companion public interface that is part of Kafka's public API, so backwards compatibility can be maintained in future versions of the client libraries > Add a public API for AbstractCoordinator > > > Key: KAFKA-13434 > URL: https://issues.apache.org/jira/browse/KAFKA-13434 > Project: Kafka > Issue Type: Improvement > Components: clients > Reporter: Hector Geraldino >Assignee: Hector Geraldino >Priority: Major > > KIP-795: > [https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator|https://cwiki.apache.org/confluence/display/KAFKA/KIP-784%3A+Add+public+APIs+for+AbstractCoordinator] > The AbstractCoordinator should have a companion public interface that is part > of Kafka's public API, so backwards compatibility can be maintained in future > versions of the client libraries -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (KAFKA-13434) Add a public API for AbstractCoordinator
[ https://issues.apache.org/jira/browse/KAFKA-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hector Geraldino updated KAFKA-13434: - Description: KIP-795: https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator The AbstractCoordinator should have a companion public interface that is part of Kafka's public API, so backwards compatibility can be maintained in future versions of the client libraries was: KIP-795: [https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator|https://cwiki.apache.org/confluence/display/KAFKA/KIP-784%3A+Add+public+APIs+for+AbstractCoordinator] The AbstractCoordinator should have a companion public interface that is part of Kafka's public API, so backwards compatibility can be maintained in future versions of the client libraries > Add a public API for AbstractCoordinator > > > Key: KAFKA-13434 > URL: https://issues.apache.org/jira/browse/KAFKA-13434 > Project: Kafka > Issue Type: Improvement > Components: clients > Reporter: Hector Geraldino >Assignee: Hector Geraldino >Priority: Major > > KIP-795: > https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator > The AbstractCoordinator should have a companion public interface that is part > of Kafka's public API, so backwards compatibility can be maintained in future > versions of the client libraries -- This message was sent by Atlassian Jira (v8.20.1#820001)
[DISCUSS] KIP-784: Add public APIs for AbstractCoordinator
Hi Kafka devs, I would like to start the discussion of KIP-784: Add public APIs for AbstractCoordinator https://cwiki.apache.org/confluence/display/KAFKA/KIP-784%3A+Add+public+APIs+for+AbstractCoordinator Looking forward to feedback from the community. Regards, Hector
[jira] [Updated] (KAFKA-13434) Add a public API for AbstractCoordinator
[ https://issues.apache.org/jira/browse/KAFKA-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hector Geraldino updated KAFKA-13434: - Summary: Add a public API for AbstractCoordinator (was: Add a public API for AbstractCoordinatos) > Add a public API for AbstractCoordinator > > > Key: KAFKA-13434 > URL: https://issues.apache.org/jira/browse/KAFKA-13434 > Project: Kafka > Issue Type: Improvement > Components: clients > Reporter: Hector Geraldino >Assignee: Hector Geraldino >Priority: Major > > KIP-784: > [https://cwiki.apache.org/confluence/display/KAFKA/KIP-784%3A+Add+public+APIs+for+AbstractCoordinator] > The AbstractCoordinator should have a companion public interface that is part > of Kafka's public API, so backwards compatibility can be maintained in future > versions of the client libraries -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-13434) Add a public API for AbstractCoordinatos
Hector G created KAFKA-13434: Summary: Add a public API for AbstractCoordinatos Key: KAFKA-13434 URL: https://issues.apache.org/jira/browse/KAFKA-13434 Project: Kafka Issue Type: Improvement Components: clients Reporter: Hector G Assignee: Hector G KIP-784: [https://cwiki.apache.org/confluence/display/KAFKA/KIP-784%3A+Add+public+APIs+for+AbstractCoordinator] The AbstractCoordinator should have a companion public interface that is part of Kafka's public API, so backwards compatibility can be maintained in future versions of the client libraries -- This message was sent by Atlassian Jira (v8.3.4#803005)
Wiki Permissions
Hello, I'd like to be added to the contributors list, so I can submit a KIP. My Jira ID is: hgeraldino Wiki ID: hgeraldino Thanks, Hector
Re: question from total newbie. a little help please
On 18/10/21 2:55 am, john doe wrote: With W10 you have also the possibility of using 'WLS' an order alternative would be to install Debian as a VM. I think perhaps you mean WSL - Windows Subsystem for Linux? https://docs.microsoft.com/en-us/windows/wsl/install I've never used it myself. Richard
Re: [Sid] Firefox problem
On 17/10/21 9:55 pm, Grzesiek wrote: Hi there, On some of machines I use, after opening of Firefox I get empty browser window (with menus, decorations etc) but nothing else is displayed. Its impossible to open menu, type address, etc. The only thing you can do is to close the window. After changing display configuration (rotate to portrait, adding external monitor..) it starts to work as expected. You do not even need to reopen. Moreover, it looks that Firefox was running ok all the time but nothing was displayed. After recent updates on some machines I get the same problem using firefox-esr. The only error mesg I get is: ###!!! [Parent][RunMessage] Error: Channel closing: too late to send/recv, messages will be lost Are you seeing that message in the shell that you started it from? If not, and if you're not running it in a shell, try that to see if there are more messages? Cheers, Richard
Re: replacement of sqsh for debian 11
On 28/10/21 3:05 pm, Greg Wooledge wrote: Nobody could figure out that you were trying to connect to an existing proprietary database. Well, I did. Because that's what sqsh is for - it's a client, not a DBMS. But I guess it could have been clearer. Cheers, Richard
Re: [PATCH] iommu/dart: Initialize DART_STREAMS_ENABLE
On 20/10/2021 01.22, Sven Peter wrote:

DART has an additional global register to control which streams are isolated. This register is a bit redundant since DART_TCR can already be used to control isolation and is usually initialized to DART_STREAM_ALL by the time we get control. Some DARTs (namely the one used for the audio controller) however have some streams disabled initially. Make sure those work by initializing DART_STREAMS_ENABLE during reset.

Reported-by: Martin Povišer
Signed-off-by: Sven Peter
---
While this could technically count as a fix I don't think it needs to go to 5.15 since no driver that requires this is in there. The first driver that needs this will likely only be ready for the 5.17 merge window.

 drivers/iommu/apple-dart.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/drivers/iommu/apple-dart.c b/drivers/iommu/apple-dart.c
index ce92195db638..6f8c240d8d40 100644
--- a/drivers/iommu/apple-dart.c
+++ b/drivers/iommu/apple-dart.c
@@ -70,6 +70,8 @@
 #define DART_ERROR_ADDR_HI 0x54
 #define DART_ERROR_ADDR_LO 0x50
 
+#define DART_STREAMS_ENABLE 0xfc
+
 #define DART_TCR(sid) (0x100 + 4 * (sid))
 #define DART_TCR_TRANSLATE_ENABLE BIT(7)
 #define DART_TCR_BYPASS0_ENABLE BIT(8)
@@ -299,6 +301,9 @@ static int apple_dart_hw_reset(struct apple_dart *dart)
 	apple_dart_hw_disable_dma(&stream_map);
 	apple_dart_hw_clear_all_ttbrs(&stream_map);
 
+	/* enable all streams globally since TCR is used to control isolation */
+	writel(DART_STREAM_ALL, dart->regs + DART_STREAMS_ENABLE);
+
 	/* clear any pending errors before the interrupt is unmasked */
 	writel(readl(dart->regs + DART_ERROR), dart->regs + DART_ERROR);

Reviewed-by: Hector Martin

--
Hector Martin (mar...@marcan.st)
Public Key: https://mrcn.st/pub
___ iommu mailing list iommu@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/iommu
Re: [Koha] LDAP changes from 19.xx to 21.05 ?
I'm using the debian package, and upgraded with apt-get update; apt-get dist-upgrade. Just ran them again now and it doesn't update koha-common, so I should be at the last 21.05.xx version, dpkg reports it is: 21.05.04-1 The upgrade itself didn't show any errors, but plack-error.log was filled with that one about ldapserver, so I created a new koha site, and used that configuration file to adapt our old configuration to the new koha version, but just copied the LDAP part over as it was untouched. After reading the bug report I removed id="ldapserver" and listenref="ldapserver" from my configuration and it works now. On 10/12/21 2:31 AM, Jonathan Druart wrote: Looks like you hit bug 28385, but it's supposed to be fixed on 21.05. Are you using the debian package? How did you upgrade? https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=28385 Le mar. 12 oct. 2021 à 02:17, Hector Gonzalez Jaime a écrit : Hello, we recently tried to update our development server to 21.05, and it mostly works, but it does not like our LDAP setup, which is unchanged. plack-error.log is filled with errors from this one: Error while loading /etc/koha/plack.psgi: No ldapserver "hostname" defined in KOHA_CONF: /etc/koha/sites/clavius/koha-conf.xml at /usr/share/koha/lib/C4/Auth_with_ldap.pm line 58, line 755. Now, we did not touch our LDAP configuration, which was correct and working with 19.11. Can somebody help find out what changed? Our ldap configuration is like this now: ip.address.for.server ou=users,dc=domain,dc=example,dc=org 1 1 1 uid=%s,ou=users,dc=domain,dc=example,dc=org Tijuana, BCN TJNA STUDENT 1 Thanks. -- Hector Gonzalez ca...@genac.org ___ Koha mailing list http://koha-community.org Koha@lists.katipo.co.nz Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha -- Hector Gonzalez ca...@genac.org ___ Koha mailing list http://koha-community.org Koha@lists.katipo.co.nz Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha
[Koha] LDAP changes from 19.xx to 21.05 ?
Hello, we recently tried to update our development server to 21.05, and it mostly works, but it does not like our LDAP setup, which is unchanged. plack-error.log is filled with errors from this one: Error while loading /etc/koha/plack.psgi: No ldapserver "hostname" defined in KOHA_CONF: /etc/koha/sites/clavius/koha-conf.xml at /usr/share/koha/lib/C4/Auth_with_ldap.pm line 58, line 755. Now, we did not touch our LDAP configuration, which was correct and working with 19.11. Can somebody help find out what changed? Our ldap configuration is like this now: ip.address.for.server ou=users,dc=domain,dc=example,dc=org 1 1 1 uid=%s,ou=users,dc=domain,dc=example,dc=org Tijuana, BCN TJNA STUDENT 1 Thanks. -- Hector Gonzalez ca...@genac.org ___ Koha mailing list http://koha-community.org Koha@lists.katipo.co.nz Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha
Re: [PATCH v2 00/11] Add Apple M1 support to PASemi i2c driver
On 11/10/2021 17.54, Wolfram Sang wrote: MAINTAINERS. It'll probably apply cleanly to 5.15-rc5 but if that happens again It doesn't because Linus' git doesn't have: Documentation/devicetree/bindings/pci/apple,pcie.yaml Because MAINTAINER dependencies can be a bit nasty, I suggest I drop the MAINTAINER additions for now and we add them later. Then, you can add the pasemi-core as well. D'accord? We can just split the MAINTAINERS changes into a separate patch and I can push that one through the SoC tree, along with other MAINTAINERS updates. Does that work for everyone? -- Hector Martin (mar...@marcan.st) Public Key: https://mrcn.st/pub
Re: buggy N-M (was: Debian 11: Unable to detect wireless interface on an old laptop) computer
This isn't really a good place to chip in, but the best I can find from the messages I haven't deleted ... On 29/09/21 2:00 am, Henning Follmann wrote: My comment to the OP was basically on the nebulous source (most VPN Providers) and the generalized categorization (N-M is buggy), which I disagree with. My own problems with NM, which may be related, seem to be shared with many in the 'OpenVPN community'. It seems that for configuring OpenVPN, NM does its own thing, and mostly ignores the 'standard' configuration files etc. that are covered by the OpenVPN documentation. Even some of the terminology seems to be different. That makes it very difficult to match up the changes I might make on my server to those that are needed on the (NM-managed) client. I had it working on my buster laptop, but with a reinstall of bullseye, combined with some changed defaults in the OpenVPN setup, I ended up giving up - I now start my VPN with systemctl rather than clicking through NM. I should probably write a script rather than trying to remember how vpns map to systemctl unit targets (?), but I don't use it very often, and have moved on ... I feel the NM maintainers could do more to talk to the OpenVPN folks, and maybe provide some tools to import standard configs, or generate something that NM can consume? Maybe vice-versa too. I don't know where those responsibilities lie. Cheers, Richard
Re: silence audio on locked screen?
On 28/09/21 11:33 pm, Dan Ritter wrote: Richard Hector wrote: On 27/09/21 11:39 pm, Dan Ritter wrote: > > One option is to run a mute and stop-playing command immediately > on screensaver interaction. > > For XFCE4, that's as easy as adding a panel object which runs an > application, pointing that at a script, and adding an > appropriate icon. Install xmacro. > > ~/bin/quiet-and-dark > > #!/bin/sh > #not actually tested > echo 'KeyStrPress XF86AudioPlay KeyStrRelease XF86AudioPlay' | xmacroplay :0 > echo 'KeyStrPress XF86AudioMute KeyStrRelease XF86AudioMute' | xmacroplay :0 > xscreensaver -command activate > > > You can also assign it to run as a keyboard shortcut. Thanks Dan, If I understand correctly, you're suggesting to create a clickable button which will mute the audio, and then creating a macro to do that from within a script, which I then need to run manually? That sounds inverted. The button executes a script when you click it; the script mutes and pauses audio, then activates the screensaver. There might be a way to invoke the mute-and-pause from the screensaver when it activates by itself, but I don't know that one and a few minutes searching didn't reveal it. I'd like this to still happen if the screen locks due to inactivity. I haven't found yet what triggers that, or where to configure the timeout. That's in Settings, Screensaver. Try right clicking on an empty area of desktop. Secondly, will it re-enable audio when the screen is unlocked? This won't, but the same invocations without the final screensaver activation will un-mute and start playing whatever is listening to XF86 media keys. Thanks Dan, I think I understand some of that. However, I'm reluctant to embark on one-off efforts, partly because I don't understand enough of the underlying structure, and partly because it only solves it for me (if I understood more, maybe I could contribute back, but I don't). 
As I say, I consider this to be a security flaw - people can hear something of what my computer's doing when I'm not there and it's supposedly locked. Do others agree with that? Also, I don't know how much of the screen locking function is shared between the many tools that are available for this purpose. Ideally, this problem (if it's a problem) should be fixed in all such tools. Does it sound reasonable then to submit a bug report to light-locker, with the suggestion that the maintainers contact the maintainers of similar/related packages as they see fit? Cheers, Richard
Re: silence audio on locked screen?
On 27/09/21 11:39 pm, Dan Ritter wrote: Richard Hector wrote: I'm using buster with xfce4, pulseaudio, and (I think) light-locker. When I lock my screen, audio continues to play (and system sounds are still heard). This seems to me like a way to leak information, and is also annoying to anyone nearby. It's then annoying for me when I discover somebody has unplugged my headphones to make them shut up :-) Any suggestions for making it be quiet? Perhaps a wishlist bug for light-locker? I don't know if it's even feasible, given the various combinations of audio system and screen lockers. One option is to run a mute and stop-playing command immediately on screensaver interaction. For XFCE4, that's as easy as adding a panel object which runs an application, pointing that at a script, and adding an appropriate icon. Install xmacro. ~/bin/quiet-and-dark #!/bin/sh #not actually tested echo 'KeyStrPress XF86AudioPlay KeyStrRelease XF86AudioPlay' | xmacroplay :0 echo 'KeyStrPress XF86AudioMute KeyStrRelease XF86AudioMute' | xmacroplay :0 xscreensaver -command activate You can also assign it to run as a keyboard shortcut. Thanks Dan, If I understand correctly, you're suggesting to create a clickable button which will mute the audio, and then creating a macro to do that from within a script, which I then need to run manually? I see two issues there, which were admittedly not in my original statement of requirements :-) I'd like this to still happen if the screen locks due to inactivity. I haven't found yet what triggers that, or where to configure the timeout. Secondly, will it re-enable audio when the screen is unlocked? Cheers, Richard
silence audio on locked screen?
Hi all, I'm using buster with xfce4, pulseaudio, and (I think) light-locker. When I lock my screen, audio continues to play (and system sounds are still heard). This seems to me like a way to leak information, and is also annoying to anyone nearby. It's then annoying for me when I discover somebody has unplugged my headphones to make them shut up :-) Any suggestions for making it be quiet? Perhaps a wishlist bug for light-locker? I don't know if it's even feasible, given the various combinations of audio system and screen lockers. Cheers, Richard
Re: [Koha] How to Update and Upgrade Koha
Hi, I'd like to, but I didn't have an account for the wiki, so I'm waiting for it to be enabled. On 9/20/21 4:47 AM, Jonathan Druart wrote: Hi Hector, Well detailed instructions, it would help to have them on the wiki page :) Would you mind updating the existing section? https://wiki.koha-community.org/wiki/Koha_on_Debian#Upgrade -- Hector Gonzalez ca...@genac.org ___ Koha mailing list http://koha-community.org Koha@lists.katipo.co.nz Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha
Re: Bug#994750: RFS: mazeofgalious/0.63-1 [ITA] -- The Maze of Galious
On 21/09/21 1:24 am, Parodper wrote: * URL : http://www.braingames.getput.com/mog/ No such site? Cheers, Richard
Re: [Koha] How to Update and Upgrade Koha
I forgot something, if koha requires additional software to be installed, when you run apt-get upgrade it will say something like this: The following packages have been kept back: koha-common In this case, you will need to do: apt-get dist-upgrade which will install the needed packages and then it will upgrade koha. On 9/15/21 6:25 PM, Hector Gonzalez Jaime wrote: Hi, as with anything else, you should first have a good (tested) backup of everything. Koha, server software, and database. Then, if you used the debian packages, you should check which version you are "tracking", something like: grep -Ri koha /etc/apt/* would return something like: /etc/apt/sources.list.d/koha.list:deb http://debian.koha-community.org/koha 19.11 main That means there is a file /etc/apt/sources.list.d/koha.list which you should edit and change to the version you want to track now. The content of the file would be: deb http://debian.koha-community.org/koha 21.05 main if you want to track koha 21.05 Then you would: apt-get update apt-get upgrade and it should do the upgrade. With the following exception, if apt-get upgrade says it would need to update mariadb-server or mysql-server, then you should upgrade that first and separately, like this: apt-get update apt-get install mariadb-server-10.3 apt-get upgrade If you don't upgrade mariadb first, it will be DOWN when koha wants to upgrade the database, leaving you with a half upgraded system. (And using that backup). Hope this helps. On 9/15/21 6:04 PM, Charles Kelley wrote: Hi, all! I think I have asked about this before, but I don't recall getting a definitive answer. But it's awhile, so here goes. I have seen announcements about updates and upgrades to Koha. ("Come and get it!") Alas, I don't find instructions for installing updates and upgrades. There are initial installation instructions aplenty, but alas, if there are ones for updates and upgrades, I don't see them. So I ask: How does one update or upgrades Koha? 
Thanks for your help, everyone. -- Hector Gonzalez ca...@genac.org ___ Koha mailing list http://koha-community.org Koha@lists.katipo.co.nz Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha
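For readers following this recipe later: the repository switch Hector describes boils down to the sketch below. The 19.11 -> 21.05 versions are the ones from this thread; the sed runs against a scratch copy of the list file so the sketch is safe to execute anywhere, and the privileged apt-get steps are left as comments.

```shell
# Sketch of the upgrade recipe from this thread. On a real server the
# file is /etc/apt/sources.list.d/koha.list and the apt-get commands
# are run as root; here we operate on a scratch copy.
list=$(mktemp)
echo 'deb http://debian.koha-community.org/koha 19.11 main' > "$list"

# Switch the tracked release (19.11 -> 21.05):
sed -i 's/ 19\.11 / 21.05 /' "$list"
cat "$list"   # deb http://debian.koha-community.org/koha 21.05 main

# Then, on the real system:
#   apt-get update
#   apt-get upgrade          # if koha-common is "kept back", instead:
#   apt-get dist-upgrade
# If apt-get wants to upgrade mariadb-server/mysql-server, do that
# first and separately, e.g.:
#   apt-get install mariadb-server-10.3
```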
Re: [Koha] How to Update and Upgrade Koha
Hi, as with anything else, you should first have a good (tested) backup of everything: Koha, server software, and database. Then, if you used the debian packages, you should check which version you are "tracking"; something like: grep -Ri koha /etc/apt/* would return something like: /etc/apt/sources.list.d/koha.list:deb http://debian.koha-community.org/koha 19.11 main That means there is a file /etc/apt/sources.list.d/koha.list which you should edit and change to the version you want to track now. The content of the file would be: deb http://debian.koha-community.org/koha 21.05 main if you want to track koha 21.05. Then you would run: apt-get update apt-get upgrade and it should do the upgrade. With the following exception: if apt-get upgrade says it would need to update mariadb-server or mysql-server, then you should upgrade that first and separately, like this: apt-get update apt-get install mariadb-server-10.3 apt-get upgrade If you don't upgrade mariadb first, it will be DOWN when koha wants to upgrade the database, leaving you with a half-upgraded system. (And using that backup.) Hope this helps.

On 9/15/21 6:04 PM, Charles Kelley wrote: Hi, all! I think I have asked about this before, but I don't recall getting a definitive answer. But it's been a while, so here goes. I have seen announcements about updates and upgrades to Koha. ("Come and get it!") Alas, I don't find instructions for installing updates and upgrades. There are initial installation instructions aplenty, but alas, if there are ones for updates and upgrades, I don't see them. So I ask: How does one update or upgrade Koha? Thanks for your help, everyone. -- Hector Gonzalez ca...@genac.org
Re: copy directory tree, mapping to new owners
On 14/09/21 6:50 pm, to...@tuxteam.de wrote: On Tue, Sep 14, 2021 at 12:17:05PM +1200, Richard Hector wrote: On 13/09/21 7:04 pm, to...@tuxteam.de wrote: >On Mon, Sep 13, 2021 at 11:45:02AM +1200, Richard Hector wrote: >>On 12/09/21 6:52 pm, john doe wrote: > >[...] > >>>If you are doing this in a script, I would use a temporary directory. >>>That way, in case of failure the destination directory is not wrongly >>>modified. >>> >>>EG: >>> >>>$ rsync >>> >>>Make the way you want it to be. >>> >>>$ rsync >> >>That is true, but firstly it would require more available space [...] > >This isn't necessary, as you could replace the second `rsync' by a `mv' >(provided your temp tree is on the same storage volume as your target >dir, that is). I was assuming the suggestion was to rsync the source to the temp while the destination still exists, before rsyncing or mv'ing over the top of it. Total of 3 copies (temporarily) rather than 2. Then, it's different. But in your scenario you would probably want to take down whatever "service" relies on the destination dir while the copy is in progress. This is all academic, since rsync with --usermap and --groupmap does what I want, in place. But john doe's proposal had the rationale of "That way, in case of failure the destination directory is not wrongly modified." That implies the destination is staying put. Well, I guess deleting it entirely avoids "wrongly modifying" it too :-) In any case, if you haven't the space, you haven't it. Sysadmin's life ain't always nice :) It's available if I want it; everything is on LVM. It's easy to grow. It's also all on a VPS, so expanding the total is a matter of tweaking the settings in a control panel. But it's harder to shrink (especially since I use xfs), so I prefer not to grow it if it's not necessary. Cheers :-) Richard
Re: copy directory tree, mapping to new owners
On 13/09/21 7:04 pm, to...@tuxteam.de wrote: On Mon, Sep 13, 2021 at 11:45:02AM +1200, Richard Hector wrote: On 12/09/21 6:52 pm, john doe wrote: [...] >If you are doing this in a script, I would use a temporary directory. >That way, in case of failure the destination directory is not wrongly >modified. > >EG: > >$ rsync > >Make the way you want it to be. > >$ rsync That is true, but firstly it would require more available space [...] This isn't necessary, as you could replace the second `rsync' by a `mv' (provided your temp tree is on the same storage volume as your target dir, that is). I was assuming the suggestion was to rsync the source to the temp while the destination still exists, before rsyncing or mv'ing over the top of it. Total of 3 copies (temporarily) rather than 2. Cheers, Richard
Re: copy directory tree, mapping to new owners
On 12/09/21 7:46 pm, Teemu Likonen wrote: * 2021-09-12 12:43:29+1200, Richard Hector wrote: The context of my question is that I'm creating (or updating) a test copy of a website. The files are owned by one of two owners, depending on whether they were written by the server (actually php-fpm). To do that, I want all the permissions to remain the same, but the ownership should be changed according to a provided map. Looks exactly like what "rsync --usermap=FROM:TO" can do. There is also "--groupmap" option for mapping groups. Aha - thanks a lot :-) I guess I've been caught not reading the man page thoroughly enough. The pitfalls of thinking that you know everything that a tool does, just because you use it often ... The way the docs are written seems to imply a sender and receiver; I'll have to check it works for a local copy ... it does. Now I need to rewrite my script somewhat, since to do this it's going to have to run as root ... Thanks, Richard
Re: copy directory tree, mapping to new owners
On 12/09/21 6:53 pm, l0f...@tuta.io wrote: # actually not necessary? rsync will create it mkdir -p mysite_test/doc_root You can make a simple test to know that, but I would say that rsync doesn't create your destination "root" directory (the one you specify on the command line) unless `--mkpath` is used. I was fairly confident of that, from my regular usage. # The trailing / matters. Does it matter on the source as well? # I generally include it. rsync -a mysite/doc_root/ mysite_test/doc_root/ # The trailing / matters. Actually, I'm not sure I understand Greg's remark here. In my opinion, the trailing slash doesn't matter for the destination folder, unlike for the *source* folder. In other words, for me, the following are equal: rsync -a mysite/doc_root/ mysite_test/doc_root/ rsync -a mysite/doc_root/ mysite_test/doc_root But not the following: rsync -a mysite/doc_root mysite_test/doc_root => you will get an extra "doc_root" folder (the source one) in your dest, i.e. mysite_test/doc_root/doc_root, containing the content of the source doc_root rsync -a mysite/doc_root/ mysite_test/doc_root => your doc_root (destination) folder will get the source doc_root's content directly Yep. As I've mentioned elsewhere, I habitually include trailing slashes for both source and destination when I'm replicating a whole tree. I couldn't remember the details of what happens when you don't, but I know what happens when I do :-) Cheers, Richard
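The trailing-slash behaviour discussed above is easy to check with scratch directories (all paths here are illustrative):

```shell
# Source trailing slash: "contents of the dir" vs. "the dir itself".
src=$(mktemp -d); a=$(mktemp -d); b=$(mktemp -d)
mkdir "$src/doc_root"
touch "$src/doc_root/index.html"

# With the slash, the *contents* of doc_root land in the destination:
rsync -a "$src/doc_root/" "$a/doc_root"
# -> $a/doc_root/index.html

# Without it, doc_root itself is copied into the destination, giving
# the extra doc_root/doc_root nesting described above:
rsync -a "$src/doc_root" "$b/doc_root"
# -> $b/doc_root/doc_root/index.html
```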
Re: copy directory tree, mapping to new owners
On 12/09/21 6:52 pm, john doe wrote: On 9/12/2021 3:45 AM, Richard Hector wrote: Thanks, that looks reasonable. It does mean, though, that the files exist for a while with the wrong ownership. That probably doesn't matter, but somehow 'feels wrong' to me. If you are doing this in a script, I would use a temporary directory. That way, in case of failure the destination directory is not wrongly modified. EG: $ rsync Make the way you want it to be. $ rsync That is true, but firstly it would require more available space, and secondly, as long as I know about the failure, it doesn't worry me too much. This is a script I will call manually. Given that you want to change the ownership, you may want to emulate the options implied by '-a' but without the ownership option ('-o'). I do that in my existing script (also not -g (group) or -D (specials/devices (which don't exist there anyway)) - and sometimes -p (permissions), if I've pre-created the tree with the permissions I want. The last one is a slightly different use case, but I should make sure I know which use case I'm trying for ... My habits would also lead me to do all of the above from above the two directories in question (in my case, /srv - for /srv/mysite/doc_root and /srv/mysite_test/doc_root) So: cd /srv # actually not necessary? rsync will create it mkdir -p mysite_test/doc_root # The trailing / matters. Does it matter on the source as well? According to the man page it does (1): "rsync -avz foo:src/bar/ /data/tmp A trailing slash on the source changes this behavior to avoid creating an additional directory level at the destination. You can think of a trailing / on a source as meaning "copy the contents"" # I generally include it. In a script, you need to be sure what the cmd does do or not do!!! :) True. But many options I've learned from my everyday usage, and sometimes forgotten the rationale - I just know it does what I want :-) rsync -a mysite/doc_root/ mysite_test/doc_root/ # The trailing / matters.
find mysite_dest -user mysite -exec chown mysite_test {} + # I prefer mysite_test-run; it keeps consistency with # the ${sitename}-run pattern used elsewhere find mysite_dest -user mysite-run -exec chown mysite_test-run {} + Have I broken anything there? Not that I can see but testing will tell! This is what I would do. And I would do it *interactively*. If you insist on making a script, then it will be slightly more complicated, because you'll need to add error checking. The trouble with doing it interactively, when it needs to be done many times (and on several sites), is that each time there's opportunity to make a mistake. And it means the process needs to be documented separately from the script. In fact, I'd incorporate the above in a larger script, which does things like copying the database (and changing the db name in the site config). Error checking in shell scripts is something I certainly need to learn At the very least, '<cmd>' ['&&'|'||'] '<handle the error>'. If you use Bash and don't need to be POSIX compliant and/or portable you can be a bit more creative. Heh - you've cut my sentence in two, which appears to change the meaning slightly :-) " ... is something I need to learn and practice more". You make it look like I need to learn error checking from scratch :-) I am using bash. And I am currently using 'set -e' to handle that kind of error globally. As I say, as long as I can see that it broke, it doesn't matter too much. and practice more - I tend to rely somewhat on 'knowing' what I'm working with, which is probably not a good idea. > Yes in a script it is like shooting yourself in the foot. I do test it. Cheers, Richard
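For completeness, the two error-handling idioms being contrasted here (per-command `||` handling vs. a global `set -e`), in a self-contained and deliberately trivial form:

```shell
# Idiom 1: handle each command's failure explicitly with || (and &&
# for the success path).
cp /nonexistent/file /tmp/ 2>/dev/null || echo "copy failed, carrying on"

# Idiom 2: 'set -e' aborts on the first failing command. Run in a
# subshell here so this demo script itself keeps going.
( set -e; false; echo "never printed" ) || echo "subshell aborted as expected"
```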
Re: copy directory tree, mapping to new owners
On 12/09/21 12:52 pm, Greg Wooledge wrote: On Sun, Sep 12, 2021 at 12:43:29PM +1200, Richard Hector wrote: The context of my question is that I'm creating (or updating) a test copy of a website. The files are owned by one of two owners, depending on whether they were written by the server (actually php-fpm). To do that, I want all the permissions to remain the same, but the ownership should be changed according to a provided map. For example, if the old file was owned by 'mysite', the copy should be owned by 'mysite_test'. If the old file was owned by 'mysite-run' (the user php runs as), the copy should be owned by 'mysite_test-run' (if that has to be 'mysite-run_test' to make things easier, I can live with that). cd /src mkdir -p /dest rsync -a . /dest/ # The trailing / matters. cd /dest find . -user mysite -exec chown mysite_test {} + find . -user mysite-run -exec chown mysite-run_test {} + Thanks, that looks reasonable. It does mean, though, that the files exist for a while with the wrong ownership. That probably doesn't matter, but somehow 'feels wrong' to me. My habits would also lead me to do all of the above from above the two directories in question (in my case, /srv - for /srv/mysite/doc_root and /srv/mysite_test/doc_root) So: cd /srv # actually not necessary? rsync will create it mkdir -p mysite_test/doc_root # The trailing / matters. Does it matter on the source as well? # I generally include it. rsync -a mysite/doc_root/ mysite_test/doc_root/ # The trailing / matters. find mysite_dest -user mysite -exec chown mysite_test {} + # I prefer mysite_test-run; it keeps consistency with # the ${sitename}-run pattern used elsewhere find mysite_dest -user mysite-run -exec chown mysite_test-run {} + Have I broken anything there? This is what I would do. And I would do it *interactively*. If you insist on making a script, then it will be slightly more complicated, because you'll need to add error checking. 
The trouble with doing it interactively, when it needs to be done many times (and on several sites), is that each time there's opportunity to make a mistake. And it means the process needs to be documented separately from the script. In fact, I'd incorporate the above in a larger script, which does things like copying the database (and changing the db name in the site config). Error checking in shell scripts is something I certainly need to learn and practice more - I tend to rely somewhat on 'knowing' what I'm working with, which is probably not a good idea. Thanks, Richard
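Greg's find/chown remap can be rehearsed without root by "mapping" files owned by the current user back to the current user — a no-op chown that an ordinary user is allowed to perform on their own files. The real run would use `-user mysite` / `chown mysite_test` as root; the directory names below are scratch paths, not the thread's real ones.

```shell
# Unprivileged rehearsal of the per-owner remapping step. As root the
# same pattern becomes: find . -user mysite -exec chown mysite_test {} +
tree=$(mktemp -d)
mkdir -p "$tree/doc_root"
echo 'x' > "$tree/doc_root/index.html"

me=$(id -un)
# Re-chown every file owned by $me to $me -- a no-op, but it exercises
# the exact find/-exec invocation from the thread:
find "$tree" -user "$me" -exec chown "$me" {} +
```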
copy directory tree, mapping to new owners
Hi all, The context of my question is that I'm creating (or updating) a test copy of a website. The files are owned by one of two owners, depending on whether they were written by the server (actually php-fpm). To do that, I want all the permissions to remain the same, but the ownership should be changed according to a provided map. For example, if the old file was owned by 'mysite', the copy should be owned by 'mysite_test'. If the old file was owned by 'mysite-run' (the user php runs as), the copy should be owned by 'mysite_test-run' (if that has to be 'mysite-run_test' to make things easier, I can live with that). Group ownership is or would be the same, but in fact it's simpler because both users are members of the same group - all files are or should be group-owned by the same group (mysite, mapping to mysite_test). Is there any pre-existing tool that will do this? Or will I need to write a perl script or similar? What I've done in the past is use the same users for both production and testing, and do the copy by running rsync as the mysite user, but firstly I'd rather have more isolation between the two, secondly the mysite user might not be able to read all the mysite-run files, and thirdly the ownership of those (mysite-run) files gets changed, making it an imperfect copy. Thanks in advance, Richard
Re: Trouble upgrading Debian (reply to David Wright)
On 7/09/21 5:25 am, John Hasler wrote: Curt writes: I suggest you follow the earlier advice, and set Thunderbird to compose your email as plain text Curt didn't write that; I did. Please be careful with your attributions. I'm intrigued to know how this mistake happened, however. Were you perhaps replying to a digest message instead of a normal individual one? Cheers, Richard
RE: rough start of semester on 9800-80 WLCs
There are two commands you can use:

show wireless loadbalance tag affinity wncd
show wireless stats ap loadbalance summary

Hector Rios UT Austin From: The EDUCAUSE Wireless Issues Community Group Listserv On Behalf Of Chad Sawyer Sent: Tuesday, September 7, 2021 1:48 PM To: WIRELESS-LAN@LISTSERV.EDUCAUSE.EDU Subject: Re: [WIRELESS-LAN] rough start of semester on 9800-80 WLCs Thanks for the info. I'll look into the AP service pack. We haven't done one of those yet so kind of curious to see it in action. Yeah we had mixed results with the 500 APs in a site tag. Some of the areas on campus were fine. I think client count had something to do with provoking it. Our highest population areas were the ones that saw the most capwap timeouts. Just curious - how are you checking the number of APs and site tags assigned to a wncd process? From: The EDUCAUSE Wireless Issues Community Group Listserv On Behalf Of Rios, Hector J Sent: Tuesday, September 7, 2021 12:57 PM To: WIRELESS-LAN@LISTSERV.EDUCAUSE.EDU Subject: Re: [WIRELESS-LAN] rough start of semester on 9800-80 WLCs Chad, Sorry to hear about the issues you ran into. We also started the semester with 9800-80s, but we chose to go with 17.3.4. Things went well for most of the day on the first day of classes, except for a single controller crash after business hours. Cisco has identified this as a bug on the 17.3.X: CSCvx71141 - CPU HOG in RRM Process. You should contact TAC to get more details. They might also be able to provide a workaround, depending on your configuration. We also ran into the bug below, but this was fixed with an AP service pack. Cool feature BTW, it actually works. CSCvz08781 Symptom: AP2800/3800/4800/1560/IW6300/ESW6300 Firmware Radio Crash on 17.3.4 while passing client traffic. There is also an issue on 17.3.4 that is impacting 9120s. Cisco is working on a service pack for this as well.
Don't have more details on this. Thank you for the information regarding the wncd processes. We also followed the best practices, but we do have controllers that have a few wncd processes with a little over 500 APs. No issues so far, other than we have noticed in a few instances that even though we only have 8 custom site tags, some WLCs will assign two site tags to a single wncd process. We are working with TAC on this. We also have a substantial number of 2700 series APs. We encountered no major issues during the upgrade process. Finally, we have noticed that L3 roaming is not working on our 802.1X and PSK SSIDs. I wonder if anyone has run into this issue as well? Best, Hector Rios UT Austin From: The EDUCAUSE Wireless Issues Community Group Listserv On Behalf Of Chad Sawyer Sent: Tuesday, September 7, 2021 9:21 AM To: WIRELESS-LAN@LISTSERV.EDUCAUSE.EDU Subject: [WIRELESS-LAN] rough start of semester on 9800-80 WLCs Just sending a heads up in case anyone else hits these. This was our first semester with a full campus since moving everything over to our 9800-80 pairs. They've been in production for much of the past 12 months and the performance was fine when campus was empty. Under load was another story. First issue: Code 17.3.3 has the following bugs that were causing frequent HA failovers that reference the wncd process. This was resolved by upgrading to 17.4.4. CSCvx37499- Controller reloads with the reason "Critical process wncd fault on rp_0_0 (rc=139) CSCvy20300- Primary controller in HA frequently ends abnormally Second issue: Unfortunately these failovers also provoked one of the units to lose the contents of its bootflash and get stuck in rommon mode, so we had to recover it via the booting to USB routine. This was also due to a 17.3.3 bug and has been hopefully resolved so far by upgrading to 17.4.4.
CSCvy73836- C9800-80 controller goes to rommon after multiple failovers due to power cycling Third issue: The nastiest thing though was unrelated to bugs. It was CAPWAP timeouts that only occurred in busy areas of campus. AP uptime would show months, but CAPWAP uptimes were constantly resetting to zero. The logs on the AP would show the following message: "Going to restart CAPWAP (reason : data keepalive not received)" We wasted a lot of time troubleshooting this as a connectivity issue between our APs and controller, but that wasn't the cause. This problem was a result of our following Cisco's 9800 best practice guide<https://www.cisco.com/c/en/us/products/collateral/wireless/catalyst-9800-series-wireless-controllers/guide-c07-743627.html>, specifically on site tag sizing.
RE: rough start of semester on 9800-80 WLCs
Chad, Sorry to hear about the issues you ran into. We also started the semester with 9800-80s, but we chose to go with 17.3.4. Things went well for most of the day on the first day of classes, except for a single controller crash after business hours. Cisco has identified this as a bug on the 17.3.X: CSCvx71141 - CPU HOG in RRM Process. You should contact TAC to get more details. They might also be able to provide a workaround, depending on your configuration. We also ran into the bug below, but this was fixed with an AP service pack. Cool feature BTW, it actually works. CSCvz08781 Symptom: AP2800/3800/4800/1560/IW6300/ESW6300 Firmware Radio Crash on 17.3.4 while passing client traffic. There is also an issue on 17.3.4 that is impacting 9120s. Cisco is working on a service pack for this as well. Don't have more details on this. Thank you for the information regarding the wncd processes. We also followed the best practices, but we do have controllers that have a few wncd processes with a little over 500 APs. No issues so far, other than we have noticed in a few instances that even though we only have 8 custom site tags, some WLCs will assign two site tags to a single wncd process. We are working with TAC on this. We also have a substantial number of 2700 series APs. We encountered no major issues during the upgrade process. Finally, we have noticed that L3 roaming is not working on our 802.1X and PSK SSIDs. I wonder if anyone has run into this issue as well? Best, Hector Rios UT Austin From: The EDUCAUSE Wireless Issues Community Group Listserv On Behalf Of Chad Sawyer Sent: Tuesday, September 7, 2021 9:21 AM To: WIRELESS-LAN@LISTSERV.EDUCAUSE.EDU Subject: [WIRELESS-LAN] rough start of semester on 9800-80 WLCs Just sending a heads up in case anyone else hits these. This was our first semester with a full campus since moving everything over to our 9800-80 pairs.
They've been in production for much of the past 12 months and the performance was fine when campus was empty. Under load was another story. First issue: Code 17.3.3 has the following bugs that were causing frequent HA failovers that reference the wncd process. This was resolved by upgrading to 17.4.4. CSCvx37499- Controller reloads with the reason "Critical process wncd fault on rp_0_0 (rc=139) CSCvy20300- Primary controller in HA frequently ends abnormally Second issue: Unfortunately these failovers also provoked one of the units to lose the contents of its bootflash and get stuck in rommon mode, so we had to recover it via the booting to USB routine. This was also due to a 17.3.3 bug and has been hopefully resolved so far by upgrading to 17.4.4. CSCvy73836- C9800-80 controller goes to rommon after multiple failovers due to power cycling Third issue: The nastiest thing though was unrelated to bugs. It was CAPWAP timeouts that only occurred in busy areas of campus. AP uptime would show months, but CAPWAP uptimes were constantly resetting to zero. The logs on the AP would show the following message: "Going to restart CAPWAP (reason : data keepalive not received)" We wasted a lot of time troubleshooting this as a connectivity issue between our APs and controller, but that wasn't the cause. This problem was a result of our following Cisco's 9800 best practice guide<https://www.cisco.com/c/en/us/products/collateral/wireless/catalyst-9800-series-wireless-controllers/guide-c07-743627.html>, specifically on site tag sizing. Although the guide says up to 500 APs can safely be assigned to a site tag, that was far from the truth in our experience. Several TAC folks missed it and it took our rep escalating the issue to a senior wireless design person from Cisco to finally find it. She advised breaking up our site tags so that they didn't exceed 250 APs, which instantly resolved the CAPWAP timeouts. 
Fourth issue: Apparently some of the 2702i APs don't handle code upgrades gracefully with the 9800s. Cisco made it sound like this was a common issue. After upgrading from 17.3.3 to 17.3.4, several 2700s on campus were showing "%CAPWAP-3-ERRORLOG: Certificate verification failed!" when attempting to establish CAPWAP with the controllers. This was resolved by manually recovering the APs by pushing an image from the downloads page to them via TFTP. Luckily we have a staff member who's pretty skilled at automating this type of stuff. These were the commands:

SSH to the affected AP
enable (enter password if there is one)
debug capwap console cli
archive download-sw /overwrite /force-reload tftp://(tftp server IP)/ap3g2-k9w8-tar.153-3.JPJ7.tar

The AP will automatically reload, establish capwap with the controller, download the proper image, reload, and re-join the controller successfully. Chad Sawyer Network Engineer USF Information Technology www.usf.edu/it
Re: Trouble upgrading Debian (reply to David Wright)
On 6/09/21 1:20 pm, Dedeco Balaco wrote: 3. Tried to do 'apt update' as root, but it does not work. GPG signature error. 21:18:54 [ 0] root@compo: /etc/apt # apt-mark hold firefox-esr firefox-esr-l10n-pt-br thunderbird thunderbird-l10n-pt-br firefox-esr set on hold. firefox-esr-l10n-pt-br set on hold. thunderbird set on hold. thunderbird-l10n-pt-br set on hold. 21:46:26 [ 0] root@debian: /etc/apt # apt update Get:1http://security.debian.org buster/updates InRelease [65.4 kB] Err:1http://security.debian.org buster/updates InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 112695A0E562B32A NO_PUBKEY 54404762BBB6E853 Get:2http://deb.debian.org/debian buster InRelease [122 kB] Get:3http://deb.debian.org/debian buster-updates InRelease [51.9 kB] Err:3http://deb.debian.org/debian buster-updates InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 648ACFD622F3D138 NO_PUBKEY 0E98404D386FA1D9 Get:4http://deb.debian.org/debian buster/main Sources [7,836 kB] Get:5http://deb.debian.org/debian buster/main amd64 Packages [7,907 kB] Get:6http://deb.debian.org/debian buster/main i386 Packages [7,863 kB] Get:7 http://deb.debian.org/debian buster/main Translation-pt_BR [683 kB] Get:8http://deb.debian.org/debian buster/main Translation-en [5,968 kB] Get:9http://deb.debian.org/debian buster/main Translation-pt [309 kB] Get:10http://deb.debian.org/debian buster/main amd64 Contents (deb) [37.3 MB] Get:11http://deb.debian.org/debian buster/main i386 Contents (deb) [37.3 MB] Reading package lists... Done W: GPG error: http://security.debian.org buster/updates InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 112695A0E562B32A NO_PUBKEY 54404762BBB6E853 E: The repository 'http://security.debian.org buster/updates InRelease' is not signed. 
N: Updating from such a repository can't be done securely, and is therefore disabled by default. N: See apt-secure(8) manpage for repository creation and user configuration details. W: GPG error:http://deb.debian.org/debian buster-updates InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 648ACFD622F3D138 NO_PUBKEY 0E98404D386FA1D9 E: The repository 'http://deb.debian.org/debian buster-updates InRelease' is not signed. N: Updating from such a repository can't be done securely, and is therefore disabled by default. N: See apt-secure(8) manpage for repository creation and user configuration details. 21:46:50 [ 0] root@debian: /etc/apt # As you can see, the plain text version of your email is not very readable. I suggest you follow the earlier advice, and set Thunderbird to compose your email as plain text, even if only for list mail. Cheers, Richard
Bug#991631: nfs-utils: please update to newer upstream version
Hello Anibal, On Sat, Sep 4, 2021, 9:33, Anibal Monsalve Salazar wrote: > On Thu, Jul 29, 2021, 21:42 Salvatore Bonaccorso > wrote: > >> Control: forcemerge 917706 991631 >> >> Hi Hector, >> >> On Thu, Jul 29, 2021 at 12:21:47PM +0200, Hector Oron wrote: >> > Package: src:nfs-utils >> > Severity: wishlist >> > >> > Dear Maintainer, >> > >> > Hello, >> > >> > nfs-utils package is quite old, even in SID, what are the plans for >> > this package? Could it be updated to a more recent upstream version? >> >> It should absolutely, there was an attempt to try to do this before >> the bullseye release, but it failed, WIP in >> https://salsa.debian.org/kernel-team/nfs-utils/-/merge_requests/3 . >> Some help welcome! >> >> Debian is lagging too far behind. For after the bullseye release this >> should be tackled ASAP, we should not land in the same situation again >> for bookworm. >> >> Regards, >> Salvatore >> > > Hello Salvatore, Héctor, > > Just to let you know that I have been working with version 2.5.4 and the > 2.5.5 RCs for the last few days, almost full time. Currently, I'm reviewing > once again all the patches. > Excellent! I worked on it for some time, but my work is not ready; I should post it once it is ready for review and testing. Anibal, do you have some WIP you can publish (on salsa) for review and/or testing? I feel we are pretty close to a real update. :-) Thanks all for the work > Kind regards, > > Aníbal
Re: which vs. type, and recursion?
On 4/09/21 9:26 pm, Brian wrote: On Sat 04 Sep 2021 at 21:21:38 +1200, Richard Hector wrote: Greg Wooledge pointed out in another thread that 'type' is often better than 'which' for finding out what kind of command you're about to run, and where it comes from. A quick test, however, threw up another issue: richard@zircon:~$ type ls ls is aliased to `ls --color=auto' Great, so it's an alias. But what is the underlying ls? How do I find out? I did find out, by unaliasing ls and trying again, which showed that it's an actual executable, /usr/bin/ls, and not a shell builtin. But is there an easier/better way? Can 'type' be asked to recursively decode aliases? I looked at the relevant section of bash(1) (when I eventually found it), but was not particularly enlightened. Use 'help type' and try 'type -a ls'. That ('help type') is much more readable than bash(1), thanks. I think I might have known about 'help', but had forgotten ... Cheers, Richard
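For later readers: `type -a` is effectively the recursive answer being asked for — it lists the alias and then every matching executable found on $PATH. A quick reproduction (the alias here is defined only for the demo):

```shell
# An alias shadowing a real executable, and `type -a` showing both in
# lookup order (alias first, then the file(s) found on $PATH).
bash -c '
  alias ls="ls --color=auto"
  type -a ls
'
# Prints something like:
#   ls is aliased to `ls --color=auto'
#   ls is /usr/bin/ls
```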
Re: Tips/advice for installing latest version of fzf?
On 1/09/21 3:32 am, Greg Wooledge wrote: In bash, which is *not* a shell builtin -- it's a separate program, /usr/bin/which. Well _that_ took a while to parse correctly :-) I know bash is not a shell builtin, that would be weird ... Cheers, Richard
which vs. type, and recursion?
Greg Wooledge pointed out in another thread that 'type' is often better than 'which' for finding out what kind of command you're about to run, and where it comes from. A quick test, however, threw up another issue: richard@zircon:~$ type ls ls is aliased to `ls --color=auto' Great, so it's an alias. But what is the underlying ls? How do I find out? I did find out, by unaliasing ls and trying again, which showed that it's an actual executable, /usr/bin/ls, and not a shell builtin. But is there an easier/better way? Can 'type' be asked to recursively decode aliases? I looked at the relevant section of bash(1) (when I eventually found it), but was not particularly enlightened. Cheers, Richard
Re: How to update Debian 11 source.list to testing?
On 4/09/21 2:17 am, Roberto C. Sánchez wrote: You might consider using bookwork rather than testing, however. Or bookworm, even. Richard
RE: Ekahau Update
Ian, Thank you for putting this together. Let's hope Ekahau is truly receptive and that they are able to come up with alternatives that benefit all of us. Hector Rios UT Austin From: The EDUCAUSE Wireless Issues Community Group Listserv On Behalf Of Ian Lyons Sent: Monday, August 9, 2021 12:50 PM To: WIRELESS-LAN@LISTSERV.EDUCAUSE.EDU Subject: [WIRELESS-LAN] Ekahau Update Good Day Everyone! Eric and I were happy to host a meeting with many of you about Ekahau last Friday. We had a peak of 28 folks and an average of 18! Thank you for coming! The meeting started with introductions, which lasted about the first 20 minutes or so. Steve (VP Global Sales) and Stewart (SE North America) were the Ekahau representatives; both started ~2 years ago. Then we segued into how people use the product: Sidekick, AP on a stick, design, analysis, engineering, and proof of engineering were the common threads. Steve opened the introductions and brought up a point that the Ekahau EULA was always 1:1. Members that have been using the product for 8+ years have evidence that it was initially licensed per concurrent user rather than 1:1. Further, the "teeth" that made sharing the gear difficult became active in version 10.3. Many schools, large and small, with disparately sized teams, as well as healthcare, indicated that there isn't a one-size-fits-all. Pros and cons: some folks have deep pockets and will fund additional active users. Others stated that the device is used periodically and could be used by interns for site surveys up through proof of design and engineering validation by FTEs. Use of a physical hardware license key was discussed: on the one hand it makes it easier to tie the license to something, but that has the impact of requiring people to come into contact to hand it off. The spirit of the device was a sporadically used tool, but only 1 person at a time. Some suggestions by the group and Ekahau were a tiered approach to access.
Where we left things is that Stephen (SVP of Sales) will work with his management to determine an alternate EULA/connection model that will better fit our needs (those on the call). We agreed to another meeting, ideally in 8 weeks' time, to review Stephen's work on our behalf. Steve was adamant that if anyone in the group has problems accessing the tool because of a lock-out, please send an email to him (info below) and he will help get you access to the tool again. steve.lit...@ekahau.com stewart.goum...@ekahau.com Link to the Webex meeting recording: Ekahau and Educause WIFI Group Password: EducauseWifi Recording link: https://rollins.webex.com/rollins/ldr.php?RCID=12596eece193961c0a7e8c4c5e51a99e *Any mistakes in the summarization are mine, regarding how the product works or ties together. I do not have the product, so my knowledge gaps were the result of unfamiliarity with the product and a poor Google search to educate myself. Cheers Ian J Lyons Network Architect - Rollins College 401.413.1661 Cell 407.628.6396 Desk ** Replies to EDUCAUSE Community Group emails are sent to the entire community list. If you want to reply only to the person who sent the message, copy and paste their email address and forward the email reply. Additional participation and subscription information can be found at https://www.educause.edu/community
Bug#991631: nfs-utils: please update to newer upstream version
Package: src:nfs-utils Severity: wishlist Dear Maintainer, Hello, nfs-utils package is quite old, even in SID, what are the plans for this package? Could it be updated to a more recent upstream version? Regards, -- Héctor Orón -.. . -... .. .- -. -.. . ...- . .-.. --- .--. . .-.
RE: [EXTERNAL] [WIRELESS-LAN] Fast Transition Enable
The challenge with testing FT, either "enabled" or "adaptive", is that it will most likely work with the few devices you can test with, but the minute you enable it and expose it to all your client devices, there will be some that just won't play nice. At that point you either revert your config, or take a stance of "this is what we support moving forward, so, sorry". It's the nature of the game. Hector Rios, UT Austin From: The EDUCAUSE Wireless Issues Community Group Listserv On Behalf Of Dennis Xu Sent: Wednesday, July 28, 2021 2:05 PM To: WIRELESS-LAN@LISTSERV.EDUCAUSE.EDU Subject: Re: [WIRELESS-LAN] [EXTERNAL] [WIRELESS-LAN] Fast Transition Enable Thanks for all the information. We might want to test the FT "enabled" setting. Dennis Xu From: The EDUCAUSE Wireless Issues Community Group Listserv On Behalf Of Steve J Wenger Sent: Monday, July 26, 2021 3:00 PM To: WIRELESS-LAN@LISTSERV.EDUCAUSE.EDU Subject: Re: [WIRELESS-LAN] [EXTERNAL] [WIRELESS-LAN] Fast Transition Enable We learned that FT "Adaptive Enabled" was on by default when we deployed IOS-XE 17.3. Certain Motorola cell phones had difficulty connecting intermittently, regardless of whether the phones were on Android 10 or 11. When we set FT to "disabled", the Android clients in question were able to connect and roam between APs and buildings without problems. We discovered this only after reading about the Cisco bug CSCvu24770. We have not tried setting FT to "enabled" to experiment yet.
Thanks, Steve Wenger Viterbo University Wi-Fi / Telecom Administrator | Instructional and Information Technology 608-796-3950 www.viterbo.edu | 900 Viterbo Drive, La Crosse, WI 54601 From: The EDUCAUSE Wireless Issues Community Group Listserv On Behalf Of Jason Mallon Sent: Monday, July 26, 2021 1:46 PM To: WIRELESS-LAN@LISTSERV.EDUCAUSE.EDU Subject: Re: [WIRELESS-LAN] [EXTERNAL] [WIRELESS-LAN] Fast Transition Enable We have FT enabled on ours, and it allowed the Android devices to connect that were unable to while we had FT adaptive. I have not heard, up to this point, of any devices failing to connect since we made the swap a couple of months ago. Jason Mallon Network Engineer Office of Information Technology The University of Alabama (https://www.ua.edu/) jemal...@ua.edu From: The EDUCAUSE Wireless Issues Community Group Listserv on behalf of Dennis Xu Date: Monday, July 26, 2021
at 1:19 PM To: WIRELESS-LAN@LISTSERV.EDUCAUSE.EDU Subject: [EXTERNAL] [WIRELESS-LAN] Fast Transition Enable Hi, Has anyone set Fast Transition to "enabled" on Cisco WLCs? Have you had any compatibility issues with client devices with FT enabled? I am asking because of the Android bug CSCvu24770, which left some Android devices unable to connect with adaptive FT. Thanks. Dennis ** Replies to EDUCAUSE Community Group emails are sent to the entire community list. If you want to reply only to the person who sent the message, copy and paste their email address and forward the email reply. Additional participation and subscription information can be found at https://www.educause.edu/community
Bug#929006: intel-microcode: Atom fails to boot when loading microcode
On Fri, 17 May 2019 09:14:51 +0900 Hideki Yamane wrote: Hi, I'm not the intel-microcode package maintainer. On Wed, 15 May 2019 16:48:38 +1200 Richard Hector wrote: > Package: intel-microcode > Version: 3.20180807a.2~deb9u1 How about installing a newer package? Today 3.20190514.1~deb9u1 has come to stretch-security; update your system and try it again. I'm afraid I didn't get round to that at the time - the machine is generally run headless, on a fairly inaccessible shelf, so booting it with a screen and keyboard is a pain. However, I did later upgrade it to buster, and today tried again, with intel-microcode 3.20210608.2~deb10u1. The problem persists. I'm not sure how long I'll keep the machine though; it's pretty long in the tooth. Richard
Re: explanation of first column "v" is hiding
On 28/07/21 7:55 am, Greg Wooledge wrote: https://bugs.debian.org/991578 Nice. I looked at the patch, but I'm not familiar with what processing gets done on that code. Does your reference to the reference manual, at the end of the diff, get expanded to tell me where to find the reference manual? Is that feasible? Cheers, Richard
Re: location of screenshots during debian install
On 27/07/21 7:14 pm, Jupiter777 wrote: hello, I am in the middle of installing buster 10.10.x on my computer. I see that I can take screenshots, as the dialog boxes tell me: Screenshot Saved as /var/log/ But /var/log is not on the bootable usb I am using ... Where are the screenshots? I'd like to use them for troubleshooting. I've never used this facility, so I'm only guessing. But if they're not on the installer media, then they're presumably on the disk you're installing to, which is mounted on /target/ during the installation - so /target/var/log/. Whether and how you can get them off that disk if you haven't finished the installation is a different matter, of course :-) Richard
[jira] [Created] (HADOOP-17819) Add extensions to ProtobufRpcEngine RequestHeaderProto
Hector Sandoval Chaverri created HADOOP-17819: - Summary: Add extensions to ProtobufRpcEngine RequestHeaderProto Key: HADOOP-17819 URL: https://issues.apache.org/jira/browse/HADOOP-17819 Project: Hadoop Common Issue Type: Improvement Components: common Reporter: Hector Sandoval Chaverri The header used in ProtobufRpcEngine messages doesn't allow for new properties to be added by child classes. We can add a range of extensions that can be useful for proto classes that need to extend RequestHeaderProto. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: explanation of first column "v" is hiding
On 27/07/21 5:22 am, Greg Wooledge wrote: P.S. If we're complaining about the lack of documentation for the cryptic output of the Debian tool set, can we say some words about aptitude? Seriously. This command searches for packages that require or conflict with the given package. It displays a sequence of dependencies leading to the target package, along with a note indicating the installed state of each package in the dependency chain:

$ aptitude why kdepim
i   nautilus-data Recommends nautilus
i A nautilus      Recommends desktop-base (>= 0.2)
i A desktop-base  Suggests   gnome | kde | xfce4 | wmaker
p   kde           Depends    kdepim (>= 4:3.4.3)

What do *any* of those column-ish letters mean? I can guess "i", maybe, but not "A" or "p". (I might have guessed "purged" for "p", but that doesn't seem to fit the picture being painted by the example, which is of a system that *does* have KDE installed. In any case, why should I have to guess these things?) My main issue with aptitude documentation is that most of it isn't in the manpage, but in the 'aptitude reference manual', which is referred to without a link. The path given in the SEE ALSO section might be it, but it doesn't say so. But experience suggests that A means 'automatically installed' (and p stands for purged, which linguistically doesn't really mean 'maybe has been purged; maybe has never been installed'). Cheers, Richard
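For reference, the one-letter states are defined in the aptitude reference manual (the section on accessing package information); the summary below is from memory, so it's worth verifying against the manual itself:

```
First column (current state):  i = installed           c = removed, config files remain
                               p = no trace on disk    v = virtual package
                               B = broken
Second column (flag):          A = marked as automatically installed
```

That legend fits the example above: the "p" next to kde means the package has never been installed (or has been purged), which is consistent with a KDE-less system being shown the dependency chain.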
Re: Installation comment
On 26/07/21 5:42 pm, Geert Stappers wrote: On Sun, Jul 25, 2021 at 03:43:18AM +1200, Richard Hector wrote: Hi all, Sorry for the minimal detail - hopefully someone else will understand this much better than I do, and be able to fill in if required. Failing that, I might be able to do better later. Just spent many more hours on this than anticipated, and need sleep. I installed bullseye on my HP ProBook 430 G3 (attempting to encrypt everything including /boot, but backed out of that - I don't think it was relevant in the end) - using UEFI. Grub failed to install. I looked at the logs on VT 3, and saw stuff about No space left on device. df showed my devices all seemed fine. It turned out that my NVRAM was full of dump files. I don't know what they're for and why they had accumulated, but a page on the Arch wiki suggested deleting them, which allowed grub to install. Is it worth putting comments about that in the release notes? The way to go is having a closer look, finding out what caused it, and solving the problem. Thanks, Geert. Yes ... that might solve my particular problem. But I'm not sure where to start researching, especially since I deleted the dump files, and they don't appear to have come back, which means I'm unlikely to be able to reproduce the problem either. The only references I can find to these dump files (eg https://wiki.archlinux.org/title/Unified_Extensible_Firmware_Interface) relate to deleting them if required, rather than explaining where they came from. Partly I was hoping that somebody here understands UEFI and efivars better than I do (it wouldn't be hard). Also, unrelated to my specific issues with my specific laptop, I wondered if it was worth detecting this condition, perhaps from grub-install, and displaying a message that might help the next person make progress. After all, a message of 'No space left on device', when the 'device' in question doesn't even show up in the output of df (why?), is of limited value.
Should I file a bug against grub-install (or grub2-common?)?
Installation comment
Hi all, Sorry for the minimal detail - hopefully someone else will understand this much better than I do, and be able to fill in if required. Failing that, I might be able to do better later. Just spent many more hours on this than anticipated, and need sleep. I installed bullseye on my HP ProBook 430 G3 (attempting to encrypt everything including /boot, but backed out of that - I don't think it was relevant in the end) - using UEFI. Grub failed to install. I looked at the logs on VT 3, and saw stuff about No space left on device. df showed my devices all seemed fine. It turned out that my NVRAM was full of dump files. I don't know what they're for and why they had accumulated, but a page on the Arch wiki suggested deleting them, which allowed grub to install. Is it worth putting comments about that in the release notes? Or is it too obscure a condition? Thanks, Richard
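For anyone hitting the same wall, the cleanup reconstructed from the Arch wiki page mentioned above looks roughly like the following. It needs root, assumes efivarfs is mounted in the usual place, and the dump-* names come from the kernel's pstore crash-dump facility - treat it as a sketch, not a recipe:

```
ls /sys/firmware/efi/efivars/ | grep -i '^dump'    # pstore crash dumps filling NVRAM
chattr -i /sys/firmware/efi/efivars/dump-type0-*   # efivarfs files are immutable by default
rm /sys/firmware/efi/efivars/dump-type0-*          # free the NVRAM space
grub-install                                       # retry now that space is available
```

The chattr step matters: efivarfs marks variables immutable as a safety measure, so a plain rm fails until the immutable bit is cleared.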
Re: How do I mount the USB stick containing the installer in Rescue Mode?
On 21/07/21 11:39 pm, Greg Wooledge wrote: No, a bind mount doesn't take a device name as an argument. It takes two directory names. From the man page: mount --bind|--rbind|--move olddir newdir It's used when you've already got the device mounted somewhere (the first directory), and you'd also like it to appear in a second place. That could be interpreted to mean it still applies to a (mounted) device, but it doesn't - olddir can be anywhere, not just a mount point. Richard
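A minimal illustration of that point (needs root; the directory names are made up for the demo):

```
mkdir -p /tmp/olddir /tmp/newdir
echo hello > /tmp/olddir/file
mount --bind /tmp/olddir /tmp/newdir   # olddir is just a directory, not a mount point
cat /tmp/newdir/file                   # the same file, now visible at the second path
findmnt /tmp/newdir                    # lists the bind mount
umount /tmp/newdir
```

Here /tmp/olddir is an ordinary directory somewhere inside an existing filesystem, which is exactly the case the man page wording glosses over.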
Re: MDs & Dentists
On 22/07/21 3:38 am, Reco wrote: One sure way to beat ransomware is to take immutable backups That's fine if keeping access to your data is all you care about. With the more modern ransomware that threatens to publish your (and/or your customers') data, not so much. Richard
Re: [geos-devel] lbgeos intersect skips first coordinates
Hi, this difference is consistent, and it is what causes the different results on our side. Thanks On Wednesday, July 14, 2021, 11:55:28 a.m. EDT, Martin Davis wrote: The overlay operations do not necessarily preserve the exact sequence of input vertices. This is the case for both GEOS and JTS. If you are seeing a different, preferred order in JTS it's just by chance. On Wed, Jul 14, 2021 at 7:46 AM Hector Nunez wrote: Hi, I found that when I use intersection, the result shows that the first coordinate is not used. Here is a sample, using two identical geometries, that shows the issue. I tested similar code in Java using JTS and I don't have this issue.

// SIMPLE SIMULATION OF THE PROBLEM USING INTERSECTION, using GEOS 3.9.1
geos::geom::GeometryFactory::Ptr Factory = geos::geom::GeometryFactory::create();
geos::geom::CoordinateSequence::Ptr coord_seq =
    Factory->getCoordinateSequenceFactory()->create((size_t)5, (size_t)0);
coord_seq->setAt(geos::geom::Coordinate(-180, 85.0), 0);
coord_seq->setAt(geos::geom::Coordinate(180, 85.0), 1);
coord_seq->setAt(geos::geom::Coordinate(180, -85.0), 2);
coord_seq->setAt(geos::geom::Coordinate(-180, -85.0), 3);
coord_seq->setAt(geos::geom::Coordinate(-180, 85.0), 4);
geos::geom::LinearRing *shell_test = Factory->createLinearRing(coord_seq.release());
geos::geom::Geometry::Ptr geom =
    std::unique_ptr<geos::geom::Geometry>(Factory->createPolygon(shell_test, NULL));
const geos::geom::Geometry::Ptr other = geom->clone();
for (size_t i = 0; i < geom->getNumPoints(); i++)
    printf("geom %zu [x=%f,y=%f]\n", i + 1,
           geom->getCoordinates()->getX(i), geom->getCoordinates()->getY(i));
if (geom->intersects(other.get())) {
    std::cout << "GEOMETRIES INTERSECT!" << std::endl;
    const geos::geom::Geometry::Ptr geom_intersected = geom->intersection(other.get());
    for (size_t i = 0; i < geom_intersected->getNumPoints(); i++)
        printf("geom_intersected %zu [x=%f,y=%f]\n", i + 1,
               geom_intersected->getCoordinates()->getX(i),
               geom_intersected->getCoordinates()->getY(i));
    if (geom->compareTo(geom_intersected.get()))
        std::cout << "GEOMETRIES AFTER INTERSECTION ARE NOT THE SAME!" << std::endl;
}

/* geom 1 [x=-150.00,y=22.00]
   geom 2 [x=150.00,y=22.00]
   geom 3 [x=150.00,y=-22.00]
   geom 4 [x=-150.00,y=-22.00]
   geom 5 [x=-150.00,y=22.00]   // it should start with -150
   geom_intersected 1 [x=150.00,y=22.00]
   geom_intersected 2 [x=150.00,y=-22.00]
   geom_intersected 3 [x=-150.00,y=-22.00]
   geom_intersected 4 [x=-150.00,y=22.00]
   geom_intersected 5 [x=150.00,y=22.00] */

Thanks ___ geos-devel mailing list geos-devel@lists.osgeo.org https://lists.osgeo.org/mailman/listinfo/geos-devel
[geos-devel] lbgeos intersect skips first coordinates
Hi, I found that when I use intersection, the result shows that the first coordinate is not used. Here is a sample, using two identical geometries, that shows the issue. I tested similar code in Java using JTS and I don't have this issue.

// SIMPLE SIMULATION OF THE PROBLEM USING INTERSECTION, using GEOS 3.9.1
geos::geom::GeometryFactory::Ptr Factory = geos::geom::GeometryFactory::create();
geos::geom::CoordinateSequence::Ptr coord_seq =
    Factory->getCoordinateSequenceFactory()->create((size_t)5, (size_t)0);
coord_seq->setAt(geos::geom::Coordinate(-180, 85.0), 0);
coord_seq->setAt(geos::geom::Coordinate(180, 85.0), 1);
coord_seq->setAt(geos::geom::Coordinate(180, -85.0), 2);
coord_seq->setAt(geos::geom::Coordinate(-180, -85.0), 3);
coord_seq->setAt(geos::geom::Coordinate(-180, 85.0), 4);
geos::geom::LinearRing *shell_test = Factory->createLinearRing(coord_seq.release());
geos::geom::Geometry::Ptr geom =
    std::unique_ptr<geos::geom::Geometry>(Factory->createPolygon(shell_test, NULL));
const geos::geom::Geometry::Ptr other = geom->clone();
for (size_t i = 0; i < geom->getNumPoints(); i++)
    printf("geom %zu [x=%f,y=%f]\n", i + 1,
           geom->getCoordinates()->getX(i), geom->getCoordinates()->getY(i));
if (geom->intersects(other.get())) {
    std::cout << "GEOMETRIES INTERSECT!" << std::endl;
    const geos::geom::Geometry::Ptr geom_intersected = geom->intersection(other.get());
    for (size_t i = 0; i < geom_intersected->getNumPoints(); i++)
        printf("geom_intersected %zu [x=%f,y=%f]\n", i + 1,
               geom_intersected->getCoordinates()->getX(i),
               geom_intersected->getCoordinates()->getY(i));
    if (geom->compareTo(geom_intersected.get()))
        std::cout << "GEOMETRIES AFTER INTERSECTION ARE NOT THE SAME!" << std::endl;
}

/* geom 1 [x=-150.00,y=22.00]
   geom 2 [x=150.00,y=22.00]
   geom 3 [x=150.00,y=-22.00]
   geom 4 [x=-150.00,y=-22.00]
   geom 5 [x=-150.00,y=22.00]   // it should start with -150
   geom_intersected 1 [x=150.00,y=22.00]
   geom_intersected 2 [x=150.00,y=-22.00]
   geom_intersected 3 [x=-150.00,y=-22.00]
   geom_intersected 4 [x=-150.00,y=22.00]
   geom_intersected 5 [x=150.00,y=22.00] */

Thanks ___ geos-devel mailing list geos-devel@lists.osgeo.org https://lists.osgeo.org/mailman/listinfo/geos-devel
Re: [DNG] Refracta have a static IP
On 7/13/21 3:41 PM, Steve Litt wrote: Hi all, I'm trying to make my new Chimera based Refracta have a static IP address at 192.168.0.199/24, in order that every other computer on the 192.168.0.0/24 subnet can easily access it, and so I can put it on my LAN DNS. So I made my /etc/network/interfaces look like the following, which follows the guidelines of "man interfaces": === auto lo iface lo inet loopback allow-hotplug eth0 iface eth0 inet static address 192.168.0.199 gateway 192.168.0.1 === Unfortunately, instead of the IP address being 192.168.0.199, it's a DHCP-supplied 192.168.0.204. What additional steps must I take to get my desired 192.168.0.199? Steve, this happens when you still have a DHCP client running. Check if you have something like isc-dhcp-client running and stop it; then you can configure the interface on your own. Additional note: When I used 192.168.0.40, which I KNOW is not in my leased DHCP range, the result was the same. What must I do to get a static IP at 192.168.0.199/24 ? Thanks, SteveT Steve Litt Spring 2021 featured book: Troubleshooting Techniques of the Successful Technologist http://www.troubleshooters.com/techniques ___ Dng mailing list Dng@lists.dyne.org https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng -- Hector Gonzalez ca...@genac.org
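A sketch of the check Hector describes (the commands are the common ifupdown/iproute2 ones, and the PID is a placeholder - adapt to whatever init system Refracta actually uses):

```
ps ax | grep -E '[d]hclient|[u]dhcpc|[d]hcpcd'   # is a DHCP client still running?
kill <pid-from-above>                            # or stop/disable its service
ifdown eth0 && ifup eth0                         # re-apply /etc/network/interfaces
ip addr show eth0                                # should now show 192.168.0.199/24
```

If a DHCP client is respawned at boot, the static address will keep getting overridden, so disabling the client's service is the durable fix rather than just killing the process once.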
Filter and recipientList requiring end()?
Dear all, When I run this route (Camel version 2.24.2), the filter is applied for the expected files with extension jp2, but the last instructions (log and process) seem to become part of the filter block, as if it were not closed by the end(). The result is that for non-jp2 files nothing is executed, while I expected them to get to the last two steps.

from("seda:queue").routeId("science-file-persisting")
    ...
    .filter().simple("${file:ext} == 'jp2'")
        .log(LoggingLevel.INFO, "Moving ${headers.CamelFileNameOnly} to jpip stage area")
        .setBody(simple("${in.header.FileContent}"))
        .log(LoggingLevel.INFO, "IN Filter")
        .recipientList(simple("{{jpip.stage.route.path}}/?fileName=${headers.CamelFileNameOnly}"))
    .end()
    .log(LoggingLevel.INFO, "OUT Filter")
    .process(reportLogger);

I thought that maybe the end() was closing some other part of the route than the filter, and I managed to get it working by appending an extra end() after the recipientList, like this:

from("seda:queue").routeId("science-file-persisting")
    ...
    .filter().simple("${file:ext} == 'jp2'")
        .log(LoggingLevel.INFO, "Moving ${headers.CamelFileNameOnly} to jpip stage area")
        .setBody(simple("${in.header.FileContent}"))
        .log(LoggingLevel.INFO, "IN Filter")
        .recipientList(simple("{{jpip.stage.route.path}}/?fileName=${headers.CamelFileNameOnly}"))
        .end()
    .end()
    .log(LoggingLevel.INFO, "OUT Filter")
    .process(reportLogger);

I have not seen the need for an end() after recipientList mentioned in the documentation, which makes me doubt. Could someone help me understand whether this behaviour is the expected one (i.e., why the former does not work and the latter does)? Thanks a lot in advance. Cheers, Hector
Re: I urgently need Debian
You can check here for the latest ISO image available for the amd64 architecture: https://cdimage.debian.org/debian-cd/current/amd64/iso-dvd/ For a basic installation you only need ISO number 1. Then you should read https://www.debian.org/CD/faq/#record-windows to learn how to write the image to a DVD or USB stick. Finally, read this for an overview of the whole installation process: https://www.debian.org/releases/stable/i386/index.es.html Regards. On Tue, 29 Jun 2021 at 16:21, José Gonzalez (jagiale...@gmail.com) wrote: > Hi there, I've heard about Debian; I think it may be a good operating system. > But the point of this email is simply that I need the download link, because for the installation, just like with Windows, I believe you need an image to boot from USB. > Whether that's the case or not doesn't matter; > what I care about is having Debian on my PC. Can you help me, please? -- ****** Hector Colina. Linux counter id 131637 Debian user, aka e1th0r Mérida-Venezuela http://blog.hectorcolina.com Key fingerprint = E81B 8228 8919 EE27 85B7 A59B 357F 81F5 5CFC B481 Long live and prosperity
jmeter, windows authentication and script recording don't work
Hi. I'm trying to create a recording of our internal page, developed using .NET. If we use the bzm - Arrivals Thread Group component, we can do some load tests without problems, using HTTP Authorization Manager and the basic authentication mechanism. However, if we use script recording, we get a 401 error code. For script recording we're using the same HTTP Authorization Manager configuration as for bzm - Arrivals Thread Group. We're using JMeter 5.4.1, Internet Explorer and Windows 8.1. Do you have any ideas about how to deal with this issue? Best regards and thank you -- ** Hector Colina. Linux counter id 131637 Debian user, aka e1th0r Mérida-Venezuela http://blog.hectorcolina.com Key fingerprint = E81B 8228 8919 EE27 85B7 A59B 357F 81F5 5CFC B481 Long live and prosperity - To unsubscribe, e-mail: user-unsubscr...@jmeter.apache.org For additional commands, e-mail: user-h...@jmeter.apache.org
Apparmor messages on LXC container, after host upgrade to buster
Hi all, This is a copy of a message I posted to lxc-users last week; maybe more people will see it here :-) I'm getting messages like this after an upgrade of the host from stretch to buster: Jun 18 12:09:08 postgres kernel: [131022.470073] audit: type=1400 audit(1623974948.239:107): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=15558 comm="(ionclean)" flags="rw, rslave" I've seen several similar things from web searches, such as this from the lxc-users list, 5 years ago: https://lxc-users.linuxcontainers.narkive.com/3t0leW0p/apparmor-denied-messages-in-the-logs The suggestion seems to be that it doesn't matter, as long as mounts are actually working ok (all filesystems seem to be mounted). But if the mounts are working, what triggers the error? If the mounts are set up outside the container, why is the container trying to mount anything? There's nothing in /etc/fstab in the container. In case it's relevant, /var/lib/lxc//rootfs is a mount on the host, for all containers. All containers have additional mounts defined in the lxc config, and those filesystems are also mounts on the host, living under /guestfs. They're all lvm volumes, with xfs, as are the root filesystems. Any tips welcome. Cheers, Richard
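If the goal were to actually permit (rather than just ignore) that mount, a commonly suggested workaround - untested here, and the profile path is an assumption - is to add a rule matching the denied flags to a custom profile derived from lxc-container-default-cgns (e.g. under /etc/apparmor.d/lxc/ on the host, then reload AppArmor and restart the container):

```
# allow the rslave remount that systemd (ionclean's parent) performs inside the container
mount options=(rw, rslave) -> /,
```

The rule mirrors the denial line exactly (operation="mount", flags="rw, rslave", name="/"), which is why the mounts themselves appear to work: it's a mount-propagation remount of / being refused, not the filesystem mounts configured outside the container.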
Re: Web log analysis
On 27/05/21 9:55 pm, Richard Hector wrote: Hi all, I need to get a handle on what my web servers are doing. Apologies for my lack of response. Thanks for all of the useful and interesting replies. I'll look into this further later; in the meantime I think I solved my immediate needs with tools like grep :-) Cheers, Richard
Re: debian installation issue
On 22/06/21 12:54 am, Steve McIntyre wrote: [ Apologies, missed this last week... ] to...@tuxteam.de wrote: On Mon, Jun 14, 2021 at 09:20:52AM +0300, Andrei POPESCU wrote: On Fri, 11 Jun 21, 15:07:11, Greg Wooledge wrote: > > Secure Boot (Microsoft's attempt to stop you from using Linux) relies on > UEFI booting, and therefore this was one of the driving forces behind it, > but not the *only* driving force. If your machine doesn't use Secure Boot, > don't worry about it. It won't affect you. While I'm not a fan of Microsoft: https://wiki.debian.org/SecureBoot#What_is_UEFI_Secure_Boot_NOT.3F Quoting from there: "Microsoft act as a Certification Authority (CA) for SB, and they will sign programs on behalf of other trusted organisations so that their programs will also run." Now two questions: - do you know any other alternative CA besides Microsoft who is capable of effectively doing this? In a way that it'd "work" with most PC vendors? I've been in a number of discussions about this over the last few years, particularly when talking about adding arm64 Secure Boot and *maybe* finding somebody else to act as CA for that. There are a few important (but probably not well-understood) aspects of the CA role here: * the entity providing the CA needs to be stable (changing things is expensive and hard) * they need to be trustworthy - having an existing long-term business relationship with the OEMs is a major feature here * they need to be *large* - if there is a major mistake that might cause a problem on a lot of machines in production, the potential cost liability (and lawsuits) from OEMs is *huge* There are not many companies who would fit here. Intel and AMD are both very interested in enhancing trust and security at this kind of level, but have competing products and ideas, for example. Is that something that needs to be done by one company? Perhaps because of how Secure Boot is implemented?
I'd prefer to be able to add Debian's key either in addition to or instead of Microsoft's, which could also be happily installed alongside those of Intel, AMD, your favourite government security agency or whoever. And Debian can get theirs signed by whichever of those they might think is appropriate. But I want to be able to reduce that list to just Debian's, or just the EFF's, or mine - whatever combination I choose. I think that should all work ok? Changing things, rather than being expensive and hard, should just be a matter of either getting a new organisation to sign Debian's key, and/or having one of those on the list revoke its signature. As an aside, I'd like to see this with web certificates too - I want to be able to get my cert signed by LetsEncrypt _and_ whatever commercial CA or CAs I choose, so if one of them does something stupid and needs to be removed from the list of approved CAs, it doesn't break the internet, because any significant site will have its certs signed by others as well. Richard