[PATCH] ui/gtk-egl.c: fix build break
QEMU build is failing due to an unused variable. Remove it to fix the build break.

Signed-off-by: Anushree Mathur
---
 ui/gtk-egl.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/ui/gtk-egl.c b/ui/gtk-egl.c
index 0473f689c9..3177992b91 100644
--- a/ui/gtk-egl.c
+++ b/ui/gtk-egl.c
@@ -70,7 +70,6 @@ void gd_egl_draw(VirtualConsole *vc)
     QemuDmaBuf *dmabuf = vc->gfx.guest_fb.dmabuf;
 #endif
     int ww, wh, ws;
-    int fence_fd;
 
     if (!vc->gfx.gls) {
         return;
--
2.45.0
[Touch-packages] [Bug 2063072] [NEW] Audio does not play on devices until multiple restarts/reinstall
Public bug reported: On startup, no audio is able to play until I restart pulseaudio multiple times. Only then is it able to play through my speakers (Analog Stereo Output). I have headphones plugged into the Digital Stereo (HDMI) Output, and it is even less likely to work. I am required to restart pulseaudio or reinstall it until it occasionally works; even then, after it sits without playing audio for a while, it reverts back to a corrupt state where I need to restart the process to play anything. While this error is in effect, no programs that emit audio are able to play (Spotify, YouTube, etc.). This all began a couple of months ago when I misclicked a device when choosing my playback: I had selected an HDMI input that was not plugged in. From then on I've continually had this issue despite reinstalling pulseaudio. Release: Ubuntu 23.10 Pulseaudio version: 1:16.1+dfsg1-2ubuntu4.1 ProblemType: Bug DistroRelease: Ubuntu 23.10 Package: pulseaudio 1:16.1+dfsg1-2ubuntu4.1 ProcVersionSignature: Ubuntu 6.5.0-28.29-generic 6.5.13 Uname: Linux 6.5.0-28-generic x86_64 ApportVersion: 2.27.0-0ubuntu5 Architecture: amd64 CasperMD5CheckResult: unknown CurrentDesktop: KDE Date: Sun Apr 21 23:47:18 2024 InstallationDate: Installed on 2024-01-18 (94 days ago) InstallationMedia: Kubuntu 23.10 "Mantic Minotaur" - Release amd64 (20231010) SourcePackage: pulseaudio UpgradeStatus: No upgrade log present (probably fresh install) dmi.bios.date: 07/01/2021 dmi.bios.release: 3.2 dmi.bios.vendor: INSYDE Corp. 
dmi.bios.version: 03.02 dmi.board.asset.tag: * dmi.board.name: FRANBMCP06 dmi.board.vendor: Framework dmi.board.version: A6 dmi.chassis.asset.tag: FRANBMCPA612360031 dmi.chassis.type: 10 dmi.chassis.vendor: Framework dmi.chassis.version: A6 dmi.modalias: dmi:bvnINSYDECorp.:bvr03.02:bd07/01/2021:br3.2:svnFramework:pnLaptop:pvrA6:rvnFramework:rnFRANBMCP06:rvrA6:cvnFramework:ct10:cvrA6:skuFRANBMCP06: dmi.product.family: FRANBMCP dmi.product.name: Laptop dmi.product.sku: FRANBMCP06 dmi.product.version: A6 dmi.sys.vendor: Framework ** Affects: pulseaudio (Ubuntu) Importance: Undecided Status: New ** Tags: amd64 apport-bug mantic -- You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to pulseaudio in Ubuntu. https://bugs.launchpad.net/bugs/2063072 Title: Audio does not play on devices until multiple restarts/reinstall Status in pulseaudio package in Ubuntu: New
To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/pulseaudio/+bug/2063072/+subscriptions -- Mailing list: https://launchpad.net/~touch-packages Post to : touch-packages@lists.launchpad.net Unsubscribe : https://launchpad.net/~touch-packages More help : https://help.launchpad.net/ListHelp
[PHP-DEV] VCS Account Request: shivam
Maintaining infrastructure for the PHP project. -- PHP Internals - PHP Runtime Development Mailing List To unsubscribe, visit: https://www.php.net/unsub.php
[jira] [Created] (HIVE-27969) Add verbose logging for schematool and metastore service for Docker container
Akshat Mathur created HIVE-27969: Summary: Add verbose logging for schematool and metastore service for Docker container Key: HIVE-27969 URL: https://issues.apache.org/jira/browse/HIVE-27969 Project: Hive Issue Type: Improvement Reporter: Akshat Mathur Assignee: Akshat Mathur Adds the capability to print verbose logs for the schematool and metastore service inside the Docker container. Note: hiveserver2 doesn't support the verbose option. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-27963) Build failure when license-maven-plugin downloads bsd-license.php
[ https://issues.apache.org/jira/browse/HIVE-27963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17798902#comment-17798902 ] Akshat Mathur commented on HIVE-27963: -- Thanks [~zabetak], I was running it from hive-packaging; I have updated the PR. It was interesting to check out this part of the code :). Thanks again for your help. > Build failure when license-maven-plugin downloads bsd-license.php > - > > Key: HIVE-27963 > URL: https://issues.apache.org/jira/browse/HIVE-27963 > Project: Hive > Issue Type: Bug > Components: Hive > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > > Tests are failing due to the following issue: > > [ERROR] Failed to execute goal > org.codehaus.mojo:license-maven-plugin:2.1.0:download-licenses > (license-fetch) on project hive-packaging: Execution license-fetch of goal > org.codehaus.mojo:license-maven-plugin:2.1.0:download-licenses failed: URL ' > [http://www.opensource.org/licenses/bsd-license.php] > ' should belong to licenseUrlFileName having key > 'bsd-2-clause-license-bsd-2-clause.html' together with URLs ' > [https://opensource.org/licenses/BSD-2-Clause] > ' -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HIVE-27963) Fix license fetch issue with license-maven-plugin
[ https://issues.apache.org/jira/browse/HIVE-27963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17798892#comment-17798892 ] Akshat Mathur edited comment on HIVE-27963 at 12/20/23 10:01 AM: - [~zabetak] I tried adding {code:java} https?://(www\.)?opensource.org/licenses/bsd-license.php https?://(www\.)?opensource.org/licenses/BSD-2-Clause {code} Post which I started getting assembly mismatch errors: {code:java} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-assembly-plugin:3.2.0:single (assemble) on project hive-packaging: Assembly is incorrectly configured: bin: Assembly is incorrectly configured: bin: [ERROR] Assembly: bin is not configured correctly: One or more filters had unmatched criteria. Check debug log for more information. [ERROR] -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-assembly-plugin:3.2.0:single (assemble) on project hive-packaging: Assembly is incorrectly configured: bin {code} Do you have any idea around this? was (Author: JIRAUSER298271): [~zabetak] I tried adding {code:java} https?://(www\.)?opensource.org/licenses/bsd-license.php https?://(www\.)?opensource.org/licenses/BSD-2-Clause {code} Post which I started getting assembly mismatch errors: {code:java} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-assembly-plugin:3.2.0:single (assemble) on project hive-packaging: Assembly is incorrectly configured: bin: Assembly is incorrectly configured: bin: [ERROR] Assembly: bin is not configured correctly: One or more filters had unmatched criteria. Check debug log for more information. [ERROR] -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-assembly-plugin:3.2.0:single (assemble) on project hive-packaging: Assembly is incorrectly configured: bin {code} Do you have nay idea around this? 
> Fix license fetch issue with license-maven-plugin > - > > Key: HIVE-27963 > URL: https://issues.apache.org/jira/browse/HIVE-27963 > Project: Hive > Issue Type: Bug > Components: Hive > Reporter: Akshat Mathur >Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > > Tests are failing due to the following issue: > > [ERROR] Failed to execute goal > org.codehaus.mojo:license-maven-plugin:2.1.0:download-licenses > (license-fetch) on project hive-packaging: Execution license-fetch of goal > org.codehaus.mojo:license-maven-plugin:2.1.0:download-licenses failed: URL ' > [http://www.opensource.org/licenses/bsd-license.php] > ' should belong to licenseUrlFileName having key > 'bsd-2-clause-license-bsd-2-clause.html' together with URLs ' > [https://opensource.org/licenses/BSD-2-Clause] > ' -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-27963) Fix license fetch issue with license-maven-plugin
[ https://issues.apache.org/jira/browse/HIVE-27963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17798892#comment-17798892 ] Akshat Mathur commented on HIVE-27963: -- [~zabetak] I tried adding {code:java} https?://(www\.)?opensource.org/licenses/bsd-license.php https?://(www\.)?opensource.org/licenses/BSD-2-Clause {code} Post which I started getting assembly mismatch errors: {code:java} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-assembly-plugin:3.2.0:single (assemble) on project hive-packaging: Assembly is incorrectly configured: bin: Assembly is incorrectly configured: bin: [ERROR] Assembly: bin is not configured correctly: One or more filters had unmatched criteria. Check debug log for more information. [ERROR] -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-assembly-plugin:3.2.0:single (assemble) on project hive-packaging: Assembly is incorrectly configured: bin {code} Do you have any idea around this? > Fix license fetch issue with license-maven-plugin > - > > Key: HIVE-27963 > URL: https://issues.apache.org/jira/browse/HIVE-27963 > Project: Hive > Issue Type: Bug > Components: Hive > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > > Tests are failing due to the following issue: > > [ERROR] Failed to execute goal > org.codehaus.mojo:license-maven-plugin:2.1.0:download-licenses > (license-fetch) on project hive-packaging: Execution license-fetch of goal > org.codehaus.mojo:license-maven-plugin:2.1.0:download-licenses failed: URL ' > [http://www.opensource.org/licenses/bsd-license.php] > ' should belong to licenseUrlFileName having key > 'bsd-2-clause-license-bsd-2-clause.html' together with URLs ' > [https://opensource.org/licenses/BSD-2-Clause] > ' -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27963) Fix license fetch issue with license-maven-plugin
Akshat Mathur created HIVE-27963: Summary: Fix license fetch issue with license-maven-plugin Key: HIVE-27963 URL: https://issues.apache.org/jira/browse/HIVE-27963 Project: Hive Issue Type: Bug Components: Hive Reporter: Akshat Mathur Tests are failing due to the following issue: [ERROR] Failed to execute goal org.codehaus.mojo:license-maven-plugin:2.1.0:download-licenses (license-fetch) on project hive-packaging: Execution license-fetch of goal org.codehaus.mojo:license-maven-plugin:2.1.0:download-licenses failed: URL ' [http://www.opensource.org/licenses/bsd-license.php] ' should belong to licenseUrlFileName having key 'bsd-2-clause-license-bsd-2-clause.html' together with URLs ' [https://opensource.org/licenses/BSD-2-Clause] ' -- This message was sent by Atlassian Jira (v8.20.10#820010)
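A hedged sketch of the kind of plugin configuration this error points at (element names follow license-maven-plugin's download-licenses goal; the exact key and regexes shown are illustrative, not the committed fix): the `licenseUrlFileNames` map associates one canonical license file name with every URL pattern that should resolve to it, and the error above indicates the two spellings of the BSD URL were not grouped under a single key.

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>license-maven-plugin</artifactId>
  <version>2.1.0</version>
  <configuration>
    <licenseUrlFileNames>
      <!-- Illustrative sketch: both the legacy bsd-license.php URL and the
           BSD-2-Clause URL must map to the same file-name key, otherwise
           download-licenses fails with the mismatch reported above. -->
      <bsd-2-clause-license-bsd-2-clause.html>
        https?://(www\.)?opensource\.org/licenses/bsd-license\.php
        https?://(www\.)?opensource\.org/licenses/BSD-2-Clause
      </bsd-2-clause-license-bsd-2-clause.html>
    </licenseUrlFileNames>
  </configuration>
</plugin>
```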
[jira] [Assigned] (HIVE-27963) Fix license fetch issue with license-maven-plugin
[ https://issues.apache.org/jira/browse/HIVE-27963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur reassigned HIVE-27963: Assignee: Akshat Mathur > Fix license fetch issue with license-maven-plugin > - > > Key: HIVE-27963 > URL: https://issues.apache.org/jira/browse/HIVE-27963 > Project: Hive > Issue Type: Bug > Components: Hive > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > > Tests are failing due to the following issue: > > [ERROR] Failed to execute goal > org.codehaus.mojo:license-maven-plugin:2.1.0:download-licenses > (license-fetch) on project hive-packaging: Execution license-fetch of goal > org.codehaus.mojo:license-maven-plugin:2.1.0:download-licenses failed: URL ' > [http://www.opensource.org/licenses/bsd-license.php] > ' should belong to licenseUrlFileName having key > 'bsd-2-clause-license-bsd-2-clause.html' together with URLs ' > [https://opensource.org/licenses/BSD-2-Clause] > ' -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (TEZ-4525) Remove broken links from site
Akshat Mathur created TEZ-4525: -- Summary: Remove broken links from site Key: TEZ-4525 URL: https://issues.apache.org/jira/browse/TEZ-4525 Project: Apache Tez Issue Type: Improvement Components: UI Reporter: Akshat Mathur The links under the *Presentations and Talks on Tez* section for user meetup recordings are broken. The recording link [https://hortonworks.webex.com/hortonworks/ldr.php?AT=pb=MC=125516477=d147a3c924b64496] as well as the meetup group link [http://www.meetup.com/Apache-Tez-User-Group/events/130852782/] no longer exist. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27800) Metastore: Make database installation tests running on ARM chipset
[ https://issues.apache.org/jira/browse/HIVE-27800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-27800: - Labels: Arm64 (was: ) > Metastore: Make database installation tests running on ARM chipset > -- > > Key: HIVE-27800 > URL: https://issues.apache.org/jira/browse/HIVE-27800 > Project: Hive > Issue Type: Task > Components: Standalone Metastore >Reporter: Zsolt Miskolczi >Priority: Major > Labels: Arm64 > > Those tests run Docker containers on the linux/x86_64 platform, and they > cannot start a Docker image at all on an ARM-based processor. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-27803) Bump org.apache.avro:avro from 1.11.1 to 1.11.3
[ https://issues.apache.org/jira/browse/HIVE-27803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17795853#comment-17795853 ] Akshat Mathur commented on HIVE-27803: -- [~ayushtkn] as the PR is merged along with the qtests [https://github.com/apache/hive/pull/4918] should we resolve this ticket? > Bump org.apache.avro:avro from 1.11.1 to 1.11.3 > --- > > Key: HIVE-27803 > URL: https://issues.apache.org/jira/browse/HIVE-27803 > Project: Hive > Issue Type: Improvement >Reporter: Ayush Saxena >Priority: Major > > PR from *[dependabot|https://github.com/apps/dependabot]* > https://github.com/apache/hive/pull/4764 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-27935) Add qtest for Avro invalid schema and field names
[ https://issues.apache.org/jira/browse/HIVE-27935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793836#comment-17793836 ] Akshat Mathur commented on HIVE-27935: -- Thanks [~zhangbutao] for the review :) > Add qtest for Avro invalid schema and field names > - > > Key: HIVE-27935 > URL: https://issues.apache.org/jira/browse/HIVE-27935 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0-beta-1 > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Fix For: 4.1.0 > > > Add qtest to verify working of AVRO-3827 and AVRO-3820 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work started] (HIVE-27935) Add qtest for Avro invalid schema and field names
[ https://issues.apache.org/jira/browse/HIVE-27935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-27935 started by Akshat Mathur. > Add qtest for Avro invalid schema and field names > - > > Key: HIVE-27935 > URL: https://issues.apache.org/jira/browse/HIVE-27935 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0-beta-1 > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > > Add qtest to verify working of AVRO-3827 and AVRO-3820 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27935) Add qtest for Avro invalid schema and field names
Akshat Mathur created HIVE-27935: Summary: Add qtest for Avro invalid schema and field names Key: HIVE-27935 URL: https://issues.apache.org/jira/browse/HIVE-27935 Project: Hive Issue Type: Improvement Affects Versions: 4.0.0-beta-1 Reporter: Akshat Mathur Assignee: Akshat Mathur Add qtest to verify working of AVRO-3827 and AVRO-3820 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27845) Upgrade protobuf to 3.24.4 to fix CVEs
Akshat Mathur created HIVE-27845: Summary: Upgrade protobuf to 3.24.4 to fix CVEs Key: HIVE-27845 URL: https://issues.apache.org/jira/browse/HIVE-27845 Project: Hive Issue Type: Improvement Reporter: Akshat Mathur Assignee: Akshat Mathur The current protobuf-java (3.21.3) has the following CVEs. Direct vulnerabilities: [CVE-2022-3510|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3510] [CVE-2022-3509|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3509] [CVE-2022-3171|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3171] Vulnerabilities from dependencies: [CVE-2023-2976|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-2976] [CVE-2020-8908|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8908] -- This message was sent by Atlassian Jira (v8.20.10#820010)
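For illustration, a minimal sketch of what such a dependency bump typically looks like in a Maven build (the property name and its placement are assumptions; Hive's actual pom layout may differ):

```xml
<!-- Centralize the version in a property so one edit bumps every module. -->
<properties>
  <!-- Bumped from 3.21.3; 3.24.4 carries fixes for the CVEs listed above. -->
  <protobuf.version>3.24.4</protobuf.version>
</properties>

<dependencies>
  <dependency>
    <groupId>com.google.protobuf</groupId>
    <artifactId>protobuf-java</artifactId>
    <version>${protobuf.version}</version>
  </dependency>
</dependencies>
```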
[jira] [Updated] (HIVE-27752) Remove DagUtils duplicate class
[ https://issues.apache.org/jira/browse/HIVE-27752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-27752: - Target Version/s: 4.0.0 Status: Patch Available (was: In Progress) > Remove DagUtils duplicate class > --- > > Key: HIVE-27752 > URL: https://issues.apache.org/jira/browse/HIVE-27752 > Project: Hive > Issue Type: Improvement >Reporter: László Bodor > Assignee: Akshat Mathur >Priority: Minor > Labels: newbie, pull-request-available > > remove this small orphaned stuff: > https://github.com/apache/hive/blob/57c15936d7a69e215c986d62aa959e70cb352da4/ql/src/java/org/apache/hadoop/hive/ql/exec/DagUtils.java > and place method to > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/DagUtils.java -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work started] (HIVE-27752) Remove DagUtils duplicate class
[ https://issues.apache.org/jira/browse/HIVE-27752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-27752 started by Akshat Mathur. > Remove DagUtils duplicate class > --- > > Key: HIVE-27752 > URL: https://issues.apache.org/jira/browse/HIVE-27752 > Project: Hive > Issue Type: Improvement >Reporter: László Bodor > Assignee: Akshat Mathur >Priority: Minor > Labels: newbie, pull-request-available > > remove this small orphaned stuff: > https://github.com/apache/hive/blob/57c15936d7a69e215c986d62aa959e70cb352da4/ql/src/java/org/apache/hadoop/hive/ql/exec/DagUtils.java > and place method to > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/DagUtils.java -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HIVE-27752) Remove DagUtils duplicate class
[ https://issues.apache.org/jira/browse/HIVE-27752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur reassigned HIVE-27752: Assignee: Akshat Mathur > Remove DagUtils duplicate class > --- > > Key: HIVE-27752 > URL: https://issues.apache.org/jira/browse/HIVE-27752 > Project: Hive > Issue Type: Improvement >Reporter: László Bodor > Assignee: Akshat Mathur >Priority: Minor > Labels: newbie > > remove this small orphaned stuff: > https://github.com/apache/hive/blob/57c15936d7a69e215c986d62aa959e70cb352da4/ql/src/java/org/apache/hadoop/hive/ql/exec/DagUtils.java > and place method to > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/DagUtils.java -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27684) Bump org.slf4j:slf4j-api to 2.0.9
Akshat Mathur created HIVE-27684: Summary: Bump org.slf4j:slf4j-api to 2.0.9 Key: HIVE-27684 URL: https://issues.apache.org/jira/browse/HIVE-27684 Project: Hive Issue Type: Improvement Affects Versions: 4.0.0-beta-1 Reporter: Akshat Mathur Assignee: Akshat Mathur Upgrading to latest version -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PATCH] tcg/ppc: Fix race in goto_tb implementation
On 7/17/23 06:53, Jordan Niethe wrote: Commit 20b6643324 ("tcg/ppc: Reorg goto_tb implementation") modified goto_tb to ensure only a single instruction was patched, to prevent incorrect behaviour if a thread was in the middle of multiple instructions when they were replaced. However, this introduced a race between loading the jmp target into TCG_REG_TB and patching and executing the direct branch. The relevant part of the goto_tb implementation: ld TCG_REG_TB, TARGET_ADDR_LOCATION(TCG_REG_TB) patch_location: mtctr TCG_REG_TB bctr tb_target_set_jmp_target() will replace 'patch_location' with a direct branch if the target is in range. The direct branch now relies on TCG_REG_TB being set up correctly by the ld. Prior to this commit, multiple instructions were patched in for the direct branch case; these instructions would initialise TCG_REG_TB to the same value as the branch target. Imagine the following sequence: 1) Thread A is executing the goto_tb sequence and loads the jmp target into TCG_REG_TB. 2) Thread B updates the jmp target address and calls tb_target_set_jmp_target(). This patches a new direct branch into the goto_tb sequence. 3) Thread A executes the newly patched direct branch. The value in TCG_REG_TB still contains the old jmp target. TCG_REG_TB MUST contain the translation block's tc.ptr. Execution will eventually crash after performing memory accesses generated from a faulty value in TCG_REG_TB. This presents as segfaults or illegal instruction exceptions. Do not revert commit 20b6643324, as it did fix a different race condition. Instead, remove the direct branch optimization and always use indirect branches. The direct branch optimization can be re-added later with a race-free sequence. Gitlab issue: https://gitlab.com/qemu-project/qemu/-/issues/1726 Fixes: 20b6643324 ("tcg/ppc: Reorg goto_tb implementation") Reported-by: Anushree Mathur Co-developed-by: Benjamin Gray Signed-off-by: Jordan Niethe I tested with -smp 2/4/8/16 and it works fine with all.
Tested-by: Anushree Mathur Thanks, Anushree-Mathur --- tcg/ppc/tcg-target.c.inc | 8 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index 8d6899cf40..a7323f479b 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -2533,11 +2533,10 @@ static void tcg_out_goto_tb(TCGContext *s, int which) ptrdiff_t offset = tcg_tbrel_diff(s, (void *)ptr); tcg_out_mem_long(s, LD, LDX, TCG_REG_TB, TCG_REG_TB, offset); -/* Direct branch will be patched by tb_target_set_jmp_target. */ +/* TODO: Use direct branches when possible. */ set_jmp_insn_offset(s, which); tcg_out32(s, MTSPR | RS(TCG_REG_TB) | CTR); -/* When branch is out of range, fall through to indirect. */ tcg_out32(s, BCCTR | BO_ALWAYS); /* For the unlinked case, need to reset TCG_REG_TB. */ @@ -2565,10 +2564,11 @@ void tb_target_set_jmp_target(const TranslationBlock *tb, int n, intptr_t diff = addr - jmp_rx; tcg_insn_unit insn; +if (USE_REG_TB) +return; + if (in_range_b(diff)) { insn = B | (diff & 0x3fc); -} else if (USE_REG_TB) { -insn = MTSPR | RS(TCG_REG_TB) | CTR; } else { insn = NOP; }
[jira] [Updated] (HIVE-27502) Fix flaky test HiveKafkaProducerTest
[ https://issues.apache.org/jira/browse/HIVE-27502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-27502: - Description: HiveKafkaProducerTest was marked flaky in HIVE-23693. After the Kafka version downgrade in HIVE-27475, the flaky test started passing ([http://ci.hive.apache.org/job/hive-flaky-check/717/]), hence the test is being re-enabled. was:HiveKafkaProducerTest was marked flaky in HIVE-23693, need fix > Fix flaky test HiveKafkaProducerTest > > > Key: HIVE-27502 > URL: https://issues.apache.org/jira/browse/HIVE-27502 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0-alpha-2 > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > > HiveKafkaProducerTest was marked flaky in HIVE-23693. After the Kafka > version downgrade in HIVE-27475, the flaky test started passing > ([http://ci.hive.apache.org/job/hive-flaky-check/717/]), hence the test is being re-enabled. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HIVE-27502) Fix flaky test HiveKafkaProducerTest
[ https://issues.apache.org/jira/browse/HIVE-27502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur reassigned HIVE-27502: Assignee: Akshat Mathur > Fix flaky test HiveKafkaProducerTest > > > Key: HIVE-27502 > URL: https://issues.apache.org/jira/browse/HIVE-27502 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0-alpha-2 > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > > HiveKafkaProducerTest was marked flaky in HIVE-23693 and needs a fix -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27502) Fix flaky test HiveKafkaProducerTest
Akshat Mathur created HIVE-27502: Summary: Fix flaky test HiveKafkaProducerTest Key: HIVE-27502 URL: https://issues.apache.org/jira/browse/HIVE-27502 Project: Hive Issue Type: Bug Affects Versions: 4.0.0-alpha-2 Reporter: Akshat Mathur HiveKafkaProducerTest was marked flaky in HIVE-23693 and needs a fix -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: qemu-system-ppc64 option -smp 2 broken with commit 20b6643324a79860dcdfe811ffe4a79942bca21e
Hi Alex, On 6/23/23 20:52, Alex Bennée wrote: Cédric Le Goater writes: Hello Anushree, On 6/23/23 13:09, Anushree Mathur wrote: Hi everyone, I was trying to boot rhel9.3 image with upstream qemu-system-ppc64 -smp 2 option and observed a segfault (qemu crash). qemu command line used: qemu-system-ppc64 -name Rhel9.3.ppc64le -smp 2 -m 16G -vga none -nographic -machine pseries -cpu POWER10 -accel tcg -device virtio-scsi-pci -drive file=/home/rh93.qcow2,if=none,format=qcow2,id=hd0 -device scsi-hd,drive=hd0 -boot c After doing a git bisect, I found the first bad commit which introduced this issue is below: Could you please open a gitlab issue on QEMU project ? https://gitlab.com/qemu-project/qemu/-/issues Is it broken generated code that faults or does the goto_tb code break the execution sequence in some subtle way further down the line? If you can isolate the guest address the output from: -dfilter 0xBADADDR+0x100 -d in_asm,op,out_asm I tried as suggested above but didn't get much info collected. I have shared my observation on the gitlab issue page. https://gitlab.com/qemu-project/qemu/-/issues/1726 Thanks, Anushree-Mathur would be useful for the bug report. Although conceivably the out_asm output might make sense at translation time and then be broken when it is patched. Having rr on power would be really useful to debug this sort of thing. Thanks, C. [qemu]# git bisect good 20b6643324a79860dcdfe811ffe4a79942bca21e is the first bad commit commit 20b6643324a79860dcdfe811ffe4a79942bca21e Author: Richard Henderson Date: Mon Dec 5 17:45:02 2022 -0600 tcg/ppc: Reorg goto_tb implementation The old ppc64 implementation replaces 2 or 4 insns, which leaves a race condition in which a thread could be stopped at a PC in the middle of the sequence, and when restarted does not see the complete address computation and branches to nowhere. The new implemetation replaces only one insn, swapping between b and mtctr r31 falling through to a general-case indirect branch. 
Reviewed-by: Alex Bennée Signed-off-by: Richard Henderson tcg/ppc/tcg-target.c.inc | 152 +-- tcg/ppc/tcg-target.h | 3 +- 2 files changed, 41 insertions(+), 114 deletions(-) [qemu]# Can someone please take a look and suggest a fix to resolve this issue? Thanks in advance. Regards, Anushree-Mathur
[jira] [Updated] (HIVE-27475) Revert Kafka version to stabilise Kafka handler
[ https://issues.apache.org/jira/browse/HIVE-27475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-27475: - Description: It was needed to shade a version of zstd-jni that is compatible with parquet in hive-ql after upgrading the parquet version downstream, otherwise Tez couldn't work with parquet and zstd encryption. Parquet and kafka-client both use zstd-jni. In hive-ql, zstd-jni comes in as a transitive dependency from kafka-client. The zstd-jni version in kafka-client 2.5.0 was not compatible with parquet, so in [PR-4082|https://github.com/apache/hive/pull/4082] kafka-client was upgraded to the latest version, which used a zstd-jni version close to the one in parquet. While the upgrade fixed the zstd-jni issue, it introduced multiple compatibility issues in the Kafka handler, and due to a lack of test cases and disabled tests these issues went unidentified. With the recent refactor in [HIVE-27402|http://example.com/], the kafka-client dependency was moved out of hive-ql, which fixed the original zstd-jni issue (confirmed by [~difin]). Hence, it makes more sense to downgrade the Kafka version back to 2.5.0, stabilize the Kafka handler, enable the tests, and then upgrade. Ref to the discussion can be found here: [#4436|https://github.com/apache/hive/pull/4436] was: It was needed to shade a version of zstd-jni that is compatible with parquet in hive-ql after upgrading parquet version dowstream, otherwise Tez couldn't work with parquet and zstd encryption. Parquet and kafka-client both use zstd-jni. In hive-ql, zstd-jni is coming as a transitive dependency from kafka-client. The zstd-jni version in kafka-client 2.5.0 was not compatible with parquet, so in [PR-4082|https://github.com/apache/hive/pull/4082] kafka-client was upgraded to latest version which used zstd-jni version close to the version in parquet. While the upgrade fixed the zstd-ini issue, It introduced multiple compatibility issue in the Kafka-handler.
With the recent refactor in [HIVE-27402|http://example.com/], Kafka-client dependency was moved out of hive-ql, Which fixed the original zstd-ini issue.(confirmed by [~difin]) Hence , It makes more sense to downgrade kafka versions back to 2.5.0, stabalize kafka-handler, enable test and then upgrade. Ref to discussion can be found here: [#4436|https://github.com/apache/hive/pull/4436] > Revert Kafka version to stabilise Kafka handler > > > Key: HIVE-27475 > URL: https://issues.apache.org/jira/browse/HIVE-27475 > Project: Hive > Issue Type: Task > Components: kafka integration >Affects Versions: 4.0.0-alpha-2 > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > > It was needed to shade a version of zstd-jni that is compatible with parquet > in hive-ql after upgrading the parquet version downstream, otherwise Tez couldn't > work with parquet and zstd encryption. Parquet and kafka-client both use > zstd-jni. In hive-ql, zstd-jni comes in as a transitive dependency from > kafka-client. The zstd-jni version in kafka-client 2.5.0 was not compatible > with parquet, so in [PR-4082|https://github.com/apache/hive/pull/4082] > kafka-client was upgraded to the latest version, which used a zstd-jni version close > to the one in parquet. > While the upgrade fixed the zstd-jni issue, it introduced multiple > compatibility issues in the Kafka handler, and due to a lack of test cases and > disabled tests these issues went unidentified. > With the recent refactor in [HIVE-27402|http://example.com/], the kafka-client > dependency was moved out of hive-ql, which fixed the original zstd-jni > issue (confirmed by [~difin]). > Hence, it makes more sense to downgrade the Kafka version back to 2.5.0, > stabilize the Kafka handler, enable the tests, and then upgrade. > Ref to the discussion can be found here: > [#4436|https://github.com/apache/hive/pull/4436] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27475) Revert Kafka version to stabilise Kafka handler
[ https://issues.apache.org/jira/browse/HIVE-27475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-27475: - Description: It was needed to shade a version of zstd-jni that is compatible with parquet in hive-ql after upgrading the parquet version downstream, otherwise Tez couldn't work with parquet and zstd encryption. Parquet and kafka-client both use zstd-jni. In hive-ql, zstd-jni comes in as a transitive dependency from kafka-client. The zstd-jni version in kafka-client 2.5.0 was not compatible with parquet, so in [PR-4082|https://github.com/apache/hive/pull/4082] kafka-client was upgraded to the latest version, which used a zstd-jni version close to the one in parquet. While the upgrade fixed the zstd-jni issue, it introduced multiple compatibility issues in the Kafka handler. With the recent refactor in [HIVE-27402|http://example.com/], the kafka-client dependency was moved out of hive-ql, which fixed the original zstd-jni issue (confirmed by [~difin]). Hence, it makes more sense to downgrade the Kafka version back to 2.5.0, stabilize the Kafka handler, enable the tests, and then upgrade. Ref to the discussion can be found here: [#4436|https://github.com/apache/hive/pull/4436] was: It was needed to shade a version of zstd-jni that is compatible with parquet in hive-ql after upgrading parquet version dowstream, otherwise Tez couldn't work with parquet and zstd encryption. Parquet and kafka-client both use zstd-jni. In hive-ql, zstd-jni is coming as a transitive dependency from kafka-client. The zstd-jni version in kafka-client 2.5.0 was not compatible with parquet, so in [PR-4082|[http://example.com|https://github.com/apache/hive/pull/4082]] kafka-client was upgraded to latest version which used zstd-jni version close to the version in parquet. While the upgrade fixed the zstd-ini issue, It introduced multiple compatibility issue in the Kafka-handler.
With the recent refactor in [HIVE-27402|http://example.com], Kafka-client dependency was moved out of hive-ql, Which fixed the original zstd-ini issue.(confirmed by [~difin]) Hence , It makes more sense to downgrade kafka versions back to 2.5.0, stabalize kafka-handler, enable test and then upgrade. > Revert Kafka version to stabilise Kafka handler > > > Key: HIVE-27475 > URL: https://issues.apache.org/jira/browse/HIVE-27475 > Project: Hive > Issue Type: Task > Components: kafka integration >Affects Versions: 4.0.0-alpha-2 > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > > It was needed to shade a version of zstd-jni that is compatible with parquet > in hive-ql after upgrading the parquet version downstream, otherwise Tez couldn't > work with parquet and zstd encryption. Parquet and kafka-client both use > zstd-jni. In hive-ql, zstd-jni comes in as a transitive dependency from > kafka-client. The zstd-jni version in kafka-client 2.5.0 was not compatible > with parquet, so in [PR-4082|https://github.com/apache/hive/pull/4082] > kafka-client was upgraded to the latest version, which used a zstd-jni version close > to the one in parquet. > While the upgrade fixed the zstd-jni issue, it introduced multiple > compatibility issues in the Kafka handler. > With the recent refactor in [HIVE-27402|http://example.com/], the kafka-client > dependency was moved out of hive-ql, which fixed the original zstd-jni > issue (confirmed by [~difin]). > Hence, it makes more sense to downgrade the Kafka version back to 2.5.0, > stabilize the Kafka handler, enable the tests, and then upgrade. > Ref to the discussion can be found here: > [#4436|https://github.com/apache/hive/pull/4436] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27475) Revert Kafka version to stabilise Kafka handler
[ https://issues.apache.org/jira/browse/HIVE-27475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-27475: - Description: It was needed to shade a version of zstd-jni that is compatible with parquet in hive-ql after upgrading the parquet version downstream, otherwise Tez couldn't work with parquet and zstd encryption. Parquet and kafka-client both use zstd-jni. In hive-ql, zstd-jni comes in as a transitive dependency from kafka-client. The zstd-jni version in kafka-client 2.5.0 was not compatible with parquet, so in [PR-4082|[http://example.com|https://github.com/apache/hive/pull/4082]] kafka-client was upgraded to the latest version, which used a zstd-jni version close to the one in parquet. While the upgrade fixed the zstd-jni issue, it introduced multiple compatibility issues in the Kafka handler. With the recent refactor in [HIVE-27402|http://example.com], the kafka-client dependency was moved out of hive-ql, which fixed the original zstd-jni issue (confirmed by [~difin]). Hence, it makes more sense to downgrade the Kafka version back to 2.5.0, stabilize the Kafka handler, enable the tests, and then upgrade. was: It was needed to shade a version of zstd-jni that is compatible with parquet in hive-ql after upgrading parquet version dowstream, otherwise Tez couldn't work with parquet and zstd encryption. Parquet and kafka-client both use zstd-jni. In hive-ql, zstd-jni is coming as a transitive dependency from kafka-client. The zstd-jni version in kafka-client 2.5.0 was not compatible with parquet, so in PR-4082 kafka-client was upgraded to latest version which used zstd-jni version close to the version in parquet. While the upgrade fixed the zstd-ini issue, It introduced multiple compatibility issue in the Kafka-handler. It makes more sense to downgrade kafka versions back to 2.5.0, stabalize kafka-handler, enable test and then upgrade.
> Revert Kafka version to stabilise Kafka handler > > > Key: HIVE-27475 > URL: https://issues.apache.org/jira/browse/HIVE-27475 > Project: Hive > Issue Type: Task > Components: kafka integration >Affects Versions: 4.0.0-alpha-2 > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > > It was needed to shade a version of zstd-jni that is compatible with parquet > in hive-ql after upgrading the parquet version downstream, otherwise Tez couldn't > work with parquet and zstd encryption. Parquet and kafka-client both use > zstd-jni. In hive-ql, zstd-jni comes in as a transitive dependency from > kafka-client. The zstd-jni version in kafka-client 2.5.0 was not compatible > with parquet, so in > [PR-4082|[http://example.com|https://github.com/apache/hive/pull/4082]] > kafka-client was upgraded to the latest version, which used a zstd-jni version close > to the one in parquet. > While the upgrade fixed the zstd-jni issue, it introduced multiple > compatibility issues in the Kafka handler. > With the recent refactor in [HIVE-27402|http://example.com], the kafka-client > dependency was moved out of hive-ql, which fixed the original zstd-jni > issue (confirmed by [~difin]). > Hence, it makes more sense to downgrade the Kafka version back to 2.5.0, > stabilize the Kafka handler, enable the tests, and then upgrade. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27475) Revert Kafka version to stabilise Kafka handler
[ https://issues.apache.org/jira/browse/HIVE-27475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-27475: - Description: It was needed to shade a version of zstd-jni that is compatible with parquet in hive-ql after upgrading the parquet version downstream, otherwise Tez couldn't work with parquet and zstd encryption. Parquet and kafka-client both use zstd-jni. In hive-ql, zstd-jni comes in as a transitive dependency from kafka-client. The zstd-jni version in kafka-client 2.5.0 was not compatible with parquet, so in PR-4082 kafka-client was upgraded to the latest version, which used a zstd-jni version close to the one in parquet. While the upgrade fixed the zstd-jni issue, it introduced multiple compatibility issues in the Kafka handler. It makes more sense to downgrade the Kafka version back to 2.5.0, stabilize the Kafka handler, enable the tests, and then upgrade. was: Based on the discussion on [4436|https://github.com/apache/hive/pull/4436] It makes more sense to downgrade kafka versions back to 2.5.0, stabalize kafka-handler, enable test and then upgrade. > Revert Kafka version to stabilise Kafka handler > > > Key: HIVE-27475 > URL: https://issues.apache.org/jira/browse/HIVE-27475 > Project: Hive > Issue Type: Task > Components: kafka integration >Affects Versions: 4.0.0-alpha-2 > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > > It was needed to shade a version of zstd-jni that is compatible with parquet > in hive-ql after upgrading the parquet version downstream, otherwise Tez couldn't > work with parquet and zstd encryption. Parquet and kafka-client both use > zstd-jni. In hive-ql, zstd-jni comes in as a transitive dependency from > kafka-client. The zstd-jni version in kafka-client 2.5.0 was not compatible > with parquet, so in PR-4082 kafka-client was upgraded to the latest version, which > used a zstd-jni version close to the one in parquet.
> While the upgrade fixed the zstd-jni issue, it introduced multiple > compatibility issues in the Kafka handler. > It makes more sense to downgrade the Kafka version back to 2.5.0, stabilize > the Kafka handler, enable the tests, and then upgrade. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27475) Revert Kafka version to stabilise Kafka handler
Akshat Mathur created HIVE-27475: Summary: Revert Kafka version to stabilise Kafka handler Key: HIVE-27475 URL: https://issues.apache.org/jira/browse/HIVE-27475 Project: Hive Issue Type: Task Components: kafka integration Affects Versions: 4.0.0-alpha-2 Reporter: Akshat Mathur Based on the discussion on [4436|https://github.com/apache/hive/pull/4436], it makes more sense to downgrade the Kafka version back to 2.5.0, stabilize the Kafka handler, enable the tests, and then upgrade. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HIVE-27475) Revert Kafka version to stabilise Kafka handler
[ https://issues.apache.org/jira/browse/HIVE-27475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur reassigned HIVE-27475: Assignee: Akshat Mathur > Revert Kafka version to stabilise Kafka handler > > > Key: HIVE-27475 > URL: https://issues.apache.org/jira/browse/HIVE-27475 > Project: Hive > Issue Type: Task > Components: kafka integration >Affects Versions: 4.0.0-alpha-2 > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > > Based on the discussion on [4436|https://github.com/apache/hive/pull/4436], > it makes more sense to downgrade the Kafka version back to 2.5.0, stabilize > the Kafka handler, enable the tests, and then upgrade. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PATCH 0/4] target/ppc: Fixes for instruction-related
On 6/20/23 18:40, Nicholas Piggin wrote: Because they got more complexities than I first thought, these patches are broken out from the bigger series here: https://lists.gnu.org/archive/html/qemu-ppc/2023-05/msg00425.html Since then I fixed the --disable-tcg compile bug reported by Anushree hopefully. Also added a workaround for KVM so injected interrupts wouldn't attempt to find the prefix bit setting. I don't know how much that is really needed, but injection callers would have to set it one way or another if we need to add it. Thanks, Nick Nicholas Piggin (4): target/ppc: Fix instruction loading endianness in alignment interrupt target/ppc: Change partition-scope translate interface target/ppc: Add SRR1 prefix indication to interrupt handlers target/ppc: Implement HEIR SPR target/ppc/cpu.h | 1 + target/ppc/cpu_init.c| 23 target/ppc/excp_helper.c | 110 ++- target/ppc/mmu-radix64.c | 38 ++ 4 files changed, 159 insertions(+), 13 deletions(-) Hey Nick, I tried this patch-set and the compilation of qemu with --disable-tcg parameter happened successfully! Thanks & Regards, Anushree-Mathur
Re: qemu-system-ppc64 option -smp 2 broken with commit 20b6643324a79860dcdfe811ffe4a79942bca21e
On 6/23/23 19:16, Cédric Le Goater wrote: Hello Anushree, On 6/23/23 13:09, Anushree Mathur wrote: Hi everyone, I was trying to boot rhel9.3 image with upstream qemu-system-ppc64 -smp 2 option and observed a segfault (qemu crash). qemu command line used: qemu-system-ppc64 -name Rhel9.3.ppc64le -smp 2 -m 16G -vga none -nographic -machine pseries -cpu POWER10 -accel tcg -device virtio-scsi-pci -drive file=/home/rh93.qcow2,if=none,format=qcow2,id=hd0 -device scsi-hd,drive=hd0 -boot c After doing a git bisect, I found the first bad commit which introduced this issue is below: Could you please open a gitlab issue on QEMU project ? https://gitlab.com/qemu-project/qemu/-/issues Thanks, C. [qemu]# git bisect good 20b6643324a79860dcdfe811ffe4a79942bca21e is the first bad commit commit 20b6643324a79860dcdfe811ffe4a79942bca21e Author: Richard Henderson Date: Mon Dec 5 17:45:02 2022 -0600 tcg/ppc: Reorg goto_tb implementation The old ppc64 implementation replaces 2 or 4 insns, which leaves a race condition in which a thread could be stopped at a PC in the middle of the sequence, and when restarted does not see the complete address computation and branches to nowhere. The new implemetation replaces only one insn, swapping between b and mtctr r31 falling through to a general-case indirect branch. Reviewed-by: Alex Bennée Signed-off-by: Richard Henderson tcg/ppc/tcg-target.c.inc | 152 +-- tcg/ppc/tcg-target.h | 3 +- 2 files changed, 41 insertions(+), 114 deletions(-) [qemu]# Can someone please take a look and suggest a fix to resolve this issue? Thanks in advance. Regards, Anushree-Mathur Hello Cedric, As per your mail, I have created the gitlab issue https://gitlab.com/qemu-project/qemu/-/issues/1726. Thanks & Regards, Anushree-Mathur
qemu-system-ppc64 option -smp 2 broken with commit 20b6643324a79860dcdfe811ffe4a79942bca21e
Hi everyone, I was trying to boot rhel9.3 image with upstream qemu-system-ppc64 -smp 2 option and observed a segfault (qemu crash). qemu command line used: qemu-system-ppc64 -name Rhel9.3.ppc64le -smp 2 -m 16G -vga none -nographic -machine pseries -cpu POWER10 -accel tcg -device virtio-scsi-pci -drive file=/home/rh93.qcow2,if=none,format=qcow2,id=hd0 -device scsi-hd,drive=hd0 -boot c After doing a git bisect, I found the first bad commit which introduced this issue is below: [qemu]# git bisect good 20b6643324a79860dcdfe811ffe4a79942bca21e is the first bad commit commit 20b6643324a79860dcdfe811ffe4a79942bca21e Author: Richard Henderson Date: Mon Dec 5 17:45:02 2022 -0600 tcg/ppc: Reorg goto_tb implementation The old ppc64 implementation replaces 2 or 4 insns, which leaves a race condition in which a thread could be stopped at a PC in the middle of the sequence, and when restarted does not see the complete address computation and branches to nowhere. The new implemetation replaces only one insn, swapping between b and mtctr r31 falling through to a general-case indirect branch. Reviewed-by: Alex Bennée Signed-off-by: Richard Henderson tcg/ppc/tcg-target.c.inc | 152 +-- tcg/ppc/tcg-target.h | 3 +- 2 files changed, 41 insertions(+), 114 deletions(-) [qemu]# Can someone please take a look and suggest a fix to resolve this issue? Thanks in advance. Regards, Anushree-Mathur
[jira] [Assigned] (HIVE-27451) Upgrade Kafka dependencies to latest 3.5.0
[ https://issues.apache.org/jira/browse/HIVE-27451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur reassigned HIVE-27451: Assignee: Akshat Mathur > Upgrade Kafka dependencies to latest 3.5.0 > -- > > Key: HIVE-27451 > URL: https://issues.apache.org/jira/browse/HIVE-27451 > Project: Hive > Issue Type: Improvement > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > > Kafka has released 3.5.0; Hive is currently using 3.4.0. > Upgrading to the latest. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27451) Upgrade Kafka dependencies to latest 3.5.0
Akshat Mathur created HIVE-27451: Summary: Upgrade Kafka dependencies to latest 3.5.0 Key: HIVE-27451 URL: https://issues.apache.org/jira/browse/HIVE-27451 Project: Hive Issue Type: Improvement Reporter: Akshat Mathur Kafka has released 3.5.0; Hive is currently using 3.4.0. Upgrading to the latest. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HIVE-27449) Fix flaky test KafkaRecordIteratorTest
[ https://issues.apache.org/jira/browse/HIVE-27449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur reassigned HIVE-27449: Assignee: Akshat Mathur > Fix flaky test KafkaRecordIteratorTest > -- > > Key: HIVE-27449 > URL: https://issues.apache.org/jira/browse/HIVE-27449 > Project: Hive > Issue Type: Bug > Components: kafka integration >Affects Versions: 4.0.0-alpha-2 > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > > KafkaRecordIteratorTest was disabled in > [HIVE-23838|https://issues.apache.org/jira/browse/HIVE-23838]; > fix and enable it again. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27449) Fix flaky test KafkaRecordIteratorTest
Akshat Mathur created HIVE-27449: Summary: Fix flaky test KafkaRecordIteratorTest Key: HIVE-27449 URL: https://issues.apache.org/jira/browse/HIVE-27449 Project: Hive Issue Type: Bug Components: kafka integration Affects Versions: 4.0.0-alpha-2 Reporter: Akshat Mathur KafkaRecordIteratorTest was disabled in [HIVE-23838|https://issues.apache.org/jira/browse/HIVE-23838]; fix and enable it again. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PATCH v4 1/6] target/ppc: Fix instruction loading endianness in alignment interrupt
On 6/15/23 08:21, Nicholas Piggin wrote: On Wed Jun 14, 2023 at 3:51 PM AEST, Anushree Mathur wrote: On 5/30/23 18:55, Nicholas Piggin wrote: powerpc ifetch endianness depends on MSR[LE] so it has to byteswap after cpu_ldl_code(). This corrects DSISR bits in alignment interrupts when running in little endian mode. Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- target/ppc/excp_helper.c | 22 +- 1 file changed, 21 insertions(+), 1 deletion(-) diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c index c13f2afa04..0274617b4a 100644 --- a/target/ppc/excp_helper.c +++ b/target/ppc/excp_helper.c @@ -133,6 +133,26 @@ static void dump_hcall(CPUPPCState *env) env->nip); } +#ifdef CONFIG_TCG +/* Return true iff byteswap is needed in a scalar memop */ +static inline bool need_byteswap(CPUArchState *env) +{ +/* SOFTMMU builds TARGET_BIG_ENDIAN. Need to swap when MSR[LE] is set */ +return !!(env->msr & ((target_ulong)1 << MSR_LE)); +} + +static uint32_t ppc_ldl_code(CPUArchState *env, abi_ptr addr) This hunk fails to compile with configure --disable-tcg I don't see how since it's inside CONFIG_TCG. Seems to work here. You don't have an old version of the patch applied? What configure options exactly? Thanks, Nick The configure options I used are: ./configure --target-list=ppc64-softmmu --disable-tcg --prefix=/usr I applied the latest patches but still I was seeing the same issue. Can you check this once! Thanks, Anushree-Mathur
Re: [PATCH v4 1/6] target/ppc: Fix instruction loading endianness in alignment interrupt
On 5/30/23 18:55, Nicholas Piggin wrote: powerpc ifetch endianness depends on MSR[LE] so it has to byteswap after cpu_ldl_code(). This corrects DSISR bits in alignment interrupts when running in little endian mode. Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- target/ppc/excp_helper.c | 22 +- 1 file changed, 21 insertions(+), 1 deletion(-) diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c index c13f2afa04..0274617b4a 100644 --- a/target/ppc/excp_helper.c +++ b/target/ppc/excp_helper.c @@ -133,6 +133,26 @@ static void dump_hcall(CPUPPCState *env) env->nip); } +#ifdef CONFIG_TCG +/* Return true iff byteswap is needed in a scalar memop */ +static inline bool need_byteswap(CPUArchState *env) +{ +/* SOFTMMU builds TARGET_BIG_ENDIAN. Need to swap when MSR[LE] is set */ +return !!(env->msr & ((target_ulong)1 << MSR_LE)); +} + +static uint32_t ppc_ldl_code(CPUArchState *env, abi_ptr addr) This hunk fails to compile with configure --disable-tcg FAILED: libqemu-ppc64-softmmu.fa.p/target_ppc_excp_helper.c.o cc -m64 -mlittle-endian -Ilibqemu-ppc64-softmmu.fa.p -I. -I.. -Itarget/ppc -I../target/ppc -I../dtc/libfdt -Iqapi -Itrace -Iui -Iui/shader -I/usr/include/pixman-1 -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -I/usr/include/sysprof-4 -fdiagnostics-color=auto -Wall -Winvalid-pch -Werror -std=gnu11 -O2 -g -fstack-protector-strong -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -Wundef -Wwrite-strings -Wmissing-prototypes -Wstrict-prototypes -Wredundant-decls -Wold-style-declaration -Wold-style-definition -Wtype-limits -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wempty-body -Wnested-externs -Wendif-labels -Wexpansion-to-defined -Wimplicit-fallthrough=2 -Wmissing-format-attribute -Wno-missing-include-dirs -Wno-shift-negative-value -Wno-psabi -isystem /home/Shreya/qemu/linux-headers -isystem linux-headers -iquote . 
-iquote /home/Shreya/qemu -iquote /home/Shreya/qemu/include -pthread -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-strict-aliasing -fno-common -fwrapv -fPIE -isystem../linux-headers -isystemlinux-headers -DNEED_CPU_H '-DCONFIG_TARGET="ppc64-softmmu-config-target.h"' '-DCONFIG_DEVICES="ppc64-softmmu-config-devices.h"' -MD -MQ libqemu-ppc64-softmmu.fa.p/target_ppc_excp_helper.c.o -MF libqemu-ppc64-softmmu.fa.p/target_ppc_excp_helper.c.o.d -o libqemu-ppc64-softmmu.fa.p/target_ppc_excp_helper.c.o -c ../target/ppc/excp_helper.c ../target/ppc/excp_helper.c:143:49: error: unknown type name ‘abi_ptr’; did you mean ‘si_ptr’? 143 | static uint32_t ppc_ldl_code(CPUArchState *env, abi_ptr addr) | ^~~ | si_ptr ../target/ppc/excp_helper.c: In function ‘powerpc_excp_books’: ../target/ppc/excp_helper.c:1416:16: error: implicit declaration of function ‘ppc_ldl_code’ [-Werror=implicit-function-declaration] 1416 | insn = ppc_ldl_code(env, env->nip); | ^~~~ ../target/ppc/excp_helper.c:1416:16: error: nested extern declaration of ‘ppc_ldl_code’ [-Werror=nested-externs] cc1: all warnings being treated as errors +{ +uint32_t insn = cpu_ldl_code(env, addr); + +if (need_byteswap(env)) { +insn = bswap32(insn); +} + +return insn; +} +#endif + static void ppc_excp_debug_sw_tlb(CPUPPCState *env, int excp) { const char *es; @@ -3100,7 +3120,7 @@ void ppc_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr, /* Restore state and reload the insn we executed, for filling in DSISR. */ cpu_restore_state(cs, retaddr); -insn = cpu_ldl_code(env, env->nip); +insn = ppc_ldl_code(env, env->nip); switch (env->mmu_model) { case POWERPC_MMU_SOFT_4xx:
[jira] [Assigned] (HIVE-27424) Add mvn dependency:tree run in github actions
[ https://issues.apache.org/jira/browse/HIVE-27424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur reassigned HIVE-27424: Assignee: Akshat Mathur > Add mvn dependency:tree run in github actions > - > > Key: HIVE-27424 > URL: https://issues.apache.org/jira/browse/HIVE-27424 > Project: Hive > Issue Type: Improvement > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > > From the discussion on [#4396|https://github.com/apache/hive/pull/4396] > Run mvn dependency:tree in github actions -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27424) Add mvn dependency:tree run in github actions
[ https://issues.apache.org/jira/browse/HIVE-27424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-27424: - Description: >From the discussion on [#4396|https://github.com/apache/hive/pull/4396] Run mvn dependency:tree in github actions was: >From the discussion on [https://github.com/apache/hive/pull/4396|#4396] Run mvn dependency:tree in github actions > Add mvn dependency:tree run in github actions > - > > Key: HIVE-27424 > URL: https://issues.apache.org/jira/browse/HIVE-27424 > Project: Hive > Issue Type: Improvement > Reporter: Akshat Mathur >Priority: Major > > From the discussion on [#4396|https://github.com/apache/hive/pull/4396] > Run mvn dependency:tree in github actions -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27424) Add mvn dependency:tree run in github actions
[ https://issues.apache.org/jira/browse/HIVE-27424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-27424: - Description: >From the discussion on [https://github.com/apache/hive/pull/4396|#4396] Run mvn dependency:tree in github actions was: >From the discussion on [#https://github.com/apache/hive/pull/4396] Run mvn dependency:tree in github actions > Add mvn dependency:tree run in github actions > - > > Key: HIVE-27424 > URL: https://issues.apache.org/jira/browse/HIVE-27424 > Project: Hive > Issue Type: Improvement > Reporter: Akshat Mathur >Priority: Major > > From the discussion on [https://github.com/apache/hive/pull/4396|#4396] > Run mvn dependency:tree in github actions -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27424) Add mvn dependency:tree run in github actions
Akshat Mathur created HIVE-27424: Summary: Add mvn dependency:tree run in github actions Key: HIVE-27424 URL: https://issues.apache.org/jira/browse/HIVE-27424 Project: Hive Issue Type: Improvement Reporter: Akshat Mathur >From the discussion on [#https://github.com/apache/hive/pull/4396] Run mvn dependency:tree in github actions -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27419) Update PR template for dependency upgrade changes
Akshat Mathur created HIVE-27419: Summary: Update PR template for dependency upgrade changes Key: HIVE-27419 URL: https://issues.apache.org/jira/browse/HIVE-27419 Project: Hive Issue Type: Improvement Reporter: Akshat Mathur Assignee: Akshat Mathur This change is added to aid the reviewers verify that the dependency is completely upgraded and older jars are excluded if required. This help better understand the changes beyond just changing version numbers -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-27413) Bump jettison from 1.5.3 to 1.5.4
[ https://issues.apache.org/jira/browse/HIVE-27413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729639#comment-17729639 ] Akshat Mathur commented on HIVE-27413: -- :) > Bump jettison from 1.5.3 to 1.5.4 > - > > Key: HIVE-27413 > URL: https://issues.apache.org/jira/browse/HIVE-27413 > Project: Hive > Issue Type: Bug >Reporter: Ayush Saxena >Priority: Major > Fix For: 4.0.0 > > > PR from *[dependabot|https://github.com/apps/dependabot]* > Creating for tracking -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27400) UPgrade jetty-server to 11.0.15
Akshat Mathur created HIVE-27400: Summary: UPgrade jetty-server to 11.0.15 Key: HIVE-27400 URL: https://issues.apache.org/jira/browse/HIVE-27400 Project: Hive Issue Type: Improvement Reporter: Akshat Mathur Due to multiple CVEs in the current version 9.4.40.v20210413m upgrade jetty-server to 11.0.15 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27400) Upgrade jetty-server to 11.0.15
[ https://issues.apache.org/jira/browse/HIVE-27400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-27400: - Summary: Upgrade jetty-server to 11.0.15 (was: UPgrade jetty-server to 11.0.15) > Upgrade jetty-server to 11.0.15 > --- > > Key: HIVE-27400 > URL: https://issues.apache.org/jira/browse/HIVE-27400 > Project: Hive > Issue Type: Improvement > Reporter: Akshat Mathur >Priority: Major > > Due to multiple CVEs in the current version 9.4.40.v20210413m upgrade > jetty-server to 11.0.15 -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PATCH v2 3/6] target/ppc: Fix instruction loading endianness in alignment interrupt
On 3/27/23 18:42, Nicholas Piggin wrote:
> powerpc ifetch endianness depends on MSR[LE] so it has to byteswap after
> cpu_ldl_code(). This corrects DSISR bits in alignment interrupts when
> running in little endian mode.
>
> Signed-off-by: Nicholas Piggin
> Reviewed-by: Fabiano Rosas
> ---
> Since v1:
> - Removed big endian ifdef [Fabiano review]
> - Actually use need_byteswap helper.
>
>  target/ppc/excp_helper.c | 20 +++-
>  1 file changed, 19 insertions(+), 1 deletion(-)
>
> diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
> index 287659c74d..07729967b5 100644
> --- a/target/ppc/excp_helper.c
> +++ b/target/ppc/excp_helper.c
> @@ -133,6 +133,24 @@ static void dump_hcall(CPUPPCState *env)
>               env->nip);
>  }
>
> +/* Return true iff byteswap is needed in a scalar memop */
> +static inline bool need_byteswap(CPUArchState *env)
> +{
> +    /* SOFTMMU builds TARGET_BIG_ENDIAN. Need to swap when MSR[LE] is set */
> +    return !!(env->msr & ((target_ulong)1 << MSR_LE));
> +}
> +
> +static uint32_t ppc_ldl_code(CPUArchState *env, abi_ptr addr)

This hunk fails to compile with configure --disable-tcg:

FAILED: libqemu-ppc64-softmmu.fa.p/target_ppc_excp_helper.c.o
cc -m64 -mlittle-endian -Ilibqemu-ppc64-softmmu.fa.p -I. -I.. -Itarget/ppc -I../target/ppc -I../dtc/libfdt -Iqapi -Itrace -Iui -Iui/shader -I/usr/include/pixman-1 -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -I/usr/include/sysprof-4 -fdiagnostics-color=auto -Wall -Winvalid-pch -Werror -std=gnu11 -O2 -g -fstack-protector-strong -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -Wundef -Wwrite-strings -Wmissing-prototypes -Wstrict-prototypes -Wredundant-decls -Wold-style-declaration -Wold-style-definition -Wtype-limits -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wempty-body -Wnested-externs -Wendif-labels -Wexpansion-to-defined -Wimplicit-fallthrough=2 -Wmissing-format-attribute -Wno-missing-include-dirs -Wno-shift-negative-value -Wno-psabi -isystem /home/Shreya/qemu/linux-headers -isystem linux-headers -iquote . -iquote /home/Shreya/qemu -iquote /home/Shreya/qemu/include -pthread -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-strict-aliasing -fno-common -fwrapv -fPIE -isystem../linux-headers -isystemlinux-headers -DNEED_CPU_H '-DCONFIG_TARGET="ppc64-softmmu-config-target.h"' '-DCONFIG_DEVICES="ppc64-softmmu-config-devices.h"' -MD -MQ libqemu-ppc64-softmmu.fa.p/target_ppc_excp_helper.c.o -MF libqemu-ppc64-softmmu.fa.p/target_ppc_excp_helper.c.o.d -o libqemu-ppc64-softmmu.fa.p/target_ppc_excp_helper.c.o -c ../target/ppc/excp_helper.c
../target/ppc/excp_helper.c:143:49: error: unknown type name ‘abi_ptr’; did you mean ‘si_ptr’?
  143 | static uint32_t ppc_ldl_code(CPUArchState *env, abi_ptr addr)
      |                                                 ^~~
      |                                                 si_ptr
../target/ppc/excp_helper.c: In function ‘powerpc_excp_books’:
../target/ppc/excp_helper.c:1416:16: error: implicit declaration of function ‘ppc_ldl_code’ [-Werror=implicit-function-declaration]
 1416 |     insn = ppc_ldl_code(env, env->nip);
      |            ^~~~
../target/ppc/excp_helper.c:1416:16: error: nested extern declaration of ‘ppc_ldl_code’ [-Werror=nested-externs]
cc1: all warnings being treated as errors

> +{
> +    uint32_t insn = cpu_ldl_code(env, addr);
> +
> +    if (need_byteswap(env)) {
> +        insn = bswap32(insn);
> +    }
> +
> +    return insn;
> +}
> +
>  static void ppc_excp_debug_sw_tlb(CPUPPCState *env, int excp)
>  {
>      const char *es;
> @@ -3097,7 +3115,7 @@ void ppc_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
>      /* Restore state and reload the insn we executed, for filling in DSISR. */
>      cpu_restore_state(cs, retaddr);
> -    insn = cpu_ldl_code(env, env->nip);
> +    insn = ppc_ldl_code(env, env->nip);
>      switch (env->mmu_model) {
>      case POWERPC_MMU_SOFT_4xx:

Thanks,
Anushree Mathur
[jira] [Commented] (HIVE-27203) Add compaction pending Qtest for Insert-only, Partitioned, Clustered ACID, and combination Tables
[ https://issues.apache.org/jira/browse/HIVE-27203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713447#comment-17713447 ] Akshat Mathur commented on HIVE-27203: -- Thanks [~lvegh], [~sbadhya], [~rkirtir], and [~zratkai] for the review. Appreciate it :) > Add compaction pending Qtest for Insert-only, Partitioned, Clustered ACID, > and combination Tables > -- > > Key: HIVE-27203 > URL: https://issues.apache.org/jira/browse/HIVE-27203 > Project: Hive > Issue Type: Test >Affects Versions: 4.0.0 > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Time Spent: 2h 20m > Remaining Estimate: 0h > > Improve Qtest Coverage for Compaction use cases for ACID Tables: > # Partitioned Tables( Major & Minor ) > # Insert-Only Clustered( Major & Minor ) > # Insert-Only Partitioned( Major & Minor ) > # Insert-Only Clustered and Partitioned( Major & Minor ) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27203) Add compaction pending Qtest for Insert-only, Partitioned, Clustered ACID, and combination Tables
[ https://issues.apache.org/jira/browse/HIVE-27203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-27203: - Fix Version/s: 4.0.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Add compaction pending Qtest for Insert-only, Partitioned, Clustered ACID, > and combination Tables > -- > > Key: HIVE-27203 > URL: https://issues.apache.org/jira/browse/HIVE-27203 > Project: Hive > Issue Type: Test >Affects Versions: 4.0.0 > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Time Spent: 2h 20m > Remaining Estimate: 0h > > Improve Qtest Coverage for Compaction use cases for ACID Tables: > # Partitioned Tables( Major & Minor ) > # Insert-Only Clustered( Major & Minor ) > # Insert-Only Partitioned( Major & Minor ) > # Insert-Only Clustered and Partitioned( Major & Minor ) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27203) Add compaction pending Qtest for Insert-only, Partitioned, Clustered ACID, and combination Tables
[ https://issues.apache.org/jira/browse/HIVE-27203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-27203: - Affects Version/s: 4.0.0 > Add compaction pending Qtest for Insert-only, Partitioned, Clustered ACID, > and combination Tables > -- > > Key: HIVE-27203 > URL: https://issues.apache.org/jira/browse/HIVE-27203 > Project: Hive > Issue Type: Test >Affects Versions: 4.0.0 > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > Improve Qtest Coverage for Compaction use cases for ACID Tables: > # Partitioned Tables( Major & Minor ) > # Insert-Only Clustered( Major & Minor ) > # Insert-Only Partitioned( Major & Minor ) > # Insert-Only Clustered and Partitioned( Major & Minor ) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work started] (HIVE-27203) Add compaction pending Qtest for Insert-only, Partitioned, Clustered ACID, and combination Tables
[ https://issues.apache.org/jira/browse/HIVE-27203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-27203 started by Akshat Mathur. > Add compaction pending Qtest for Insert-only, Partitioned, Clustered ACID, > and combination Tables > -- > > Key: HIVE-27203 > URL: https://issues.apache.org/jira/browse/HIVE-27203 > Project: Hive > Issue Type: Test > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > Improve Qtest Coverage for Compaction use cases for ACID Tables: > # Partitioned Tables( Major & Minor ) > # Insert-Only Clustered( Major & Minor ) > # Insert-Only Partitioned( Major & Minor ) > # Insert-Only Clustered and Partitioned( Major & Minor ) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27203) Add compaction pending Qtest for Insert-only, Partitioned, Clustered ACID, and combination Tables
[ https://issues.apache.org/jira/browse/HIVE-27203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-27203: - Status: Patch Available (was: In Progress) > Add compaction pending Qtest for Insert-only, Partitioned, Clustered ACID, > and combination Tables > -- > > Key: HIVE-27203 > URL: https://issues.apache.org/jira/browse/HIVE-27203 > Project: Hive > Issue Type: Test > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > Improve Qtest Coverage for Compaction use cases for ACID Tables: > # Partitioned Tables( Major & Minor ) > # Insert-Only Clustered( Major & Minor ) > # Insert-Only Partitioned( Major & Minor ) > # Insert-Only Clustered and Partitioned( Major & Minor ) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HIVE-27203) Add compaction pending Qtest for Insert-only, Partitioned, Clustered ACID, and combination Tables
[ https://issues.apache.org/jira/browse/HIVE-27203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur reassigned HIVE-27203: > Add compaction pending Qtest for Insert-only, Partitioned, Clustered ACID, > and combination Tables > -- > > Key: HIVE-27203 > URL: https://issues.apache.org/jira/browse/HIVE-27203 > Project: Hive > Issue Type: Test > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > > Improve Qtest Coverage for Compaction use cases for ACID Tables: > # Partitioned Tables( Major & Minor ) > # Insert-Only Clustered( Major & Minor ) > # Insert-Only Partitioned( Major & Minor ) > # Insert-Only Clustered and Partitioned( Major & Minor ) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27203) Add compaction pending Qtest for Insert-only, Partitioned, Clustered ACID, and combination Tables
Akshat Mathur created HIVE-27203: Summary: Add compaction pending Qtest for Insert-only, Partitioned, Clustered ACID, and combination Tables Key: HIVE-27203 URL: https://issues.apache.org/jira/browse/HIVE-27203 Project: Hive Issue Type: Test Reporter: Akshat Mathur Assignee: Akshat Mathur Improve Qtest Coverage for Compaction use cases for ACID Tables: # Partitioned Tables( Major & Minor ) # Insert-Only Clustered( Major & Minor ) # Insert-Only Partitioned( Major & Minor ) # Insert-Only Clustered and Partitioned( Major & Minor ) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-22628) Add locks and transactions tables from sys db to information_schema
[ https://issues.apache.org/jira/browse/HIVE-22628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-22628: - Fix Version/s: 4.0.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Add locks and transactions tables from sys db to information_schema > --- > > Key: HIVE-22628 > URL: https://issues.apache.org/jira/browse/HIVE-22628 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0 >Reporter: Zoltan Chovan > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Time Spent: 2h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work started] (HIVE-26954) Upgrade Avro to 1.11.1
[ https://issues.apache.org/jira/browse/HIVE-26954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-26954 started by Akshat Mathur. > Upgrade Avro to 1.11.1 > -- > > Key: HIVE-26954 > URL: https://issues.apache.org/jira/browse/HIVE-26954 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0 > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Time Spent: 2h > Remaining Estimate: 0h > > Upgrade Avro dependencies to 1.11.1 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (PARQUET-2239) Replace log4j1 with reload4j
[ https://issues.apache.org/jira/browse/PARQUET-2239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17685158#comment-17685158 ] Akshat Mathur commented on PARQUET-2239: Thanks [~ste...@apache.org] for sharing this, will take the reference :) > Replace log4j1 with reload4j > > > Key: PARQUET-2239 > URL: https://issues.apache.org/jira/browse/PARQUET-2239 > Project: Parquet > Issue Type: Improvement > Reporter: Akshat Mathur >Priority: Major > Labels: pick-me-up > > Due to multiple CVE in log4j1, replace log4j dependency with reload4j. > More about reload4j: https://reload4j.qos.ch/ -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (PARQUET-2239) Replace log4j1 with reload4j
Akshat Mathur created PARQUET-2239: -- Summary: Replace log4j1 with reload4j Key: PARQUET-2239 URL: https://issues.apache.org/jira/browse/PARQUET-2239 Project: Parquet Issue Type: Improvement Reporter: Akshat Mathur Due to multiple CVE in log4j1, replace log4j dependency with reload4j. More about reload4j: https://reload4j.qos.ch/ -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-26954) Upgrade Avro to 1.11.1
[ https://issues.apache.org/jira/browse/HIVE-26954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-26954: - Affects Version/s: 4.0.0 > Upgrade Avro to 1.11.1 > -- > > Key: HIVE-26954 > URL: https://issues.apache.org/jira/browse/HIVE-26954 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0 > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Upgrade Avro dependencies to 1.11.1 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-26947) Hive compactor.Worker can respawn connections to HMS at extremely high frequency
[ https://issues.apache.org/jira/browse/HIVE-26947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17681590#comment-17681590 ] Akshat Mathur commented on HIVE-26947: -- Thanks [~dkuzmenko] and [~lvegh] for the review :) > Hive compactor.Worker can respawn connections to HMS at extremely high > frequency > > > Key: HIVE-26947 > URL: https://issues.apache.org/jira/browse/HIVE-26947 > Project: Hive > Issue Type: Bug > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Time Spent: 10h > Remaining Estimate: 0h > > After catching the exception generated by the findNextCompactionAndExecute() > task, HS2 appears to immediately rerun the task with no delay or backoff. As > a result there are ~3500 connection attempts from HS2 to HMS over just a 5 > second period in the HS2 log > The compactor.Worker should wait between failed attempts and maybe do an > exponential backoff. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-26954) Upgrade Avro to 1.11.1
Akshat Mathur created HIVE-26954: Summary: Upgrade Avro to 1.11.1 Key: HIVE-26954 URL: https://issues.apache.org/jira/browse/HIVE-26954 Project: Hive Issue Type: Improvement Reporter: Akshat Mathur Assignee: Akshat Mathur Upgrade Avro dependencies to 1.11.1 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HIVE-26954) Upgrade Avro to 1.11.1
[ https://issues.apache.org/jira/browse/HIVE-26954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur reassigned HIVE-26954: > Upgrade Avro to 1.11.1 > -- > > Key: HIVE-26954 > URL: https://issues.apache.org/jira/browse/HIVE-26954 > Project: Hive > Issue Type: Improvement > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > > Upgrade Avro dependencies to 1.11.1 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-26947) Hive compactor.Worker can respawn connections to HMS at extremely high frequency
[ https://issues.apache.org/jira/browse/HIVE-26947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-26947: - Status: Patch Available (was: Open) > Hive compactor.Worker can respawn connections to HMS at extremely high > frequency > > > Key: HIVE-26947 > URL: https://issues.apache.org/jira/browse/HIVE-26947 > Project: Hive > Issue Type: Improvement > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > After catching the exception generated by the findNextCompactionAndExecute() > task, HS2 appears to immediately rerun the task with no delay or backoff. As > a result there are ~3500 connection attempts from HS2 to HMS over just a 5 > second period in the HS2 log > The compactor.Worker should wait between failed attempts and maybe do an > exponential backoff. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-26947) Hive compactor.Worker can respawn connections to HMS at extremely high frequency
[ https://issues.apache.org/jira/browse/HIVE-26947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-26947: - Issue Type: Bug (was: Improvement) > Hive compactor.Worker can respawn connections to HMS at extremely high > frequency > > > Key: HIVE-26947 > URL: https://issues.apache.org/jira/browse/HIVE-26947 > Project: Hive > Issue Type: Bug > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > After catching the exception generated by the findNextCompactionAndExecute() > task, HS2 appears to immediately rerun the task with no delay or backoff. As > a result there are ~3500 connection attempts from HS2 to HMS over just a 5 > second period in the HS2 log > The compactor.Worker should wait between failed attempts and maybe do an > exponential backoff. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HIVE-26947) Hive compactor.Worker can respawn connections to HMS at extremely high frequency
[ https://issues.apache.org/jira/browse/HIVE-26947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur reassigned HIVE-26947: Assignee: Akshat Mathur > Hive compactor.Worker can respawn connections to HMS at extremely high > frequency > > > Key: HIVE-26947 > URL: https://issues.apache.org/jira/browse/HIVE-26947 > Project: Hive > Issue Type: Improvement > Reporter: Akshat Mathur > Assignee: Akshat Mathur >Priority: Major > > After catching the exception generated by the findNextCompactionAndExecute() > task, HS2 appears to immediately rerun the task with no delay or backoff. As > a result there are ~3500 connection attempts from HS2 to HMS over just a 5 > second period in the HS2 log > The compactor.Worker should wait between failed attempts and maybe do an > exponential backoff. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-26947) Hive compactor.Worker can respawn connections to HMS at extremely high frequency
Akshat Mathur created HIVE-26947: Summary: Hive compactor.Worker can respawn connections to HMS at extremely high frequency Key: HIVE-26947 URL: https://issues.apache.org/jira/browse/HIVE-26947 Project: Hive Issue Type: Improvement Reporter: Akshat Mathur After catching the exception generated by the findNextCompactionAndExecute() task, HS2 appears to immediately rerun the task with no delay or backoff. As a result there are ~3500 connection attempts from HS2 to HMS over just a 5 second period in the HS2 log The compactor.Worker should wait between failed attempts and maybe do an exponential backoff. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-22628) Add locks and transactions tables from sys db to information_schema
[ https://issues.apache.org/jira/browse/HIVE-22628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-22628: - Status: Patch Available (was: In Progress) > Add locks and transactions tables from sys db to information_schema > --- > > Key: HIVE-22628 > URL: https://issues.apache.org/jira/browse/HIVE-22628 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0 >Reporter: Zoltan Chovan > Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[edk2-devel] edk2 BUILD error issue on Win 11
Hi,

I am attempting an EDK2 build on Win 11 but I get a KeyError "WORKSPACE" (see end of the following trace). Please assist.

Thanks,
Avijit

Note: VS2019 Professional installed, iasl and nasm installed, python 3.10.

(.venv) PS C:\edk2> stuart_update -c OvmfPkg/PlatformCI/PlatformBuild.py TOOL_CHAIN_TAG=VS2019 -a X64
SECTION - Init SDE
SECTION - Loading Plugins
SECTION - Start Invocable Tool
SECTION - Initial update of environment
Updating. Done
SECTION - Updated/Verified 2 dependencies
SECTION - Second pass update of environment
Updating. Done
SECTION - Updated/Verified 2 dependencies
SECTION - Summary
PROGRESS - Success

(.venv) PS C:\edk2> python BaseTools/Edk2ToolsBuild.py -t VS2019
SECTION - Init SDE
SECTION - Loading Plugins
SECTION - Start Invocable Tool
SECTION - Summary
PROGRESS - Success

(.venv) PS C:\edk2> stuart_build -c OvmfPkg/PlatformCI/PlatformBuild.py TOOL_CHAIN_TAG=VS2019 -a X64
INFO - Log Started: Friday, January 06, 2023 09:50AM
SECTION - Init SDE
DEBUG - --- self_describing_environment.__init__()
DEBUG - Skipped directories specified = ()
DEBUG - --- self_describing_environment.load_workspace()
DEBUG - Loading workspace: C:\edk2
DEBUG - Including scopes: ovmf, edk2-build, global-win, global
DEBUG - --- self_describing_environment._gather_env_files()
DEBUG - Adding descriptor C:\edk2\.venv\Lib\site-packages\edk2basetools\basetool_tiano_python_path_env.yaml to the environment with scope global
DEBUG - Adding descriptor C:\edk2\BaseTools\basetools_calling_path_env.yaml to the environment with scope global
DEBUG - Adding descriptor C:\edk2\BaseTools\basetools_path_env.yaml to the environment with scope global
DEBUG - Adding descriptor C:\edk2\BaseTools\Bin\Win32\basetoolsbin_path_env.yaml to the environment with scope edk2-build
DEBUG - Adding descriptor C:\edk2\BaseTools\BinWrappers\WindowsLike\win_build_tools_path_env.yaml to the environment with scope global-win
DEBUG - Adding descriptor C:\edk2\BaseTools\Source\Python\basetool_tiano_python_path_env.yaml to the environment with scope global
DEBUG - Adding descriptor C:\edk2\BaseTools\Bin\nasm_ext_dep.yaml to the environment with scope edk2-build
DEBUG - Adding descriptor C:\edk2\OvmfPkg\PlatformCI\iasl_ext_dep.yaml to the environment with scope ovmf
DEBUG - Adding descriptor C:\edk2\BaseTools\Plugin\BuildToolsReport\BuildToolsReportGenerator_plug_in.yaml to the environment with scope global
DEBUG - Adding descriptor C:\edk2\BaseTools\Plugin\WindowsResourceCompiler\WinRcPath_plug_in.yaml to the environment with scope global-win
DEBUG - Adding descriptor C:\edk2\BaseTools\Plugin\WindowsVsToolChain\WindowsVsToolChain_plug_in.yaml to the environment with scope global-win
DEBUG - --- self_describing_environment.update_simple_paths()
DEBUG - --- self_describing_environment.update_extdep_paths()
DEBUG - Verify 'iasl' returning 'True'.
INFO - Computing path for iasl located at C:\edk2\OvmfPkg\PlatformCI\iasl_extdep on Host(os='Windows', arch='x86', bit='64')
DEBUG - C:\edk2\OvmfPkg\PlatformCI\iasl_extdep\Windows-x86-64 does not exist
INFO - C:\edk2\OvmfPkg\PlatformCI\iasl_extdep\Windows-x86 was found!
DEBUG - Verify 'mu_nasm' returning 'True'.
INFO - Computing path for mu_nasm located at C:\edk2\BaseTools\Bin\mu_nasm_extdep on Host(os='Windows', arch='x86', bit='64')
INFO - C:\edk2\BaseTools\Bin\mu_nasm_extdep\Windows-x86-64 was found!
DEBUG - --- self_describing_environment.report_extdep_version()
DEBUG - Verify 'iasl' returning 'True'.
INFO - Computing path for iasl located at C:\edk2\OvmfPkg\PlatformCI\iasl_extdep on Host(os='Windows', arch='x86', bit='64')
DEBUG - C:\edk2\OvmfPkg\PlatformCI\iasl_extdep\Windows-x86-64 does not exist
INFO - C:\edk2\OvmfPkg\PlatformCI\iasl_extdep\Windows-x86 was found!
DEBUG - Setting up version aggregator
DEBUG - Verify 'mu_nasm' returning 'True'.
INFO - Computing path for mu_nasm located at C:\edk2\BaseTools\Bin\mu_nasm_extdep on Host(os='Windows', arch='x86', bit='64')
INFO - C:\edk2\BaseTools\Bin\mu_nasm_extdep\Windows-x86-64 was found!
DEBUG - Verify 'iasl' returning 'True'.
INFO - Computing path for iasl located at C:\edk2\OvmfPkg\PlatformCI\iasl_extdep on Host(os='Windows', arch='x86', bit='64')
DEBUG - C:\edk2\OvmfPkg\PlatformCI\iasl_extdep\Windows-x86-64 does not exist
INFO - C:\edk2\OvmfPkg\PlatformCI\iasl_extdep\Windows-x86 was found!
DEBUG - Verify 'iasl' returning 'True'.
DEBUG - Verify 'mu_nasm' returning 'True'.
INFO - Computing path for mu_nasm located at C:\edk2\BaseTools\Bin\mu_nasm_extdep on Host(os='Windows', arch='x86', bit='64')
INFO - C:\edk2\BaseTools\Bin\mu_nasm_extdep\Windows-x86-64 was found!
DEBUG - Verify 'mu_nasm' returning 'True'.
SECTION - Loading Plugins
DEBUG - Loading Plugin from C:\edk2\BaseTools\Plugin\BuildToolsReport\BuildToolsReportGenerator.py
DEBUG - Loading Plugin from C:\edk2\BaseTools\Plugin\WindowsResourceCompiler\WinRcPath.py
DEBUG - Loading Plugin from C:\edk2\BaseTools\Plugin\WindowsVsToolChain\WindowsVsToolChain.py
SECTION - Start Invocable Tool
INFO -
[jira] [Assigned] (HIVE-22628) Add locks and transactions tables from sys db to information_schema
[ https://issues.apache.org/jira/browse/HIVE-22628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur reassigned HIVE-22628: Assignee: Akshat Mathur > Add locks and transactions tables from sys db to information_schema > --- > > Key: HIVE-22628 > URL: https://issues.apache.org/jira/browse/HIVE-22628 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0 >Reporter: Zoltan Chovan > Assignee: Akshat Mathur >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: ob-groovy.el must be hand-loaded?
Sent from my iPhone

On Dec 21, 2022, at 10:43 AM, Galaxy Being wrote:

Don't know why, but in my "spare time" I snoop around Babel. So I've revisited Groovy in Babel and have found a bizarre situation where, yes, there appears an ob-groovy.el in my ~/.emacs.d/elpa/org-9.6/, but I have to do a specific load-file to get it seen and functioning. And no, it's not because of a naming issue in my

(org-babel-do-load-languages
 (quote org-babel-load-languages)
 (quote ((emacs-lisp . t)
         (groovy . t)
         ...)))

form. I've renamed it apache-groovy and it errored out specifically not finding ob-groovy.el. But again, if I don't specifically load the org-9.6 ob-groovy.el, I get "can't find program groovy" upon block C-c C-c.

--
⨽
Lawrence Bottorff
Grand Marais, MN, USA
borg...@gmail.com

I don't manually load it. Can you share your setup details? I will try to recreate that on my machine.
[jira] [Updated] (HIVE-26770) Make "end of loop" compaction logs appear more selectively
[ https://issues.apache.org/jira/browse/HIVE-26770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-26770: - Resolution: Fixed Status: Resolved (was: Patch Available) > Make "end of loop" compaction logs appear more selectively > -- > > Key: HIVE-26770 > URL: https://issues.apache.org/jira/browse/HIVE-26770 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0-alpha-1 >Reporter: Akshat Mathur >Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Time Spent: 7h 10m > Remaining Estimate: 0h > > Currently Initiator, Worker, and Cleaner threads log something like "finished > one loop" on INFO level. > This is useful to figure out if one of these threads is taking too long to > finish a loop, but expensive in general. > > Suggested Time: 20mins > Logging this should be changed in the following way > # If loop finished within a predefined amount of time, level should be DEBUG > and message should look like: *Initiator loop took \{ellapsedTime} seconds to > finish.* > # If loop ran longer than this predefined amount, level should be WARN and > message should look like: *Possible Initiator slowdown, loop took > \{ellapsedTime} seconds to finish.* -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-26770) Make "end of loop" compaction logs appear more selectively
[ https://issues.apache.org/jira/browse/HIVE-26770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643658#comment-17643658 ] Akshat Mathur commented on HIVE-26770: -- Test passed. Had to close the old PR and create a new one > Make "end of loop" compaction logs appear more selectively > -- > > Key: HIVE-26770 > URL: https://issues.apache.org/jira/browse/HIVE-26770 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0-alpha-1 >Reporter: Akshat Mathur >Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Time Spent: 5h 20m > Remaining Estimate: 0h > > Currently Initiator, Worker, and Cleaner threads log something like "finished > one loop" on INFO level. > This is useful to figure out if one of these threads is taking too long to > finish a loop, but expensive in general. > > Suggested Time: 20mins > Logging this should be changed in the following way > # If loop finished within a predefined amount of time, level should be DEBUG > and message should look like: *Initiator loop took \{ellapsedTime} seconds to > finish.* > # If loop ran longer than this predefined amount, level should be WARN and > message should look like: *Possible Initiator slowdown, loop took > \{ellapsedTime} seconds to finish.* -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HIVE-26770) Make "end of loop" compaction logs appear more selectively
[ https://issues.apache.org/jira/browse/HIVE-26770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643124#comment-17643124 ] Akshat Mathur edited comment on HIVE-26770 at 12/6/22 4:42 AM: --- Due to the timeout, the tests are failing, blocking the merge was (Author: JIRAUSER298271): Due to the timeout, the tests are failing, blocking the merge > Make "end of loop" compaction logs appear more selectively > -- > > Key: HIVE-26770 > URL: https://issues.apache.org/jira/browse/HIVE-26770 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0-alpha-1 >Reporter: Akshat Mathur >Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Time Spent: 5h 20m > Remaining Estimate: 0h > > Currently Initiator, Worker, and Cleaner threads log something like "finished > one loop" on INFO level. > This is useful to figure out if one of these threads is taking too long to > finish a loop, but expensive in general. > > Suggested Time: 20mins > Logging this should be changed in the following way > # If loop finished within a predefined amount of time, level should be DEBUG > and message should look like: *Initiator loop took \{ellapsedTime} seconds to > finish.* > # If loop ran longer than this predefined amount, level should be WARN and > message should look like: *Possible Initiator slowdown, loop took > \{ellapsedTime} seconds to finish.* -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-26806) Precommit tests in CI are timing out after HIVE-26796
[ https://issues.apache.org/jira/browse/HIVE-26806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643656#comment-17643656 ] Akshat Mathur commented on HIVE-26806: -- [~zabetak] Closing PR-3803 and opening a new one worked thanks. Run for new PR: http://ci.hive.apache.org/blue/organizations/jenkins/hive-precommit/detail/PR-3832/1/pipeline/ > Precommit tests in CI are timing out after HIVE-26796 > - > > Key: HIVE-26806 > URL: https://issues.apache.org/jira/browse/HIVE-26806 > Project: Hive > Issue Type: Bug > Components: Testing Infrastructure >Reporter: Stamatis Zampetakis >Assignee: Stamatis Zampetakis >Priority: Major > > http://ci.hive.apache.org/job/hive-precommit/job/master/1506/ > {noformat} > ancelling nested steps due to timeout > 15:22:08 Sending interrupt signal to process > 15:22:08 Killing processes > 15:22:09 kill finished with exit code 0 > 15:22:19 Terminated > 15:22:19 script returned exit code 143 > [Pipeline] } > [Pipeline] // withEnv > [Pipeline] } > 15:22:19 Deleting 1 temporary files > [Pipeline] // configFileProvider > [Pipeline] } > [Pipeline] // stage > [Pipeline] stage > [Pipeline] { (PostProcess) > [Pipeline] sh > [Pipeline] sh > [Pipeline] sh > [Pipeline] junit > 15:22:25 Recording test results > 15:22:32 [Checks API] No suitable checks publisher found. > [Pipeline] } > [Pipeline] // stage > [Pipeline] } > [Pipeline] // container > [Pipeline] } > [Pipeline] // node > [Pipeline] } > [Pipeline] // timeout > [Pipeline] } > [Pipeline] // podTemplate > [Pipeline] } > 15:22:32 Failed in branch split-01 > [Pipeline] // parallel > [Pipeline] } > [Pipeline] // stage > [Pipeline] stage > [Pipeline] { (Archive) > [Pipeline] podTemplate > [Pipeline] { > [Pipeline] timeout > 15:22:33 Timeout set to expire in 6 hr 0 min > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-26770) Make "end of loop" compaction logs appear more selectively
[ https://issues.apache.org/jira/browse/HIVE-26770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643124#comment-17643124 ] Akshat Mathur commented on HIVE-26770: -- Due to the timeout, the tests are failing, blocking the merge > Make "end of loop" compaction logs appear more selectively > -- > > Key: HIVE-26770 > URL: https://issues.apache.org/jira/browse/HIVE-26770 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0-alpha-1 >Reporter: Akshat Mathur >Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Time Spent: 4h 40m > Remaining Estimate: 0h > > Currently Initiator, Worker, and Cleaner threads log something like "finished > one loop" on INFO level. > This is useful to figure out if one of these threads is taking too long to > finish a loop, but expensive in general. > > Suggested Time: 20mins > Logging this should be changed in the following way > # If loop finished within a predefined amount of time, level should be DEBUG > and message should look like: *Initiator loop took \{ellapsedTime} seconds to > finish.* > # If loop ran longer than this predefined amount, level should be WARN and > message should look like: *Possible Initiator slowdown, loop took > \{ellapsedTime} seconds to finish.* -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-26770) Make "end of loop" compaction logs appear more selectively
[ https://issues.apache.org/jira/browse/HIVE-26770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-26770: - Affects Version/s: 4.0.0-alpha-1 Status: Patch Available (was: In Progress) > Make "end of loop" compaction logs appear more selectively > -- > > Key: HIVE-26770 > URL: https://issues.apache.org/jira/browse/HIVE-26770 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0-alpha-1 >Reporter: Akshat Mathur >Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Time Spent: 4h 40m > Remaining Estimate: 0h > > Currently Initiator, Worker, and Cleaner threads log something like "finished > one loop" on INFO level. > This is useful to figure out if one of these threads is taking too long to > finish a loop, but expensive in general. > > Suggested Time: 20mins > Logging this should be changed in the following way > # If loop finished within a predefined amount of time, level should be DEBUG > and message should look like: *Initiator loop took \{ellapsedTime} seconds to > finish.* > # If loop ran longer than this predefined amount, level should be WARN and > message should look like: *Possible Initiator slowdown, loop took > \{ellapsedTime} seconds to finish.* -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work started] (HIVE-26770) Make "end of loop" compaction logs appear more selectively
[ https://issues.apache.org/jira/browse/HIVE-26770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-26770 started by Akshat Mathur. > Make "end of loop" compaction logs appear more selectively > -- > > Key: HIVE-26770 > URL: https://issues.apache.org/jira/browse/HIVE-26770 > Project: Hive > Issue Type: Improvement > Reporter: Akshat Mathur >Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Time Spent: 4h 40m > Remaining Estimate: 0h > > Currently Initiator, Worker, and Cleaner threads log something like "finished > one loop" on INFO level. > This is useful to figure out if one of these threads is taking too long to > finish a loop, but expensive in general. > > Suggested Time: 20mins > Logging this should be changed in the following way > # If loop finished within a predefined amount of time, level should be DEBUG > and message should look like: *Initiator loop took \{ellapsedTime} seconds to > finish.* > # If loop ran longer than this predefined amount, level should be WARN and > message should look like: *Possible Initiator slowdown, loop took > \{ellapsedTime} seconds to finish.* -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HIVE-26806) Precommit tests in CI are timing out after HIVE-26796
[ https://issues.apache.org/jira/browse/HIVE-26806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17642948#comment-17642948 ] Akshat Mathur edited comment on HIVE-26806 at 12/4/22 6:52 AM: --- I tried re-running the tests with 22 splits even with 24 splits. The issue seems to persist. Here is the latest run for my PR: PR-3803 [http://ci.hive.apache.org/blue/organizations/jenkins/hive-precommit/detail/PR-3803/15/pipeline] I changed the split manually via Jenkins's build with params. was (Author: JIRAUSER298271): I tried re-running the tests with 22 splits even with 24 splits. The issue seems to persist. Here is the latest run for my PR: PR-3803 [http://ci.hive.apache.org/blue/organizations/jenkins/hive-precommit/detail/PR-3803/15/pipeline] I changed the split manually via Jenkin's build with params. > Precommit tests in CI are timing out after HIVE-26796 > - > > Key: HIVE-26806 > URL: https://issues.apache.org/jira/browse/HIVE-26806 > Project: Hive > Issue Type: Bug > Components: Testing Infrastructure >Reporter: Stamatis Zampetakis >Assignee: Stamatis Zampetakis >Priority: Major > > http://ci.hive.apache.org/job/hive-precommit/job/master/1506/ > {noformat} > ancelling nested steps due to timeout > 15:22:08 Sending interrupt signal to process > 15:22:08 Killing processes > 15:22:09 kill finished with exit code 0 > 15:22:19 Terminated > 15:22:19 script returned exit code 143 > [Pipeline] } > [Pipeline] // withEnv > [Pipeline] } > 15:22:19 Deleting 1 temporary files > [Pipeline] // configFileProvider > [Pipeline] } > [Pipeline] // stage > [Pipeline] stage > [Pipeline] { (PostProcess) > [Pipeline] sh > [Pipeline] sh > [Pipeline] sh > [Pipeline] junit > 15:22:25 Recording test results > 15:22:32 [Checks API] No suitable checks publisher found. 
> [Pipeline] } > [Pipeline] // stage > [Pipeline] } > [Pipeline] // container > [Pipeline] } > [Pipeline] // node > [Pipeline] } > [Pipeline] // timeout > [Pipeline] } > [Pipeline] // podTemplate > [Pipeline] } > 15:22:32 Failed in branch split-01 > [Pipeline] // parallel > [Pipeline] } > [Pipeline] // stage > [Pipeline] stage > [Pipeline] { (Archive) > [Pipeline] podTemplate > [Pipeline] { > [Pipeline] timeout > 15:22:33 Timeout set to expire in 6 hr 0 min > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HIVE-26806) Precommit tests in CI are timing out after HIVE-26796
[ https://issues.apache.org/jira/browse/HIVE-26806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17642948#comment-17642948 ] Akshat Mathur edited comment on HIVE-26806 at 12/4/22 6:52 AM: --- I tried re-running the tests with 22 splits even with 24 splits. The issue seems to persist. Here is the latest run for my PR: PR-3803 [http://ci.hive.apache.org/blue/organizations/jenkins/hive-precommit/detail/PR-3803/15/pipeline] I changed the split manually via Jenkin's build with params. was (Author: JIRAUSER298271): I tried re-running the tests with 22 splits even with 24 splits. The issue seems to persist. Here is the latest run for my PR: PR-3803 http://ci.hive.apache.org/blue/organizations/jenkins/hive-precommit/detail/PR-3803/15/pipeline > Precommit tests in CI are timing out after HIVE-26796 > - > > Key: HIVE-26806 > URL: https://issues.apache.org/jira/browse/HIVE-26806 > Project: Hive > Issue Type: Bug > Components: Testing Infrastructure >Reporter: Stamatis Zampetakis >Assignee: Stamatis Zampetakis >Priority: Major > > http://ci.hive.apache.org/job/hive-precommit/job/master/1506/ > {noformat} > ancelling nested steps due to timeout > 15:22:08 Sending interrupt signal to process > 15:22:08 Killing processes > 15:22:09 kill finished with exit code 0 > 15:22:19 Terminated > 15:22:19 script returned exit code 143 > [Pipeline] } > [Pipeline] // withEnv > [Pipeline] } > 15:22:19 Deleting 1 temporary files > [Pipeline] // configFileProvider > [Pipeline] } > [Pipeline] // stage > [Pipeline] stage > [Pipeline] { (PostProcess) > [Pipeline] sh > [Pipeline] sh > [Pipeline] sh > [Pipeline] junit > 15:22:25 Recording test results > 15:22:32 [Checks API] No suitable checks publisher found. 
> [Pipeline] } > [Pipeline] // stage > [Pipeline] } > [Pipeline] // container > [Pipeline] } > [Pipeline] // node > [Pipeline] } > [Pipeline] // timeout > [Pipeline] } > [Pipeline] // podTemplate > [Pipeline] } > 15:22:32 Failed in branch split-01 > [Pipeline] // parallel > [Pipeline] } > [Pipeline] // stage > [Pipeline] stage > [Pipeline] { (Archive) > [Pipeline] podTemplate > [Pipeline] { > [Pipeline] timeout > 15:22:33 Timeout set to expire in 6 hr 0 min > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-26806) Precommit tests in CI are timing out after HIVE-26796
[ https://issues.apache.org/jira/browse/HIVE-26806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17642948#comment-17642948 ] Akshat Mathur commented on HIVE-26806: -- I tried re-running the tests with 22 splits even with 24 splits. The issue seems to persist. Here is the latest run for my PR: PR-3803 http://ci.hive.apache.org/blue/organizations/jenkins/hive-precommit/detail/PR-3803/15/pipeline > Precommit tests in CI are timing out after HIVE-26796 > - > > Key: HIVE-26806 > URL: https://issues.apache.org/jira/browse/HIVE-26806 > Project: Hive > Issue Type: Bug > Components: Testing Infrastructure >Reporter: Stamatis Zampetakis >Assignee: Stamatis Zampetakis >Priority: Major > > http://ci.hive.apache.org/job/hive-precommit/job/master/1506/ > {noformat} > ancelling nested steps due to timeout > 15:22:08 Sending interrupt signal to process > 15:22:08 Killing processes > 15:22:09 kill finished with exit code 0 > 15:22:19 Terminated > 15:22:19 script returned exit code 143 > [Pipeline] } > [Pipeline] // withEnv > [Pipeline] } > 15:22:19 Deleting 1 temporary files > [Pipeline] // configFileProvider > [Pipeline] } > [Pipeline] // stage > [Pipeline] stage > [Pipeline] { (PostProcess) > [Pipeline] sh > [Pipeline] sh > [Pipeline] sh > [Pipeline] junit > 15:22:25 Recording test results > 15:22:32 [Checks API] No suitable checks publisher found. > [Pipeline] } > [Pipeline] // stage > [Pipeline] } > [Pipeline] // container > [Pipeline] } > [Pipeline] // node > [Pipeline] } > [Pipeline] // timeout > [Pipeline] } > [Pipeline] // podTemplate > [Pipeline] } > 15:22:32 Failed in branch split-01 > [Pipeline] // parallel > [Pipeline] } > [Pipeline] // stage > [Pipeline] stage > [Pipeline] { (Archive) > [Pipeline] podTemplate > [Pipeline] { > [Pipeline] timeout > 15:22:33 Timeout set to expire in 6 hr 0 min > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-26770) Make "end of loop" compaction logs appear more selectively
Akshat Mathur created HIVE-26770: Summary: Make "end of loop" compaction logs appear more selectively Key: HIVE-26770 URL: https://issues.apache.org/jira/browse/HIVE-26770 Project: Hive Issue Type: Improvement Reporter: Akshat Mathur Assignee: Akshat Mathur Currently Initiator, Worker, and Cleaner threads log something like "finished one loop" on INFO level. This is useful to figure out if one of these threads is taking too long to finish a loop, but expensive in general. Suggested Time: 20mins Logging this should be changed in the following way # If loop finished within a predefined amount of time, level should be DEBUG and message should look like: *Initiator loop took \{ellapsedTime} seconds to finish.* # If loop ran longer than this predefined amount, level should be WARN and message should look like: *Possible Initiator slowdown, loop took \{ellapsedTime} seconds to finish.* -- This message was sent by Atlassian Jira (v8.20.10#820010)
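The two-level logging rule the issue describes (DEBUG for a normal loop, WARN when the loop exceeds a predefined duration) can be sketched as below. This is a minimal illustration, not the actual Hive patch: the class name, method name, and the 60-second threshold are all assumptions for the sake of the example.

```java
// Minimal sketch of the proposed "end of loop" logging rule.
// Hypothetical names and threshold; the real change lives in Hive's
// Initiator/Worker/Cleaner compactor threads.
public class CompactorLoopLogging {

    // Assumed "predefined amount of time" after which a loop counts as slow.
    static final long SLOW_LOOP_THRESHOLD_SECONDS = 60;

    // Builds the level-prefixed message the issue asks for: DEBUG when the
    // loop finished within the threshold, WARN when it ran longer.
    static String loopLogLine(String threadName, long elapsedSeconds) {
        if (elapsedSeconds <= SLOW_LOOP_THRESHOLD_SECONDS) {
            return "DEBUG " + threadName + " loop took " + elapsedSeconds
                    + " seconds to finish.";
        }
        return "WARN Possible " + threadName + " slowdown, loop took "
                + elapsedSeconds + " seconds to finish.";
    }

    public static void main(String[] args) {
        System.out.println(loopLogLine("Initiator", 12));   // normal loop
        System.out.println(loopLogLine("Initiator", 300));  // slow loop
    }
}
```

In a real logger the same branch would select `LOG.debug(...)` versus `LOG.warn(...)` rather than prefixing the message, so the WARN line stays visible even when INFO/DEBUG output is suppressed.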
[jira] [Assigned] (HIVE-26770) Make "end of loop" compaction logs appear more selectively
[ https://issues.apache.org/jira/browse/HIVE-26770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur reassigned HIVE-26770: > Make "end of loop" compaction logs appear more selectively > -- > > Key: HIVE-26770 > URL: https://issues.apache.org/jira/browse/HIVE-26770 > Project: Hive > Issue Type: Improvement > Reporter: Akshat Mathur >Assignee: Akshat Mathur >Priority: Major > > Currently Initiator, Worker, and Cleaner threads log something like "finished > one loop" on INFO level. > This is useful to figure out if one of these threads is taking too long to > finish a loop, but expensive in general. > > Suggested Time: 20mins > Logging this should be changed in the following way > # If loop finished within a predefined amount of time, level should be DEBUG > and message should look like: *Initiator loop took \{ellapsedTime} seconds to > finish.* > # If loop ran longer than this predefined amount, level should be WARN and > message should look like: *Possible Initiator slowdown, loop took > \{ellapsedTime} seconds to finish.* -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-26759) ERROR: column "CC_START" does not exist, when Postgres is used as Hive metastore
[ https://issues.apache.org/jira/browse/HIVE-26759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-26759: - Fix Version/s: 4.0.0-alpha-2 Resolution: Fixed Status: Resolved (was: Patch Available) > ERROR: column "CC_START" does not exist, when Postgres is used as Hive > metastore > > > Key: HIVE-26759 > URL: https://issues.apache.org/jira/browse/HIVE-26759 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 4.0.0-alpha-2 >Reporter: Akshat Mathur >Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0-alpha-2 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > This error is coming when Postgres is used as Hive Metastore. > hive-site.xml > > {code:java} > > > > > > hive.server2.logging.operation.level > NONE > > > hive.log4j.file > hive-log4j.properties > > > metastore.log4j.file > metastore-log4j.properties > > > > hive.jar.path > > /Users/am/Desktop/work/upstream/hive/ql/target/hive-exec-4.0.0-SNAPSHOT.jar > The location of hive_cli.jar that is used when > submitting jobs in a separate jvm. 
> > > hive.hadoop.classpath > > /Users/am/Desktop/work/upstream/hive/ql/target/hive-exec-4.0.0-SNAPSHOT.jar > > > hive.metastore.local > false > > > hive.metastore.uris > thrift://localhost:9083 > > > hive.metastore.warehouse.dir > /Users/am/Desktop/work/hivestuff/warehouse > > > hive.server2.metrics.enabled > true > > > > spark.eventLog.enabled > true > > > spark.eventLog.dir > /tmp/hive > > > > > metastore.metastore.event.db.notification.api.auth > false > > > hive.metastore.schema.verification > false > > > datanucleus.autoCreateTables > true > > > > hive.exec.scratchdir > /tmp/hive-${user.name} > > > javax.jdo.option.ConnectionURL > jdbc:postgresql://localhost:5432/hive_metastore > JDBC connect string for a JDBC > metastore > > > javax.jdo.option.ConnectionDriverName > org.postgresql.Driver > > > javax.jdo.option.ConnectionUserName > hive > > > javax.jdo.option.ConnectionPassword > hive > > > datanucleus.schema.autoCreateAll > true > > > hive.server2.enable.doAs > false > > > > hive.server2.enable.impersonation > false > > > > dfs.namenode.acls.enabled > false > > > > > > > > hive.server2.webui.explain.output > true > > > hive.server2.webui.show.graph > true > > > hive.server2.webui.show.stats > true > > > hive.server2.webui.max.graph.size > 40 > > > > hive.txn.manager > org.apache.hadoop.hive.ql.lockmgr.DbTxnManager > > > hive.compactor.initiator.on > true > > > hive.compactor.worker.threads > 3 > > > metastore.compactor.worker.threads > 4 > > > hive.support.concurrency > true > > > hive.exec.dynamic.partition.mode > nonstrict > > > hive.lock.manager > org.apache.hadoop.hive.ql.lockmgr.DbLockManager > > > hive.compactor.crud.query.based > true > > > hive.metastore.runworker.in > hs2 > > > > > > > > > > {code} > > > Following is the stack trace when HMS service is started: > {code:java} > [Thr
[jira] [Updated] (HIVE-26759) ERROR: column "CC_START" does not exist, when Postgres is used as Hive metastore
[ https://issues.apache.org/jira/browse/HIVE-26759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-26759: - Hadoop Flags: Reviewed Status: Patch Available (was: Open) > ERROR: column "CC_START" does not exist, when Postgres is used as Hive > metastore > > > Key: HIVE-26759 > URL: https://issues.apache.org/jira/browse/HIVE-26759 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 4.0.0-alpha-2 >Reporter: Akshat Mathur >Assignee: Akshat Mathur >Priority: Major > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > This error is coming when Postgres is used as Hive Metastore. > hive-site.xml > > {code:java} > > > > > > hive.server2.logging.operation.level > NONE > > > hive.log4j.file > hive-log4j.properties > > > metastore.log4j.file > metastore-log4j.properties > > > > hive.jar.path > > /Users/am/Desktop/work/upstream/hive/ql/target/hive-exec-4.0.0-SNAPSHOT.jar > The location of hive_cli.jar that is used when > submitting jobs in a separate jvm. 
> > > hive.hadoop.classpath > > /Users/am/Desktop/work/upstream/hive/ql/target/hive-exec-4.0.0-SNAPSHOT.jar > > > hive.metastore.local > false > > > hive.metastore.uris > thrift://localhost:9083 > > > hive.metastore.warehouse.dir > /Users/am/Desktop/work/hivestuff/warehouse > > > hive.server2.metrics.enabled > true > > > > spark.eventLog.enabled > true > > > spark.eventLog.dir > /tmp/hive > > > > > metastore.metastore.event.db.notification.api.auth > false > > > hive.metastore.schema.verification > false > > > datanucleus.autoCreateTables > true > > > > hive.exec.scratchdir > /tmp/hive-${user.name} > > > javax.jdo.option.ConnectionURL > jdbc:postgresql://localhost:5432/hive_metastore > JDBC connect string for a JDBC > metastore > > > javax.jdo.option.ConnectionDriverName > org.postgresql.Driver > > > javax.jdo.option.ConnectionUserName > hive > > > javax.jdo.option.ConnectionPassword > hive > > > datanucleus.schema.autoCreateAll > true > > > hive.server2.enable.doAs > false > > > > hive.server2.enable.impersonation > false > > > > dfs.namenode.acls.enabled > false > > > > > > > > hive.server2.webui.explain.output > true > > > hive.server2.webui.show.graph > true > > > hive.server2.webui.show.stats > true > > > hive.server2.webui.max.graph.size > 40 > > > > hive.txn.manager > org.apache.hadoop.hive.ql.lockmgr.DbTxnManager > > > hive.compactor.initiator.on > true > > > hive.compactor.worker.threads > 3 > > > metastore.compactor.worker.threads > 4 > > > hive.support.concurrency > true > > > hive.exec.dynamic.partition.mode > nonstrict > > > hive.lock.manager > org.apache.hadoop.hive.ql.lockmgr.DbLockManager > > > hive.compactor.crud.query.based > true > > > hive.metastore.runworker.in > hs2 > > > > > > > > > > {code} > > > Following is the stack trace when HMS service is started: > {code:java} > [Thread-5] ERROR org.apache.hadoop.hive.ql.txn.compactor.Initiator - > Initiator loop
[jira] [Updated] (HIVE-26759) ERROR: column "CC_START" does not exist, when Postgres is used as Hive metastore
[ https://issues.apache.org/jira/browse/HIVE-26759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-26759: - Description: This error is coming when Postgres is used as Hive Metastore. hive-site.xml {code:java} hive.server2.logging.operation.level NONE hive.log4j.file hive-log4j.properties metastore.log4j.file metastore-log4j.properties hive.jar.path /Users/am/Desktop/work/upstream/hive/ql/target/hive-exec-4.0.0-SNAPSHOT.jar The location of hive_cli.jar that is used when submitting jobs in a separate jvm. hive.hadoop.classpath /Users/am/Desktop/work/upstream/hive/ql/target/hive-exec-4.0.0-SNAPSHOT.jar hive.metastore.local false hive.metastore.uris thrift://localhost:9083 hive.metastore.warehouse.dir /Users/am/Desktop/work/hivestuff/warehouse hive.server2.metrics.enabled true spark.eventLog.enabled true spark.eventLog.dir /tmp/hive metastore.metastore.event.db.notification.api.auth false hive.metastore.schema.verification false datanucleus.autoCreateTables true hive.exec.scratchdir /tmp/hive-${user.name} javax.jdo.option.ConnectionURL jdbc:postgresql://localhost:5432/hive_metastore JDBC connect string for a JDBC metastore javax.jdo.option.ConnectionDriverName org.postgresql.Driver javax.jdo.option.ConnectionUserName hive javax.jdo.option.ConnectionPassword hive datanucleus.schema.autoCreateAll true hive.server2.enable.doAs false hive.server2.enable.impersonation false dfs.namenode.acls.enabled false hive.server2.webui.explain.output true hive.server2.webui.show.graph true hive.server2.webui.show.stats true hive.server2.webui.max.graph.size 40 hive.txn.manager org.apache.hadoop.hive.ql.lockmgr.DbTxnManager hive.compactor.initiator.on true hive.compactor.worker.threads 3 metastore.compactor.worker.threads 4 hive.support.concurrency true hive.exec.dynamic.partition.mode nonstrict hive.lock.manager org.apache.hadoop.hive.ql.lockmgr.DbLockManager hive.compactor.crud.query.based true hive.metastore.runworker.in hs2 {code} 
Following is the stack trace when HMS service is started: {code:java} [Thread-5] ERROR org.apache.hadoop.hive.ql.txn.compactor.Initiator - Initiator loop caught unexpected exception this time through the loop org.apache.hadoop.hive.metastore.api.MetaException: Unable to select from transaction database org.postgresql.util.PSQLException: ERROR: column "CC_START" does not exist Position: 1215 at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2676) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2366) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356) at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490) at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408) at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:181) at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:133) at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeQuery(ProxyPreparedStatement.java:52) at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeQuery(HikariProxyPreparedStatement.java) at org.apache.hadoop.hive.metastore.txn.TxnHandler.showCompact(TxnHandler.java:3894) at org.apache.hadoop.hive.ql.txn.compactor.Initiator.run(Initiator.java:154) at org.apache.hadoop.hive.metastore.txn.TxnHandler.showCompact(TxnHandler.java:3946) ~[classes/:?] at org.apache.hadoop.hive.ql.txn.compactor.Initiator.run(Initiator.java:154) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] {code} This error disappears when derby is configured as HMS. was: This error is coming when Postgres is used as Hive
[jira] [Updated] (HIVE-26759) ERROR: column "CC_START" does not exist, when Postgres is used as Hive metastore
[ https://issues.apache.org/jira/browse/HIVE-26759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-26759: - Component/s: Metastore > ERROR: column "CC_START" does not exist, when Postgres is used as Hive > metastore > > > Key: HIVE-26759 > URL: https://issues.apache.org/jira/browse/HIVE-26759 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 4.0.0-alpha-2 >Reporter: Akshat Mathur >Assignee: Akshat Mathur >Priority: Major > > This error is coming when Postgres is used as Hive Metastore. > Following is the stack trace when HMS service is started: > {code:java} > [Thread-5] ERROR org.apache.hadoop.hive.ql.txn.compactor.Initiator - > Initiator loop caught unexpected exception this time through the loop > org.apache.hadoop.hive.metastore.api.MetaException: Unable to select from > transaction database org.postgresql.util.PSQLException: ERROR: column > "CC_START" does not exist > Position: 1215 > at > org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2676) > at > org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2366) > at > org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356) > at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490) > at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408) > at > org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:181) > at > org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:133) > at > com.zaxxer.hikari.pool.ProxyPreparedStatement.executeQuery(ProxyPreparedStatement.java:52) > at > com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeQuery(HikariProxyPreparedStatement.java) > at > org.apache.hadoop.hive.metastore.txn.TxnHandler.showCompact(TxnHandler.java:3894) > at > org.apache.hadoop.hive.ql.txn.compactor.Initiator.run(Initiator.java:154) > at > 
org.apache.hadoop.hive.metastore.txn.TxnHandler.showCompact(TxnHandler.java:3946) > ~[classes/:?] > at > org.apache.hadoop.hive.ql.txn.compactor.Initiator.run(Initiator.java:154) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] {code} > This error disappears when derby is configured as HMS. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-26759) ERROR: column "CC_START" does not exist, when Postgres is used as Hive metastore
[ https://issues.apache.org/jira/browse/HIVE-26759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur updated HIVE-26759: Affects Version/s: 4.0.0-alpha-2

--
This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-26759) ERROR: column "CC_START" does not exist, when Postgres is used as Hive metastore
Akshat Mathur created HIVE-26759:

Summary: ERROR: column "CC_START" does not exist, when Postgres is used as Hive metastore
Key: HIVE-26759
URL: https://issues.apache.org/jira/browse/HIVE-26759
Project: Hive
Issue Type: Bug
Reporter: Akshat Mathur

This error occurs when Postgres is used as the Hive metastore. The following stack trace appears when the HMS service is started:

{code:java}
[Thread-5] ERROR org.apache.hadoop.hive.ql.txn.compactor.Initiator - Initiator loop caught unexpected exception this time through the loop
org.apache.hadoop.hive.metastore.api.MetaException: Unable to select from transaction database
org.postgresql.util.PSQLException: ERROR: column "CC_START" does not exist
  Position: 1215
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2676)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2366)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356)
    at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490)
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408)
    at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:181)
    at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:133)
    at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeQuery(ProxyPreparedStatement.java:52)
    at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeQuery(HikariProxyPreparedStatement.java)
    at org.apache.hadoop.hive.metastore.txn.TxnHandler.showCompact(TxnHandler.java:3894)
    at org.apache.hadoop.hive.ql.txn.compactor.Initiator.run(Initiator.java:154)
    at org.apache.hadoop.hive.metastore.txn.TxnHandler.showCompact(TxnHandler.java:3946) ~[classes/:?]
    at org.apache.hadoop.hive.ql.txn.compactor.Initiator.run(Initiator.java:154) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
{code}

The error does not occur when Derby is configured as the HMS backend.

--
This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HIVE-26759) ERROR: column "CC_START" does not exist, when Postgres is used as Hive metastore
[ https://issues.apache.org/jira/browse/HIVE-26759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshat Mathur reassigned HIVE-26759: Assignee: Akshat Mathur

--
This message was sent by Atlassian Jira (v8.20.10#820010)
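One detail worth knowing when debugging this class of metastore error: Postgres folds unquoted identifiers to lower case, so to Postgres cc_start and a quoted "CC_START" are different column names. The helper below is a hypothetical diagnostic (not Hive code, and the table/column pairing is an assumption based on the CC_ prefix used by COMPLETED_COMPACTIONS) that builds an information_schema query one could run to see which spelling of the column the deployed schema actually contains:

```python
# Hypothetical diagnostic: build an information_schema query that lists every
# column matching a name regardless of identifier case, so you can see whether
# the deployed Postgres schema holds cc_start, "CC_START", or neither.
def column_probe(table: str, column: str) -> str:
    """Return SQL listing columns whose names match case-insensitively."""
    return (
        "SELECT table_name, column_name FROM information_schema.columns "
        f"WHERE lower(table_name) = lower('{table}') "
        f"AND lower(column_name) = lower('{column}')"
    )

# The CC_ column prefix suggests COMPLETED_COMPACTIONS, per the showCompact()
# frame in the stack trace above; adjust for your schema version.
sql = column_probe("COMPLETED_COMPACTIONS", "CC_START")
print(sql)
```

Running the emitted SQL in psql shows whether the column is missing outright (a schema-script gap) or merely stored under a different casing than the query expects.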
[jira] [Created] (NIFI-10536) Getting java.lang.LinkageError when integrating Nifi with Hashicorp Vault
Ruchit Mathur created NIFI-10536:

Summary: Getting java.lang.LinkageError when integrating NiFi with HashiCorp Vault
Key: NIFI-10536
URL: https://issues.apache.org/jira/browse/NIFI-10536
Project: Apache NiFi
Issue Type: Bug
Components: Tools and Build
Affects Versions: 1.16.1
Reporter: Ruchit Mathur

Hello,

We configured NiFi with HashiCorp Vault using the Encrypt Configuration Tool as described in the official docs. We were able to add sensitive properties (keystore passwords and the sensitive key) to our Vault KV path, but after restarting NiFi we encountered the following errors. Note that we do not encounter this error in NiFi 1.15.3; every release after 1.15.3 produces it.

{code:java}
Caused by: java.lang.LinkageError: loader constraint violation: when resolving method 'void org.springframework.http.client.HttpComponentsClientHttpRequestFactory.(org.apache.http.client.HttpClient)' the class loader org.apache.nifi.property.protection.loader.PropertyProtectionURLClassLoader @69c335c4 of the current class, org/springframework/vault/client/ClientHttpRequestFactoryFactory$HttpComponents, and the class loader org.apache.nifi.nar.NarClassLoader @5792c08c for the method's defining class, org/springframework/http/client/HttpComponentsClientHttpRequestFactory, have different Class objects for the type org/apache/http/client/HttpClient used in the signature (org.springframework.vault.client.ClientHttpRequestFactoryFactory$HttpComponents is in unnamed module of loader org.apache.nifi.property.protection.loader.PropertyProtectionURLClassLoader @69c335c4, parent loader org.eclipse.jetty.webapp.WebAppClassLoader @6d4502ca; org.springframework.http.client.HttpComponentsClientHttpRequestFactory is in unnamed module of loader org.apache.nifi.nar.NarClassLoader @5792c08c, parent loader org.apache.nifi.nar.NarClassLoader @46e190ed)
    at org.springframework.vault.client.ClientHttpRequestFactoryFactory$HttpComponents.usingHttpComponents(ClientHttpRequestFactoryFactory.java:333)
    at org.springframework.vault.client.ClientHttpRequestFactoryFactory.create(ClientHttpRequestFactoryFactory.java:130)
    at org.apache.nifi.vault.hashicorp.StandardHashiCorpVaultCommunicationService.(StandardHashiCorpVaultCommunicationService.java:59)
    at org.apache.nifi.properties.AbstractHashiCorpVaultSensitivePropertyProvider.(AbstractHashiCorpVaultSensitivePropertyProvider.java:43)
    at org.apache.nifi.properties.HashiCorpVaultKeyValueSensitivePropertyProvider.(HashiCorpVaultKeyValueSensitivePropertyProvider.java:31)
    at org.apache.nifi.properties.StandardSensitivePropertyProviderFactory.getProvider(StandardSensitivePropertyProviderFactory.java:230)
    at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
    at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:992)
    at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
    at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
    at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
    at org.apache.nifi.properties.StandardSensitivePropertyProviderFactory.getSupportedProviders(StandardSensitivePropertyProviderFactory.java:152)
    at org.apache.nifi.properties.NiFiPropertiesLoader.load(NiFiPropertiesLoader.java:164)
    at org.apache.nifi.properties.NiFiPropertiesLoader.load(NiFiPropertiesLoader.java:190)
    at org.apache.nifi.properties.NiFiPropertiesLoader.loadDefault(NiFiPropertiesLoader.java:215)
    at org.apache.nifi.properties.NiFiPropertiesLoader.loadDefaultWithKeyFromBootstrap(NiFiPropertiesLoader.java:103)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154)
    ... 124 common frames omitted
2022-09-23 15:04:15,770 INFO [Thread-0] org.apache.nifi.NiFi Application Server shutdown started
{code}

--
This message was sent by Atlassian Jira (v8.20.10#820010)
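A loader constraint violation like the one in the trace means two class loaders (here PropertyProtectionURLClassLoader and NarClassLoader) each resolve their own Class object for org.apache.http.client.HttpClient, which usually means more than one httpclient jar is visible on the install. The sketch below is a hypothetical diagnostic, not NiFi tooling: it builds a throwaway directory tree standing in for a real NIFI_HOME (e.g. /opt/nifi, an assumed path) and scans it for duplicate httpclient jars.

```python
# Hypothetical sketch: find duplicate copies of httpclient under a NiFi
# install tree. The temp directory here is a stand-in for a real NIFI_HOME;
# the jar locations and versions are invented purely for illustration.
import pathlib
import tempfile

nifi_home = pathlib.Path(tempfile.mkdtemp())
(nifi_home / "lib" / "bootstrap").mkdir(parents=True)
(nifi_home / "work" / "nar").mkdir(parents=True)
(nifi_home / "lib" / "bootstrap" / "httpclient-4.5.13.jar").touch()
(nifi_home / "work" / "nar" / "httpclient-4.5.10.jar").touch()

# More than one hit means two loaders could each define their own
# HttpClient class, which is exactly what the LinkageError reports.
duplicates = sorted(p.relative_to(nifi_home)
                    for p in nifi_home.rglob("httpclient-*.jar"))
for jar in duplicates:
    print(jar)
```

Running the same rglob scan against a real install shows which components bundle their own copy of the library; it does not by itself fix the loader isolation, but it narrows down where the conflicting classes come from.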
Understanding webhook checks in flink operator
Hi,

We are trying to install the Flink Kubernetes operator v1.1.0. A few questions regarding the webhooks:

- Is it necessary to install the webhooks? If not, what can be the implications?
- What types of validations and mutations does it support?

Thanks
Prakhar Mathur
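For anyone comparing options: the operator's Helm chart exposes a toggle for the webhook, so it can be installed without one. This values.yaml fragment is a sketch under the assumption that the upstream flink-kubernetes-operator chart (as of v1.1.0) gates webhook installation behind a webhook.create value; check your chart version's values reference before relying on the exact key name.

```yaml
# values.yaml sketch (assumed key, upstream flink-kubernetes-operator chart):
# disable the admission webhook and its certificate setup entirely.
webhook:
  create: false
```

Without the webhook, FlinkDeployment specs are no longer validated or defaulted at admission time, so malformed resources are only rejected later by the operator's reconcile loop instead of at kubectl apply.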
[PHP-WEBMASTER] [web-rmtools] master: Add branch config for PHP 8.2
Author: Shivam Mathur (shivammathur)
Committer: Christoph M. Becker (cmb69)
Date: 2022-08-30T23:00:32+02:00
Commit: https://github.com/php/web-rmtools/commit/a2e9e71579b6508677a2fdcc80e4307f87090593
Raw diff: https://github.com/php/web-rmtools/commit/a2e9e71579b6508677a2fdcc80e4307f87090593.diff

Add branch config for PHP 8.2

Closes GH-20.

Changed paths:
  A data/config/branch/x64/php82.ini
  A data/config/branch/x86/php82.ini

Diff:

diff --git a/data/config/branch/x64/php82.ini b/data/config/branch/x64/php82.ini
new file mode 100644
index 000..2f5ce4c
--- /dev/null
+++ b/data/config/branch/x64/php82.ini
@@ -0,0 +1,33 @@
+name=PHP-8.2
+branch=8.2
+repo_name=phpnet
+repo_module=php-src.git
+repo_branch=refs/heads/PHP-8.2
+build_dir=C:/php-snap-build/obj-x64
+build_location=C:/php-snap-build/snap_8.2/vs16/x64
+appver=
+debug=0
+pgo=1
+compiler=vs16
+arch=x64
+
+[build-nts-windows-vs16-x64]
+name=nts-windows-vs16-x64
+compiler=vs16
+arch=x64
+configure_options=--enable-snapshot-build --disable-zts --enable-debug-pack --with-pdo-oci=c:/php-snap-build/deps_aux/oracle/x64/instantclient_19_9/sdk,shared --with-oci8-19=c:/php-snap-build/deps_aux/oracle/x64/instantclient_19_9/sdk,shared --enable-com-dotnet=shared --without-analyzer
+platform=windows
+
+[build-ts-windows-vs16-x64]
+name=ts-windows-vs16-x64
+compiler=vs16
+arch=x64
+configure_options=--enable-snapshot-build --enable-debug-pack --with-pdo-oci=c:/php-snap-build/deps_aux/oracle/x64/instantclient_19_9/sdk,shared --with-oci8-19=c:/php-snap-build/deps_aux/oracle/x64/instantclient_19_9/sdk,shared --enable-com-dotnet=shared --without-analyzer
+platform=windows
+
+[build-nts-windows-vs16-x64-avx]
+name=nts-windows-vs16-x64
+compiler=vs16
+arch=x64
+configure_options=--enable-snapshot-build --disable-zts --enable-debug-pack --with-pdo-oci=c:/php-snap-build/deps_aux/oracle/x64/instantclient_19_9/sdk,shared --with-oci8-19=c:/php-snap-build/deps_aux/oracle/x64/instantclient_19_9/sdk,shared --enable-com-dotnet=shared --without-analyzer --enable-native-intrinsics=avx
+platform=windows
diff --git a/data/config/branch/x86/php82.ini b/data/config/branch/x86/php82.ini
new file mode 100644
index 000..3e80e62
--- /dev/null
+++ b/data/config/branch/x86/php82.ini
@@ -0,0 +1,26 @@
+name=PHP-8.2
+branch=8.2
+repo_name=phpnet
+repo_module=php-src.git
+repo_branch=refs/heads/PHP-8.2
+build_dir=C:/php-snap-build/obj
+build_location=C:/php-snap-build/snap_8.2/vs16/x86
+appver=
+debug=0
+pgo=1
+compiler=vs16
+arch=x86
+
+[build-nts-windows-vs16-x86]
+name=nts-windows-vs16-x86
+compiler=vs16
+arch=x86
+configure_options=--enable-snapshot-build --disable-zts --enable-debug-pack --with-pdo-oci=c:\php-snap-build\deps_aux\oracle\x86\instantclient_19_9\sdk,shared --with-oci8-19=c:\php-snap-build\deps_aux\oracle\x86\instantclient_19_9\sdk,shared --enable-com-dotnet=shared --without-analyzer
+platform=windows
+
+[build-ts-windows-vs16-x86]
+name=ts-windows-vs16-x86
+compiler=vs16
+arch=x86
+configure_options=--enable-snapshot-build --enable-debug-pack --with-pdo-oci=c:\php-snap-build\deps_aux\oracle\x86\instantclient_19_9\sdk,shared --with-oci8-19=c:\php-snap-build\deps_aux\oracle\x86\instantclient_19_9\sdk,shared --enable-com-dotnet=shared --without-analyzer
+platform=windows

--
PHP Webmaster List Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php
[osg-users] OpenSceneGraph on NASA Artemis I
Hello all,

Artemis is a NASA program to send humans back to the Moon. As you may have heard on the news, the first Artemis mission (Artemis I) is scheduled for launch on Monday (Aug 29) morning. This is a very big deal for human space flight. What you may not know is how big of a role OpenSceneGraph has played in the Artemis mission design and operations.

The Artemis I mission will send the Orion spacecraft on a 40-day mission to the Moon and back. The trajectory that Orion will fly was designed using a NASA software called Copernicus, which has 3D visualizations powered by - you guessed it - OSG. Not only that, but Copernicus is also being used for realtime operational support after launch, so OSG will be there in the Artemis mission control room!

And it doesn't end there. Artemis I includes 10 tiny spacecraft, called CubeSats, that will perform secondary missions to glean information about how deep space affects organisms and systems during long missions, e.g. to the Moon and Mars. One of those CubeSats is called CuSP, which will travel 4 million kilometers away (10x the Moon's distance) with a goal of better understanding solar wind. The CuSP trajectory was designed by another NASA software called GMAT, whose 3D graphics are also provided by OSG! GMAT is also being used for operational support of CuSP, so OSG will be all over the CuSP mission operations center too!

Being able to visualize complex spacecraft trajectories as they are being computed and as they are being flown is an important aspect of modern space mission design and operations, and OSG has been an important part of that. I wanted to thank Robert and all the other OSG devs for what they've created and tirelessly improved over the past 20+ years. And I look forward to eventually porting Copernicus and GMAT over to VSG!

Thanks!
Ravi Mathur
Director of R, Emergent Space Technologies Inc.
Visualization dev lead, NASA Copernicus and NASA GMAT software.
Trajectory design lead, NASA CuSP mission.
[efloraofindia:419621] Re: Identification of these grasses/Sedges native to Maldives
Dear Mr Garg,

I really need to get these species identified. Please help me identify them with their botanical names. What I know is that one is, or could be, Cyperus canescens and one is Pteris interrupta. Please can the lovely community here confirm these and also identify the first one.

Thank you!
Warm regards,
Ritu

On Thu, 7 Apr 2022 at 1:56 PM, J.M. Garg wrote:
> Thanks, Ritu ji
>
> -- Forwarded message -
> From: Ritu Mathur
> Date: Thu, 7 Apr 2022 at 14:09
> Subject: Identification of these grasses/Sedges native to Maldives
> To: J.M. Garg ,
>
> Dear Mr Garg and Mr Singh,
> Need your help in identifying these three native grasses that grow in Maldives.
> It will be good to know if they can be used for cleaning water of Nitrates, Nitrites, Phosphates and other impurities.
>
> Will they work as an alternative to Vetiver and other reeds like Phragmites?
>
> Thank you!
> Warm regards,
> Ritu
>
> --
> With regards,
> J.M.Garg
[Bf-committers] Enquiry about python contribution
Hey there,

I am a proficient coder in Python and would like to contribute to Blender using it. I have taken a glance at the good first issues as well. It would be great if any of you could point me to where and how to start, what to read, etc. A little help would be gladly appreciated.

Thanks and regards,
Chirag Mathur (Blender chat: Chirag-Mathur)