[jira] [Resolved] (HBASE-26898) Cannot rebuild a cluster from an existing root directory
[ https://issues.apache.org/jira/browse/HBASE-26898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LiangJun He resolved HBASE-26898.
---------------------------------
    Resolution: Resolved

> Cannot rebuild a cluster from an existing root directory
> --------------------------------------------------------
>
>                 Key: HBASE-26898
>                 URL: https://issues.apache.org/jira/browse/HBASE-26898
>             Project: HBase
>          Issue Type: Bug
>          Components: master
>    Affects Versions: 3.0.0-alpha-2
>            Reporter: LiangJun He
>            Assignee: LiangJun He
>            Priority: Major
>             Fix For: 3.0.0-alpha-2
>
> When I tested rebuilding an HBase cluster with the rootdir configured as an existing directory (one generated by another HBase cluster of the same version), I saw the following error message:
> {code:java}
> java.net.UnknownHostException: Call to address=worker-1.cluster-xxx:16020 failed on local exception: java.net.UnknownHostException: worker-1.cluster-xxx:16020 could not be resolved
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:234)
>   at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
>   at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:93)
>   at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:424)
>   at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:419)
>   at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:119)
>   at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:134)
>   at org.apache.hadoop.hbase.ipc.NettyRpcConnection.lambda$sendRequest$4(NettyRpcConnection.java:351)
>   at org.apache.hbase.thirdparty.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
>   at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
>   at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
>   at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
>   at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>   at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.UnknownHostException: worker-1.cluster-xxx:16020 could not be resolved
>   at org.apache.hadoop.hbase.ipc.RpcConnection.getRemoteInetAddress(RpcConnection.java:192)
>   at org.apache.hadoop.hbase.ipc.NettyRpcConnection.connect(NettyRpcConnection.java:275)
>   at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$800(NettyRpcConnection.java:78)
>   at org.apache.hadoop.hbase.ipc.NettyRpcConnection$4.run(NettyRpcConnection.java:325)
>   at org.apache.hadoop.hbase.ipc.HBaseRpcControllerImpl.notifyOnCancel(HBaseRpcControllerImpl.java:262)
>   at org.apache.hadoop.hbase.ipc.NettyRpcConnection.sendRequest0(NettyRpcConnection.java:308)
>   at org.apache.hadoop.hbase.ipc.NettyRpcConnection.lambda$sendRequest$4(NettyRpcConnection.java:349)
> {code}
> Eventually, creating the cluster fails. For cloud environments, though, rebuilding a cluster from an existing rootdir is a common scenario.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
Re: [DISCUSS] HBASE-26245 Store region server list in master local region
Thank you Duo. I have also encountered this issue and it is somewhere on the to-do list. Let me review the PR, this is fantastic.

On Wed, Mar 30, 2022 at 5:26 PM 张铎(Duo Zhang) wrote:

> Liangjun He from Alibaba has tested the patch on their cloud deployment rebuilding scenario, and it works fine if we stop masters first and then region servers. Please check the comments on the jira issue for more details.
>
> Let me try to get this in. This will be very useful for users who deploy HBase on cloud.
>
> Thanks.
>
> 张铎(Duo Zhang) wrote on Mon, Mar 28, 2022 at 12:24:
>
> > The issue aims to solve the problem of redeploying HBase clusters on cloud.
> >
> > I cannot find the issue, but IIRC the AWS folks said they tried the following steps while redeploying a customer's HBase cluster:
> >
> > 1. Disable writes to the cluster, flush all data to disk (which is actually S3)
> > 2. Recreate the cluster with a set of new machines, and also a new zk and a new HDFS (for writing WAL)
> >
> > Then the new cluster just hung there and no regions were online.
> >
> > This is because in HMaster startup, we rely on scanning the WAL directory on HDFS to get the previous live region servers. We compare that list with the list stored on zookeeper to find the dead region servers and schedule SCPs for them, and then the SCPs bring the regions online.
> >
> > The problem with the above redeploying operation is that the WAL directory is also cleaned, so we cannot get the previous live region servers, and so no SCP will be scheduled.
> >
> > This is a bit annoying, as we have already flushed all the data out, so it should be safe to delete all the WAL data.
> >
> > The idea in HBASE-26245 is to also store a copy of the live region servers in the master local region, so when restarting, we could load the previous live region servers from the master local region instead of relying only on the WAL directory. In this way we solve the problem of the above redeploying operation.
> >
> > The PR is also ready.
> >
> > https://github.com/apache/hbase/pull/4136
> >
> > Suggestions and reviews are always welcome.
> >
> > Thanks.

--
Best regards,
Andrew

Unrest, ignorance distilled, nihilistic imbeciles -
It's what we've earned
Welcome, apocalypse, what's taken you so long?
Bring us the fitting end that we've been counting on
  - A23, Welcome, Apocalypse
Re: [DISCUSS] HBASE-26245 Store region server list in master local region
Liangjun He from Alibaba has tested the patch on their cloud deployment rebuilding scenario, and it works fine if we stop masters first and then region servers. Please check the comments on the jira issue for more details.

Let me try to get this in. This will be very useful for users who deploy HBase on cloud.

Thanks.

张铎(Duo Zhang) wrote on Mon, Mar 28, 2022 at 12:24:

> The issue aims to solve the problem of redeploying HBase clusters on cloud.
>
> I cannot find the issue, but IIRC the AWS folks said they tried the following steps while redeploying a customer's HBase cluster:
>
> 1. Disable writes to the cluster, flush all data to disk (which is actually S3)
> 2. Recreate the cluster with a set of new machines, and also a new zk and a new HDFS (for writing WAL)
>
> Then the new cluster just hung there and no regions were online.
>
> This is because in HMaster startup, we rely on scanning the WAL directory on HDFS to get the previous live region servers. We compare that list with the list stored on zookeeper to find the dead region servers and schedule SCPs for them, and then the SCPs bring the regions online.
>
> The problem with the above redeploying operation is that the WAL directory is also cleaned, so we cannot get the previous live region servers, and so no SCP will be scheduled.
>
> This is a bit annoying, as we have already flushed all the data out, so it should be safe to delete all the WAL data.
>
> The idea in HBASE-26245 is to also store a copy of the live region servers in the master local region, so when restarting, we could load the previous live region servers from the master local region instead of relying only on the WAL directory. In this way we solve the problem of the above redeploying operation.
>
> The PR is also ready.
>
> https://github.com/apache/hbase/pull/4136
>
> Suggestions and reviews are always welcome.
>
> Thanks.
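The recovery step Duo describes, diffing the set of previously live region servers (recovered from the WAL directory, or from the master local region with HBASE-26245) against the servers currently registered on ZooKeeper and scheduling an SCP for each server in the difference, can be sketched as plain set arithmetic. The class and method names below are illustrative, not HBase's actual API:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DeadServerSketch {
    /**
     * Servers that were alive before the restart but are no longer
     * registered on ZooKeeper are considered dead; the master schedules a
     * ServerCrashProcedure (SCP) for each to bring its regions back online.
     * If previousLiveServers is empty (e.g. the WAL directory was wiped),
     * no SCP is scheduled and regions never come online, which is exactly
     * the redeploy problem described above.
     */
    static Set<String> findDeadServers(Set<String> previousLiveServers,
                                       Set<String> currentZkServers) {
        Set<String> dead = new HashSet<>(previousLiveServers);
        dead.removeAll(currentZkServers);
        return dead;
    }

    public static void main(String[] args) {
        Set<String> previous = new HashSet<>(List.of("rs1:16020", "rs2:16020"));
        Set<String> onZk = new HashSet<>(List.of("rs2:16020", "rs3:16020"));
        // rs1 was alive before but is gone now -> it needs an SCP
        System.out.println(findDeadServers(previous, onZk)); // prints [rs1:16020]
    }
}
```

Storing the previous-live list in the master local region simply gives `previousLiveServers` a second, durable source that survives a wiped WAL directory.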
[jira] [Resolved] (HBASE-26871) shaded mapreduce and shaded byo-hadoop client artifacts contains no classes
[ https://issues.apache.org/jira/browse/HBASE-26871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Kyle Purtell resolved HBASE-26871.
-----------------------------------------
    Resolution: Fixed

> shaded mapreduce and shaded byo-hadoop client artifacts contains no classes
> ---------------------------------------------------------------------------
>
>                 Key: HBASE-26871
>                 URL: https://issues.apache.org/jira/browse/HBASE-26871
>             Project: HBase
>          Issue Type: Bug
>          Components: integration tests, jenkins, mapreduce
>    Affects Versions: 2.5.0, 2.6.0
>            Reporter: Duo Zhang
>            Assignee: Sean Busbey
>            Priority: Blocker
>             Fix For: 2.5.0, 2.6.0, 2.4.12
>
> After fixing the logging problem in HBASE-26870, we could see the actual error.
> {noformat}
> /home/jenkins/jenkins-home/workspace/ase_Nightly_HBASE-26870-branch-2/hadoop-3/bin/hadoop --config /home/jenkins/jenkins-home/workspace/ase_Nightly_HBASE-26870-branch-2/output-integration/hadoop-3/hbase-conf/ jar /home/jenkins/jenkins-home/workspace/ase_Nightly_HBASE-26870-branch-2/hbase-client/lib/shaded-clients/hbase-shaded-mapreduce-2.6.0-SNAPSHOT.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,family1:column1,family1:column4,family1:column3 test:example example/ -libjars /home/jenkins/jenkins-home/workspace/ase_Nightly_HBASE-26870-branch-2/hbase-client/bin/../lib/shaded-clients/hbase-shaded-mapreduce-2.6.0-SNAPSHOT.jar:/home/jenkins/jenkins-home/workspace/ase_Nightly_HBASE-26870-branch-2/hbase-client/bin/../lib/client-facing-thirdparty/audience-annotations-0.5.0.jar:/home/jenkins/jenkins-home/workspace/ase_Nightly_HBASE-26870-branch-2/hbase-client/bin/../lib/client-facing-thirdparty/commons-logging-1.2.jar:/home/jenkins/jenkins-home/workspace/ase_Nightly_HBASE-26870-branch-2/hbase-client/bin/../lib/client-facing-thirdparty/htrace-core4-4.1.0-incubating.jar:/home/jenkins/jenkins-home/workspace/ase_Nightly_HBASE-26870-branch-2/hbase-client/bin/../lib/client-facing-thirdparty/jcl-over-slf4j-1.7.33.jar:/home/jenkins/jenkins-home/workspace/ase_Nightly_HBASE-26870-branch-2/hbase-client/bin/../lib/client-facing-thirdparty/jul-to-slf4j-1.7.33.jar:/home/jenkins/jenkins-home/workspace/ase_Nightly_HBASE-26870-branch-2/hbase-client/bin/../lib/client-facing-thirdparty/opentelemetry-api-1.0.1.jar:/home/jenkins/jenkins-home/workspace/ase_Nightly_HBASE-26870-branch-2/hbase-client/bin/../lib/client-facing-thirdparty/opentelemetry-context-1.0.1.jar:/home/jenkins/jenkins-home/workspace/ase_Nightly_HBASE-26870-branch-2/hbase-client/bin/../lib/client-facing-thirdparty/opentelemetry-semconv-1.0.1-alpha.jar:/home/jenkins/jenkins-home/workspace/ase_Nightly_HBASE-26870-branch-2/hbase-client/bin/../lib/client-facing-thirdparty/slf4j-api-1.7.33.jar
> WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
> Exception in thread "main" java.lang.ClassNotFoundException: org.apache.hadoop.hbase.mapreduce.Driver
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:311)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
> {noformat}

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
[jira] [Created] (HBASE-26909) hbase-shaded-mapreduce and hbase-shaded-client expose some of the same classes
Bryan Beaudreault created HBASE-26909:
--------------------------------------

             Summary: hbase-shaded-mapreduce and hbase-shaded-client expose some of the same classes
                 Key: HBASE-26909
                 URL: https://issues.apache.org/jira/browse/HBASE-26909
             Project: HBase
          Issue Type: Improvement
            Reporter: Bryan Beaudreault

We supply 2 primary artifacts for end users to consume:
* hbase-shaded-client, which is for general use
* hbase-shaded-mapreduce, which is for use when you need to connect to hbase via mapreduce. For example, TableInputFormat

The problem is that these artifacts expose tons of duplicate classes. One example (among many) is org.apache.hadoop.hbase.Cell, which appears in both jars.

This may not be a problem if your projects are always very isolated, either doing mapreduce or not. In that case you just depend on the one you need. Many users exist in much more complicated environments, where dependencies tend to bleed between projects. Here's an illustration:

Imagine a project FooService, which includes two modules: FooServiceRestWeb (for the REST http resources) and FooServiceData (which includes DAOs for accessing data). FooServiceRestWeb depends on FooServiceData to access hbase. In this case, FooServiceData should depend on hbase-shaded-client.

Now imagine another project FooPipeline, which has modules FooPipelineHadoop (with M/R jobs for processing data) and FooPipelineData (which has some DAOs for accessing data). In this case, FooPipelineData might depend on hbase-shaded-mapreduce since the context is intended for M/R.

The problem arises when suddenly we want to include some data from FooService in our pipeline. The most straightforward way to achieve this is by depending on FooServiceData, which has all of the DAOs for that data but also depends on hbase-shaded-client. At this point you have a problem, because FooPipelineHadoop now depends on both hbase-shaded-mapreduce and hbase-shaded-client.

(Note, this obviously skirts around potential microservice solutions like only accessing FooService's data through the API... it's just for illustration, and it does come up.)

From a plain Java perspective, having these 2 jars on the classpath is somewhat wasteful but not a huge issue, since the implementations are all the same.

From a Maven perspective, it's problematic because the maven dependency plugin will complain about the conflicting classes.

One potential fix is to add exclusions to the FooServiceData dependency, to avoid pulling in hbase-shaded-client. This works on a one-off basis but is much more painful in a large and complicated environment where this may come up hundreds of times.

A better fix, in my opinion, is to make hbase-shaded-mapreduce depend on hbase-shaded-client and then only expose the classes that aren't already exposed by the shaded client. [~busbey] also mentioned a BOM as a potential solution, but I don't have experience with that.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
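The one-off exclusion workaround mentioned above would look roughly like this in the consuming module's pom.xml. The groupId/artifactId/version of the FooServiceData dependency are hypothetical, carried over from the illustration; only the org.apache.hbase coordinates are real:

```xml
<dependency>
  <groupId>com.example</groupId>
  <artifactId>foo-service-data</artifactId>
  <version>1.0.0</version>
  <exclusions>
    <!-- FooPipelineHadoop already gets these classes via hbase-shaded-mapreduce,
         so keep the transitive shaded client off the classpath -->
    <exclusion>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-shaded-client</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

As the issue notes, this has to be repeated on every dependency edge where the conflict appears, which is why making hbase-shaded-mapreduce depend on hbase-shaded-client (or publishing a BOM) scales better.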
[DISCUSS] Bump Hadoop3 version to 3.2.2
Heya,

When running tests locally with JDK 11, I'm running into an annoying problem with mini clusters and the Jetty that ships in Hadoop. See details on https://issues.apache.org/jira/browse/HBASE-26907. The issue has been fixed in Hadoop, so I wonder if this is problematic enough that we should bump the minimum Hadoop3 version number to include the fix. Thoughts?

Thanks,
Nick
Re: Meta replicas in LoadBalance mode broken in 2.4?
Hi Huaxiang,

Given that you already use this feature in production with no problems, I created a short patch to remove the "be careful" warnings from the HBase documentation. PTAL if you agree.

https://github.com/apache/hbase/pull/4301

Thanks,
Andor

> On 2022. Mar 29., at 18:52, Huaxiang Sun wrote:
>
> This is great, thanks for the testing results!
>
> Huaxiang
>
> On 2022/03/29 13:29:48 Andor Molnar wrote:
>> Works!
>>
>> I enabled async wal replication with the suggested option and ITBLL ran successfully.
>>
>> generator step:
>> hbase org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList generator 15 100 /tmp/hbase-itbll
>>
>> verification step:
>> hbase org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList verify /tmp/hbase-itbll-verify 15
>>
>> Both succeeded. I also confirmed that meta replicas are written and read by the clients, so they must be in load balance mode.
>>
>> Thanks for the help!
>>
>> Andor
>>
>>> On 2022. Mar 27., at 6:59, Huaxiang Sun wrote:
>>>
>>> It makes sense to turn on async wal replication when hbase.meta.replicas.use = true. Let me run a couple of rounds of itbll with hbase.region.replica.replication.catalog.enabled (latest 2.4 and 2.5.0 candidates) to get more confidence before proposing turning on async wal replication for meta.
>>>
>>> Thanks,
>>> Huaxiang
>>>
>>> On 2022/03/26 04:03:15 Andrew Purtell wrote:
>>>> Just to be clear, when I say "it seems pointless to have meta replicas which do not actually receive updates (by default)", what I should have said is 'timely updates', because a long delay in updating meta might as well be a missed update.
>>>>
>>>> On Fri, Mar 25, 2022 at 9:01 PM Andrew Purtell wrote:
>>>>> "Async WAL replication for META is added as a new feature in 2.4.0. It is still under active development. Use with caution. Set hbase.region.replica.replication.catalog.enabled to enable async WAL Replication for META region replicas. It is off by default."
>>>>>
>>>>> Do we still need this warning?
>>>>>
>>>>> Should hbase.region.replica.replication.catalog.enabled have a default of 'true' (enabled) if hbase.meta.replicas.use = true? Otherwise, it seems pointless to have meta replicas which do not actually receive updates (by default).
>>>>>
>>>>> On Fri, Mar 25, 2022 at 10:51 AM Huaxiang Sun wrote:
>>>>>> Hi Andor,
>>>>>>
>>>>>> I get what you are saying. The HFile refreshing is the old way for replica regions to refresh hfiles periodically; the default is 5 minutes. In this itbll case, we need to have wal replication enabled for the meta replica. Please check out https://hbase.apache.org/book.html#_async_wal_replication_for_meta_table_as_of_hbase_2_4_0. Basically, you need to set "hbase.region.replica.replication.catalog.enabled" to true in the configuration and rerun itbll. Otherwise, meta changes at the primary meta region won't be updated at the replica meta regions, and that will result in itbll failures.
>>>>>>
>>>>>> Hope this helps,
>>>>>>
>>>>>> Huaxiang
>>>>>>
>>>>>> On 2022/03/25 13:46:42 Andor Molnar wrote:
>>>>>>> Hi Huaxiang,
>>>>>>>
>>>>>>> We use 2.4.6 for the tests.
>>>>>>>
>>>>>>> I ran itbll with the following command for the generator step:
>>>>>>>
>>>>>>> hbase org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList generator 15 100 /tmp/hbase-itbll
>>>>>>>
>>>>>>> and essentially the jobs failed. We can see the meta requests spanning out to replicas, but writes start failing after this due to the stale cache, which is not getting updated.
>>>>>>>
>>>>>>> Would you please tell me more about 'hfile refresh' and how to configure it?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Andor
>>>>>>>
>>>>>>> On 2022. Mar 24., at 17:43, Huaxiang Sun wrote:
>>>>>>>> Hi Andor,
>>>>>>>>
>>>>>>>> Which 2.4 release do you test in your lab? We use this feature at a production cluster with 2.4.5.
>>>>>>>>
>>>>>>>> At server side, we use hfile refresh instead of wal replication. I used to run itbll for each release with this feature enabled. How did you find the errors, did itbll fail?
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Huaxiang
>>>>>
>>>>> --
>>>>> Best regards,
>>>>> Andrew
>>>>>
>>>>> Unrest, ignorance distilled, nihilistic imbeciles -
>>>>> It's what we've earned
>>>>> Welcome, apocalypse, what's taken you so long?
>>>>> Bring us the fitting end that we've been counting on
>>>>>   - A23, Welcome, Apocalypse
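For reference, the configuration that made ITBLL pass in this thread amounts to roughly the following in hbase-site.xml. Property names are as given in the thread and the HBase book; the replica count property and where each setting belongs (client vs. server side) depend on your deployment, so treat this as a sketch rather than a complete recipe:

```xml
<!-- serve meta reads from replicas (load balance mode) -->
<property>
  <name>hbase.meta.replicas.use</name>
  <value>true</value>
</property>
<!-- enable async WAL replication for meta region replicas (off by default),
     so replicas receive timely updates instead of periodic hfile refreshes -->
<property>
  <name>hbase.region.replica.replication.catalog.enabled</name>
  <value>true</value>
</property>
```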
[jira] [Created] (HBASE-26908) Remove warnings from meta replicas feature references in the HBase book
Andor Molnar created HBASE-26908:
---------------------------------

             Summary: Remove warnings from meta replicas feature references in the HBase book
                 Key: HBASE-26908
                 URL: https://issues.apache.org/jira/browse/HBASE-26908
             Project: HBase
          Issue Type: Task
          Components: documentation
            Reporter: Andor Molnar
            Assignee: Andor Molnar

Meta replicas is a new feature in HBase 2.4 and is marked "Use with caution" in the docs. Given that the feature and the related "async wal replication for meta" are already actively used in production, I'd like to remove these warnings from the docs. With this change, users will have more confidence in the feature.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
[jira] [Created] (HBASE-26907) Cannot run tests on Hadoop3 with Java11 versions having 4 version number positions
Nick Dimiduk created HBASE-26907:
---------------------------------

             Summary: Cannot run tests on Hadoop3 with Java11 versions having 4 version number positions
                 Key: HBASE-26907
                 URL: https://issues.apache.org/jira/browse/HBASE-26907
             Project: HBase
          Issue Type: Task
          Components: build
            Reporter: Nick Dimiduk

It happened that my JDK version was upgraded to 11.0.14.1. Running unit tests involving the HDFS mini cluster now fails with a stack trace that ends with

{noformat}
Caused by: java.lang.IllegalArgumentException: Invalid Java version 11.0.14.1
  at org.eclipse.jetty.util.JavaVersion.parseJDK9(JavaVersion.java:71)
  at org.eclipse.jetty.util.JavaVersion.parse(JavaVersion.java:49)
  at org.eclipse.jetty.util.JavaVersion.<clinit>(JavaVersion.java:43)
{noformat}

We are using hadoop-3.2.0, which uses jetty-9.3.24. This Jetty issue has been fixed upstream via https://github.com/eclipse/jetty.project/issues/2090. Hadoop upgraded its Jetty version to 9.4 in HADOOP-16152, which is available as of hadoop-3.2.2.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
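The failure mode is easy to reproduce with a parser that, like the Jetty 9.3 code in the stack trace above, assumes a JDK version string has at most three dot-separated numeric components, so the four-position string 11.0.14.1 blows up. This is a simplified illustration of the two behaviors, not Jetty's actual code:

```java
public class VersionParseSketch {
    // Strict parse in the spirit of Jetty 9.3's JavaVersion: reject any
    // version string with more than three numeric components.
    static int[] parseStrict(String v) {
        String[] parts = v.split("\\.");
        if (parts.length > 3) {
            throw new IllegalArgumentException("Invalid Java version " + v);
        }
        int[] out = new int[parts.length];
        for (int i = 0; i < parts.length; i++) {
            out[i] = Integer.parseInt(parts[i]);
        }
        return out;
    }

    // Lenient parse in the spirit of the fixed behavior: read the leading
    // components you need and ignore anything beyond them.
    static int majorOf(String v) {
        return Integer.parseInt(v.split("\\.")[0]);
    }

    public static void main(String[] args) {
        System.out.println(majorOf("11.0.14.1")); // prints 11
        try {
            parseStrict("11.0.14.1");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints: Invalid Java version 11.0.14.1
        }
    }
}
```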
Call for Presentations now open, ApacheCon North America 2022
[You are receiving this because you are subscribed to one or more user or dev mailing lists of an Apache Software Foundation project.]

ApacheCon draws participants at all levels to explore "Tomorrow's Technology Today" across 300+ Apache projects and their diverse communities. ApacheCon showcases the latest developments in ubiquitous Apache projects and emerging innovations through hands-on sessions, keynotes, real-world case studies, trainings, hackathons, community events, and more.

The Apache Software Foundation will be holding ApacheCon North America 2022 at the New Orleans Sheraton, October 3rd through 6th, 2022.

The Call for Presentations is now open, and will close at 00:01 UTC on May 23rd, 2022. We are accepting presentation proposals for any topic that is related to the Apache mission of producing free software for the public good. This includes, but is not limited to:

Community
Big Data
Search
IoT
Cloud
Fintech
Pulsar
Tomcat

You can submit your session proposals starting today at https://cfp.apachecon.com/

Rich Bowen, on behalf of the ApacheCon Planners
apachecon.com
@apachecon
[jira] [Created] (HBASE-26906) Remove duplicate dependency declaration
Nick Dimiduk created HBASE-26906:
---------------------------------

             Summary: Remove duplicate dependency declaration
                 Key: HBASE-26906
                 URL: https://issues.apache.org/jira/browse/HBASE-26906
             Project: HBase
          Issue Type: Task
          Components: build
            Reporter: Nick Dimiduk
            Assignee: Nick Dimiduk

On branch-2 derivatives, I noticed on PR builds of JDK11+Hadoop3 that running tests with {{-pl}} results in a warning from maven

{noformat}
[WARNING]
[WARNING] Some problems were encountered while building the effective model for org.apache.hbase:hbase-build-configuration:pom:2.6.0-SNAPSHOT
[WARNING] 'profiles.profile[hadoop-3.0].dependencyManagement.dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: org.apache.hadoop:hadoop-mapreduce-client-app:test-jar -> duplicate declaration of version ${hadoop-three.version} @ org.apache.hbase:hbase:2.6.0-SNAPSHOT, /Users/ndimiduk/repos/apache/hbase/pom.xml, line 3816, column 23
[WARNING]
[WARNING] Some problems were encountered while building the effective model for org.apache.hbase:hbase:pom:2.6.0-SNAPSHOT
[WARNING] 'profiles.profile[hadoop-3.0].dependencyManagement.dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: org.apache.hadoop:hadoop-mapreduce-client-app:test-jar -> duplicate declaration of version ${hadoop-three.version} @ line 3816, column 23
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING]
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING]
{noformat}

I _think_ they're harmless, but we should clean them up.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
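Per the warning text, the cleanup is to keep a single managed entry for the hadoop-mapreduce-client-app test-jar in the hadoop-3.0 profile's dependencyManagement and delete the second, identical one. A sketch of the entry that should remain (the scope element here is an assumption; the actual pom may differ):

```xml
<dependencyManagement>
  <dependencies>
    <!-- keep exactly one managed declaration per
         groupId:artifactId:type:classifier combination -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-mapreduce-client-app</artifactId>
      <version>${hadoop-three.version}</version>
      <type>test-jar</type>
      <scope>test</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```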