[ https://issues.apache.org/jira/browse/HDDS-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham resolved HDDS-5164.
--------------------------------------
    Fix Version/s: 1.2.0
       Resolution: Fixed

> Improve client and server logging
> ---------------------------------
>
>                 Key: HDDS-5164
>                 URL: https://issues.apache.org/jira/browse/HDDS-5164
>             Project: Apache Ozone
>          Issue Type: Improvement
>            Reporter: Nilotpal Nandi
>            Assignee: Hanisha Koneru
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.2.0
>
>
> On the OM server side, NotLeaderException and LeaderNotReadyException can be 
> suppressed; otherwise OM1's log is flooded with NotLeaderExceptions, since 
> clients always try OM1 before moving on to the next OM. We can log these 
> exceptions at DEBUG instead.
> Some BlockOutputStream and BlockOutputStreamEntryPool logs should also be at 
> DEBUG level instead of INFO.
> Running a 20 GB put-key operation produced the following console log:
> {code:java}
> ozone sh key put o3://ozone1/vol2/buck2/20GB /tmp/20GB
> 21/03/08 19:01:05 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-xceiverclientmetrics.properties,hadoop-metrics2.properties
> 21/03/08 19:01:05 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
> 21/03/08 19:01:05 INFO impl.MetricsSystemImpl: XceiverClientMetrics metrics system started
> 21/03/08 19:01:06 INFO metrics.MetricRegistries: Loaded MetricRegistries class org.apache.ratis.metrics.impl.MetricRegistriesImpl
> 21/03/08 19:01:06 INFO metrics.RatisMetrics: Creating Metrics Registry : ratis.client_message_metrics.client-30FCE96A6E8D->c2e9a19c-15f3-4eae-ba46-83c763d2ee8d
> 21/03/08 19:01:06 WARN impl.MetricRegistriesImpl: First MetricRegistry has been created without registering reporters. You may need to call MetricRegistries.global().addReporterRegistration(...) before.
> 21/03/08 19:02:39 ERROR storage.BlockOutputStream: writing chunk failed 105855717519851570_chunk_43 blockID conID: 74 locID: 105855717519851570 bcsId: 7856 with exception org.apache.ratis.protocol.exceptions.StateMachineException: org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException from Server c2e9a19c-15f3-4eae-ba46-83c763d2ee8d@group-7EA52504D5B4: Container 74 in CLOSING state
> 21/03/08 19:02:39 ERROR storage.BlockOutputStream: writing chunk failed 105855717519851570_chunk_44 blockID conID: 74 locID: 105855717519851570 bcsId: 7856 with exception org.apache.ratis.protocol.exceptions.StateMachineException: org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException from Server c2e9a19c-15f3-4eae-ba46-83c763d2ee8d@group-7EA52504D5B4: Container 74 in CLOSED state
> 21/03/08 19:02:39 ERROR storage.BlockOutputStream: writing chunk failed 105855717519851571_chunk_1 blockID conID: 75 locID: 105855717519851571 bcsId: 0 with exception org.apache.ratis.protocol.exceptions.StateMachineException: org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException from Server c2e9a19c-15f3-4eae-ba46-83c763d2ee8d@group-7EA52504D5B4: Container 75 in CLOSED state
> 21/03/08 19:02:39 ERROR storage.BlockOutputStream: writing chunk failed 105855717519851571_chunk_2 blockID conID: 75 locID: 105855717519851571 bcsId: 0 with exception org.apache.ratis.protocol.exceptions.StateMachineException: org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException from Server c2e9a19c-15f3-4eae-ba46-83c763d2ee8d@group-7EA52504D5B4: Container 75 in CLOSED state
> 21/03/08 19:02:39 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:02:39 ERROR storage.BlockOutputStream: writing chunk failed 105855717519851571_chunk_3 blockID conID: 75 locID: 105855717519851571 bcsId: 0 with exception org.apache.ratis.protocol.exceptions.StateMachineException: org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException from Server c2e9a19c-15f3-4eae-ba46-83c763d2ee8d@group-7EA52504D5B4: Container 75 in CLOSED state
> 21/03/08 19:02:39 ERROR storage.BlockOutputStream: writing chunk failed 105855717519851571_chunk_4 blockID conID: 75 locID: 105855717519851571 bcsId: 0 with exception org.apache.ratis.protocol.exceptions.StateMachineException: org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException from Server c2e9a19c-15f3-4eae-ba46-83c763d2ee8d@group-7EA52504D5B4: Container 75 in CLOSED state
> 21/03/08 19:02:41 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:02:43 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:02:45 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:02:47 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:02:48 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:02:50 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:02:52 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:02:54 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:02:56 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:02:57 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:02:59 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:01 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:02 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:04 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:06 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:07 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:10 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:11 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:13 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:15 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:17 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:18 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:20 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:22 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:23 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:28 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:30 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:31 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> 21/03/08 19:03:33 INFO io.BlockOutputStreamEntryPool: Allocating block with ExcludeList {datanodes = [], containerIds = [#74, #75], pipelineIds = []}
> {code}
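> Until the levels are changed in code, the INFO-level "Allocating block" chatter above can be muted from the client's log4j configuration. A minimal sketch; the fully-qualified logger names below are inferred from the {{storage.BlockOutputStream}} and {{io.BlockOutputStreamEntryPool}} prefixes in the log and should be verified against the source tree before use:
> {code:title=log4j.properties}
> # Hypothetical overrides: raise the threshold on the noisy client-side
> # write-path loggers so only WARN and above reach the console.
> log4j.logger.org.apache.hadoop.hdds.scm.storage.BlockOutputStream=WARN
> log4j.logger.org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool=WARN
> {code}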



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
