steveloughran commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1343033763

   Getting a test failure locally: ITestReadBufferManager is failing because one of its asserts isn't valid.
   
   Going to reopen the jira.
   @pranavsaxena-microsoft, can you see if you can replicate the problem and add a followup patch (use the same jira)?
   Do make sure you are running this test *first*, and that it is failing for you. Thanks.
   
   ```
   [INFO] Running org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager
   [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.816 s <<< FAILURE! - in org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager
   [ERROR] testPurgeBufferManagerForSequentialStream(org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager)  Time elapsed: 1.995 s  <<< FAILURE!
   java.lang.AssertionError:
   [Buffers associated with closed input streams shouldn't be present]
   Expecting:
    <org.apache.hadoop.fs.azurebfs.services.AbfsInputStream@5a709b9b{counters=((stream_read_bytes_backwards_on_seek=0) (stream_read_seek_forward_operations=0) (stream_read_seek_operations=0) (read_ahead_bytes_read=16384) (stream_read_seek_bytes_skipped=0) (stream_read_bytes=1) (action_http_get_request=0) (bytes_read_buffer=1) (seek_in_buffer=0) (remote_bytes_read=81920) (action_http_get_request.failures=0) (stream_read_operations=1) (remote_read_op=8) (stream_read_seek_backward_operations=0));
   gauges=();
   minimums=((action_http_get_request.failures.min=-1) (action_http_get_request.min=-1));
   maximums=((action_http_get_request.max=-1) (action_http_get_request.failures.max=-1));
   means=((action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)));
   }AbfsInputStream@(1517329307){StreamStatistics{counters=((stream_read_seek_bytes_skipped=0) (seek_in_buffer=0) (stream_read_bytes=1) (stream_read_seek_operations=0) (remote_bytes_read=81920) (stream_read_operations=1) (bytes_read_buffer=1) (action_http_get_request.failures=0) (action_http_get_request=0) (stream_read_seek_forward_operations=0) (stream_read_bytes_backwards_on_seek=0) (read_ahead_bytes_read=16384) (stream_read_seek_backward_operations=0) (remote_read_op=8));
   gauges=();
   minimums=((action_http_get_request.min=-1) (action_http_get_request.failures.min=-1));
   maximums=((action_http_get_request.max=-1) (action_http_get_request.failures.max=-1));
   means=((action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)));
   }}>
   not to be equal to:
    <org.apache.hadoop.fs.azurebfs.services.AbfsInputStream@5a709b9b{counters=((bytes_read_buffer=1) (stream_read_seek_forward_operations=0) (read_ahead_bytes_read=16384) (stream_read_seek_operations=0) (stream_read_seek_bytes_skipped=0) (stream_read_seek_backward_operations=0) (remote_bytes_read=81920) (stream_read_operations=1) (stream_read_bytes_backwards_on_seek=0) (action_http_get_request.failures=0) (seek_in_buffer=0) (action_http_get_request=0) (remote_read_op=8) (stream_read_bytes=1));
   gauges=();
   minimums=((action_http_get_request.min=-1) (action_http_get_request.failures.min=-1));
   maximums=((action_http_get_request.max=-1) (action_http_get_request.failures.max=-1));
   means=((action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)));
   }AbfsInputStream@(1517329307){StreamStatistics{counters=((remote_read_op=8) (stream_read_seek_forward_operations=0) (stream_read_seek_backward_operations=0) (read_ahead_bytes_read=16384) (action_http_get_request.failures=0) (bytes_read_buffer=1) (stream_read_seek_operations=0) (stream_read_bytes=1) (stream_read_bytes_backwards_on_seek=0) (action_http_get_request=0) (seek_in_buffer=0) (stream_read_seek_bytes_skipped=0) (remote_bytes_read=81920) (stream_read_operations=1));
   gauges=();
   minimums=((action_http_get_request.failures.min=-1) (action_http_get_request.min=-1));
   maximums=((action_http_get_request.failures.max=-1) (action_http_get_request.max=-1));
   means=((action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)));
   }}>
   
        at org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager.assertListDoesnotContainBuffersForIstream(ITestReadBufferManager.java:145)
        at org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager.testPurgeBufferManagerForSequentialStream(ITestReadBufferManager.java:120)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:750)
   
   ```
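   
   For reference, the check which fires at ITestReadBufferManager.java:145 is an AssertJ assertion over one of the ReadBufferManager's buffer lists. Below is a rough sketch of its shape, for discussion only: it is a reconstruction, not the test source, and the `ReadBuffer` parameter type and the `getStream()` accessor are assumptions on my part.
   
   ```java
   import java.util.List;
   
   import org.assertj.core.api.Assertions;
   
   // Sketch of the failing helper; assumed to live in the same package as
   // AbfsInputStream and ReadBuffer (org.apache.hadoop.fs.azurebfs.services),
   // so no imports are needed for those types.
   private void assertListDoesnotContainBuffersForIstream(List<ReadBuffer> list,
       AbfsInputStream inputStream) {
     for (ReadBuffer buffer : list) {
       // No buffer in the list may still reference the closed stream under test;
       // AssertJ's isNotEqualTo() is what produces the "not to be equal to"
       // output in the failure above.
       Assertions.assertThat(buffer.getStream())
           .describedAs("Buffers associated with closed input streams shouldn't be present")
           .isNotEqualTo(inputStream);
     }
   }
   ```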
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
