Thanks for sharing this info, Paul :-)

On 10 September 2014 17:33, Paul Gale <paul.n.g...@gmail.com> wrote:
> All of the following assumes you're using Linux. I'm using RHEL 6.3 to
> mount an NFSv3-based device using autofs.
>
> I should have added that the issue for me was that I had specified the
> wrong block size values for the rsize/wsize parameters in the autofs mount
> configuration for the device I was mounting.
>
> I was operating under the mistaken belief that the larger the value of
> these parameters, the better, so I set them to 256K (262144 bytes).
> Problems with the message store followed.
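>
> For illustration only, an autofs direct-map entry along these lines would
> reproduce that mistake (the server name and export path here are made up,
> not my real ones):
>
> /NFS  -fstype=nfs,vers=3,rsize=262144,wsize=262144  filer.example.com:/export/activemq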
>
> What I should have done, and ended up doing, was to determine the device's
> _actual_ block size rather than guess at it. You can either ask the
> device's administrator what its block size is or use the stat command.
>
> If you really want to play it safe you could always use the default block
> size for an NFSv3 device, which is quite conservative at 8192 bytes (I
> think - look it up). However, if the device you're mounting can support
> larger block sizes then the stat command is how you find that out.
>
> First, mount the device using a _very_ conservative block size value, say
> 1024 bytes. Second, run the stat command on the mount point to see what the
> device's block size actually is. It might be the default 8192 or it could
> be larger. Either way you'll know.
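>
> For that first, deliberately conservative mount the map entry might look
> something like this (made-up names again):
>
> /NFS  -fstype=nfs,vers=3,rsize=1024,wsize=1024  filer.example.com:/export/activemq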
>
> Here's an example. Say your local mount point is /NFS; then the stat
> command to use is:
>
> stat -f /NFS
>
> The output should look something like:
>
> File: "/NFS"
> ID: 0        Namelen: 255    Type: nfs
> Block size: 32768      Fundamental block size: 32768
> Blocks: Total: 330424288  Free: 178080429  Available: 178080429
> Inodes: Total: 257949694  Free: 246974355
>
> The output indicates the block size in bytes (32768) for the device. This
> is the value that should be plugged into the rsize/wsize parameters for the
> mount's definition.
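>
> So in this example the entry would end up something like (made-up names
> again):
>
> /NFS  -fstype=nfs,vers=3,rsize=32768,wsize=32768  filer.example.com:/export/activemq
>
> After remounting you can double-check what the client actually negotiated
> by running nfsstat -m, or by grepping /proc/mounts for the mount point.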
>
> I hope this helps.
>
> Thanks,
> Paul
>
> On Wed, Sep 10, 2014 at 10:27 AM, Paul Gale <paul.n.g...@gmail.com> wrote:
>
>> In my particular case I fixed it when I realized that I had the NFS mount
>> settings for the mount where the KahaDB message store was located
>> mis-configured. Since correcting the settings I've not had a single
>> problem.
>>
>> Are you using NFS?
>>
>>
>> Thanks,
>> Paul
>>
>> On Tue, Sep 9, 2014 at 2:49 AM, khandelwalanuj <
>> khandelwal.anu...@gmail.com> wrote:
>>
>>> I am also seeing the same exception with ActiveMQ v5.10. It occurs
>>> infrequently and is not reproducible.
>>>
>>> I have already posted about it here:
>>>
>>> http://activemq.2283324.n4.nabble.com/ActiveMQ-exception-quot-Failed-to-browse-Topic-quot-td4683227.html#a4683305
>>>
>>>
>>> ActiveMQ gods, can you please help us out here?
>>>
>>>
>>>
>>> Thanks,
>>> Anuj
>>>
>>>
>>>
>>>
>>
>>



-- 
http://redhat.com
http://blog.garytully.com
