On May 3, 2012, at 4:01 PM, m.r...@5-cent.us wrote:
Ljubomir Ljubojevic wrote:
On 05/03/2012 09:16 PM, aurfalien wrote:
On May 3, 2012, at 3:04 PM, Glenn Cooper wrote:
I never really paid attention to this but a file on an NFS mount is
showing 64M in size, but when copying the file to a local drive, it
shows 2.5MB in size.
My NFS server is hardware RAIDed with a volume stripe size of 128K
where the volume size is 20TB.
snip
By the way,
Hi all,
I never really paid attention to this but a file on an NFS mount is showing 64M
in size, but when copying the file to a local drive, it shows 2.5MB in size.
My NFS server is hardware RAIDed with a volume stripe size of 128K where the
volume size is 20TB.
My NFS clients are the same
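One common cause of this kind of discrepancy between what `ls -l` reports and what ends up on disk is a sparse file: the apparent size and the allocated blocks differ. A minimal sketch that reproduces the symptom (file names here are hypothetical, not from the thread):

```shell
# Create a 64M sparse file: apparent size is 64M, but almost no blocks
# are actually allocated on disk.
truncate -s 64M sparse.dat
ls -lh sparse.dat    # reports the apparent size (64M)
du -h sparse.dat     # reports blocks actually allocated (close to zero)
stat -c 'apparent=%s bytes, allocated=%b blocks' sparse.dat
```

Comparing `du` against `ls -l` on the NFS copy and the local copy should show whether sparseness (or how `cp` handled it) explains the 64M-vs-2.5MB difference.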
Have you noticed any bad-block warnings in your /var/log/messages?
The badblocks command can also help you.
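For example (the log path is the one mentioned above; the device name is a placeholder you would substitute):

```shell
# Look for disk I/O errors in the system log:
grep -iE 'i/o error|bad sector|unrecover' /var/log/messages || echo "no disk errors logged"
# Read-only, non-destructive surface scan of a disk (needs root;
# /dev/sdX is a placeholder for the real device):
# badblocks -sv /dev/sdX
```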
On 7/25/07, Centos [EMAIL PROTECTED] wrote:
ok, the file system is ext and the block size is the default, which is 4096,
so I should be able to have a 16 TB filesystem and a 2 TB maximum file size.
On 7/26/07, Centos [EMAIL PROTECTED] wrote:
any idea why my server crashes when I am creating a 200 G tar file?
I am using tar -zcvf and the original file is about 250 G
Probably because you're attempting to compress it at the same time.
That's amazingly resource intensive, and it's probably
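One common workaround is to separate the archiving and compression steps, or to run the compression at low priority so it can't starve the rest of the system. A sketch (archive and directory names are placeholders):

```shell
# Create the archive without compression first (much lighter on memory):
tar -cvf backup.tar /path/to/data
# Then compress as a separate, throttleable step:
nice -n 19 gzip backup.tar
# Or stream through gzip at low priority in one pass:
tar -cvf - /path/to/data | nice -n 19 gzip > backup.tar.gz
```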
Hello
What is the largest file size that can be created on Linux?
Is there any limitation?
Thanks
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
Thank you Jim,
How can I find the current block size and file system type?
Jim Perrin wrote:
On 7/25/07, Centos [EMAIL PROTECTED] wrote:
What is the largest file size that can be created on Linux?
Is there any limitation?
This depends on several things, including the architecture
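Jim's point can be checked directly from the shell: `getconf` reports how many bits the filesystem uses to represent file sizes (the mount point below is just an example):

```shell
# Bits used for file sizes on the filesystem holding /:
getconf FILESIZEBITS /
# 64 means the format itself allows very large files; the practical
# ceiling is then set by the filesystem (e.g. 2 TB on ext3 with
# 4K blocks) rather than by the kernel.
```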
Centos wrote:
Thank you Jim,
How can I find the current block size and file system type?
File system type can be found in the 3rd column of /etc/fstab.
For ext{2,3} file systems the block size can be found by
tune2fs -l /dev/XXX | grep "Block size"
where XXX is something like
1) sda1 (for
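As an alternative sketch (assuming GNU coreutils; the path is a placeholder), `stat -f` can report both without root access or reading the raw device:

```shell
# Filesystem type for the filesystem containing a given path:
stat -f -c 'type=%T' /
# Block size as seen by the filesystem at that path:
stat -f -c 'block size=%s' /
```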
ok, the file system is ext and the block size is the default, which is 4096,
so I should be able to have a 16 TB filesystem and a 2 TB maximum
file size.
I had to transfer some files whose total size was about 250 G,
so I used tar -zcvf to tar and gzip them, but the server crashed and rebooted