"Shawn K. O'Shea" <[EMAIL PROTECTED]> writes:
> Although Linux (ie GNU) du defaults to outputting sizes in k, OS X
> does not. It counts blocks (512 byte blocks)
Like a proper *BSD should ;)
--
Seeya,
Paul
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
=>and sparse files (files with "holes" in the middle, thus
=>using *less* space on disk than the file size).
=>
=> The GNU variant, at least, has an option to report actual file sizes
=>instead of disk usage.
=>
=> Which one you want depends on what you're looking for.
I'd just like to kibitz one more suggestion:
On 10/22/07, Michael ODonnell <[EMAIL PROTECTED]> wrote:
> Ooops - that "--files0-from=" option is apparently
> new enough ... that it's probably not widely available.
find . -xdev -type f -name "*.jpg" -print0 2>/dev/null | xargs -0 du -ch | tail -1
(untested)
-- Ben
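One caveat with the xargs approach: when the file list is long enough,
xargs runs du more than once, and "tail -1" then reports only the last
batch's total. A sketch of a workaround that sums the per-file figures
itself (run in whatever directory you like; nothing here assumes the
original poster's layout):

```shell
# Sum du's per-file kilobyte counts with awk, so it does not matter
# how many separate du invocations xargs ends up making.
# "-r" (GNU xargs) skips running du entirely when find matches nothing.
find . -xdev -type f -name '*.jpg' -print0 2>/dev/null \
  | xargs -0 -r du -k 2>/dev/null \
  | awk '{sum += $1} END {print sum+0 "K total"}'
```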
___
Shawn K. O'Shea wrote:
>> du -c *.txt | tail -1
>
> Since I know Kent has a Mac and this might be on his laptop, I'd like
> to add that this should really be:
> du -ck *.txt | tail -1
No, this is a bona fide Linux question :-) it's a Webfaction account.
But thanks for the note!
Kent
___
"du" means *disk usage*.  That means du is supposed to be aware of
things like allocation overhead (a 3-byte file might use 4096 bytes on
disk, or whatever) and sparse files (files with "holes" in the middle,
thus using *less* space on disk than the file size).
The GNU variant, at least, has an option to report actual file sizes
instead of disk usage.
Which one you want depends on what you're looking for.
On Monday 22 October 2007 09:36, Kent Johnson wrote:
> Jim Kuzdrall wrote:
> > On Monday 22 October 2007 09:11, Kent Johnson wrote:
> >> How can I get the total size, in K, of all files in a directory
> >> that match a pattern?
> >>
> >> For example, I have a dir with ~5000 files, I would like to know the
> >> total size of the ~1000 files matching *.txt.
> Hmm, again, certainly not my first instinct :)
Paul, we embrace diversity here but that is *definitely* OT...
___
On 10/22/07, Stephen Ryan <[EMAIL PROTECTED]> wrote:
> On Mon, 2007-10-22 at 09:11 -0400, Kent Johnson wrote:
> > Newbie question:
> >
> > How can I get the total size, in K, of all files in a directory that
> > match a pattern?
> >
> > For example, I have a dir with ~5000 files, I would like to know the
> > total size of the ~1000 files matching *.txt.
Kent Johnson <[EMAIL PROTECTED]> writes:
> Newbie question:
>
> How can I get the total size, in K, of all files in a directory that
> match a pattern?
Stephen Ryan <[EMAIL PROTECTED]> writes:
> du -c *.txt | tail -1
>
> du prints out the sizes of each of the matching files; '-c' means you
> want a grand total at the end; "tail -1" then keeps just that total line.
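To illustrate what "-c" adds, a throwaway demo (the file names and
contents below are invented, not Kent's actual files):

```shell
# "du -c" appends a cumulative "total" line after the per-file figures,
# and "tail -1" keeps only that line.  "-k" pins the unit to kilobytes.
mkdir -p /tmp/du-demo && cd /tmp/du-demo
echo hello > a.txt
echo world > b.txt
du -ck *.txt | tail -1
```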
Ooops - that "--files0-from=" option is apparently
new enough (my du version is 5.97) that it's probably
not widely available. My home system has it, but my
work systems don't... >-/
___
Jim Kuzdrall wrote:
> On Monday 22 October 2007 09:11, Kent Johnson wrote:
>> How can I get the total size, in K, of all files in a directory that
>> match a pattern?
>>
>> For example, I have a dir with ~5000 files, I would like to know the
>> total size of the ~1000 files matching *.txt.
>
>
Kent Johnson wrote:
> Newbie question:
>
> How can I get the total size, in K, of all files in a directory that
> match a pattern?
>
> For example, I have a dir with ~5000 files, I would like to know the
> total size of the ~1000 files matching *.txt.
>
> On RHEL and bash, if it matters...
> Thanks,
> Kent
More than you asked for, but here's a command that reports
total space occupied by all files with names ending in .jpg,
recursively from the current directory (but not crossing mount
points) and which is also a gratuitous example of the Process
Substitution facility mentioned in a previous thread
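The archived message is cut off before the command itself; going by the
"--files0-from" option and the process-substitution reference above, it
was presumably along these lines (a reconstruction, not the original
text):

```shell
# Bash process substitution hands du a NUL-separated file list produced
# by find, so du computes a single grand total itself.  Needs GNU
# coreutils with --files0-from, and bash for the <( ) syntax.
du -ch --files0-from=<(find . -xdev -type f -name '*.jpg' -print0) | tail -1
```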
On Monday 22 October 2007 09:11, Kent Johnson wrote:
> Newbie question:
>
> How can I get the total size, in K, of all files in a directory that
> match a pattern?
>
> For example, I have a dir with ~5000 files, I would like to know the
> total size of the ~1000 files matching *.txt.
Ah! Perh
On Mon, 2007-10-22 at 09:11 -0400, Kent Johnson wrote:
> Newbie question:
>
> How can I get the total size, in K, of all files in a directory that
> match a pattern?
>
> For example, I have a dir with ~5000 files, I would like to know the
> total size of the ~1000 files matching *.txt.
>
du -c *.txt | tail -1
Newbie question:
How can I get the total size, in K, of all files in a directory that
match a pattern?
For example, I have a dir with ~5000 files, I would like to know the
total size of the ~1000 files matching *.txt.
On RHEL and bash, if it matters...
Thanks,
Kent
>>> It would help if you told us:
>>> - distribution and release
>>> - kernel version
>>> - C library version
>>> - Samba version
and architecture, as some are more equal than others,
particularly the 64 bit ones, like Alpha...
B.
___
In a message dated: 20 Aug 2002 07:34:27 EDT
"Kenneth E. Lussier" said:
>Hi All,
>
>Can the 2GB file size limit be changed? I need to store about 10GB worth
>of data in a single file, but it dies at 2GB.
I don't know if ext2 supports "big files". I think you need to turn
something on in the kernel.
At some point hitherto, Mark Komarinski hath spake thusly:
> Samba and NFS(v2) don't like >2GB file sizes.
> http://www.suse.de/~aj/linux_lfs.html
That page is a bit outdated. It talks about RH 6.2 as being current,
and doesn't
> ...encountering a limit in:
> - the ext2 driver in your kernel
> - the general file I/O routines in your kernel
> - your C library
> - Samba
Samba and NFS(v2) don't like >2GB file sizes.
http://www.suse.de/~aj/linux_lfs.html
-Mark
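As a quick way to probe whether a given system and filesystem handle
files past the 2GB mark (a generic sketch, not tied to Kenny's setup;
the file name is arbitrary):

```shell
# LFS_CFLAGS shows the flags needed to build programs with 64-bit file
# offsets, typically -D_FILE_OFFSET_BITS=64 (may be empty where LFS is
# already the default):
getconf LFS_CFLAGS
# Seek past 2GB with a sparse file; the write fails on a kernel or
# filesystem combination without large-file support:
dd if=/dev/zero of=bigfile bs=1 count=1 seek=3G 2>/dev/null \
  && echo "large files OK"
rm -f bigfile
```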
___
On 20 Aug 2002, at 8:12am, Kenneth E. Lussier wrote:
> Sorry for the lack of description. I didn't want to get into too much
> detail, since it is a bit embarrassing I'm doing a Windows backup to a
> samba mount. I get write failures at the 2GB point. I believe that it is
> actually a limit in
> ...modified to use the new interfaces and types (fpos_t and friends)
> instead of just "int" to represent file sizes...
Sorry for the lack of description. I didn't want to get into too much
detail, since it is a bit embarrassing I'm doing a Windows backup to
a samba mount. I get write failures at the 2GB point. I believe that it
is actually a limit in
"Kenneth E. Lussier" <[EMAIL PROTECTED]> writes:
> Hi All,
>
> Can the 2GB file size limit be changed? I need to store about 10GB worth
> of data in a single file, but it dies at 2GB.
I have files that are more than 3 GB on my system, in an ext3 filesystem.
It depends on whether the failing code has been modified to use the
new interfaces and types (fpos_t and friends)
instead of just "int" to represent file sizes...
___
Hi All,
Can the 2GB file size limit be changed? I need to store about 10GB worth
of data in a single file, but it dies at 2GB.
TIA,
Kenny
--
"Tact is just *not* saying true stuff" -- Cordelia Chase
Kenneth E. Lussier