Fw: Re: ZFS continuously growing

2009-09-03 Thread Robert Eckardt
On Thu, 3 Sep 2009 09:09:19 +0200, Adrian Penisoara wrote
 Hi,
 
 On Wed, Sep 2, 2009 at 10:22 PM, Robert Eckardt
 robert.ecka...@robert-eckardt.de wrote:
  Do I have to be worried?
  Is there a memory leak in the current ZFS implementation?
  Why is used space growing slower than free space is shrinking?
  Is there some garbage collection needed in ZFS?
 
  Besides, although the backup server has 3 GB RAM, I had to tune arc_max
  to 150 MB to copy the backed-up data from a 2.8 TB ZFS (v6) to the
  4.5 TB ZFS (v13) via zfs send | zfs recv without a kmalloc panic.
  (I.e., the default auto-tuning was not sufficient.)
 
  Do I take it you are using ZFS snapshots in between rsync'ing
 (send/recv requires snapshots)? Could you please post the zfs
 list output after subsequent runs to clarify?
 
 Regards,
 Adrian
 EnterpriseBSD

Hi Adrian,

No, I'm not using snapshots, just separate directories where identical
files are hardlinked by rsync to the version one day older.
The send|recv was necessary when I enlarged the raidz of the backup FS
(copying everything to two 1.5 TB HDDs and, after adding the disks, back
again; I used something like zfs send bigpool/b...@backup | zfs recv big/big).
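
To make the scheme concrete, it looks roughly like this (paths, dates and
the snapshot name are only illustrative, not my exact command lines):

  # daily tree, hardlinking unchanged files against the previous day's tree
  rsync -aH --delete --link-dest=/big/big/20090902-0916 \
        server:/data/ /big/big/20090903-0916/
  rm -rf /big/big/20090224-0917            # the oldest tree gets removed

  # one-off move when the raidz was rebuilt
  zfs snapshot bigpool/big@backup
  zfs send bigpool/big@backup | zfs recv big/big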

Here are the df and zfs list outputs of the last five days:
Thu Sep  3 09:36:12 CEST 2009  (today an additional 2 GB of data were transferred)
Filesystem 1K-blocks       Used      Avail Capacity    iused    ifree %iused  Mounted on
big       1861882752          0 1861882752     0%          5 14545959    0%   /big
big/big   4676727168 2814844416 1861882752    60%   43137409 14545959   75%   /big/big
NAME      USED  AVAIL  REFER  MOUNTPOINT
big      2.72T  1.73T  31.5K  /big
big/big  2.72T  1.73T  2.62T  /big/big

Wed Sep  2 09:36:24 CEST 2009
Filesystem 1K-blocks       Used      Avail Capacity    iused    ifree %iused  Mounted on
big       1869058944        128 1869058816     0%          5 14602022    0%   /big
big/big   4679698688 2810639872 1869058816    60%   43226966 14602022   75%   /big/big
NAME      USED  AVAIL  REFER  MOUNTPOINT
big      2.72T  1.74T  31.5K  /big
big/big  2.72T  1.74T  2.62T  /big/big

Tue Sep  1 09:36:33 CEST 2009
Filesystem 1K-blocks       Used      Avail Capacity    iused    ifree %iused  Mounted on
big       1875352064          0 1875352064     0%          5 14651188    0%   /big
big/big   4683241856 2807889792 1875352064    60%   43316454 14651188   75%   /big/big
NAME      USED  AVAIL  REFER  MOUNTPOINT
big      2.71T  1.75T  31.5K  /big
big/big  2.71T  1.75T  2.62T  /big/big

Mon Aug 31 09:45:26 CEST 2009
Filesystem 1K-blocks       Used      Avail Capacity    iused    ifree %iused  Mounted on
big       1881967616        128 1881967488     0%          5 14702871    0%   /big
big/big   4686380928 2804413440 1881967488    60%   43406044 14702871   75%   /big/big
NAME      USED  AVAIL  REFER  MOUNTPOINT
big      2.71T  1.75T  31.5K  /big
big/big  2.71T  1.75T  2.61T  /big/big

Sun Aug 30 09:39:31 CEST 2009
Filesystem 1K-blocks       Used      Avail Capacity    iused    ifree %iused  Mounted on
big       1891064192          0 1891064192     0%          5 14773939    0%   /big
big/big   4694821376 2803757184 1891064192    60%   43496712 14773939   75%   /big/big
NAME      USED  AVAIL  REFER  MOUNTPOINT
big      2.70T  1.76T  31.5K  /big
big/big  2.70T  1.76T  2.61T  /big/big

Regards,
Robert

--
Dr. Robert Eckardt---robert.ecka...@robert-eckardt.de


Re: Fw: Re: ZFS continuously growing [SOLVED]

2009-09-03 Thread Robert Eckardt
On Thu, 3 Sep 2009 10:01:28 +0100, krad wrote 
 2009/9/3 Robert Eckardt r...@robert-eckardt.de 
 On Thu, 3 Sep 2009 09:09:19 +0200, Adrian Penisoara wrote 
 
  Hi, 
  
  On Wed, Sep 2, 2009 at 10:22 PM, Robert Eckardt 
  robert.ecka...@robert-eckardt.de wrote: 
 
   Do I have to be worried? 
   Is there a memory leak in the current ZFS implementation? 
   Why is used space growing slower than free space is shrinking? 
   Is there some garbage collection needed in ZFS? 
   
   Besides, although the backup server has 3 GB RAM, I had to tune arc_max
   to 150 MB to copy the backed-up data from a 2.8 TB ZFS (v6) to the
   4.5 TB ZFS (v13) via zfs send | zfs recv without a kmalloc panic.
   (I.e., the default auto-tuning was not sufficient.)
  

 Do a zfs list -t all; you will then see all the snapshots and zvols as well.

Oops, sorry for asking.
Everything is OK after zfs destroy big/b...@backup  :-(

I hope the info on arc_max will still be useful.
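
For the archives, the check and the cleanup boil down to something like
this (the snapshot name is spelled out here only as an illustration):

  zfs list -t all -r big        # snapshots show up next to the filesystems
  zfs destroy big/big@backup    # frees the space the forgotten snapshot held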

Regards,
Robert

-- 
Dr. Robert Eckardt    ---     robert.ecka...@robert-eckardt.de



ZFS continuously growing

2009-09-02 Thread Robert Eckardt
Hi folks,

after upgrading my backup server to 8.0-BETA2, I noticed that the 
available space shrinks from backup to backup (a tree each day with 
differential rsync) although with each new tree the oldest tree gets 
removed.

Since I removed some subdirectories on my active server, the number
of used inodes is now reduced by approx. 90,000 on each run.
At the same time, used space grows by between 650 MB and 6.7 GB and
free space shrinks by 4.4 to 9 GB per run (see table below). The output
of df and zfs list is consistent.

Although I understand that the files backed up by rsync can be much
larger than the data actually transferred, I am getting worried that,
without much changing, the available space shrinks continuously.
(Remember, the number of backup trees stays constant since the oldest
gets removed, and 6 GB/day results in more than 1 TB over half a year.)

Do I have to be worried?
Is there a memory leak in the current ZFS implementation?
Why is used space growing slower than free space is shrinking?
Is there some garbage collection needed in ZFS?

Besides, although the backup server has 3 GB RAM, I had to tune arc_max
to 150 MB to copy the backed-up data from a 2.8 TB ZFS (v6) to the
4.5 TB ZFS (v13) via zfs send | zfs recv without a kmalloc panic.
(I.e., the default auto-tuning was not sufficient.)
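
(For reference, arc_max can be set via the loader tunable; the 150M value
is simply what turned out to work here:

  # /boot/loader.conf
  vfs.zfs.arc_max="150M"
)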

Regards,
Robert

day  rsynced     Used (kB)   free (kB)   inodes    oldest dir     newest dir     d-used (kB)  d-free (kB)  d-inode
27   57018987    2792986368  1914681984  43854571  20090224-0917  20090827-0916
28   67181251    2794269440  1910242176  43765134  20090225-0917  20090828-0916      1283072     -4439808   -89437
30   52078382    2800983296  1897022720  43586320  20090227-0917  20090830-0916      6713856    -13219456  -178814
31   2647268060  2803757056  1891064192  43496712  20090228-0917  20090831-0916      2773760     -5958528   -89608
 1   92096258    2804415616  1881965184  43406059  20090301-0917  20090901-0916       658560     -9099008   -90653
 2   121590303   2807900288  1875341440  43316517  20090302-0917  20090902-0916      3484672     -6623744   -89542

--
Dr. Robert Eckardt---robert.ecka...@robert-eckardt.de



Re: Fw: 5.2 / 5.2.1 Highpoint 374 RAID 5 driver support

2004-03-11 Thread Robert Eckardt
On Wed, 10 Mar 2004 13:49:39 -, Steven Hartland wrote
 I'm looking to get the 374 working with 5.2 / 5.2.1; does anyone have
 any info on this? Highpoint has a driver for all the old versions but
 no source / 5.2 download. Can we get the source, or is there an
 alternative?
 
Steve

Hi Steve,

there should be basic support in the ATA driver (for the 372 there is).

However, it does not support initial synchronization of the two disks, and
the handling in case of an error is very limited.
You either have to start with an already existing array or resync using
some BIOS tool. Afterwards, the driver does not care whether the contents
of the RAID members are identical or not.
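
If the array is recognized by the ata driver at all, it can at least be
inspected with atacontrol(8), e.g. (assuming it shows up as ar0):

  atacontrol list          # list ATA channels and attached devices
  atacontrol status ar0    # show the state of the array

Whether a rebuild through atacontrol works for the Highpoint is another
matter, cf. the limitations above.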

Robert




Guide to writing device drivers sought

2003-03-09 Thread Robert Eckardt
Hi,

long ago I used the joy driver as an example to integrate my own device
driver. I'm now trying (once again :-) to do the same in FreeBSD 4.7.
Unfortunately, joy no longer functions correctly (since 4.1), so it is an
inappropriate example.
My driver is going to create two devices with different minor device numbers
(/dev/dcf and /dev/dcf100), as joy should do too, which can be used
simultaneously (both accessing the same I/O port, e.g. 0x201).

Where can I find an introduction to the currently used device framework?
(Things seem to be spread out over numerous man pages.)

How can I write a device driver that creates multiple devices with different
minor device numbers for each given I/O port?

Which driver is suited best as a simple example?

Where can I find an introduction on how to deal with PnP and non-PnP
hardware correctly?


Thanks,
Robert

--
Dr. Robert Eckardt---[EMAIL PROTECTED]



No 16bit mode with newpcm for Soundblaster Creative Vibra 16C

2000-10-30 Thread Robert Eckardt
# Note that motherboard sound devices may require options PNPBIOS.
#
# Supported cards include:
# Creative SoundBlaster ISA PnP/non-PnP
# Supports ESS and Avance ISA chips as well.
# Gravis UltraSound ISA PnP/non-PnP
# Crystal Semiconductor CS461x/428x PCI
# Neomagic 256AV (ac97)
# Most of the more common ISA/PnP sb/mss/ess compatible cards.

# For non-pnp sound cards with no bridge drivers only:
#device pcm0 at isa? irq 5 drq 1 flags 0x0
#
# For PnP/PCI sound cards
device  pcm
device  sbc0at isa? port 0x220 irq 5 drq 1 flags 0x15

# Not controlled by `snd'
device  pca0 at isa? port IO_TIMER1
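
(For reference, whether newpcm actually attached the 16-bit DSP can be
checked after booting with something like:

  dmesg | grep -E 'sbc|pcm'
  cat /dev/sndstat
)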



-- 
Dr. Robert Eckardt  [EMAIL PROTECTED]

