From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Bos, Karel

>I partly agree with the OS statement. Partly because I find it
>difficult to explain why an OS is able to contain one disk with 6+
>million files, but the backup application isn't able to get the backup
>stable (journaling) or working at all (normal incremental) without
>using time-consuming options like memory-efficient.
Disk accessibility and (proper) use of RAM by processes are two
entirely different issues. Windows has (IMO) never been a real server
operating system, for this and many other reasons.

>Splitting a disk over multiple nodes means hard-coding the subdirs
>under root in the opt files. If a system administrator puts new data
>in a different folder, this data will be missed. The workaround is
>adding an extra node which has an exclude.dir for all dirs already
>being managed by the other nodes.

...or the judicious use of DOMAIN statements could do the same thing,
as could regular expressions. (Does anyone know whether Windows clients
recognize regular expressions in include/exclude statements?)

>But what if the root is the container of all data? Meaning, the
>profile disk of a Windows box with all profiles (6000+) in the root of
>the disk? Do I really want to be forced to configure multiple nodes,
>plus one extra, to get the backup of this monster running? MemEff runs
>for over 36 hours, and the journal db grows past 2 GB within 24 hours.

Such a system (all data in the root filesystem/drive) is very poorly
designed. If you want highly robust storage accessibility,
Windows/Intel is *not* the way to go, for either software or hardware.
The only reason many data shops try to cram 2-4 TB of storage onto a
single Windows server is that there is no desire to lay out the money
or skills to do the work properly (i.e., UNIX).

--
Mark Stapleton ([EMAIL PROTECTED])
Senior TSM consultant
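P.S. For what it's worth, the per-node split Karel describes might look
roughly like the following pair of option files. This is only a sketch
with hypothetical node names and paths, not a tested config; TSM
include/exclude patterns use wildcards (* ? ...), not ranges, so each
excluded prefix needs its own EXCLUDE.DIR line.

```
* dsm.opt for node PROFILES_A (hypothetical names/paths)
* This node backs up D: but skips the profile dirs owned by other nodes.
NODENAME      PROFILES_A
DOMAIN        D:
EXCLUDE.DIR   D:\profiles\n*
EXCLUDE.DIR   D:\profiles\o*
* ...one line per excluded prefix...

* dsm.opt for the catch-all node PROFILES_NEW
* Excludes everything the regular nodes already manage, so anything a
* sysadmin drops into a new folder still gets picked up somewhere.
NODENAME      PROFILES_NEW
DOMAIN        D:
EXCLUDE.DIR   D:\profiles\a*
* ...one line per prefix managed by the other nodes...
```

The obvious weakness is exactly the one Karel points out: the exclude
lists are hard-coded, so every change in the directory layout has to be
mirrored across all the opt files by hand.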