Actually, the UID/GID errors are coming from my Mac, which is mounting the
share over AFP:

cocoa:~/Downloads$ mv *zip thunder/WordPress/
mv: thunder/WordPress/sitepress-multilingual-cms.3.1.5.zip: set owner/group (was: 503/20): Operation not permitted
mv: thunder/WordPress/smart-youtube.zip: set owner/group (was: 503/20): Operation not permitted
mv: thunder/WordPress/tweet-old-post.zip: set owner/group (was: 503/20): Operation not permitted
mv: thunder/WordPress/w3-total-cache.0.9.4.zip: set owner/group (was: 503/20): Operation not permitted
mv: thunder/WordPress/wordpress-seo.1.5.3.zip: set owner/group (was: 503/20): Operation not permitted
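
If the warnings ever get annoying, a copy that skips ownership preservation
avoids them entirely. A rough sketch, assuming rsync is installed; without
-a/-o/-g it never attempts the chown:

cocoa:~/Downloads$ rsync -rlt --remove-source-files *zip thunder/WordPress/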


It works, it just complains about setting the owner/group (the rsync sketch
above would avoid even that), so I'm not really concerned about it. I just
wanted to make sure that BackupPC didn't have any special hitches with
running over NFS before I considered moving that over, since this would let
me migrate the data onto a mirrored array rather than having all the
backups on a SPOF. That said, the tertiary HD in that box is a 500GB, and I
freed up 4x1TB drives in the process of upgrading the Drobo (to 2x4TB
drives; found a good price on Newegg), so I may just rip out the 500GB and
mirror two of the 1TB drives instead. The question is what's going to
require the least amount of effort and time...
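
For the mirror option, the shape I have in mind is roughly this (a sketch;
/dev/sdb and /dev/sdc are placeholders for wherever the two freed-up 1TB
drives land):

# build a RAID1 from the two 1TB drives and put a filesystem on it
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
# record the array so it assembles on boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf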


--
~*~ StormeRider ~*~

"Every world needs its heroes [...] They inspire us to be better than we
are. And they protect from the darkness that's just around the corner."

(from Smallville Season 6x1: "Zod")

On why I hate the phrase "that's so lame"... http://bit.ly/Ps3uSS


On Mon, May 12, 2014 at 3:21 PM, Lawrence K. Chen, P.Eng. <[email protected]> wrote:

> At one time, I did run BackupPC with its pool over NFS.
>
> Though it was BackupPC running in a Linux VM on a Solaris server....with
> a zfs dataset from its mirrored rpool.
>
> I didn't have any issues with uid/gid....since it was a typical NFSv3
> share (root squash in effect), everything in the pool is normally
> owner/group of backuppc....so the uid/gid stayed the same; they just
> mapped to a different user/group on the Solaris side.
>
> Don't know what Drobo does....a guess is that it might be doing a forced
> anonuid/anongid, so that you don't have to have consistent uid/gid mappings
> across your network.
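>
> Roughly the difference, as a sketch (the path, network, and IDs here are
> all made up):
>
> # /etc/exports -- root_squash remaps only root to the anonymous user:
> /export/backuppc  192.168.1.0/24(rw,root_squash)
> # all_squash with anonuid/anongid remaps *every* client to one identity:
> /export/backuppc  192.168.1.0/24(rw,all_squash,anonuid=1001,anongid=1001)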
>
> Someday I should make my uid/gids consistent across my Linux/FreeBSD
> systems at home....perhaps when I finish getting them all under the
> control of CFEngine.
>
> I later got BackupPC running directly on Solaris.  And now it's running
> on FreeBSD with its own zpool.  I had done VM + NFS because "apt-get
> install backuppc" was easier than customizing BackupPC to work on
> Solaris, but I've been customizing it here and there.
>
> Since I use varying incremental and full periods depending on the data,
> it is hard to tell whether the next backup of a given 'host' is imminent
> and whether it's going to do a full or an incremental when its time
> comes.  So this morning I tweaked the host summary page; tonight I plan
> to convert that tweak into a CFEngine edit_line bundle :)
>
> Someday I'll get my work machine under the control of my CFEngine at
> home....  I've discovered that my failsafe.cf doesn't work from the
> clean slate of a non-bootstrapped host (it's kind of hard to get the
> bootstrap to bring up a secure tunnel to discover my policy server at
> home....though failsafe has problems in this area as well...plus I
> wonder whether all my remotes appearing as the IP of my router is going
> to be a problem).
>
> At work it's 2x1TB mirrored.  At home it's 6x2TB in a raidz2 (though the
> most common outage results in me losing 3 drives at once, since they are
> split between two 5-bay enclosures.....I have thought about whether a
> double-conversion UPS is worth considering.  When there's a flicker,
> sometimes it triggers a bus reset, and if power transfers back before
> the reset recovers....)
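>
> For reference, the home pool is basically this shape (device names are
> placeholders); raidz2 only tolerates two failed drives, which is why an
> outage that takes three at once is a problem:
>
> zpool create tank raidz2 da0 da1 da2 da3 da4 da5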
>
> Before FreeBSD, it was Ubuntu, with the drives managed by mdadm as a
> RAID 10.....but I did an upgrade from 8.04LTS to 10.04LTS....which had a
> bug that made some of my mdadm arrays vanish.  The patch for the bug
> came out the following week.
>
> Before I built this big array, I had only been doing RAID1s...there were
> 5....I remember that the 6-disk array was md5, and known as vg4 (the lv
> was backuppc).  I had played with 4x2TB in RAID5 briefly before jumping
> to the 6x2TB RAID10....which worked fine until I started having bitrot.
>
> The other reason I went big with FreeBSD at home was that an SLA
> customer at work had gone with such a system....so I decided it was time
> I got back up to speed with it, etc.  Though maybe I wasn't that far
> behind....the Netbackup client is for FreeBSD 6.  I had talked about
> doing ZFS at home someday for years though....Windows 7 eating itself
> created an opening....(it auto-applied a bunch of patches and then
> wouldn't boot, and then a chkdsk made everything vanish....it always
> bugged me that what Intel's RAID calls 'initializing' my array is what
> most of us call resilvering.  And it does this only after a BSOD.)
>
> Their system is scary....FreeBSD 9 booting from a raidz of 6 2TB drives.
> But it's one of two production FreeBSD servers doing this (the second
> has a 100GB SSD for L2ARC....they had gotten it thinking ZIL, but
> everything strongly advises against using unmirrored SSDs for ZIL,
> especially a consumer MLC drive....maybe if they had gone with something
> like the 500GB Enterprise SLC drives in our 7420 :)  Plus they don't
> need that much SSD for ZIL.  I have 4G ZILs at home, but the most I've
> seen in utilization is ~100MB.  Guessing I'm nowhere near the interface
> max...6Gbps. :(
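>
> In zpool terms, the two roles look like this (placeholder device names):
>
> zpool add tank cache ada1            # L2ARC: losing it is harmless
> zpool add tank log mirror ada2 ada3  # slog: mirrored, since losing it
>                                      # can cost in-flight sync writes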
>
> I probably need to decide soon what I'm going to do with my remaining
> 10.04LTS server (there were two, but one died over Thanksgiving...so the
> last things on it got moved to a 12.04LTS box next to it....the rest had
> already been moved to a pair of FreeBSD servers, on which I've
> discovered that I didn't leave space to do HAST as I originally
> intended; I'm trying to find details on how people are using zvols for
> HAST/CARP).
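>
> What I'm picturing is something like this in /etc/hast.conf (hostnames,
> addresses, and the zvol path are all placeholders; whether zvols make
> good HAST providers is exactly the detail I'm trying to find):
>
> resource backuppc {
>         on alpha {
>                 local /dev/zvol/tank/hast0
>                 remote 10.0.0.2
>         }
>         on bravo {
>                 local /dev/zvol/tank/hast0
>                 remote 10.0.0.1
>         }
> }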
>
> Though it might not matter with the 10.04LTS server....if more of its
> disks die off.  I've already lost over 1.5TB of data (well, it's all
> still in my BackupPC pool) when a pair of ST2000DM000s decided they had
> sufficiently exceeded their 1-year warranty and died close together.  It
> didn't seem to matter that I had gotten them about 3 months apart from
> different sellers.
>
> I originally got one to replace a failed disk, which reminded me of the
> problems with misaligned accesses on Advanced Format drives.  At first I
> tweaked the partition, which seemed to help....but eventually I turned
> it into two degraded arrays, copied the data across, and moved the old
> disk into the new array....
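>
> The check and fix are quick these days; a sketch, with a placeholder
> device name:
>
> parted /dev/sdb align-check optimal 1                # partition 1 aligned?
> parted -a optimal /dev/sdb mkpart primary 1MiB 100%  # create it aligned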
>
> Though I wonder if using GPT would've caused a problem with upgrading
> Ubuntu?
>
> Guess that's one less thing to worry about.... ;)
>
> The other alternative would've been to have the system configuration
> documented in CFEngine....but I keep finding new things about CFEngine
> and the systems I do have under its control.
>
>
>
> On 05/09/14 09:19, Morgan Blackthorne wrote:
> > I was wondering if anyone had done this, and stored the pool over a
> > redundant array of drives (like in my case, on 2x4TB drives in a Drobo
> > FS) via an NFS mount that root can write to. I've noticed that if I
> > copy something as a user, it strips off the UID/GID with a warning,
> > but I'm not sure if that's something that would actually impact the
> > way that BackupPC operates. Given that it does deduping, I think it
> > already has an index of file metadata and a reference to where to find
> > it inside the pool.
> >
> > But I also don't want to screw up a working system, either, so wanted
> > to see if anyone might know of some pitfalls ahead in this prospect.
> > Worst case, I could just move some disks around now that I've expanded
> > the Drobo, and mirror two 1TB drives just for the backup purposes with
> > mdadm.
> >
> > Thanks for any advice.
>
> --
> Who: Lawrence K. Chen, P.Eng. - W0LKC - Sr. Unix Systems Administrator
> For: Enterprise Server Technologies (EST) -- & SafeZone Ally
>
_______________________________________________
Discuss mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
 http://lopsa.org/
