The usual reason for these messages is that user 503 does not belong to group 20 on the server: mv copies the file and then tries to restore the original owner/group, and the server refuses the chgrp because the requesting uid isn't a member of that group.
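A quick way to check, as a minimal sketch: this assumes the server end is Linux or FreeBSD and that uid 503 / gid 20 correspond to real accounts there (the user and group names below are hypothetical).

    # On the server: who is uid 503, and who is in gid 20?
    id 503                    # e.g. uid=503(morgan) gid=1000(morgan) groups=1000(morgan)
    getent group 20           # e.g. staff:*:20:  -- note morgan not listed
    # If membership is the problem, add the user to the group
    # (Linux shown; FreeBSD would be: pw groupmod staff -m morgan)
    usermod -aG staff morgan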
I also seem to recall seeing these messages on NFS shares exported with all_squash, but I can't recreate the problem since my uid/gid is now the same across my Linux/FreeBSD systems.

Speaking of NFS and Macs: do you run into a problem where the NFS mount disappears from the Mac after a while? I bring my work laptop home now and then to do Time Machine backups to an NFS share, because the laptop is a MacBook Air (so no ethernet jack) and there are no NFS servers at work accessible over wireless. (When I had a MacBook Pro, I would have my Linux workstation NAT it in, since we each only get 3 jacks and unmanaged switches/hubs are not permitted. One of my co-workers needed far more than 3, so he got Networking to put a 24-port managed switch in his cube; it's an out-of-service 10/100 switch. We only recently had our jacks upgraded to gigabit; before that I had broken down one day and set up a private 1-gig network, which speeds up doing BackupPC among my computers. Though it meant my Mac was only on the private network, so I missed that one of my ports had been configured as forced 100/Full.)

I haven't installed the VPN on my MacBook Air yet, so I don't know if that would even be an option at work: I can potentially access machines with sensitive data, but I'm not allowed to, because they rescheduled the mandatory training to when I was in the hospital and won't schedule another session for me. Right now I'm scheduled to take Chef training the two days after my next trip to KUMC. No idea what mood I'll be in afterwards; I'm conflicted about what mood I should be in for either of the two possible outcomes, though I'm really hoping the outcome is that they at least figure out what exactly it is that I have. :)

In any case, it might not be practical to leave my MacBook Air plugged in to do Time Machine backups during its Power Naps to an NFS share via VPN at work. Which, of course, isn't working at home either, because the mount seems to disappear after a day or so. Yet I was sure it used to persist, and then complain that the server had gone away when I opened the laptop for the first time outside of home....

On 2014-05-12 19:47, Morgan Blackthorne wrote:

> Actually, the UID/etc errors are coming from my Mac, which is mounting the
> share over AFP:
>
>> cocoa:~/Downloads$ mv *zip thunder/WordPress/
>> mv: thunder/WordPress/sitepress-multilingual-cms.3.1.5.zip: set owner/group (was: 503/20): Operation not permitted
>> mv: thunder/WordPress/smart-youtube.zip: set owner/group (was: 503/20): Operation not permitted
>> mv: thunder/WordPress/tweet-old-post.zip: set owner/group (was: 503/20): Operation not permitted
>> mv: thunder/WordPress/w3-total-cache.0.9.4.zip: set owner/group (was: 503/20): Operation not permitted
>> mv: thunder/WordPress/wordpress-seo.1.5.3.zip: set owner/group (was: 503/20): Operation not permitted
>
> It works, it just complains about setting the users, so I'm not really
> concerned about it. I just wanted to make sure that BackupPC didn't have any
> special hitches with running over NFS before I considered moving that over,
> since this would let me migrate the data onto a mirrored array rather than
> having all the backups on a SPOF. That said, the tertiary HD in that box is
> a 500G, and I freed up 4x1TB drives in the process of upgrading the Drobo
> (to 2x4TB drives; found a good price on Newegg), so I may just rip out the
> 500G and mirror the 1T instead. The question is what's going to require the
> least amount of effort and time...
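Since the question is least effort: mirroring the two 1TB drives with mdadm is only a handful of commands. A minimal sketch, with hypothetical device names and mount point (double-check with lsblk first, since mdadm will happily clobber whatever is on the drives):

    # Create the mirror, filesystem, and mount point
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.ext4 /dev/md0
    mkdir -p /srv/backuppc
    mount /dev/md0 /srv/backuppc
    # Record the array so it assembles on boot (path shown is the Debian/Ubuntu one)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf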
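And on the disappearing-mount question at the top of this mail: one thing I've been meaning to try (untested, so treat it as a sketch; the server name and paths are hypothetical) is handing the mount to the Mac's automounter, so it gets re-established on demand instead of going stale:

    # /etc/auto_master -- add a direct map
    /-    auto_nfs    -nobrowse

    # /etc/auto_nfs -- resvport is often needed against Unix NFS servers
    /mnt/timemachine  -fstype=nfs,rw,resvport  nfs://fileserver.home:/tank/timemachine

    # then reload the automounter
    sudo automount -vc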
>
> --
> ~*~ StormeRider ~*~
>
> "Every world needs its heroes [...] They inspire us to be better than we are.
> And they protect from the darkness that's just around the corner."
>
> (from Smallville Season 6x1: "Zod")
>
> On why I hate the phrase "that's so lame"... http://bit.ly/Ps3uSS
>
> On Mon, May 12, 2014 at 3:21 PM, Lawrence K. Chen, P.Eng. <[email protected]> wrote:
>
>> At one time, I did run BackupPC with its pool over NFS.
>>
>> Though that was BackupPC running in a Linux VM on a Solaris server, with a
>> zfs dataset from its mirrored rpool.
>>
>> I didn't have any issues with uid/gid, since it was a typical NFSv3 share
>> (root squash in effect) and everything in the pool is normally owned by
>> user/group backuppc... so the uid/gid stayed the same; they just mapped to a
>> different user/group on the Solaris side.
>>
>> I don't know what the Drobo does... my guess is that it might be doing a
>> forced anonuid/anongid, so that you don't have to have consistent uid/gid
>> mappings across your network.
>>
>> Someday I should make my uid/gids consistent across my Linux/FreeBSD systems
>> at home... perhaps when I complete getting them all under the control of
>> CFEngine.
>>
>> I later got BackupPC running directly on Solaris, and now it's running on
>> FreeBSD with its own zpool. I had done VM + NFS because "apt-get install
>> backuppc" was easier than customizing BackupPC to work on Solaris, but I've
>> been customizing it here and there.
>>
>> Since I use varying incremental and full periods depending on the data, it
>> is hard to tell whether the next backup of a given 'host' is imminent, and
>> whether it's going to do a full or an incremental when its time comes. So
>> this morning I tweaked the host summary page; tonight I plan to convert
>> that change into a CFEngine edit_line bundle. :)
>>
>> Someday I'll get my work machine under the control of my CFEngine at home.
>> I've discovered that my failsafe.cf doesn't work from a clean slate on a
>> non-bootstrapped host (it's kind of hard for the bootstrap to bring up a
>> secure tunnel to discover my policy server at home, though failsafe has
>> problems in this area as well... plus I wonder if all my remotes appearing
>> as the IP of my router is going to be a problem).
>>
>> At work it's 2x1TB mirrored. At home it's 6x2TB in a raidz2 (though the most
>> common outage results in me losing 3 drives, since they are split between
>> two 5-bay enclosures... I have thought about whether a double-conversion UPS
>> is worth considering. When there's a flicker, sometimes it triggers a bus
>> reset, and if power transfers back before the reset recovers....)
>>
>> Before FreeBSD, it was Ubuntu, and the drives were managed by mdadm as a
>> RAID 10... but I did an upgrade from 8.04 LTS to 10.04 LTS, which had a bug
>> that made some of my mdadm arrays vanish. The patch for the bug came out the
>> following week.
>>
>> Before I built this big array, I had only been doing RAID1s... there were
>> five. I remember that the 6-disk array was md5, and known as vg4 (the lv was
>> backuppc). I had played with 4x2TB in RAID5 briefly before jumping to the
>> 6x2TB RAID10, which worked fine until I started having bitrot.
>>
>> The other reason I went big with FreeBSD at home was that an SLA customer at
>> work had gone with such a system... so I decided it was time I got back up
>> to speed with it. Though maybe I wasn't that far behind... the NetBackup
>> client is for 6.
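To illustrate the difference between the root-squash default and the forced-anon guess above, here's a hypothetical Linux /etc/exports (the Drobo's real knobs, and Solaris share options, will look different):

    # Default root squash: only root is remapped (to nobody); other uids pass through
    /srv/backuppc   192.168.1.0/24(rw,root_squash)

    # all_squash with forced anonuid/anongid: every client uid/gid is remapped,
    # so clients don't need consistent uid/gid mappings
    /srv/drobo      192.168.1.0/24(rw,all_squash,anonuid=1001,anongid=1001)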
>> Though I had talked about doing ZFS at home someday for years... Windows 7
>> eating itself created the opening. (It auto-applied a bunch of patches and
>> then wouldn't boot, and then a chkdsk made everything vanish. It always
>> bugged me that what Intel's RAID calls 'initializing' my array is what most
>> of us call resilvering. And it does this only after a BSOD.)
>>
>> Their system is scary... FreeBSD 9 booting from a raidz of six 2TB drives.
>> But it's one of two production FreeBSD servers doing this (the second has a
>> 100GB SSD for L2ARC... they had gotten it thinking ZIL, but everything
>> strongly advises against an unmirrored SSD for ZIL, especially a consumer
>> MLC drive... maybe if they had gone with something like the 500GB enterprise
>> SLC drives in our 7420. :) Plus they don't need that much SSD for ZIL. I
>> have 4G ZILs at home, but the most utilization I've seen is ~100MB. Guessing
>> I'm nowhere near the interface max of 6Gbps. :(
>>
>> I probably need to decide soon what I'm going to do with my remaining
>> 10.04 LTS server. (There were two, but one died over Thanksgiving, so the
>> last things on it got moved to a 12.04 LTS box next to it... the rest had
>> already been moved to a pair of FreeBSD servers, on which I've discovered I
>> didn't leave space to do HAST as I originally intended. I'm trying to find
>> details on how people are using zvols for HAST/CARP.)
>>
>> Though it might not matter with the 10.04 LTS server if more of its disks
>> die off. I already lost over 1.5TB of data (well, it's all in my BackupPC
>> pool still) when a pair of ST2000DM000s decided they had sufficiently
>> exceeded their 1-year warranty and died close together. It didn't seem to
>> matter that I had gotten them about 3 months apart, from different sellers.
>>
>> I originally got one to replace a failed disk, which reminded me of the
>> problems with misaligned accesses on advanced-format drives. At first I
>> tweaked the partition, which seemed to help... but eventually I turned it
>> into two degraded arrays, copied the data across, and moved the old disk
>> into the new array.
>>
>> Though I wonder if using GPT would've caused a problem with upgrading
>> Ubuntu? Guess that's one less thing to worry about.... ;)
>>
>> The other alternative would've been to get the system configuration
>> documented in CFEngine... but I keep finding new things about CFEngine and
>> the systems I do have under its control.
>>
>> On 05/09/14 09:19, Morgan Blackthorne wrote:
>>
>>> I was wondering if anyone had done this, and stored the pool over a
>>> redundant array of drives (like in my case, on 2x4TB drives in a Drobo FS)
>>> via an NFS mount that root can write to. I've noticed that if I copy
>>> something as a user, it strips off the UID/GID with a warning, but I'm not
>>> sure if that's something that would actually impact the way that BackupPC
>>> operates. Given that it does deduping, I think it already has an index of
>>> file metadata and a reference to where to find it inside the pool.
>>>
>>> But I also don't want to screw up a working system, either, so I wanted to
>>> see if anyone might know of some pitfalls ahead in this prospect. Worst
>>> case, I could just move some disks around now that I've expanded the Drobo,
>>> and mirror two 1TB drives just for the backup purposes with mdadm.
>>>
>>> Thanks for any advice.
>>>
>>> --
>>> ~*~ StormeRider ~*~
>>>
>>> "Every world needs its heroes [...] They inspire us to be better than we
>>> are. And they protect from the darkness that's just around the corner."
>>>
>>> (from Smallville Season 6x1: "Zod")
>>>
>>> On why I hate the phrase "that's so lame"... http://bit.ly/Ps3uSS
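One pitfall worth testing before moving the pool onto the Drobo share, since BackupPC's dedup pool is built out of hardlinks: confirm that hardlinks (and accurate link counts) actually work on that NFS mount. A quick check, with a hypothetical mount point and a Linux client assumed (use stat -f on FreeBSD/OS X):

    cd /mnt/drobo
    echo test > a
    ln a b                  # hard link, not a symlink
    stat -c '%h %i' a b     # both should show link count 2 and the same inode
    rm a b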
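And a footnote on the ZIL sizing mentioned up-thread: log and cache devices are easy to add and observe. A minimal sketch, with hypothetical pool and device names (the mirrored log is the configuration everything recommends):

    # Mirrored log (ZIL) plus a cache (L2ARC) device
    zpool add tank log mirror da1 da2
    zpool add tank cache da3
    # Watch per-device bandwidth/utilization every 5 seconds
    zpool iostat -v tank 5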
>>
>> --
>> Who: Lawrence K. Chen, P.Eng. - W0LKC - Sr. Unix Systems Administrator
>> For: Enterprise Server Technologies (EST) -- & SafeZone Ally

--
Who: Lawrence K. Chen, P.Eng. - W0LKC - Sr. Unix Systems Administrator
For: Enterprise Server Technologies (EST) -- & SafeZone Ally
_______________________________________________
Discuss mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
http://lopsa.org/
