This is what I got after the upgrade nv109->nv110 here, when I 'init 6' it.
I was hoping that it would just go away, but it doesn't.
Actually it is worse, because it shuts down the file system uncleanly, and at
failsafe it does the complete FS check, phases 1-5, before the boot archive can
be updated.
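For reference, the step failsafe is waiting on can also be issued by hand once the root file system is mounted; a minimal sketch, assuming failsafe mounts it at /a (that path is an assumption):
# bootadm update-archive -R /a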
[i]I don't understand why the rules for ZFS are any different from the
rules for any other filesystem. Why don't you try pulling out drives for
UFS and pcfs and seeing whether they are corrupted or not? Guess what,
they are equally likely to be corrupted, but you simply won't be able to
detect the c
Try the thread "ZFS: unreliable for professional usage?" in zfs-discuss for
further information, and read how Sun engineers acknowledge the potential for
corruption without a proper umount && export.
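For what it's worth, a minimal sketch of that sequence, assuming a pool named 'tank' on the disk about to be pulled: 'zpool export' unmounts the pool's datasets and marks the pool cleanly exported, and 'zpool import' brings it back after reconnecting.
# zpool export tank
# zpool import tank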
Don't. Simply don't. At least not yet, as long as the recovery utility for the
Überblocks is pending.
Then test-test
vfat/pcfs seems to be the best alternative.
Search the archives, and you'll find that under specific circumstances ZFS
could damage the data beyond any chance of recovery, so avoid it.
I myself keep some UFS without problems. Only, depending on the purpose of the
backup, it is difficult to read f
[i]I have not seen those details on this list. Did you share those details
privately?[/i]
Yes. John is very helpful; we have exchanged more than 10 mails. Even though I
lost another 2 days of mail, I kept silent until - while following some more
steps proposed by John today - I noticed that I ca
Tim,
all this is quite possible. I suggest waiting until this issue is resolved,
explained, or whatnot.
As long as I can make a mail disappear at will by disconnecting the recipient
MTA for more than 15 minutes, using the default configuration with smarthost
enabled, I felt that an informati
Not solved yet, but confirmed.
There is a chance of losing queued mail with the default sendmail
configuration, here on nv104 (I don't know about other Nevada builds): mails
without a reachable destination are dropped within 15 minutes of entering the
queue. It is as yet unresolved how this happens
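A hedged sketch of what can be checked on the default setup (none of this is confirmed as the cause): mailq shows what is still queued, the ps output shows the -q retry interval the daemon was started with, and the grep shows the queuereturn/queuewarn timeouts in force.
$ mailq
$ ps -ef | grep sendmail
$ grep -i 'Timeout.queue' /etc/mail/sendmail.cf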
[i]I take exception to the technical charge that sendmail fails to retry.[/i]
Accepted. It does fail to retry here by default, and as far as I know, I did
not make any modifications to sendmail or its configuration.
[i]If you want to provide details in an effort to find the problem(s) in
a sendmai
Until now, we have seen some arguments on postfix versus sendmail. It looks
like a dead horse.
But it isn't. I have replaced an old OpenBSD box running postfix with a fresh
and new OpenSolaris as mail concentrator.
:( Now I have lost incoming mail twice, and in great quantity. Because the
OpenSo
[i]The root cause seems to be if a user executes dtpad directly.[/i]
Okay, I'm not a native of Solaris, so I'd like to know how to get it done
manually. I have manually added a crontab to
/var/spool/cron/crontabs/, like
-rw-------   1 root   udippel   1059 Dec 16 13:58 udippel
Now, it does
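For what it's worth, a hedged sketch of the supported route, assuming the entries sit in a file /tmp/udippel.cron (a made-up path): installing via crontab(1) also notifies cron, whereas a hand-edited spool file may need a cron restart before it is noticed.
# su - udippel -c "crontab /tmp/udippel.cron"
# crontab -l udippel
# svcadm restart svc:/system/cron:default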
Cindy,
probably you are right.
Unfortunately, my search in the bug database didn't reveal this bug, because it
is not accessible outside of Sun.
It is nv103. Strangely, though, on one of my nv103 machines dtpad does come up
as root; on another one it doesn't, and it shows the same message that it greets th
This is what I get as a user setting a cronjob:
bash-3.2$ crontab -e
ttdt_open failed: TT_ERR_PROCID The process id passed is not valid.
Running crontab -e as root works okay, though.
If it were a permission problem, it should inform me verbosely. Otherwise, I
have no clue what the problem could be.
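A hedged workaround sketch, assuming crontab -e is picking up a ToolTalk-based editor (dtpad) through EDITOR/VISUAL rather than this being a permission issue:
$ echo $EDITOR $VISUAL
$ EDITOR=/usr/bin/vi; export EDITOR
$ crontab -e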
Alas, not.
What I get is (I don't know how to copy it, so I used pencil and paper):
ZFS 1/6 cannot mount '/export': directory is not empty (6/6)
/usr/bin/zfs mount -a failed exit status 1
svc:/system/filesystem/local:default: /lib/svc/method/fs-local failed
svc:/system/filesystem/local:default fail
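A hedged sketch of what usually clears this particular failure, assuming stray files were created under the unmounted /export mountpoint and can be set aside (the /export.stray name is an assumption):
# ls -lA /export
# mkdir /export.stray && mv /export/* /export.stray
# zfs mount -a
# svcadm clear svc:/system/filesystem/local:default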
I did a - hopefully proper - luupgrade from nv_99 to nv_101, like this:
[i]
lofiadm -a /export/home/udippel/Sources/sol-nv-b101-x86-dvd.iso
mount -F hsfs /dev/lofi/1 /b
sh /b/Solaris_11/Tools/Installers/liveupgrade20
lucreate -n nv_101
luupgrade -u -n nv_101 -s /b
umount /b
lofiadm -d /dev/lofi/1
luactiv
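Presumably the truncated last steps are the usual activation and reboot; a sketch, assuming the BE name nv_101 from the lucreate line above:
luactivate nv_101
init 6
(Live Upgrade wants init or shutdown here, not 'reboot -f' or 'halt'.)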
Mike Gerdts wrote:
>
> You should be able to confirm that this is your problem with:
>
> $ find . | awk 'length($0) > 8192 {print length($0), $0}'
>
Strangely enough, this runs in almost no time: pressing Enter is followed
by the command prompt immediately:
# find . | awk 'length($0) > 8192 {pri
On Tue, Aug 26, 2008 at 9:32 AM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> You should be able to confirm that this is your problem with:
>
> $ find . | awk 'length($0) > 8192 {print length($0), $0}'
>
> And could likely work around it by using perl to do the sorting:
>
> $ find . | perl -e 'print s
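The exact one-liner is cut off above; a hedged sketch of the same idea (doing the sort in perl, case-insensitively like 'sort -f', instead of piping through /usr/bin/sort) would be:
$ find . | perl -e 'print sort { lc($a) cmp lc($b) } <STDIN>'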
[EMAIL PROTECTED] wrote:
>
> I'm no sort expert but it looks like the sort program has a bug:
>
> http://bugs.opensolaris.org/view_bug.do?bug_id=506
http://tech.groups.yahoo.com/group/solarisx86/message/17725
Cindy, I didn't think the bug from late 2004 had ever been resolved.
So is this a new
Ceri Davies wrote:
>> is what I get in lumake. I tried Google and found one link, in French.
>> % zoneadm -z t-zone list -v
>>   ID NAME     STATUS      PATH             BRAND    IP
>>    - t-zone   configured  /export/t-zone   native   shared
>
I could, but it won't help; it will miss the relevant options for gupdatedb:
# cat /opt/csw/bin/gupdatedb | grep sort
sort="/usr/bin/sort -z"
} | $sort -f | $frcode $frcode_options > $LOCATE_DB.n
} | tr / '\001' | $sort -f | tr '\001' / > $filelist
$bigram $bigram_opts < $filelist | sort |
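One possible workaround, purely as a hedged sketch: point the script's sort variable at GNU sort with a bigger buffer, assuming Blastwave's coreutils provide /opt/csw/bin/gsort (that path is an assumption), e.g. inside /opt/csw/bin/gupdatedb:
sort="/opt/csw/bin/gsort -z -S 256M"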
[I enter it manually, it didn't make it through the vetting]
Ceri Davies wrote:
is what I get in lumake. I tried Google and found one link, in French.
% zoneadm -z t-zone list -v
ID NAME     STATUS      PATH             BRAND    IP
 - t-zone
[I copy it into here, it didn't go through the vetting until now]
Mike Gerdts wrote:
You should be able to confirm that this is your problem with:
$ find . | awk 'length($0) > 8192 {print length($0), $0}'
Strangely enough, this runs in almost no time: pressing Enter is followed by th
Noticed this using gupdatedb from Blastwave. [Yes, I filed it there, but no
answer]
# gupdatedb
sort: insufficient memory; use -S option to increase allocation
Broken Pipe
Played a bit with it, but it didn't go away. I searched Google, and this was
something from way back in Solaris 10.
But nothing actua
zoneadm: zone 't-zone': illegal UUID value specified
is what I get in lumake. I tried Google and found one link, in French.
% zoneadm -z t-zone list -v
  ID NAME     STATUS      PATH             BRAND    IP
   - t-zone   configured  /export/t-zone   na
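A hedged sketch of how to see which UUID zoneadm has recorded for the zone (field 5 of the parsable listing), which appears to be what lumake trips over:
% zoneadm list -cp | awk -F: '$2 == "t-zone" {print $5}'
% grep t-zone /etc/zones/index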