On Thu, Oct 29, 2009 at 8:52 PM, Paul B. Henson hen...@acm.org wrote:
On Thu, 29 Oct 2009 casper@sun.com wrote:
Do you have the complete NFS trace output? My reading of the source code
says that the file will be created with the proper gid, so I actually
believe that the client
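For what it's worth, a trace like that can be captured with snoop; a minimal sketch, assuming the interface is e1000g0 and the client is nfsclient1 (both are placeholder names):
# Capture NFS RPC traffic to/from the client into a file for later analysis.
snoop -d e1000g0 -o /tmp/nfs-trace.snoop host nfsclient1 and rpc nfs
# Replay the capture verbosely to inspect the CREATE/SETATTR calls and the
# uid/gid attributes the client actually sends.
snoop -i /tmp/nfs-trace.snoop -v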
Why are you sending from s1? If you've already sent that, the logical thing to
do is send from s3 the next time.
If you really do need to send from the start every time, you can do that with
the -F option on zfs receive, to force it to overwrite newer changes, but you
are going to be sending a full stream every time rather than just the changes.
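For example, assuming s1 and s3 are snapshots of the same dataset (the pool, dataset, and host names below are only placeholders), an incremental send would look something like this:
# Incremental send: only the blocks that changed between @s1 and @s3 are
# transferred; @s1 must already exist on the receiving side.
zfs send -i tank/data@s1 tank/data@s3 | ssh backuphost zfs receive backup/data

# Full send from scratch, with -F forcing the destination to roll back any
# newer local changes before the stream is applied.
zfs send tank/data@s3 | ssh backuphost zfs receive -F backup/data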
Hello
I've got OpenSolaris 2009.06
I've created a crontab entry using the crontab -e command,
and crontab -l gives me this output:
0 12 * * 0 pfexec zpool scrub rpool
On Sunday at 12:10 AM, zpool status told me that the scrub had not been done.
svcs | grep cron
online Oct_27 svc:/system/cron:default
Vano Beridze wrote:
Hello
I've got OpenSolaris 2009.06
I've created a crontab entry using the crontab -e command,
and crontab -l gives me this output:
0 12 * * 0 pfexec zpool scrub rpool
On Sunday at 12:10 AM, zpool status told me that the scrub had not been done.
svcs | grep cron
online Oct_27
where can I see the output of the cron job?
Vano Beridze wrote:
where can I see the output of the cron job?
man cron...
cron captures the output of the job's stdout and stderr
streams, and, if it is not empty, mails the output to the
user. If the job does not produce output, no mail is sent to
the user.
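If you would rather have the output end up in a file than in mail, you can redirect it in the crontab entry itself; for example (the log path is just an illustration):
# Append stdout and stderr of the weekly scrub to a log file instead of
# having cron mail them.
0 12 * * 0 pfexec zpool scrub rpool >> /var/tmp/zpool-scrub.log 2>&1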
Now I've logged in and there was a mail saying that cron could not find zpool.
It's in my PATH:
which zpool
/usr/sbin/zpool
Does cron use a different PATH setting?
Is it OK to specify /usr/sbin/zpool in the crontab file?
On Sun, Nov 1, 2009 at 3:45 PM, Vano Beridze vanua...@gmail.com wrote:
Now I've logged in and there was a mail saying that cron could not find zpool.
It's in my PATH:
which zpool
/usr/sbin/zpool
Does cron use a different PATH setting?
Yes. Typically your PATH is set up by various shell startup files, which cron
does not read, so cron jobs run with a much more limited default PATH.
It's OK to use the full path to zpool. Nonetheless, I'd suggest you read the
crontab man page to learn how you can set some options, such as paths, shell,
and timezones, directly in your crontab files.
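A minimal sketch of what the entry could look like with the full path spelled out (keeping the original noon-on-Sunday schedule):
# Lines starting with # are comments in a Solaris crontab.
# Use the full path so the job does not depend on cron's default PATH.
0 12 * * 0 pfexec /usr/sbin/zpool scrub rpool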
On Nov 1, 2009, at 16:45, Vano Beridze vanua...@gmail.com wrote:
Now I've logged in and there
Ross wrote:
[context, please!]
Why are you sending from s1? If you've already sent that, the logical thing to
do is send from s3 the next time.
If you really do need to send from the start every time, you can do that with
the -F option on zfs receive, to force it to overwrite newer changes,
I've looked at man cron and found out that I can modify the /etc/default/cron
file to set the PATH, which defaults to /usr/bin for mortal users and
/usr/bin:/usr/sbin for root.
I did not change /etc/default/cron; instead, I put the full path in my
crontab file.
Ethically speaking I guess
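For reference, the alternative route would have been to adjust the defaults in /etc/default/cron, roughly along these lines (the values shown just restate the defaults mentioned above):
# /etc/default/cron
CRONLOG=YES
# PATH applied to ordinary users' cron jobs:
PATH=/usr/bin
# PATH applied to root's cron jobs:
SUPATH=/usr/bin:/usr/sbin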
Glad it helped you.
As far as your observation about the root user is concerned, please take into
account that Solaris Role-Based Access Control lets you fine-tune the
privileges you grant to users: your ZFS administrator need not be root.
Specifically, if you have a look at your /etc/prof_attr and
What "root user" would that be then? "root" is just a role by default
in OpenSolaris.
Now sit down, the next bit will come as a shock.
Go to System -> Administration -> Users and Groups.
Select a user and click the Properties button that un-greys.
You can give the user profiles from there.
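The command-line equivalent is to attach the stock ZFS profiles to the account; a sketch, where "vano" stands in for the user name:
# Grant the ZFS management profiles so zpool/zfs administration does not
# require the root role (note: usermod -P replaces the user's profile list).
usermod -P "ZFS Storage Management,ZFS File System Management" vano

# The user can then run privileged ZFS commands through pfexec:
pfexec /usr/sbin/zpool scrub rpool

# The profile definitions live in /etc/security/prof_attr and
# /etc/security/exec_attr.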
Hi,
I may have lost my first zpool, due to ... well, we're not yet sure.
The 'zpool import tank' causes a panic -- one which I'm not even
able to capture via savecore.
I'm glad this happened when it did.
At home I am in the process of moving all my data from a Linux NFS
server to OpenSolaris.
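Since the panic apparently isn't making it into a crash dump, it may be worth double-checking the dump configuration before trying the import again; a sketch (device and directory paths are examples):
# Show the current dump device, savecore directory, and on-boot behaviour.
dumpadm

# Point the dump device at the rpool dump zvol and enable savecore on boot.
dumpadm -d /dev/zvol/dsk/rpool/dump
dumpadm -y

# After the next panic, pull the dump off the dump device by hand if it
# was not saved automatically.
savecore -v /var/crash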
I've got Solaris 10u7 + patches as of 20090918 running on a V40z.
I have two existing zpools - rpool and dbzpool.
I'm trying to make a third pool with some new disks; however, Solaris is
very unhappy about one of them, for some reason.
# format -e c12t1d0
selecting c12t1d0
[disk formatted]
I've sent this to the driver list as well, but since the zfs folks tend to
be intimately involved with the marvell driver stack, I figured I'd give you
guys a shot too.
Does anyone happen to know if there was a driver change with build 126? I
had a pool that was 2x5+1 raidz vdevs. I moved
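One quick way to see which marvell driver a given build is actually running (the module name match is an assumption; adjust the grep pattern as needed):
# List the loaded marvell SATA module and its version string.
modinfo | grep -i marvell

# Show which device nodes are bound to that driver.
prtconf -D | grep -i marvell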