On a Linux Ubuntu system I have a shared printer and CUPS 1.3.9.
For each print job coming from Solaris 10 to the Ubuntu shared printer, it
prints a cover page.
I have disabled it from the CUPS GUI and from the web administration interface
at https://servername:631 (printer options, then the starting and ending banner).
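The same thing can be done from the command line; a minimal sketch, assuming a printer named "shared1" (the name is a placeholder):

```shell
# Turn off both the starting and ending banner pages for this queue.
# "shared1" is a hypothetical printer name; substitute your own.
lpadmin -p shared1 -o job-sheets-default=none,none

# Verify the stored default options for the queue:
lpoptions -p shared1

# Note: the banner may also be requested by the Solaris client itself;
# on the Solaris side, "lp -o nobanner file" suppresses it per job.
```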
Thanks for your quick answers.
Now I understand, using the man pages.
Thanks,
Alex
--
This message posted from opensolaris.org
___
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org
Hello all,
I have been using OpenSolaris 2009.06 without any trouble on a Lenovo T61.
Yesterday I tried to run fsck /dev/dsk/c8t0d0s0 to check the system, and
I got BAD SUPERBLOCK 16 MAGIC NUMBER WRONG.
I re-installed the whole system and tried fsck again.
I got the same result.
fsck is for UFS; OpenSolaris uses ZFS as its default filesystem. The data
should almost always be consistent with ZFS. If you'd like to check ZFS, you
can use zpool scrub rpool.
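A short sketch of what that looks like in practice:

```shell
# Start a scrub of the root pool; ZFS walks every block and
# verifies its checksum against the stored value.
zpool scrub rpool

# Check progress and results; any files with errors are listed
# at the bottom of the output.
zpool status -v rpool
```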
I am trying to come up with the best way to create a small ZFS-backed redundant
SAN for a testing environment running a couple dozen small Xen instances. I
would like to do this with OpenSolaris since it allows me the option of using
commodity hardware initially and scaling up to 'something
russell aspinwall 7will...@dsl.pipex.com wrote:
Hi,
I found a reference here ( http://en.wikipedia.org/wiki/GNU_GRUB ) that
OpenSolaris uses a modified Legacy GRUB install to support disk labels,
automatic 64-bit kernel selection and booting from ZFS (with compression and
multiple boot
We have just configured an IPMP group with the following /etc/hostname.xsvnicX files:
-bash-3.00# cat /etc/hostname.xsvnic53
190.100.15.1 netmask 255.255.255.0 group hari1 up
-bash-3.00# cat /etc/hostname.xsvnic57
group hari1 standby up
-bash-3.00#
Now we perform the following steps:
1. Unplumb the primary
Hi everyone,
I have configured a DHCP and DNS server on my 2009 box, but the problem is that
the DHCP server doesn't provide clients with DNS info. Any help will be much
appreciated.
# dhtadm -P
Name                Type        Value
==================================================
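DNS information reaches clients through the DNS options in the macro their
network uses. A sketch of adding them with dhtadm; the macro name, domain, and
server address below are all placeholders:

```shell
# Hypothetical values: adjust the macro name (often the network or
# server name), the domain, and the DNS server address.
dhtadm -M -m 192.168.1.0 -e 'DNSdmain="example.com"'
dhtadm -M -m 192.168.1.0 -e 'DNSserv=192.168.1.10'

# Verify the table, then restart the DHCP service so clients
# receive the new options on their next lease:
dhtadm -P
svcadm restart svc:/network/dhcp-server
```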
Joerg Schilling wrote:
Isn't GRUB2 GPLv3, and isn't GPLv3 a bigger risk when used with OpenSolaris
than GPLv2 is?
Bigger risk of what? OpenSolaris already includes a number of GPLv3 components.
(The patent clauses of GPLv3 do require additional review for projects using
it when
I think unplumbing is not a good method for testing, because when you unplumb
the primary, a virtual interface is created on the second interface and the
primary's IP is attached to that virtual interface on the secondary; at the
same time, the primary's virtual interface goes away. That means at this stage
the primary interface is without
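A less disruptive way to exercise the failover path is if_mpadm, which offlines an interface without unplumbing it. A sketch, using the interface names from the configuration above:

```shell
# Offline the primary; IPMP migrates its addresses to the
# standby in the same group (xsvnic57):
if_mpadm -d xsvnic53

# Observe the failover, then bring the primary back online;
# the addresses migrate back automatically:
ifconfig -a
if_mpadm -r xsvnic53
```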
You could use pstop to stop a running process, and re-run it using prun.
But I'm not sure why you want to freeze the entire zone?
I've never heard of a CPU limiting ZFS raid size. I had a 32-bit CPU, a Pentium 4
at 2.4GHz, and I had four 500GB drives in a ZFS raid without problems.
There are OpenSolaris setups with a low-power AMD CPU, undervolted and
underclocked to 1.8-2GHz, where the whole system including 5 drives uses
On 18 Nov 09 at 17:21, Gopi Desaboyina wrote:
you could use pstop for stopping a running process. you re-run it
using prun. but not sure why do you want to freeze the entire zone
though ?
Because the zone uses two zpools that I want to snapshot at the same
time - stopping the zone is
Setup:
Linux client (Atom 330, 1GB d...@533) connects to Solaris server (Atom 330, 2GB
d...@533)
Network connectivity through 100 Mbps MoCA link (3 msecs latency each way)
Solaris server exports the same filesystem through CIFS and NFS
Operation:
dd if=/dev/zero of=test bs=16384 count=512
On Wed, Nov 18, 2009 at 4:55 PM, Yannis Schoinas yan...@schoinas.net wrote:
I would expect both protocols to saturate the network. What are your
performance expectations for CIFS?
CIFS performance is not limited by network BW or CPU performance (at this BW
level). Something else is causing the degradation to 50% of available network
BW. Do you have any ideas?
What's
On Wed, 2009-11-18 at 11:05 +0100, Gaëtan Lehmann wrote:
Is it possible to freeze a zone and unfreeze it a few seconds later?
No; a zone isn't a virtual machine in the way you seem to be thinking.
There's only one kernel running. You can kill -STOP the processes (some
will ignore it) but you
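From the global zone, sending the stop signal to every process in a zone could be sketched with pkill, which accepts a zone via -z; "myzone" below is a placeholder:

```shell
# Hypothetical zone name "myzone"; run from the global zone.
# Suspend every process whose zone matches:
pkill -STOP -z myzone

# ...do whatever needs a quiesced zone, then resume:
pkill -CONT -z myzone
```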
2009/11/18 Gaëtan Lehmann gaetan.lehm...@jouy.inra.fr:
Hi,
Is it possible to freeze a zone and unfreeze it a few seconds later?
I can't find anything like that in the docs. In the meantime, I've done
this:
running_processes=`ps -o s= -o pid= -z $zone_name | grep -v T | awk '{print $2}'`
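A completed sketch of this approach, to take the two snapshots while the zone's processes are suspended; the zone and pool names are placeholders:

```shell
zone_name=myzone   # placeholder

# PIDs of processes in the zone whose state is not already T (stopped):
running_processes=`ps -o s= -o pid= -z $zone_name | grep -v T | awk '{print $2}'`

# Suspend them, take the coordinated snapshots, then resume:
for pid in $running_processes; do pstop $pid; done
zfs snapshot -r pool1@backup
zfs snapshot -r pool2@backup
for pid in $running_processes; do prun $pid; done
```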
After looking around, it appears to be an inherent limitation of the SMB
protocol: it doesn't pipeline requests, which means it experiences the full 6
msecs round-trip latency introduced by MoCA on each request. SMB2 should fix
this when it is available in CIFS.
You can also try much larger sample sets.
Something like iometer allows you to specify queue depth to really push things
along as well.
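On the sample-size point: the dd command from the original post moves very little data, so caching can skew the result. A sketch of the original run next to a larger one:

```shell
# The original command transfers only 512 * 16384 bytes = 8 MiB,
# small enough to be dominated by caching effects:
dd if=/dev/zero of=test bs=16384 count=512

# A larger run (here 64 * 1 MiB = 64 MiB) gives a steadier
# throughput number from dd's summary line:
dd if=/dev/zero of=test_big bs=1048576 count=64
```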
I have an XVR100. I thought that was one of the ones supported.
Has anyone come across their RBAC files ( 200906 - 111b ) being reduced from
around 60-odd entries to fewer than 5? Are these files auto-generated now by
any chance?
Below is the full contents of the files. Incidentally, exec_attr still has all
its contents. I know this because I've got the
Although I responded to Alan, apparently it didn't get posted here. I am comparing
a cluster of Sun Blade 2000s with a cluster of Ultra-24s. Even though the
clock speed of the Ultra-24 is nearly 3 times that of the Blade 2000, the run
times for WRF-DA are nearly the same (5% difference). Since I