A third recommendation for iperf. It's the tool you want. Don't mess
around with anything else.
On 5/18/2012 10:02 AM, Rich wrote:
iperf is also your friend, as is dd+netcat+pv if you want something a
bit less rigorous.
- Rich
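For the archive, a minimal sketch of the kind of throughput test meant here (hostname is a placeholder):

    # on the receiving box
    iperf -s
    # on the sending box, a 30-second TCP test
    iperf -c receiver.example.com -t 30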
Yes. We've used it for both commercial Solaris and OpenIndiana. It works,
but performance seems no better than spinning media, in sharp contrast to
Intel SSDs. It does free a regular slot for use in stripes, though. That is
a plus. You have to configure it from the BIOS level as a volume before it
pres
catman -w rebuilds the index (-M to supply directory)
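For the archive, a sketch (the -M path is just an example):

    # rebuild the windex database for the default manpage tree
    catman -w
    # or point it at a specific manpage directory
    catman -w -M /usr/share/man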
On Tue, Oct 9, 2012 at 11:18 AM, Boris Epstein wrote:
> Hello listmates,
>
> If my man command does not display some of the pages, including those
> clearly present under /usr/share/man - how do I fix that? I remember there
> was a command th
Yes, you should do a scrub, and no, there isn't much risk to it. This
will scan your disks for bits that have gone stale or the like. You should
do it. We do a scrub once per week.
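For the archive, a sketch of the weekly-scrub arrangement described above (pool name is a placeholder):

    # kick off a scrub and watch its progress
    zpool scrub tank
    zpool status tank
    # root crontab entry for a weekly scrub, Sundays at 03:00
    0 3 * * 0 /usr/sbin/zpool scrub tank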
On Fri, Oct 12, 2012 at 3:55 PM, Roel_D wrote:
> Being on the list and reading all ZFS problem and question
So, a lot of people have already answered this in various ways.
I'm going to provide a little bit of direct answer and add focus (and
emphasis) to some of those other answers.
On 10/12/2012 5:07 PM, Michael Stapleton wrote:
It is easy to understand that zfs scrubs can be useful. But how often
On 10/23/2012 7:52 AM, Sebastian Gabler wrote:
Hi,
I am facing a problem with zfs receive through ssh. As usual, root
can't log in over ssh; the login users can't receive a zfs stream (rights
problem), and pfexec is disabled on the target host (as I understand
it, that's nowadays the default for OI151_a
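One common workaround for exactly this (not necessarily what the thread settled on; the 'backup' user and dataset names are made up): delegate receive rights to an unprivileged user with zfs allow. A receive that has to mount may still need extra privilege; zfs receive -u sidesteps the mount.

    # on the target host
    zfs allow backup create,mount,receive tank/backups
    # then, from the sender
    zfs send tank/data@snap | ssh backup@target /usr/sbin/zfs receive -u tank/backups/data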
On 10/23/2012 8:29 AM, Robin Axelsson wrote:
Hi,
I've been using zfs for a while but still there are some questions
that have remained unanswered even after reading the documentation so
I thought I would ask them here.
I have learned that zfs datasets can be expanded by adding vdevs. Say
tha
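(Aside for archive readers: strictly it's the pool, not the dataset, that grows when you add a vdev. A sketch with placeholder device names:)

    # add a second raidz1 vdev to an existing pool; existing data is not rebalanced
    zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0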
On 10/23/2012 4:13 PM, Timothy Coalson wrote:
Works pretty well, though I get ~70MB/s on gigabit ethernet instead of the
theoretically possible 120MB/s, and I'm not sure why (NFS gets pretty close
to 120MB/s on the same network).
There's a fair bit of overhead to ssh and to zfs send/receive, s
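A sketch of the usual mitigations (cipher names vary by OpenSSH build; mbuffer or netcat also works if the LAN is trusted):

    # a cheaper cipher often narrows the gap
    zfs send tank/fs@snap | ssh -c aes128-ctr recvhost /usr/sbin/zfs receive tank/fs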
On 11/19/2012 9:39 PM, Edward Ned Harvey (openindiana) wrote:
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
I've been lately looking around the net for high-availability and sync
replication solutions for ZFS and came up pretty dry - seems like all
the jazz is going around on Linux with co
On Tue, Nov 20, 2012 at 7:01 AM, Edward Ned Harvey (openindiana) <
openindi...@nedharvey.com> wrote:
> > From: Doug Hughes [mailto:d...@will.to]
> >
> > Well, to me, the most obvious is use another box with ZFS to mirror the
> > ISCSI devices on ZFS. I'm in th
Have you tried to import the pools by name? Often that will work where
discovery isn't so great.
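For the archive (pool name is a placeholder):

    zpool import                    # list whatever discovery can see
    zpool import tank               # import by name
    zpool import -d /dev/dsk tank   # point discovery at a device directory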
On 12/9/2012 1:35 PM, Thommy M. Malmström wrote:
Gentlemen, you have to be really careful with me here. It's some years
since I've touched Solaris. :)
I have a file server with several disks and Op
That isn't necessarily a driver issue. What model HBA do you have? If it is among
the many versions of MegaRAID or similar, you won't see any disks until you
create logical units from the RAID card's BIOS interface.
On Dec 19, 2012 9:53 AM, "Thommy M. Malmström" wrote:
> Sorry I didn't read this thread till
On 1/18/2013 7:53 PM, dormitionsk...@hotmail.com wrote:
On Jan 17, 2013, at 8:47 PM, Reginald Beardsley wrote:
As far as I'm concerned, problems like this are a bottomless abyss. Which is
why I'm still putting up w/ my OI box hanging. It's annoying, but not
critical. It's also why critical
On 3/5/2013 7:38 PM, Edward Ned Harvey (openindiana) wrote:
All subnets have been renamed to protect the innocent. ;-)
At home, I use 192.168.1.x /24, and unfortunately, I need to VPN to work where
they use both 192.168.1.x /24 and 192.168.10.x /24. Fortunately, I don't need
to access any of
On 3/5/2013 10:18 PM, Edward Ned Harvey (openindiana) wrote:
From: Doug Hughes [mailto:d...@will.to]
2) explicitly set the route for 192.168.10.x :
route add 192.168.10.0/24 192.168.2.1
That's what I'm saying I have already done. I set the default route to
192.168.1.1, and I se
On 3/17/2013 6:23 PM, Reginald Beardsley wrote:
Tape as an archival medium has significant issues. Reading poorly stored tapes is a
"one try" proposition w/ no assurance of success. The first high volume
commercial application for digital tape was seismic data acquisition for the oil
industr
Some of these points are a bit dated; allow me to make some updates. I'm sure
that you are aware that most 10gig switches these days are cut-through, not
store-and-forward. That's Arista, HP, Dell Force10, Mellanox, and IBM/Blade.
Cisco has a mix of things, but they aren't really in the low l
Most of the early Marvell problems have been addressed with software fixes in
the Oracle release of Solaris. I couldn't say about OI. Yes, the 4540 is better
all around.
Fresh OpenIndiana 148 install. It came up with IPv6 for nge1/nge2 after
auto-discovery and I did the unplumb on those, so the interfaces are
clean and unused, but I still can't create the aggregate.
hughesd@x4240-3-1-17:~# dladm show-aggr
hughesd@x4240-3-1-17:~# dladm create-aggr -d nge2 1
d
that did it! thanks! (never would have found that)
On 5/6/2011 12:25 PM, Lucas Van Tol wrote:
I saw something like this when I hadn't disabled NWAM before trying to create
an aggregate...
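For the archive, the NWAM dance being referred to (SMF instance names as of that era; verify with svcs):

    svcadm disable svc:/network/physical:nwam
    svcadm enable svc:/network/physical:default
    # then retry the create-aggr from the original message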
Box = Sun x4240 with 8x Intel 320 160GB flash

             R6x6 SSD oi   R6x6 SSD u9   R5x5 SSD u9   R5x6 u9   R6 SSD oi/12
100k         4m 37s        3m 31s        3m 24s        3m 27s    4m 18s
rm 100k      1m 39s        1m 3s         1m            1m        1m 25s
star -x      3m 5s         3m 37s        2m 37s
On 6/21/2011 8:00 PM, Blake wrote:
My coworker just suggested Nova
I like this too - implies both the Sun exploding and things being made new
:)
or, "no va": "doesn't go" in Spanish. ;)
On 10/11/2011 6:10 AM, Dmitry Kozhinov wrote:
> doesn't "pull" let them manage their own time better?
Yes. This is why my vote is for forum, not mailing list.
Dmitry.
I've stayed out of this thread thus far, and probably will going
forward, but I vastly prefer mailing lists and I think the ab
On 10/12/2011 10:56 PM, Dan Swartzendruber wrote:
This one is driving me nuts. I can't seem to keep an ssh session open for
more than 10 minutes or so. When I check after that, PuTTY has an error up
about "network error caused software abort" (or words to that effect). I
have TCP/IP keepalives
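For the archive, the usual client-side fix when a NAT or firewall is dropping idle sessions (PuTTY has an equivalent "seconds between keepalives" setting in its Connection panel):

    # ~/.ssh/config on the client: application-level keepalive every 60s
    Host *
        ServerAliveInterval 60
        ServerAliveCountMax 3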
On 10/26/2011 7:40 PM, Jason J. W. Williams wrote:
Has anyone here tried using two LSI HBAs (9211-8i) in a single SC216
chassis? I'm being told there's only one iPass cable off the backplane
and so only one HBA can be used...but with the number of folks adding
in SATA SSDs it seems like there mus
On 12/1/2011 3:15 PM, David Brodbeck wrote:
On Thu, Dec 1, 2011 at 8:09 AM, Geoff Flarity wrote:
My advice would be to max out your RAM (for ARC) before you bother
with a ZIL, or L2ARC. Where a fast SSD for a ZIL really shines is
random synchronized writes, i.e. a database transaction. You'll not
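For the archive, adding a dedicated log device looks like this (device names are placeholders; mirror it, since losing an unmirrored slog on old pool versions was painful):

    zpool add tank log mirror c4t0d0 c4t1d0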
Yes, that's a very fine way to do it and we do it here. I'm afraid our
python scripts wouldn't be much use to you though. They are very site
specific. My main point for posting is a word of encouragement.
On Mon, Jan 9, 2012 at 5:47 PM, Marcus Dillury wrote:
> Hi,
>
> I have a customer who would
On 2/27/2012 11:32 AM, James Carlson wrote:
Scott LeFevre wrote:
I have a pool named tank1 with multiple datasets/filesystems
defined. At the top, I have tank1/media, followed by tank1/media/Video
and tank1/media/Music. I've set up tank1/media as an nfs share
(e.g. /export/media) and can m
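For the archive, a sketch of the usual gotcha here (paths are from the quoted message; the client-side command is illustrative): sharenfs is inherited by child datasets, but an NFS client that mounts only the parent sees empty directories where the children live; each child must be mounted separately unless the client does NFSv4 mirror mounts.

    zfs get -r sharenfs tank1/media      # confirm the children inherited the share
    mount -F nfs server:/export/media/Video /mnt/video   # mount a child explicitly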
We have LACP working between Force10, HP, and Cisco switches in all possible
combinations with no difficulties. We do monitor and alert on excessive errors
and drops for interfaces, but LACP isn't a culprit. If anything, it's an
underlying interface when we find them. Also, it beats the heck out
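For the archive, the host side of such a setup (link names are placeholders; the switch ports must be configured as an 802.3ad group as well):

    dladm create-aggr -L active -l e1000g0 -l e1000g1 aggr0
    dladm show-aggr -x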
On 12/11/2013 8:13 PM, Edward Ned Harvey (openindiana) wrote:
From: Gregory Youngblood [mailto:greg...@youngblood.me]
Check out owncloud. The open source components might be useful.
I personally, and two other IT guys that I've spoken with from different
companies, have been burned by placing
Why not Intel 320 series? Also 710 series work fine for this, for a bit
more $$ and a bit more speed. The 320 are not as fast as the S3700 or S3500
but they are a LOT less expensive.
On Mon, Feb 10, 2014 at 4:32 PM, Schweiss, Chip wrote:
> On Mon, Feb 10, 2014 at 5:22 AM, Hans J. Albertsson <
true, Volker..
Just to note, though: the 320s have no battery, but they do have enough
capacitance to flush anything from the small RAM into flash on power outage.
On Mon, Feb 10, 2014 at 4:58 PM, Volker A. Brandt wrote:
> > Why not Intel 320 series? Also 710 series work fine for this, for a
> >
smartctl -x reports the wear on decent ones (read: you shouldn't consider
any that doesn't have this feature). When it gets close to 0, or you see a
lot of errors, it's time to replace it.
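For the archive (device path and the exact attribute name vary by platform and vendor):

    smartctl -x /dev/rdsk/c1t0d0
    # on Intel SSDs, look for Media_Wearout_Indicator: it counts down from 100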
On Mon, Feb 10, 2014 at 5:00 PM, Brogyányi József wrote:
> On 2014.02.09. 22:19, Jim Klimov wrote:
On 2/11/2014 5:57 PM, Brogyányi József wrote:
And on a similar note: suppose I have a single disk with data, and I
decide I can just afford a raidz1 of 4 disks, so I buy 3 more. Can I
somehow migrate the data on the original single disk onto a 3-disk
raidz1 and then add the original disk to t
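One trick often suggested for this situation (not necessarily the answer given in the thread; sizes and device names are made up): build the raidz1 with a sparse file standing in for the missing disk, offline it immediately, copy, then swap the real disk in.

    mkfile -n 2000g /var/tmp/fake
    zpool create newtank raidz1 c1t1d0 c1t2d0 c1t3d0 /var/tmp/fake
    zpool offline newtank /var/tmp/fake   # pool runs degraded; nothing real lands in the file
    # ... copy the data from the old single disk, then:
    zpool replace newtank /var/tmp/fake c1t0d0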
On 9/30/2014 8:31 PM, Tim Aslat wrote:
Harry Putnam wrote on 01/10/2014 09:52:
This is not so easy to find in Google searches.
How does one go about destroying all but a specific snapshot? The one
I want is somewhere in the middle, timewise, so I don't want to use
`destroy -r'.
This is in a .zfs/
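For the archive, a sketch of the loop approach (dataset and snapshot names are placeholders; dry-run it with echo first):

    for s in $(zfs list -H -t snapshot -o name -r tank/fs | grep -v '@keep-me$'); do
        zfs destroy "$s"
    done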
Well, it's not exactly the same, but you could do
chmod -R go-w .
On 1/11/2015 3:33 PM, Harry Putnam wrote:
Is there some similarly all-encompassing command to revert:
/bin/chmod -R A=everyone@:full_set:fd:allow /some-dir
back to a more default chmod 755 on directories and 644 on type -f fi
You could use A= with -R as long as you know the number of the entry that
you are trying to modify (visible with ls -v or ls -V, depending on your
viewing pleasure).
It's probably the 'everyone' one...
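For the archive, a sketch (the entry index and permissions are illustrative):

    ls -v some-dir                                           # list ACL entries with index numbers
    chmod -R A2=everyone@:read_data/execute:allow some-dir   # rewrite just entry 2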
On 1/11/2015 4:21 PM, Harry Putnam wrote:
Doug Hughes writes:
Well, it's
Couple of points and counter points from my own experience.
*) tape really isn't dead. No, really. At about $.01/GB/copy and a bit error
rate around 1 in 10^20, you can't beat it. Use it for the right thing, though. This
excels as an offline archival media with media lifetimes expected at around
30 years. Cont
The reason that data volumes on rpool are generally not a good idea is
recoverability. You can take all of the disks of a given pool and move them to
another system, except rpool. rpool defines the system itself, so data volumes
there are tied to that system. Data volumes in any other pool ca
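For the archive, the contrast in commands (pool name is a placeholder):

    zpool export tank    # on the old box
    zpool import tank    # on the new box, after moving the disks; rpool gets no such luxury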
You can also use ttcp or iperf just fine. I do that all the time, and they are
pretty much available anywhere (ttcp is so simple it's just a single .c source
file). The buffering isn't really the important part; using as close to raw TCP
transport as possible gets you the biggest benefit (vs encryption/decr
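For the archive, classic ttcp usage (hostname is a placeholder):

    ttcp -r -s              # receiver
    ttcp -t -s otherhost    # transmitter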
for home or for office?
for office, I don't back up the root pool. It's considered disposable and
reproducible via reinstall. (that plus config management)
for home, you can zfs send it somewhere to a file if you want, or you can
tar it up since that's probably easier to restore individual files after
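For the archive, the home-style version of that (paths are placeholders):

    zfs snapshot -r rpool@backup
    zfs send -R rpool@backup | gzip > /somewhere/rpool-backup.zfs.gz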
That is the classical answer for ctime. James is correct. However some may find
it interesting to note that GPFS does indeed keep the original creation time as
an additional attribute that can then be used for policy applications. It is
not exposed to Unix, but it's there. Alas, GPFS is only ava
It seems to me that you might be hitting up against "arp_defend_rate"
which by default says that the maximum arps it should be expecting in
one hour is 100. If he's sending 3 per minute, that's already 180. I
could be wrong. I'd probably try setting that to 300 and confirm what's
going on by
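A sketch of the kind of change meant, illustrative only: this tunable's location has moved between releases (older Solaris exposed it via /dev/arp, later via /dev/ip), so check your release first.

    # illustrative; verify the device node for your release before running
    ndd -set /dev/ip arp_defend_rate 300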
Silly question: is the filesystem mounted on the receive side? If you just sent
it, you'll want to mount it.
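For the archive (dataset name is a placeholder):

    zfs get mounted,mountpoint tank/received
    zfs mount tank/received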
On 3/28/2017 5:24 AM, Harry Putnam wrote:
> Timothy Coalson writes:
>
>> On Mon, Mar 27, 2017 at 9:44 PM, Harry Putnam wrote:
>>
>>> Geoff Nordli writes:
>>>
>>> [...]
>>>
Just a thought here, you may want to try a different ssh cipher. Give
arcfour a try and see if that is fast enou
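A sketch of what that cipher suggestion looks like in practice (arcfour is weak and has been removed from modern OpenSSH builds, so treat it as trusted-LAN-only):

    zfs send tank/fs@snap | ssh -c arcfour recvhost /usr/sbin/zfs receive -F tank/fs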
It has been a topic of discussion on the illumos developers list today.
On 4/11/2017 3:25 PM, jason matthews wrote:
>
> https://www.theregister.co.uk/2017/04/11/solaris_shadow_brokers_nsa_exploits/
>
> has anyone reviewed this for relevancy?
>
> j.
On 4/21/2017 2:06 PM, C. R. Oldham wrote:
> For Ubuntu, it is an effort to get ZFS on the root partition. See
> these wiki entries:
>
> https://github.com/zfsonlinux/zfs/wiki/Ubuntu
>
> I have several machines installed with these instructions on 16.04,
> 16.10, and 17.04. They do work, but it's
On 4/11/2022 9:14 AM, Bob Friesenhahn wrote:
On Sun, 10 Apr 2022, Judah Richardson wrote:
OK. The Solaris documentation I linked to says that Solaris (and
presumably
distros downstream of that codebase) expects the DHCP server to be
another
Solaris machine, and so DHCP servers that don't beha