On 2/7/2021 10:50 AM, Walter von Entferndt wrote:
>
> - Inserting an I/O scheduler might improve performance, too (gsched(8)).
> Yes, UFS is likely faster than ZFS on such a setup, but ZFS offers many
> advantages in terms of administration, fault tolerance & reliability.
Especially if the data i
ncsa.com/ule-bsd.html
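(If anyone wants to try the gsched(8) suggestion above, the rough shape of it is below. This is only a sketch: ada0 is a placeholder provider name, and gsched(8)/geom_sched(4) have the real details.)

kldload geom_sched gsched_rr   # scheduler framework plus the round-robin algorithm
gsched insert -a rr ada0       # transparently insert the scheduler on provider ada0
gsched list                    # verify it is in place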
---Mike

--
Mike Tancsa, tel +1 519 651 3400
Sentex Communications, m...@sentex.net
Providing Internet services since 1994 www.sentex.net
Cambridge, Ontario Canada http://www.tancsa.com/
>
> Also what filesystem you were using?
UFS
> How many CPUs were in place?
4
> Did you reboot before changing the steal_thresh value?
No.
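(For reference, kern.sched.steal_thresh is an ordinary read-write ULE sysctl, so no reboot is needed; illustrative commands:)

sysctl kern.sched.steal_thresh     # show the current value
sysctl kern.sched.steal_thresh=1   # switch to the value being tested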
---Mike
can, but how do I do that?
---Mike
Difference at 95.0% confidence
1.27033 +/- 0.412636
3.32852% +/- 1.08119%
(Student's t, pooled s = 0.425627)
a value of 1 is *slightly* faster.
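(The numbers above are ministat(1) output; to reproduce this kind of comparison, put one measurement per line in each file. File names here are made up:)

ministat thresh_default.txt thresh_1.txt
# "Difference at 95.0% confidence" means the change is statistically
# significant at that level; otherwise ministat reports "No difference
# proven" at that confidence.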
(Student's t, pooled s = 0.0200388)
Hardware is an X3450 with 8G of memory, running RELENG_8.
---Mike
At 01:05 PM 10/27/2010, Mike Tancsa wrote:
At 12:34 PM 10/27/2010, David Wolfskill wrote:
* release/7.1.0, with the following merged in:
r186860 from stable/7
r190970 from stable/7
r203072 from head
r209964 from stable/7
and using the MAC kernel config
* stable/8 @r214029 using the
options FLOWTABLE # per-cpu routing cache
---Mike
TA300 chipset on AMD64, 8G of RAM.
---Mike
At 10:02 AM 12/24/2008, ivo...@freebsd.org wrote:
Comparing to your original numbers, it looks like you might have some
debugging enabled there: the original 7.x results went from 13:57 to
Yes, it's a regular kernel and things like malloc are the default, and
from the dmesg:
WARNING: WITNESS o
At 11:20 AM 12/23/2008, Ivan Voras wrote:
I just thought of another thing - can you boot an 8-CURRENT kernel
on the machine and report the value of kern.sched.topology_spec
sysctl? This is to verify how the ULE sees the HTT topology of the CPUs.
And buildworld from current
4,8 and 10
3287.
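(For reference, the sysctl Ivan mentions is read-only and just dumps the topology as XML; roughly:)

sysctl kern.sched.topology_spec
# prints a nested <groups>/<group>/<cpu> tree; HTT/SMT siblings appear as a
# flagged child group, which is what ULE uses to tell logical CPUs apart
# from physical cores.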
At 11:20 AM 12/23/2008, Ivan Voras wrote:
I just thought of another thing - can you boot an 8-CURRENT kernel
on the machine and report the value of kern.sched.topology_spec
sysctl? This is to verify how the ULE sees the HTT topology of the CPUs.
It will have to wait for the next board as this
At 05:22 AM 12/23/2008, Pieter de Goeje wrote:
On Tuesday 23 December 2008, Ivan Voras wrote:
> Mike Tancsa wrote:
> > Just got our first board to play around with and unlike in the past,
> > having hyperthreading enabled seems to help performance. At least in
>
uhid0: at uhub4 port 1 (addr 2) disconnected
uhid0: detached
ukbd0: on uhub4
kbd2 at ukbd0
uhid0: on uhub4
At 12:06 PM 11/25/2008, Adrian Chadd wrote:
2% may not sound like a lot but it starts becoming measurable savings
when the number of boxes involved is ${LARGE}.
True, but then again is there such a thing as a synthetic benchmark
that would have a margin of error less than 2% while representing
At 03:28 PM 11/24/2008, Steven Hartland wrote:
http://www.phoronix.com/scan.php?page=article&item=os_threeway_2008&num=1
Was interesting until I saw this:-
"However, it's important to reiterate that all three operating
systems were left in their stock configurations and that no
additional twe
Are there any suggestions for an AMD64 RELENG_7 box to improve Samba
performance? Write throughput from a Windows box seems a bit slower
than it should be.
---Mike
At 12:50 PM 3/4/2008, alan bryan wrote:
Hi,
I've got a new server with a 3ware 9550SXU with the
Battery. I am using FreeBSD 7.0-Release (tried both
4BSD and ULE) using AMD64 and the 3ware performance
for writes is just plain horrible. Something is
obviously wrong but I'm not sure what.
Not s
At 03:52 AM 3/3/2008, Thomas Krause (Webmatic) wrote:
Pyun YongHyeon has fixed a lot of driver issues (i.e. re(4), bfr(4), vr(4))
over the last few months, many are already in CURRENT or RELENG_7 (not
sure how many of them made it into 7.0-RELEASE) or posted as patches
to the current@ mailing li
At 10:44 AM 2/29/2008, Chris wrote:
A weakness of FreeBSD is its fussiness over hardware, in particular
network cards. Time and time again I see posts here telling people to
go out and buy expensive Intel Pro/1000 cards just so they can use the
operating system properly, when I think it's reasonable
At 05:27 PM 2/14/2008, Brett Bump wrote:
stat doesn't show as much as gstat and iostat. Gstat always shows my
drive with /var/mail being 97-100% busy, and iostat will always show high
tps rates, but never anything above 8MB/s (4.10 gave me 30MB/s+).
If a lot of users are checking mail at once, t
At 03:09 PM 2/14/2008, Brett Bump wrote:
On Thu, 14 Feb 2008, Mike Tancsa wrote:
> At 02:22 PM 2/14/2008, Brett Bump wrote:
>
> >I've recently upgraded a mailserver from a 4.x version to 6.2.
>
> I would say move to 6.3R as it's a better release with a lot of bug
>
At 02:22 PM 2/14/2008, Brett Bump wrote:
I've recently upgraded a mailserver from a 4.x version to 6.2.
I would say move to 6.3R as it's a better release with a lot of bug
fixes. In terms of your general performance issues, choice of
hardware really makes a difference as quality of drivers c
At 04:31 AM 1/30/2008, Kris Kennaway wrote:
Claus Guttesen wrote:
I forgot to mention in my first post that I'm using ULE. The p800
controller has a (factory set) 25/75 read/write cache ratio.
There's maybe one additional thing: do you dual-boot Linux and FreeBSD?
If so, you'll need to set up a
At 08:24 PM 1/30/2008, Steven Hartland wrote:
The plot thickens. This stall is not just related to newfs; you have to
have gstat running as well. If I do the newfs without gstat running then
no stall occurs. As soon as I'm running gstat while doing the newfs,
everything locks up as described.
At 03:46 PM 1/28/2008, Claus Guttesen wrote:
I had (already) saved the thread in my mail account so I could look
it up before I started testing. :-) So I compiled postgresql with the
option WITH_THREADSAFE=true and used sysbench with --pgsql-host="".
As pointed out by Ivan my test also involve
At 03:07 PM 12/19/2007, Alexandre Biancalana wrote:
On 12/19/07, Mike Tancsa <[EMAIL PROTECTED]> wrote:
> At 02:09 PM 12/19/2007, Alexandre Biancalana wrote:
>
> >The behavior that I'm observing and that want your help is when the
> >system is accessing some di
At 02:09 PM 12/19/2007, Alexandre Biancalana wrote:
The behavior that I'm observing, and that I want your help with, is that
when the system is accessing some directory with many small files
(directories with ~1 million ~30kB files), the performance is very poor.
Hi,
Have you adjusted the dirhash
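(The truncated question above looks like it is about UFS dirhash tuning; as a sketch, with an illustrative value, the usual knobs are:)

sysctl vfs.ufs.dirhash_mem                 # memory dirhash is using right now
sysctl vfs.ufs.dirhash_maxmem              # the ceiling, small by default
sysctl vfs.ufs.dirhash_maxmem=67108864     # e.g. raise it to 64MB for huge directories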
At 05:19 PM 12/5/2007, Philipp Wuensche wrote:
After switching to net.isr.direct=0 and 346609775 good packets later, RX
overruns haven't increased by one! That's nice. Still, the interrupt is using
up the CPU. I'm not quite sure if polling would help now!?
Polling is helpful to prevent livelock. Not
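(For anyone following along, the knobs under discussion look roughly like this; em0 is a placeholder interface, and polling also needs "options DEVICE_POLLING" in the kernel config:)

sysctl net.isr.direct=0    # hand inbound packets to the netisr thread instead of
                           # processing them in interrupt context
ifconfig em0 polling       # enable polling on the interface once the kernel supports it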
At 12:23 PM 12/5/2007, Philipp Wuensche wrote:
Mike Tancsa wrote:
> At 07:14 PM 12/4/2007, Philipp Wuensche wrote:
>
>> The debug output of em0 looks like this:
>>
>> em0: CTRL = 0x40140248 RCTL = 0x8002
>> em0: Packet buffer = Tx=20k Rx=12k
>> em0: Flow
At 07:14 PM 12/4/2007, Philipp Wuensche wrote:
The debug output of em0 looks like this:
em0: CTRL = 0x40140248 RCTL = 0x8002
em0: Packet buffer = Tx=20k Rx=12k
em0: Flow control watermarks high = 10240 low = 8740
em0: tx_int_delay = 66, tx_abs_int_delay = 66
em0: rx_int_delay = 32, rx_abs_int_d
At 06:08 AM 12/4/2007, Gergely CZUCZY wrote:
cache seems to be turned on in the web-based management. However, I
still don't think this is OS-specific, since I see no OS-specific
options, and 3ware makes the devices available through SCSI, and WC
is handled differently there.
It's the queuing
At 04:22 AM 12/4/2007, Gergely CZUCZY wrote:
On Sat, Dec 01, 2007 at 04:06:55PM -0500, Mike Tancsa wrote:
> At 03:56 PM 12/1/2007, Gergely CZUCZY wrote:
> >I don't quite understand the question. It's the very same box, with
> >a dualboot configuration.
>
> Fire
At 04:10 PM 12/2/2007, Peter Losher wrote:
Mike Tancsa wrote:
> I think the card in question is twa in this case.
Not in our case...
Sorry, I was referring to the original poster's card
http://lists.freebsd.org/pipermail/freebsd-performance/2007-November/002942.html
The box i
At 08:54 PM 12/1/2007, Peter Losher wrote:
Manjunath R Gowda wrote:
> On 12/1/07, Boris Samorodov <[EMAIL PROTECTED]> wrote:
>>
>> 3ware driver is under GIANT in 7.x. I don't know if it's the same for
>> Linux.
>
> It is not under GIANT any more, MPSAFE starting from 7.0 BETA1.
I know in one cas
At 03:56 PM 12/1/2007, Gergely CZUCZY wrote:
I don't quite understand the question. It's the very same box, with
a dualboot configuration.
Fire up the 3ware controller's RAID management software and make sure
the same write caching strategy is set for FreeBSD and Linux. The
driver may default
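(On the FreeBSD side that check can be done with 3ware's tw_cli utility; a sketch only, with the controller and unit numbers made up, so check tw_cli's own help for the exact subcommands:)

tw_cli /c0 show            # list units on controller 0
tw_cli /c0/u0 show cache   # current write-cache setting for unit 0
tw_cli /c0/u0 set cache=on # enable the write cache (only sensible with a BBU)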
At 11:33 AM 12/1/2007, Gergely CZUCZY wrote:
> >
> >The box is a dual opteron 246 with 12GB of memory with 10K RPM
> >SATA disks on a 9550 3ware.
> >
> >So, what can cause this big difference?
Are the caching options for the 3ware the same on FreeBSD as on Linux?
---Mike
At 07:31 PM 11/30/2007, Jeff Roberson wrote:
Though, maybe I should rebuild it dynamically to ensure it's
linked against libthr (and not pthread or c_r)...
So, any tips, guesses, anything that could cause this?
I would make it dynamic instead of static. I seem to recall this
issue in the past
At 08:30 AM 3/14/2007, Edwin Mons wrote:
Mike Tancsa wrote:
> At 03:16 AM 3/14/2007, Kris Kennaway wrote:
>
>
>> On the pgsql side, disable the update_process_titles option (or
>> whatever it is called), because this has a 33% performance overhead
>
> Hi,
>
At 03:16 AM 3/14/2007, Kris Kennaway wrote:
On the pgsql side, disable the update_process_titles option (or
whatever it is called), because this has a 33% performance overhead
Hi,
Is this a version-specific config option or a compile-time option? I can't
find anything like that in the .conf or man
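(For what it's worth, in PostgreSQL 8.2 and later the knob appears to be a plain run-time setting rather than a compile-time option; a sketch:)

# postgresql.conf
update_process_title = off   # skip rewriting the process title on every query
# then reload, e.g. pg_ctl reload, for the new default to take effect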
At 04:38 AM 3/2/2007, O. Hartmann wrote:
Over the last few days I tried to figure out why some of my lab's FreeBSD
boxes, and also mine at home, seem to be outperformed by some Linux
setups around here, and I saw something interesting.
On my lab's FreeBSD 6.2/i386 box (ASUS P4P800, ICH5 with two SATA
150
At 04:06 AM 2/28/2007, Peter Losher wrote:
We recently put a stock Fedora Core 6 and a stock FreeBSD 6.2 on the
same HW (HP ProLiant DL320 G5 Dual Core Xeons w/ 16GB RAM) and running
Is that using PAE or AMD64?
---Mike
At 12:43 PM 1/5/2007, Eugene Grosbein wrote:
Hi!
I'm trying to measure network throughput between two 6.2-PRERELEASE boxes,
basically to get the maximum IP packets per second transmitted/received.
Try /usr/src/tools/tools/netrate/
I did some tests with the results at http://www.tancsa.com/blast.htm
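(The netrate tools are small C programs built in place; usage is roughly as below, with the argument order from memory and the addresses and rates made up, so check the source:)

# on the receiver:
cd /usr/src/tools/tools/netrate/netreceive && make
./netreceive 9000                      # listen on UDP port 9000
# on the sender:
cd /usr/src/tools/tools/netrate/netsend && make
./netsend 10.0.0.2 9000 64 100000 30   # dst, port, payload bytes, pps, duration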
At 12:57 PM 11/30/2006, Ivan Voras wrote:
Mike Tancsa wrote:
> Yeah I inadvertently slighted the NetBSD folks by leaving them out. So
> I guess I better give them a try as well.
>
> The part that really surprises me is the drop in performance as firewall
> rules are added to REL
At 12:51 AM 11/30/2006, Nick Pavlica wrote:
Did a quick default install. Results are not so interesting since one
stream livelocks the box. Basic stats at http://www.tancsa.com/blast.html
If there are some OpenSolaris wizards out there who want me to tune,
I am happy to retest...
Mike,
I'm n
At 04:35 PM 11/29/2006, Robert Watson wrote:
You may want to datestamp the version of HEAD you're using in each
test (assuming it changes between tests).
Ahh, good point. I have always been using the sources from Nov 24th
to make comparisons as similar as possible for the tests with 7.0
wh
Did some more tests, this time using a single NIC interface in
trunking mode. Strangely enough, the speed is a little faster on
HEAD. Perhaps less interrupt processing? Results in the usual place
http://www.tancsa.com//blast.html
---Mike
At 03:06 AM 11/28/2006, Massimo Lusetti wrote:
FWIW I would definitely like to see it. But thanks for going so far.
Tried it with the patch branch. With the em NICs, the box locks up
with 2 streams. It works now with bge, but rates are pretty slow
(220Kpps), and very slow with pf enable
At 03:28 AM 11/24/2006, Massimo Lusetti wrote:
On Thu, 2006-11-23 at 11:52 -0500, Mike Tancsa wrote:
> I might give OpenBSD a quick try as a reference.
That would be very interesting.
OK, I added OpenBSD to the mix as well. Results are pretty crappy
with the base default install. With
At 03:06 AM 11/28/2006, Massimo Lusetti wrote:
On Mon, 27 Nov 2006 16:36:34 -0500
Mike Tancsa <[EMAIL PROTECTED]> wrote:
> OK, I added OpenBSD to the mix as well. Results are pretty crappy
> with the base default install. With one stream, the box essentially
> live locks. Thi
At 02:12 PM 11/25/2006, Nick Pavlica wrote:
I might give OpenBSD a quick try as a reference.
Mike,
Have you done any testing on Solaris 10, or OpenSolaris? I
understand that it has a very robust IP stack. It would be
interesting to see how the three stack up against each other (FBSD,
LINUX,
At 06:40 PM 11/24/2006, Steven Hartland wrote:
What's wrong with that web page? The display is totally broken :(
Try it now.
---Mike
At 04:03 PM 11/24/2006, Divacky Roman wrote:
I see generic_bzero/bcopy used quite often. Why don't you define
cpu I586_CPU
in your kernel config?
Hi,
I cvsup'd to today's kernel and re-ran some of the tests, controlling
for CPU defs in the kernel. Posted at http://www.tancsa.com/b
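(For reference, those are ordinary cpu lines in the i386 kernel config; a sketch, with a made-up config name:)

# e.g. /usr/src/sys/i386/conf/MYKERNEL
cpu     I586_CPU
cpu     I686_CPU
# the cpu lines control which optimized bcopy/bzero variants get compiled in,
# which is the point Roman is making about generic_bzero/generic_bcopy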
At 04:03 PM 11/24/2006, Divacky Roman wrote:
On Fri, Nov 24, 2006 at 03:27:40PM -0500, Mike Tancsa wrote:
> At 03:28 AM 11/24/2006, Massimo Lusetti wrote:
> >On Thu, 2006-11-23 at 11:52 -0500, Mike Tancsa wrote:
> >
> >> I might give OpenBSD a quick try as a reference.
>
At 03:28 AM 11/24/2006, Massimo Lusetti wrote:
On Thu, 2006-11-23 at 11:52 -0500, Mike Tancsa wrote:
> I might give OpenBSD a quick try as a reference.
That would be very interesting.
OpenBSD 4.0 i386 panics on boot.
I also posted some results with PMC compiled into the kernel
test setup description at
http://www.tancsa.com/blast.html
More testing :) This time with a pair of PCIe 1x bge NICs, as well
as using the patch at
http://lists.freebsd.org/pipermail/freebsd-net/2006-November/012389.html
As I switched to bge, I could test against DragonFly as well since
At 12:43 PM 11/23/2006, Vlad Galu wrote:
Can you please completely remove the iptables support from your
Linux configuration, as well as removing support for any packet filter
in FreeBSD? Also, please enable fast_forwarding.
I did that a while ago. See
http://www.tancsa.com/blast.html
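(fast_forwarding is the sysctl below and can be flipped at run time; illustrative only:)

sysctl net.inet.ip.forwarding=1       # plain forwarding has to be on as well
sysctl net.inet.ip.fastforwarding=1   # take the optimized IP forwarding path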
At 08:09 AM 11/22/2006, Jeremie Le Hen wrote:
It would be interesting to know the real performance of Linux as a mere
router if we want a true comparison with FreeBSD's performance.
Re-tested, this time with a Linux UP kernel, and there is not that
much difference in overall speeds. I added a
At 08:09 AM 11/22/2006, Jeremie Le Hen wrote:
Hi Mike,
Thank you for spending so much time on benchmarking; this is really
interesting.
Hi,
More to come, and if you can think of other tests let me
know. Next is VLAN performance.
Though this is a little bit off topic, I'm quite
At 12:50 AM 11/21/2006, Mike Tancsa wrote:
The table is also up at http://www.tancsa.com/blast.html which might
be easier to read
Decided to test with RELENG_4 as a comparison. Quite a difference.
With polling and fast forwarding on, I can use 2 routers to blast
through at almost 1Mpps
At 04:21 PM 10/20/2006, Mike Tancsa wrote:
The next set of comparisons I want to run is in our spam
scanners. The boxes which operate in round robin make heavy use of MySQL, DNS
OK, we are just getting ready to run some tests for this
setup. SpamAssassin has some built-in benchmarking that
At 04:06 PM 10/20/2006, Ed Maste wrote:
On Fri, Oct 20, 2006 at 02:57:46PM -0400, Mike Tancsa wrote:
> With all the threads about poor FreeBSD performance, I wanted to test
> it out myself to see how 64bit LINUX would compare using the same hardware.
[ snip ]
It seems your message en
every 1.000 msec
da0 at twa0 bus 0 target 0 lun 0
da0: Fixed Direct Access SCSI-3 device
da0: 100.000MB/s transfers
da0: 152566MB (312455168 512 byte sectors: 255H 63S/T 19449C)
SMP: AP CPU #1 Launched!
Trying to mount root from ufs:/dev/da0s1a
---Mike
At 03:20 PM 10/6/2006, Jerry Bell wrote:
I have actually made the changes to my.cnf before I ran these. I expanded
them quite a bit beyond what is in my-large.cnf. I need to pull them back
Hi,
I was just looking at this thread as it's relevant to a new
DB server I am trying to put tog
At 06:14 PM 9/3/2006, Jan Zacharias wrote:
So far i messed with:
- ifconfig mtu
leave it at 1500 unless everything talking to the box supports jumbo
frames (i.e., all routers/switches in between).
- net.inet.tcp settings
OK, but what did you fiddle with?
what did you set
net.inet.tc
At 01:42 PM 25/05/2006, Nash Nipples wrote:
net.inet.tcp.inflight.enable=1 who said you have to put it down?
When you are on the same subnet, is it not automatically disabled?
---Mike
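(For reference, the inflight limiter is controlled by a handful of sysctls of that era; a sketch:)

sysctl net.inet.tcp.inflight.enable     # show whether the limiter is on
sysctl net.inet.tcp.inflight.enable=0   # turn bandwidth-delay-product limiting off
sysctl net.inet.tcp.inflight.debug=1    # log what the limiter decides, to see whether
                                        # it actually engages on a local subnet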
At 08:54 AM 11/11/2005, Joao Barros wrote:
Copyright (c) 1992-2005 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD 6.0-RELEASE #5: Thu Nov 10 13:57:54 WET 2005
[EMAI
At 09:15 PM 02/11/2005, Michael VInce wrote:
I have seen some network-based SMP-related performance problems
vanish in 6.0 tests. Admittedly I haven't done hard-drive-based
tests, but it wouldn't surprise me if the performance drops on HDs with SMP
on 6.0 are gone as well.
Yes, I noticed that as well.
At 05:47 AM 02/11/2005, Achim Patzner wrote:
For SATA I have always been getting the Dell 750s (now 850s) which
use the 'aac'
Adaptec AdvancedRAID Controller driver; do 'man aac' for more details.
Did you ever have to replace a failed drive? You might try it, just
to see if you're still happy a
At 07:23 PM 31/10/2005, Francisco Reyes wrote:
On Sat, 29 Oct 2005, Mike Tancsa wrote:
I use the 3ware line (8xxx) and they are very stable, but not the
fastest. For speed, check out the cards from Areca. Native
FreeBSD support and they are FAST
http://www.areca.com.tw/products/html/pcix
At 12:52 AM 01/11/2005, Francisco Reyes wrote:
On Mon, 31 Oct 2005, Mike Tancsa wrote:
Which 3ware were they comparing? The 9500SX was a little bit more
than the 4-port ARECA here in Canada.
It was the 9500SX. Don't recall the models for the ARECA, but they
compared it to two.
I
At 12:11 AM 29/10/2005, Francisco wrote:
On Tue, 18 Oct 2005, Mike Tancsa wrote:
I use the 3ware line (8xxx) and they are very stable, but not the
fastest. For speed, check out the cards from Areca. Native FreeBSD
support and they are FAST
http://www.areca.com.tw/products/html/pcix
At 12:12 PM 05/10/2005, Robert Watson wrote:
Obviously, this is about two things: performance, and stability. Many of us
...
Of particular interest is if changing to direct dispatch hurts
performance in your environment, and understanding why that is.
I enabled this last Monday on 2 SMP
At 07:23 PM 18/10/2005, [EMAIL PROTECTED] wrote:
At 11:36 PM 10/18/2005 +0100, Steven Hartland wrote:
| Anyone got any SATA RAID 5 controllers they can recommend
| 64Bit PCIX.
|
| Steve
Hi Steve,
I am using the 3Ware 9500S-12 on our servers and like it very
much. It's very
easy to set up,
+0 records in
2+0 records out
65536 bytes transferred in 7.587819 secs (86370011 bytes/sec)
Richard
At 06:36 PM 18/10/2005, Steven Hartland wrote:
Anyone got any SATA RAID 5 controllers they can recommend
64Bit PCIX.
I use the 3ware line (8xxx) and they are very stable, but not the
fastest. For speed, check out the cards from Areca. Native FreeBSD
support and they are FAST
http://www.ar
At 01:36 AM 15/08/2005, Jason Coene wrote:
Thanks for the response. I've attached what you requested as a text file in
case the following gets garbled by Outlook.
Hi,
It all looks nice and clean. Is there a slowdown between
the boxes? I don't see any errors to speak of. One thing you
At 04:37 PM 14/08/2005, Jason Coene wrote:
Before the update, the servers would sustain high transfer rates (well over
4 Mbyte/sec), but since the update that has changed. A transfer will start
out at normal speed (as high as we've ever seen), but it will immediately
and consistently drop to bet
---Mike
At 10:57 PM 20/06/2005, [EMAIL PROTECTED] wrote:
I was doing a dd of /dev/zero into a file on a UFS2 filesystem
(softupdates disabled) on a clean 5.4-R system.
I see the same issue on RELENG_5 with an IDE disk. However, on
FreeBSD nfs.sentex.ca 6.0-CURRENT FreeBSD 6.0-CURRENT #1: Fri Jun 17
No suc
While on the topic, has anyone tried
http://www.dlink.com/products/?pid=406
on FreeBSD?
---Mike
ed in 32.012170 secs (204722143 bytes/sec)
Steve
- Original Message ----- From: "Mike Tancsa" <[EMAIL PROTECTED]>
OK, some further tests, trying to control for this. I am not sure what
values to actually fiddle with via bsdlabel as this is the entire disk, so
I will just v
At 05:31 PM 02/05/2005, Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Mike Tancsa writes:
>Using an amrc controller in RAID5, it doesn't make a difference really on
>the dd stuff - read or write-- perhaps 2-5MB difference on the faster
>side. Raw reads and writes wheth
At 12:48 PM 01/05/2005, Chuck Swiger wrote:
Mike Tancsa wrote:
A somewhat obvious question to some perhaps, but what server application
mix on FreeBSD today sees an improvement using 64-bit CPUs?
Databases. Big ones, anyway. Other than that, not much, unless you're
running processes
At 03:10 PM 01/05/2005, Matthew D. Fuller wrote:
On Sun, May 01, 2005 at 09:46:09AM -0400 I heard the voice of
Mike Tancsa, and lo! it spake thus:
>
> A somewhat obvious question to some perhaps, but what server
> application mix on FreeBSD today sees an improvement using 64bit
> CPUs
these benefit from the 64-bit world? Or would they?
---Mike
At 04:47 AM 20/04/2005, Claus Guttesen wrote:
> elin% dd if=/dev/zero of=/nfssrv/dd.tst bs=1024 count=1048576
> 1048576+0 records in
> 1048576+0 records out
> 1073741824 bytes transferred in 21.373114 secs (50237968 bytes/sec)
>
Follow-up, did the same dd on a Dell 2850 with an LSI Logic (amr), 6
sc