In message <20100322233607.gb1...@garage.freebsd.pl>, Pawel Jakub Dawidek writes:
>A class is supposed to interact with other classes only via GEOM, so I
>think it should be safe to choose g_up/g_down threads for each class
>individually, for example:
>
> /dev/ad0s1a (DEV)
> |
>
The whole point of the discussion, sans PHK's interlude, is to reduce the
context switches and indirection, not to increase it. But if you can show
decreased latency/higher-IOPS benefits of increasing it, more power to you. I
would think that the results of DFly's experiment with parallelism
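A rough sketch of what Pawel's per-class threads could look like, assuming the
FreeBSD kproc_create(9) API; g_class_threads, g_class_updown() and
g_spawn_class_threads() are invented names for illustration, not the real
g_up/g_down machinery (which lives in sys/geom/geom_kern.c):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kthread.h>
#include <sys/proc.h>

struct g_class_threads {
	const char	*gct_name;	/* class name, e.g. "MIRROR" */
	/* per-class up/down bio queues would live here */
};

static void	g_class_updown(void *arg);	/* worker loop, not shown */

/* Spawn a dedicated up and a dedicated down thread for one class. */
static int
g_spawn_class_threads(struct g_class_threads *gct)
{
	struct proc *p;
	int error;

	error = kproc_create(g_class_updown, gct, &p, 0, 0,
	    "g_up %s", gct->gct_name);
	if (error != 0)
		return (error);
	return (kproc_create(g_class_updown, gct, &p, 0, 0,
	    "g_down %s", gct->gct_name));
}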
Pawel Jakub Dawidek wrote:
On Mon, Mar 22, 2010 at 08:23:43AM +, Poul-Henning Kamp wrote:
In message <4ba633a0.2090...@icyb.net.ua>, Andriy Gapon writes:
on 21/03/2010 16:05 Alexander Motin said the following:
Ivan Voras wrote:
Hmm, it looks like it could be easy to spawn more g_* threads
On Mar 22, 2010, at 5:36 PM, Pawel Jakub Dawidek wrote:
> On Mon, Mar 22, 2010 at 08:23:43AM +, Poul-Henning Kamp wrote:
>> In message <4ba633a0.2090...@icyb.net.ua>, Andriy Gapon writes:
>>> on 21/03/2010 16:05 Alexander Motin said the following:
Ivan Voras wrote:
> Hmm, it looks li
On Mon, Mar 22, 2010 at 08:23:43AM +, Poul-Henning Kamp wrote:
> In message <4ba633a0.2090...@icyb.net.ua>, Andriy Gapon writes:
> >on 21/03/2010 16:05 Alexander Motin said the following:
> >> Ivan Voras wrote:
> >>> Hmm, it looks like it could be easy to spawn more g_* threads (and,
> >>> barr
In message <3c0b01821003221207p4e4eecabqb4f448813bf5a...@mail.gmail.com>, Alexander Sack writes:
>Am I going crazy or does this sound a lot like Sun/SVR's stream based
>network stack?
That is a good and pertinent observation.
I did investigate a number of optimizations to the g_up/g_down scheme
On Mon, Mar 22, 2010 at 2:45 PM, M. Warner Losh wrote:
> In message:
> Scott Long writes:
> : I'd like to go in the opposite direction. The queue-dispatch-queue
> : model of GEOM is elegant and easy to extend, but very wasteful for
> : the simple case, where the simple case is one or
In message:
Scott Long writes:
: I'd like to go in the opposite direction. The queue-dispatch-queue
: model of GEOM is elegant and easy to extend, but very wasteful for
: the simple case, where the simple case is one or two simple
: partition transforms (mbr, bsdlabel) and/or a simpl
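For context, the "simple case" Scott refers to is essentially an offset shift
per request. A minimal GEOM start routine for such a transform looks roughly
like this (a sketch only, not the actual mbr/bsdlabel code; the 63-sector
offset is an arbitrary example):

#include <sys/param.h>
#include <sys/bio.h>
#include <geom/geom.h>

/* Hypothetical one-slice transform: per request it only clones the bio,
 * shifts the offset into the slice, and passes it down. */
static void
g_simple_start(struct bio *bp)
{
	struct g_geom *gp = bp->bio_to->geom;
	struct g_consumer *cp = LIST_FIRST(&gp->consumer);
	struct bio *cbp;

	cbp = g_clone_bio(bp);
	if (cbp == NULL) {
		g_io_deliver(bp, ENOMEM);
		return;
	}
	cbp->bio_offset += 63 * 512;	/* arbitrary example slice offset */
	cbp->bio_done = g_std_done;
	g_io_request(cbp, cp);
}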
On Mar 22, 2010, at 9:52 AM, Alexander Sack wrote:
> On Mon, Mar 22, 2010 at 8:39 AM, John Baldwin wrote:
>> On Monday 22 March 2010 7:40:18 am Gary Jennejohn wrote:
>>> On Sun, 21 Mar 2010 19:03:56 +0200
>>> Alexander Motin wrote:
>>>
Scott Long wrote:
> Are there non-CAM drivers that
On Mon, Mar 22, 2010 at 8:39 AM, John Baldwin wrote:
> On Monday 22 March 2010 7:40:18 am Gary Jennejohn wrote:
>> On Sun, 21 Mar 2010 19:03:56 +0200
>> Alexander Motin wrote:
>>
>> > Scott Long wrote:
>> > > Are there non-CAM drivers that look at MAXPHYS, or that silently assume that
>> > > MA
On Mon, 22 Mar 2010 01:53, Alexander Motin wrote:
In Message-Id: <4ba705cb.9090...@freebsd.org>
jhell wrote:
On Sun, 21 Mar 2010 20:54, jhell@ wrote:
I played with it on one re-compile of a kernel and for the sake of it
DFLTPHYS=128 MAXPHYS=256 and found out that I could not cause a crash
dum
On Monday 22 March 2010 7:40:18 am Gary Jennejohn wrote:
> On Sun, 21 Mar 2010 19:03:56 +0200
> Alexander Motin wrote:
>
> > Scott Long wrote:
> > > Are there non-CAM drivers that look at MAXPHYS, or that silently assume that
> > > MAXPHYS will never be more than 128k?
> >
> > That is a questio
Quoting Scott Long (from Sat, 20 Mar 2010 12:17:33 -0600):
code was actually taking advantage of the larger I/O's. The improvement really
depends on the workload, of course, and I wouldn't expect it to be noticeable
for most people unless they're running something like a media server.
I do
On Sun, 21 Mar 2010 19:03:56 +0200
Alexander Motin wrote:
> Scott Long wrote:
> > Are there non-CAM drivers that look at MAXPHYS, or that silently assume that
> > MAXPHYS will never be more than 128k?
>
> That is a question.
>
I only did a quick&dirty grep looking for MAXPHYS in /sys.
Some dr
In message <4ba633a0.2090...@icyb.net.ua>, Andriy Gapon writes:
>on 21/03/2010 16:05 Alexander Motin said the following:
>> Ivan Voras wrote:
>>> Hmm, it looks like it could be easy to spawn more g_* threads (and,
>>> barring specific class behaviour, it has a fair chance of working out of
>>> the
jhell wrote:
> On Sun, 21 Mar 2010 20:54, jhell@ wrote:
>> I played with it on one re-compile of a kernel and for the sake of it
>> DFLTPHYS=128 MAXPHYS=256 and found out that I could not cause a crash
>> dump to be performed upon request (reboot -d) due to the boundary
>> being hit for DMA which i
On Sun, 21 Mar 2010 20:54, jhell@ wrote:
On Sun, 21 Mar 2010 10:04, mav@ wrote:
Julian Elischer wrote:
In the Fusion-io driver we find that the limiting factor is not the
size of MAXPHYS, but the fact that we can not push more than
170k tps through geom. (in my test machine. I've seen more on
On Sun, 21 Mar 2010 10:04, mav@ wrote:
Julian Elischer wrote:
In the Fusion-io driver we find that the limiting factor is not the
size of MAXPHYS, but the fact that we can not push more than
170k tps through geom. (in my test machine. I've seen more on some
beefier machines), but that is only a
Scott Long wrote:
I agree that more threads just creates many more race
complications. Even if it didn't, the storage driver is a
serialization point; it doesn't matter if you have a dozen g_*
threads if only one of them can be in the top half of the driver at
a time. No amount of fine-grained
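Put differently, however many g_* threads exist, they all funnel into
something like the following at the driver boundary (hypothetical driver; the
xx_* names are invented for illustration):

#include <sys/param.h>
#include <sys/bio.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <geom/geom_disk.h>

struct xx_softc {
	struct mtx		sc_mtx;		/* per-controller lock */
	struct bio_queue_head	sc_queue;	/* pending requests */
};

static void	xx_startio(struct xx_softc *sc);	/* feeds hardware, not shown */

/* Every GEOM thread that reaches the driver serializes right here, so
 * additional g_down threads would mostly contend on sc_mtx. */
static void
xx_strategy(struct bio *bp)
{
	struct xx_softc *sc = bp->bio_disk->d_drv1;

	mtx_lock(&sc->sc_mtx);
	bioq_insert_tail(&sc->sc_queue, bp);
	xx_startio(sc);
	mtx_unlock(&sc->sc_mtx);
}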
On Mar 21, 2010, at 10:53 AM, Ulrich Spörlein wrote:
> [CC trimmed]
> On Sun, 21.03.2010 at 10:39:10 -0600, Scott Long wrote:
>> On Mar 21, 2010, at 10:30 AM, Ulrich Spörlein wrote:
>>> On Sat, 20.03.2010 at 12:17:33 -0600, Scott Long wrote:
Windows has a MAXPHYS equivalent of 1M. Linux has a
Scott Long wrote:
> On Mar 20, 2010, at 1:26 PM, Alexander Motin wrote:
>> As you should remember, we have made it in such way, that all unchecked
>> drivers keep using DFLTPHYS, which is not going to be changed ever. So
>> there is no problem. I would more worry about non-CAM storages and above
>>
[CC trimmed]
On Sun, 21.03.2010 at 10:39:10 -0600, Scott Long wrote:
> On Mar 21, 2010, at 10:30 AM, Ulrich Spörlein wrote:
> > On Sat, 20.03.2010 at 12:17:33 -0600, Scott Long wrote:
> >> Windows has a MAXPHYS equivalent of 1M. Linux has an equivalent of an
> >> odd number less than 512k. For th
On Mar 21, 2010, at 10:30 AM, Ulrich Spörlein wrote:
> On Sat, 20.03.2010 at 12:17:33 -0600, Scott Long wrote:
>> Windows has a MAXPHYS equivalent of 1M. Linux has an equivalent of an
>> odd number less than 512k. For the purpose of benchmarking against these
>> OS's, having comparable capabiliti
On Mar 20, 2010, at 1:26 PM, Alexander Motin wrote:
> Scott Long wrote:
>> On Mar 20, 2010, at 11:53 AM, Matthew Dillon wrote:
>>> Diminishing returns get hit pretty quickly with larger MAXPHYS values.
>>> As long as the I/O can be pipelined the reduced transaction rate
>>> becomes less int
On Sat, 20.03.2010 at 12:17:33 -0600, Scott Long wrote:
> Windows has a MAXPHYS equivalent of 1M. Linux has an equivalent of an
> odd number less than 512k. For the purpose of benchmarking against these
> OS's, having comparable capabilities is essential; Linux easily beats FreeBSD
> in the silly m
On Mar 21, 2010, at 8:56 AM, Andriy Gapon wrote:
> on 21/03/2010 16:05 Alexander Motin said the following:
>> Ivan Voras wrote:
>>> Hmm, it looks like it could be easy to spawn more g_* threads (and,
>>> barring specific class behaviour, it has a fair chance of working out of
>>> the box) but th
On Mar 21, 2010, at 8:05 AM, Alexander Motin wrote:
> Ivan Voras wrote:
>> Julian Elischer wrote:
>>> You can get better throughput by using TSC for timing because the geom
>>> and devstat code does a bit of timing. Geom can be told to turn off
>>> its timing but devstat can't. The 170 ktps is
Andriy Gapon wrote:
on 21/03/2010 16:05 Alexander Motin said the following:
Ivan Voras wrote:
Hmm, it looks like it could be easy to spawn more g_* threads (and,
barring specific class behaviour, it has a fair chance of working out of
the box) but the incoming queue will need to also be broken
Alexander Motin wrote:
Julian Elischer wrote:
In the Fusion-io driver we find that the limiting factor is not the
size of MAXPHYS, but the fact that we can not push more than
170k tps through geom. (in my test machine. I've seen more on some
beefier machines), but that is only a limit on small t
on 21/03/2010 16:05 Alexander Motin said the following:
> Ivan Voras wrote:
>> Hmm, it looks like it could be easy to spawn more g_* threads (and,
>> barring specific class behaviour, it has a fair chance of working out of
>> the box) but the incoming queue will need to also be broken up for
>> gre
Ivan Voras wrote:
> Julian Elischer wrote:
>> You can get better throughput by using TSC for timing because the geom
>> and devstat code does a bit of timing. Geom can be told to turn off
>> its timing but devstat can't. The 170 ktps is with TSC as timer,
>> and geom timing turned off.
>
> I see
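The timing cost under discussion is per-bio timestamping. Conceptually, each
request does something like the following on the way down, with a matching
stamp at completion (a simplified sketch, not the exact sys/geom/geom_io.c
code):

#include <sys/param.h>
#include <sys/bio.h>
#include <sys/time.h>

extern int g_collectstats;	/* the kern.geom.collectstats knob */

/* Sketch of the per-request cost: one timecounter read going down and
 * another at completion. With TSC as the timecounter this is cheap;
 * slower timecounters eat into the ~170 ktps ceiling Julian measured. */
static void
g_stamp_bio(struct bio *bp)
{
	if (g_collectstats)
		binuptime(&bp->bio_t0);
}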
Julian Elischer wrote:
> In the Fusion-io driver we find that the limiting factor is not the
> size of MAXPHYS, but the fact that we can not push more than
> 170k tps through geom. (in my test machine. I've seen more on some
> beefier machines), but that is only a limit on small transactions,
> or
Ivan Voras wrote:
Julian Elischer wrote:
Alexander Motin wrote:
Scott Long wrote:
On Mar 20, 2010, at 11:53 AM, Matthew Dillon wrote:
Diminishing returns get hit pretty quickly with larger MAXPHYS
values.
As long as the I/O can be pipelined the reduced transaction rate
becomes less
Alexander Motin wrote:
Scott Long wrote:
On Mar 20, 2010, at 11:53 AM, Matthew Dillon wrote:
Diminishing returns get hit pretty quickly with larger MAXPHYS values.
As long as the I/O can be pipelined the reduced transaction rate
becomes less interesting when the transaction rate is les
2010/3/20 Alexander Motin
> Hi.
>
> With set of changes done to ATA, CAM and GEOM subsystems last time we
> may now get use for increased MAXPHYS (maximum physical I/O size) kernel
> constant from 128K to some bigger value.
[snip]
> All above I have successfully tested last months with MAXPHY
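For anyone wanting to reproduce this: MAXPHYS is a compile-time constant with
header defaults, roughly as below in sys/sys/param.h of that era (values
approximate; check your own tree), so testing a larger value means overriding
these at kernel build time:

/* sys/sys/param.h, approximate stock values circa 8.x */
#ifndef MAXPHYS
#define	MAXPHYS		(128 * 1024)	/* max raw I/O transfer size */
#endif
#ifndef DFLTPHYS
#define	DFLTPHYS	(64 * 1024)	/* default max raw I/O transfer size */
#endif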
Scott Long wrote:
> On Mar 20, 2010, at 11:53 AM, Matthew Dillon wrote:
>>Diminishing returns get hit pretty quickly with larger MAXPHYS values.
>>As long as the I/O can be pipelined the reduced transaction rate
>>becomes less interesting when the transaction rate is less than a
>>c
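Back-of-envelope numbers for the diminishing-returns argument (all values
invented for illustration): at a fixed streaming rate the transaction count
falls in proportion to I/O size, so the win shrinks once the rate is already
low.

#include <stdio.h>

int
main(void)
{
	long rate = 200L * 1024 * 1024;	/* assume a 200 MB/s stream */
	long sizes[] = { 128 * 1024, 512 * 1024, 1024 * 1024 };

	for (int i = 0; i < 3; i++)
		printf("%4ld KB I/Os -> %5ld tps\n",
		    sizes[i] / 1024, rate / sizes[i]);
	return (0);
}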
On Sat, Mar 20, 2010 at 6:53 PM, Matthew Dillon wrote:
>
> :All above I have successfully tested last months with MAXPHYS of 1MB on
> :i386 and amd64 platforms.
> :
> :So my questions are:
> :- does somebody know any issues denying increasing MAXPHYS in HEAD?
> :- are there any specific opinions a
:Pardon my ignorance, but wouldn't so much KVM make small embedded
:devices like Soekris boards with 128 MB of physical RAM totally unusable
:then? On my net4801, running RELENG_8:
:
:vm.kmem_size: 40878080
:
:hw.physmem: 125272064
:hw.usermem: 84840448
:hw.realmem: 134217728
KVM != physical m
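To put numbers on the KVM point (back-of-envelope; the nswbuf cap of 256 is
typical but should be treated as an assumption): the pbuf pool reserves
roughly nswbuf * MAXPHYS of kernel virtual address space, so a 1 MB MAXPHYS
would pin about 256 MB of KVA regardless of physical RAM.

#include <stdio.h>

int
main(void)
{
	long nswbuf = 256;		/* typical cap on pbufs (assumption) */
	long maxphys = 1024L * 1024;	/* proposed 1 MB MAXPHYS */

	printf("pbuf KVA reservation: %ld MB\n",
	    (nswbuf * maxphys) >> 20);	/* 256 MB of kernel VA */
	return (0);
}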
On Mar 20, 2010, at 11:53 AM, Matthew Dillon wrote:
>
> :All above I have successfully tested last months with MAXPHYS of 1MB on
> :i386 and amd64 platforms.
> :
> :So my questions are:
> :- does somebody know any issues denying increasing MAXPHYS in HEAD?
> :- are there any specific opinions abou
:All above I have successfully tested last months with MAXPHYS of 1MB on
:i386 and amd64 platforms.
:
:So my questions are:
:- does somebody know any issues denying increasing MAXPHYS in HEAD?
:- are there any specific opinions about value? 512K, 1MB, MD?
:
:--
:Alexander Motin
(nswbuf * MAX