I think I've found some suboptimal behaviour in the iSCSI target code, but I'd
like another opinion.
Just as a caveat, this behaviour was first seen on a CentOS 7 kernel, but
looking at the code I think it'll behave the same in master.
Basically, the issue is that the iSCSI target code crea
On 04/04/2016 06:25 PM, Nicholas A. Bellinger wrote:
On Mon, 2016-04-04 at 17:01 -0600, Chris Friesen wrote:
I'm not trying to globally throttle IO on a particular block device. I'm trying
to control how much IO the iSCSI target in the kernel is allowed to drive on a
particular block device.
On 04/04/2016 04:29 PM, Nicholas A. Bellinger wrote:
On Mon, 2016-04-04 at 09:20 -0600, Chris Friesen wrote:
On 04/02/2016 07:15 PM, Nicholas A. Bellinger wrote:
On Fri, 2016-04-01 at 12:35 -0600, Chris Friesen wrote:
On a slightly different note, is there any way to throttle or limit the overall
bandwidth consumed by the iSCSI target in the kernel? I'd like to ensure that
the iSCSI traffic do
On 03/31/2016 01:05 AM, Nicholas A. Bellinger wrote:
On Wed, 2016-03-16 at 10:48 -0600, Chris Friesen wrote:
On 03/11/2016 01:45 AM, Nicholas A. Bellinger wrote:
On Thu, 2016-03-10 at 23:30 -0800, Christoph Hellwig wrote:
On Thu, Mar 10, 2016 at 04:24:25PM -0600, Chris Friesen wrote:
Hi,
I'm looking for information on whether the iSCSI target in the kernel offers any
way to do QoS between traffic driven by different initiators.
I'm trying to make sure that one initiator can't do a denial-of-service attack
against others.
Does the kernel target have this sort of thing built in?
On 11/07/2014 01:17 PM, Martin K. Petersen wrote:
I'd suggest trying /dev/sgN instead.
That seems to work. Much appreciated.
And it's now showing an "optimal_io_size" of 0, so I think the issue is
dealt with.
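For anyone following along, the value the block layer ended up with can also be checked straight from sysfs, without going through sg at all. A minimal sketch (the device name is a placeholder, and the sysfs root is parameterized only so the helper is easy to exercise):

```python
from pathlib import Path

def queue_limit(dev: str, attr: str, sysfs: str = "/sys/block") -> int:
    """Read one integer queue limit for a block device, e.g.
    optimal_io_size (io_opt) or minimum_io_size (io_min), in bytes.
    A value of 0 for optimal_io_size means the device reported none."""
    return int(Path(sysfs, dev, "queue", attr).read_text())

# e.g. queue_limit("sda", "optimal_io_size")  -> 0 when nothing was reported
```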
Thanks for all the help, it's been educational. :)
Chris
On 11/07/2014 10:25 AM, Martin K. Petersen wrote:
>>>>>> "Chris" == Chris Friesen writes:
>
> Chris,
>
> Chris> Also, I think it's wrong for filesystems and userspace to use it
> Chris> for alignment. In E.4 and E.5 in the "sb
On 11/07/2014 11:42 AM, Martin K. Petersen wrote:
"Martin" == Martin K Petersen writes:
Martin> I know there was a bug open with Seagate. I assume it has been
Martin> fixed in their latest firmware.
Seagate confirms that this issue was fixed about a year ago. Will
provide more data when I have it.
On 11/06/2014 07:56 PM, Martin K. Petersen wrote:
"Chris" == Chris Friesen writes:
Chris,
Chris> For a RAID card I expect it would be related to chunk size or
Chris> stripe width or something...but even then I would expect to be
Chris> able to cap it at 100MB or so. O
On 11/06/2014 12:12 PM, Martin K. Petersen wrote:
"Chris" == Chris Friesen writes:
Chris> That'd work, but is it the best way to go? I mean, I found one
Chris> report of a similar problem on an SSD (model number unknown). In
Chris> that case it was a near-UINT_MAX value
On 11/06/2014 11:34 AM, Martin K. Petersen wrote:
"Chris" == Chris Friesen writes:
Chris> Perhaps the ST900MM0026 should be blacklisted as well?
Sure. I'll widen the net a bit for that Seagate model.
That'd work, but is it the best way to go? I mean, I found o
On 11/06/2014 10:47 AM, Chris Friesen wrote:
Hi,
I'm running a modified 3.4-stable on relatively recent X86 server-class
hardware.
I recently installed a Seagate ST900MM0026 (900GB 2.5in 10K SAS drive)
and it's reporting a value of 4294966784 for optimal_io_size. The other
param
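As a sanity check on that number: as far as I can tell, sd derives the sysfs value by multiplying the OPTIMAL TRANSFER LENGTH field from the Block Limits VPD page (in logical blocks) by the logical block size, and 4294966784 decomposes exactly as 0x7FFFFF blocks of 512 bytes. A quick sketch of the arithmetic (the function name is illustrative, not a kernel symbol):

```python
def io_opt_bytes(opt_xfer_blocks: int, logical_block_size: int) -> int:
    """How the byte value exposed as /sys/block/<dev>/queue/optimal_io_size
    would be derived from the Block Limits VPD page: blocks * block size."""
    return opt_xfer_blocks * logical_block_size

# The suspicious sysfs value is consistent with the drive reporting
# 0x7FFFFF (8388607) blocks at a 512-byte logical block size:
assert io_opt_bytes(0x7FFFFF, 512) == 4294966784
```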
On 02/22/2013 02:35 PM, Jan Engelhardt wrote:
On Friday 2013-02-22 20:28, Martin Svec wrote:
Yes, I've already tried the ROW scheduler. It helped for some low iodepths
depending on quantum settings but generally didn't solve the problem. I think
the key issue is that none of the schedulers can
On 12/05/2012 03:20 AM, James Bottomley wrote:
On Tue, 2012-12-04 at 16:00 -0600, Chris Friesen wrote:
As another data point, it looks like we may be doing a SEND DIAGNOSTIC
command specifying the default self-test in addition to the background
short self-test. This seems a bit risky and
On 12/03/2012 03:53 PM, Ric Wheeler wrote:
> On 12/03/2012 04:08 PM, Chris Friesen wrote:
>> On 12/03/2012 02:52 PM, Ric Wheeler wrote:
>>
>>> I jumped into this thread late - can you repost detail on the specific
>>> drive and HBA used here? In any case, it sounds like this is a better
>>> topic for the linux-scsi or linux-ide list where most of the low level
>>> storage people lurk :)
On 12/03/2012 03:21 PM, Dave Jiang wrote:
On 12/03/2012 02:08 PM, Chris Friesen wrote:
On 12/03/2012 02:52 PM, Ric Wheeler wrote:
I jumped into this thread late - can you repost detail on the specific
drive and HBA used here? In any case, it sounds like this is a better
topic for the linux-scsi or linux-ide list where most of the low level
storage people lurk :)
On 12/03/2012 02:52 PM, Ric Wheeler wrote:
> I jumped into this thread late - can you repost detail on the specific
> drive and HBA used here? In any case, it sounds like this is a better
> topic for the linux-scsi or linux-ide list where most of the low level
> storage people lurk :)
Okay, ex
On 11/07/2012 07:02 PM, Jon Mason wrote:
I'm not a lawyer, nor do I play one on TV, but if
I understand the GPL correctly, RTS only needs to provide the relevant
source to their customers upon request.
Not quite.
Assuming the GPL applies, and that they have modified the code, then
they must e
Hi all,
We're seeing the following on startup:
Fusion MPT base driver 3.02.55
Copyright (c) 1999-2005 LSI Logic Corporation
Fusion MPT SAS Host driver 3.02.55
mptbase: Initiating ioc0 bringup
mptbase: ioc0: WARNING - IOC is in FAULT state!!!
FAULT code = 1804h
mptbase: ioc0: ERROR -
Moore, Eric wrote:
On Thursday, November 15, 2007 12:10 PM, Chris Friesen wrote:
Does this status mean that the command needs to be retried by the
userspace app, that it has already been retried by the lower levels and
is now completed, or something else entirely?
The midlayer is
Moore, Eric wrote:
You already figured out the problem, I don't understand why you're asking
if the kernel version is behaving properly. You said between those
driver versions the device queue depth increased from 32 to 64, and that
is exactly what happened. The reason for the increase is some
Moore, Eric wrote:
QUEUE_FULL and SAM_STAT_TASK_SET_FULL are not errors.
I consider them errors in the same way that ENOMEM or ENOBUFS (or even
EAGAIN) are errors. "There is a shortage of resources and the command
could not be completed, please try again later."
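To make the analogy concrete, here is a sketch of what "please try again later" would look like if the caller handled the status itself. The status constants are the SAM-defined values; the retry policy and the `submit` callable are purely illustrative:

```python
import time

SAM_STAT_GOOD = 0x00
SAM_STAT_TASK_SET_FULL = 0x28

def submit_with_retry(submit, max_retries=5, delay_s=0.0):
    """Resubmit a command while the target returns TASK SET FULL,
    treating it the way callers treat EAGAIN: back off and try again."""
    status = submit()
    for attempt in range(max_retries):
        if status != SAM_STAT_TASK_SET_FULL:
            break
        time.sleep(delay_s * (attempt + 1))  # simple linear backoff
        status = submit()
    return status
```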
Also, the behaviour has changed.
Chris Friesen wrote:
We recently moved from 2.6.10 to 2.6.14 and now we're seeing occasional
QUEUE_FULL/SAM_STAT_TASK_SET_FULL errors being returned to userspace.
These didn't ever show up in 2.6.10.
I found something that might be interesting.
With the 3.01.18 fusion driver
Suppose I send down an SG_IO command on a generic scsi device node. As
far as I can tell, the code path looks like this in 2.6.14:
sg_ioctl
sg_new_write
scsi_execute_async (sets up sg_cmd_done as callback)
scsi_do_req
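For reference, that entry point can be driven from userspace with the SG_IO ioctl. A sketch of the sg_io_hdr setup that sg_ioctl/sg_new_write parse; the struct layout and constants are from <scsi/sg.h>, the device path in the comment is a placeholder, and no I/O is issued unless you run the commented-out ioctl:

```python
import ctypes

SG_IO = 0x2285               # ioctl number from <scsi/sg.h>
SG_DXFER_FROM_DEV = -3       # data-in transfer (device -> host)

class SgIoHdr(ctypes.Structure):
    """Mirror of struct sg_io_hdr from <scsi/sg.h> (sg v3 interface)."""
    _fields_ = [
        ("interface_id", ctypes.c_int),
        ("dxfer_direction", ctypes.c_int),
        ("cmd_len", ctypes.c_ubyte),
        ("mx_sb_len", ctypes.c_ubyte),
        ("iovec_count", ctypes.c_ushort),
        ("dxfer_len", ctypes.c_uint),
        ("dxferp", ctypes.c_void_p),
        ("cmdp", ctypes.c_void_p),
        ("sbp", ctypes.c_void_p),
        ("timeout", ctypes.c_uint),
        ("flags", ctypes.c_uint),
        ("pack_id", ctypes.c_int),
        ("usr_ptr", ctypes.c_void_p),
        ("status", ctypes.c_ubyte),
        ("masked_status", ctypes.c_ubyte),
        ("msg_status", ctypes.c_ubyte),
        ("sb_len_wr", ctypes.c_ubyte),
        ("host_status", ctypes.c_ushort),
        ("driver_status", ctypes.c_ushort),
        ("resid", ctypes.c_int),
        ("duration", ctypes.c_uint),
        ("info", ctypes.c_uint),
    ]

def inquiry_header(alloc_len: int = 96):
    """Build an sg_io_hdr for a 6-byte INQUIRY; the buffers are returned
    alongside the header so they stay alive for the ioctl."""
    cdb = ctypes.create_string_buffer(bytes([0x12, 0, 0, 0, alloc_len, 0]))
    data = ctypes.create_string_buffer(alloc_len)
    sense = ctypes.create_string_buffer(32)
    hdr = SgIoHdr()
    hdr.interface_id = ord("S")          # always 'S' for the sg v3 interface
    hdr.dxfer_direction = SG_DXFER_FROM_DEV
    hdr.cmd_len = 6
    hdr.mx_sb_len = ctypes.sizeof(sense)
    hdr.dxfer_len = alloc_len
    hdr.dxferp = ctypes.cast(data, ctypes.c_void_p)
    hdr.cmdp = ctypes.cast(cdb, ctypes.c_void_p)
    hdr.sbp = ctypes.cast(sense, ctypes.c_void_p)
    hdr.timeout = 20000                  # milliseconds
    return hdr, cdb, data, sense

# To actually exercise the path above (device node is an example):
#   import fcntl, os
#   fd = os.open("/dev/sg0", os.O_RDWR)
#   hdr, *bufs = inquiry_header()
#   fcntl.ioctl(fd, SG_IO, hdr)
#   print(hdr.status, hdr.host_status, hdr.driver_status)
```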
Hi,
I asked this question on the list last Friday and haven't seen any
replies, so I thought I'd ask again and broaden the receiver list a bit.
We have x86-based hardware with dual LSI 53c1030 devices. We have a few
apps that issue SCSI requests on sg device nodes. The requests are
general
Hi all,
I've been asked to look into a SCSI problem. I know my way around the
kernel, but I'm new to SCSI/disk operations, so please bear with me (and
educate me) if my terminology is off.
We have an x86-based blade with dual LSI 53c1030 devices.
We recently moved to a new kernel version,