>>> Mike Christie wrote on 28.08.2014 at 18:29 in
message <53ff58dc.7050...@cs.wisc.edu>:
> On 08/28/2014 12:59 AM, Ulrich Windl wrote:
>>> > To delete a device just do
>>> >
>>> > echo 1 > /sys/block/sdX/device/delete
>> I think the confusing thing is that you don't see a "delete" in
>> /sys/block/sdX/device.
On 08/28/2014 11:29 AM, Mike Christie wrote:
On 08/28/2014 12:59 AM, Ulrich Windl wrote:
>> > To delete a device just do
>> >
>> > echo 1 > /sys/block/sdX/device/delete
> I think the confusing thing is that you don't see a "delete" in
> /sys/block/sdX/device.
Not sure what you mean. I do:
ls /sys/block/sda/device/
block evt_me
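Since the confusion above is that "delete" is a write-only sysfs attribute (it is listed, but with no read permission), here is a minimal sketch of the delete-and-rescan flow. The function names, the host number, and the SYSFS indirection (so the flow can be dry-run against a scratch directory) are illustrative assumptions, not from the thread.

```shell
#!/bin/sh
# Sketch: "delete" is a write-only sysfs attribute, so `ls -l` shows it
# with no read permission -- you remove the device by writing 1 to it.
# SYSFS is indirected so the flow can be dry-run in a scratch tree.
SYSFS="${SYSFS:-/sys}"

scsi_delete_device() {
    # $1 = block device name, e.g. sda
    echo 1 > "$SYSFS/block/$1/device/delete"
}

scsi_rescan_host() {
    # $1 = SCSI host number; "- - -" means all channels/targets/LUNs
    echo "- - -" > "$SYSFS/class/scsi_host/host$1/scan"
}

# As root, against the real /sys:
#   scsi_delete_device sda
#   scsi_rescan_host 0
```

The rescan through the host's `scan` attribute is the usual way to bring a deleted device back without relogging the session.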
>>> Mike Christie wrote on 27.08.2014 at 23:49 in
message <53fe5276.2060...@cs.wisc.edu>:
On 08/27/2014 02:24 AM, Ulrich Windl wrote:
> Learner Study wrote on 27.08.2014 at 02:13 in
> message
> :
>> Hi Mike,
>>
>> Thanks for suggestions
>>
>> I think you meant,
>>
>> echo 1 > /sys/block/sdX/device/delete
>>
>> I don't see /sys/block/sdX/device/remove in my setup.
>
> I'm no
I had applied the tuning for my 10g link but didn't see much impact. Actually,
for me TCP is already line rate with 2/3 threads, but iSCSI/fio read is around
5.5 Gbps only - with 3/4 fio threads. Perhaps the bottleneck is somewhere else...
Thanks!
Sent from my iPhone
On Aug 27, 2014, at 8:25 AM,
On Tue, 26 Aug 2014 13:05:11 -0700
Learner wrote:
> How many iscsi and underlying tcp sessions are u using? If multiple,
> pls check if all tcp sessions are being used.
> Btw, what tuning did u perform to fix Tcp BDP issue?
I'm just doing netcat tests to/from /dev/shm at the moment.
I wouldn't con
>>> Learner Study wrote on 27.08.2014 at 02:13 in
message
:
> Hi Mike,
>
> Thanks for suggestions
>
> I think you meant,
>
> echo 1 > /sys/block/sdX/device/delete
>
> I don't see /sys/block/sdX/device/remove in my setup.
I'm not sure: Is it "echo offline > /sys/block/sdX/device/state"?
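For reference, the "state" attribute sits next to "delete", but it does something different: offlining stops I/O while keeping the device node, whereas writing 1 to "delete" removes the device entirely. A small sketch (the function names and the SYSFS indirection for dry runs are illustrative assumptions):

```shell
#!/bin/sh
# Sketch: /sys/block/sdX/device/state is readable and writable;
# "offline" stops I/O to the device, "running" resumes it. This does
# not remove the device the way writing 1 to "delete" does.
SYSFS="${SYSFS:-/sys}"

scsi_get_state() {
    # $1 = block device name; prints e.g. "running" or "offline"
    cat "$SYSFS/block/$1/device/state"
}

scsi_set_state() {
    # $1 = device, $2 = "offline" or "running"
    echo "$2" > "$SYSFS/block/$1/device/state"
}

# As root: scsi_set_state sda offline
```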
I am monitoring with netstat -a, looking at Send-Q and Recv-Q there for the
three iscsi/tcp sessions.
Also checked with tcpdump.
Thanks!
Sent from my iPhone
On Aug 26, 2014, at 9:46 PM, "Michael Christie" wrote:
On Aug 26, 2014, at 6:49 PM, Michael Christie wrote:
Hi Mike,
Thanks for suggestions
I think you meant,
echo 1 > /sys/block/sdX/device/delete
I don't see /sys/block/sdX/device/remove in my setup.
How do the following FIO options look?
[default]
rw=read
size=4g
bs=1m
ioengine=libaio
direct=1
numjobs=1
filename=/dev/sda
runtime=360
iodepth=256
Thanks!
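One thing worth noting about the job file above: with `filename=/dev/sda` and `numjobs=1` it only exercises one of the three sessions. A sketch of a job file with one job per disk, so all three sessions carry load (the sdb/sdc device names and the job-file name are assumptions, not from the thread):

```shell
#!/bin/sh
# Sketch: generate a fio job file with one job section per iSCSI disk
# so all three sessions are driven in parallel (sda/sdb/sdc assumed).
JOBFILE="${JOBFILE:-threedisks.fio}"

cat > "$JOBFILE" <<'EOF'
[global]
rw=read
bs=1m
ioengine=libaio
direct=1
iodepth=256
runtime=360
size=4g

[disk1]
filename=/dev/sda
[disk2]
filename=/dev/sdb
[disk3]
filename=/dev/sdc
EOF

# Then, as root:  fio "$JOBFILE"
```

Settings shared by all jobs go in `[global]`; each named section becomes its own job.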
On Aug 26, 2014, at 3:11 PM, Learner wrote:
You are likely getting hit by the bandwidth-delay product.
Take a look at http://en.wikipedia.org/wiki/Bandwidth-delay_product
and http://www.kehlet.cx/articles/99.html
On 08/25/2014 02:58 PM, Mark Lehrer wrote:
> I am trying to achieve 10Gbps in my single initiator/single target
> env. (open-iscsi
I have a couple of iscsi links running on 1G and not in your range of hw
and demand at all.
I ran an ISP for about 20 years and got bitten by the BDP a number of
times now so when someone describes the problem I know what to look for.
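The arithmetic behind the BDP is just bandwidth times round-trip time: that product is the amount of in-flight data, and hence socket buffer, a single TCP connection needs to keep the pipe full. A quick sketch with illustrative numbers (not measurements from this thread):

```shell
#!/bin/sh
# Back-of-the-envelope bandwidth-delay product: the socket buffer
# needed to keep one TCP connection full is link_rate * RTT.

bdp_bytes() {
    # $1 = link speed in bits/s, $2 = RTT in microseconds
    echo $(( $1 / 8 * $2 / 1000000 ))
}

# A 10 Gbit/s link at 0.5 ms RTT needs roughly 625 KB of buffer,
# which is well above the old Linux defaults:
bdp_bytes 10000000000 500
```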
On 08/26/2014 04:05 PM, Learner wrote:
> How many iscsi
iperf performance for TCP is line rate in both directions using 3 threads.
However, I can only get 700MB/s writes and 570MB/s reads with iSCSI.
Thanks for any pointers!
On Tuesday, August 26, 2014 1:11:59 PM UTC-7, learner.study wrote:
Another related observation and some questions;
I am using open iscsi on init with IET on trgt over a single 10gbps link
There are three ip aliases on each side
I have 3 ramdisks exported by IET to init
I do iscsi login 3 times, once using each underlying ip address and notice
that each iscsi
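The three-logins setup described above can be scripted with iscsiadm. The target IQN and portal addresses below are placeholders (not from the thread), and the iscsiadm binary is indirected so the flow can be dry-run with a stub:

```shell
#!/bin/sh
# Sketch: one iSCSI login per IP alias, giving one session (one TCP
# connection) per alias. Placeholder IQN and addresses -- substitute
# your own. ISCSIADM is indirected for dry runs.
ISCSIADM="${ISCSIADM:-iscsiadm}"
TARGET="${TARGET:-iqn.2014-08.example:ramdisk}"

login_all() {
    for ip in "$@"; do
        # Discover targets behind this portal, then log in through it.
        $ISCSIADM -m discovery -t sendtargets -p "$ip"
        $ISCSIADM -m node -T "$TARGET" -p "$ip" --login
    done
}

# Usage: login_all 10.0.0.1 10.0.0.2 10.0.0.3
```

After the logins, `iscsiadm -m session` shows whether all three sessions actually came up.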
How many iscsi and underlying tcp sessions are u using? If multiple, pls check
if all tcp sessions are being used.
Btw, what tuning did u perform to fix Tcp BDP issue?
Thanks
Sent from my iPhone
On Aug 26, 2014, at 12:53 PM, "Mark Lehrer" wrote:
> On Tue, 26 Aug 2014 08:58:46 -0400 Alvin St
On Tue, 26 Aug 2014 08:58:46 -0400 Alvin Starr wrote:
>>> "Mark Lehrer" wrote on 25.08.2014 at 20:58 in message
:
>>> I am trying to achieve 10Gbps in my single initiator/single target
>>> env. (open-iscsi and IET)
>
> On a semi-related note, are there any good guides out there to tuning Linux
> for maximum single-socket performance? On my 40
On 08/25/2014 04:40 PM, Mark Lehrer wrote:
On Mon, 25 Aug 2014 15:48:02 -0500
Mike Christie wrote:
> On 08/25/2014 03:31 PM, Donald Williams wrote:
>> On a semi-related note, are there any good guides out there to
>> tuning Linux for maximum single-socket performance?
> What kernel are you using? Are you doing IO to one LU or multiple?

Single
On 08/25/2014 03:31 PM, Donald Williams wrote:
> On a semi-related note, are there any good guides out there to tuning
> Linux for maximum single-socket performance? On my 40 gigabit setup, I
> seem to hit a wall around 3 gigabits when doing a single TCP socket. To
> go far above that I need to d
I find upping some of the default Linux network params helps with
throughput.
Edit /etc/sysctl.conf, then update the system using #sysctl -p

# Increase network buffer sizes
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 4096 6
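The same buffer sizes can also be applied on the fly with `sysctl -w` before making them permanent in /etc/sysctl.conf. A sketch (values copied from the post above; the truncated tcp_wmem line is left out rather than guessed, and the sysctl binary is indirected for dry runs):

```shell
#!/bin/sh
# Sketch: bump the TCP buffer limits at runtime with sysctl -w.
# SYSCTL is indirected so the flow can be dry-run with a stub.
SYSCTL="${SYSCTL:-sysctl}"

apply_net_tuning() {
    # Raise the hard caps on socket buffer sizes...
    $SYSCTL -w net.core.rmem_max=16777216
    $SYSCTL -w net.core.wmem_max=16777216
    # ...and the min/default/max for TCP receive autotuning.
    $SYSCTL -w "net.ipv4.tcp_rmem=8192 87380 16777216"
}

# As root: apply_net_tuning
```

Note that tcp_rmem/tcp_wmem maxima are capped by rmem_max/wmem_max for applications setting buffers explicitly, so both sets of knobs matter.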
> I am trying to achieve 10Gbps in my single initiator/single target
> env. (open-iscsi and IET)
On a semi-related note, are there any good guides out there to tuning Linux
for maximum single-socket performance? On my 40 gigabit setup, I seem to
hit a wall around 3 gigabits when doing a single TCP
Thanks Mike - That helped
On Saturday, August 23, 2014 2:41:01 AM UTC+5:30, Mike Christie wrote:
On Aug 22, 2014, at 12:07 PM, Redwood Hyd wrote:
Hi All,
I am trying to achieve 10Gbps in my single initiator/single target env.
(open-iscsi and IET)
I exported 3 Ramdisks, via 3 different IP aliases to the initiator, did three
iscsi logins, 3 mount points and then 3 fio jobs in parallel (256K block
size each).
Question 1) Is above a real use ca