Re: [sheepdog] Redundancy policy via iSCSI

2015-02-12 Thread Hitoshi Mitake
At Sat, 7 Feb 2015 10:03:18 +0800,
hujianyang wrote:
> 
> On 2015/2/6 16:41, Hitoshi Mitake wrote:
> > At Wed, 04 Feb 2015 11:24:21 +0900,
> > Hitoshi Mitake wrote:
> >>
> >> At Tue, 3 Feb 2015 17:17:42 +0800,
> >> hujianyang wrote:
> >>>
> >>> Hi Saeki,
> >>>
> >>> On 2015/2/3 16:53, Saeki Masaki wrote:
>  Hi Hu,
> 
>  Sheepdog has a mechanism that does not place objects in the same
>  zone_id. Can you try changing the zone id on each node?
> >>>    Id   Host:Port          V-Nodes   Zone
> >>>     0   130.1.0.147:7000       128      0
> >>>     1   130.1.0.148:7000       128      0
> >>>     2   130.1.0.149:7000       128      0
> 
>  Best Regards, Saeki.
> 
> >>>
> >>> Good suggestions~!
> >>>
> >>> Seems OK now. But write performance is too slow in my environment.
> >>
> >> 1.1MB/s seems too slow; how about changing the input file from
> >> /dev/random to /dev/zero? I'd also like to know the performance of the
> >> default backing store of tgt (using a file as the iSCSI target) in your
> >> environment.
> >>
> >> Thanks,
> >> Hitoshi
> > 
> > BTW, I have a pending patchset for parallelizing the iSCSI PDU
> > send/recv of tgtd:
> > https://github.com/mitake/tgt/commits/iscsi-pdu-rxtx-mt
> > 
> > You can activate the feature with the new -T option:
> > $ tgtd -T 16
> > 
> > It is still half-baked, but in some cases it can improve performance
> > of iSCSI + sheepdog.
> > 
> > Thanks,
> > Hitoshi
> > 
> 
> Hi Hitoshi,
> 
> Sorry for the late reply. You know, there is always a lot that needs to
> be done before Spring Festival.
> 
> Actually my current environment is just for testing sheepdog's
> features. Performance is not an urgent issue. Thanks for your
> kindness.
> 
> I have tested sheepdog with fio on my testing environment:
> 
> [global]
> runtime=300
> direct=1
> iodepth=1
> bs=256K
> size=100G
> numjobs=1
> time_based
> 
> KB/s                        read   randread    write   randwrite
> local                     179957      36262   179933       66752
> iSCSI  redundancy(3x)      51553      51303    17826       15984
>        redundancy(4:2)     43166      42775    20370       14234
> SBD    redundancy(3x)      51112      51106    17515       20311
> 
> 
> I'm not quite sure why randwrite is better than randread via local
> access.

Hmm, seems odd. But thanks for your report.

BTW, I'm preparing LTTng tracepoints in sheepdog. The work is still
ongoing, but it will be useful for analyzing sheepdog's performance. If
you know LTTng, please try it :)
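For anyone unfamiliar with it, a minimal LTTng user-space tracing session might look like the sketch below. The session name and the 'sheep:*' event pattern are assumptions on my part, since the actual tracepoint provider name depends on the pending patches:

```shell
# Create a tracing session (the name is arbitrary).
lttng create sheep-trace

# Enable all user-space tracepoints from the (assumed) "sheep" provider.
lttng enable-event --userspace 'sheep:*'

# Trace while the workload runs, then stop and inspect the events.
lttng start
# ... run the fio/dd workload against the iSCSI target here ...
lttng stop
lttng view

# Tear the session down when done.
lttng destroy
```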

Thanks,
Hitoshi

> 
> Thanks,
> Hu
> 
> -- 
> sheepdog mailing list
> sheepdog@lists.wpkg.org
> https://lists.wpkg.org/mailman/listinfo/sheepdog


Re: [sheepdog] Redundancy policy via iSCSI

2015-02-08 Thread hujianyang
On 2015/2/8 19:24, Vasiliy Tolstov wrote:
> 
> 2015-02-07 5:03 GMT+03:00 hujianyang:
> 
> KB/s                        read   randread    write   randwrite
> local                     179957      36262   179933       66752
> iSCSI  redundancy(3x)      51553      51303    17826       15984
>        redundancy(4:2)     43166      42775    20370       14234
> SBD    redundancy(3x)      51112      51106    17515       20311
> 
> 
> Does "local" mean plain (non-sheepdog) read/write, or access through a
> locally running sheepdog daemon?
> 

Yes, it is.

I just ran fio directly on /dev/sd*. I think the strange performance
may be due to the RAID? Not quite sure, but it's not related to
sheepdog.

Thanks,
Hu

> 
> -- 
> Vasiliy Tolstov,
> e-mail: v.tols...@selfip.ru 
> jabber: v...@selfip.ru 




Re: [sheepdog] Redundancy policy via iSCSI

2015-02-08 Thread Vasiliy Tolstov
2015-02-07 5:03 GMT+03:00 hujianyang:

> KB/s                        read   randread    write   randwrite
> local                     179957      36262   179933       66752
> iSCSI  redundancy(3x)      51553      51303    17826       15984
>        redundancy(4:2)     43166      42775    20370       14234
> SBD    redundancy(3x)      51112      51106    17515       20311
>

Does "local" mean plain (non-sheepdog) read/write, or access through a
locally running sheepdog daemon?


-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru


Re: [sheepdog] Redundancy policy via iSCSI

2015-02-06 Thread hujianyang
On 2015/2/6 16:41, Hitoshi Mitake wrote:
> At Wed, 04 Feb 2015 11:24:21 +0900,
> Hitoshi Mitake wrote:
>>
>> At Tue, 3 Feb 2015 17:17:42 +0800,
>> hujianyang wrote:
>>>
>>> Hi Saeki,
>>>
>>> On 2015/2/3 16:53, Saeki Masaki wrote:
 Hi Hu,

 Sheepdog has a mechanism that does not place objects in the same
 zone_id. Can you try changing the zone id on each node?
>>>    Id   Host:Port          V-Nodes   Zone
>>>     0   130.1.0.147:7000       128      0
>>>     1   130.1.0.148:7000       128      0
>>>     2   130.1.0.149:7000       128      0

 Best Regards, Saeki.

>>>
>>> Good suggestions~!
>>>
>>> Seems OK now. But write performance is too slow in my environment.
>>
>> 1.1MB/s seems too slow; how about changing the input file from
>> /dev/random to /dev/zero? I'd also like to know the performance of the
>> default backing store of tgt (using a file as the iSCSI target) in your
>> environment.
>>
>> Thanks,
>> Hitoshi
> 
> BTW, I have a pending patchset for parallelizing the iSCSI PDU
> send/recv of tgtd:
> https://github.com/mitake/tgt/commits/iscsi-pdu-rxtx-mt
> 
> You can activate the feature with the new -T option:
> $ tgtd -T 16
> 
> It is still half-baked, but in some cases it can improve performance
> of iSCSI + sheepdog.
> 
> Thanks,
> Hitoshi
> 

Hi Hitoshi,

Sorry for the late reply. You know, there is always a lot that needs to
be done before Spring Festival.

Actually my current environment is just for testing sheepdog's
features. Performance is not an urgent issue. Thanks for your
kindness.

I have tested sheepdog with fio on my testing environment:

[global]
runtime=300
direct=1
iodepth=1
bs=256K
size=100G
numjobs=1
time_based
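The [global] section above doesn't show the per-job I/O pattern or target. For reference, a complete job file for the randwrite row might look like the following; the job name and the /dev/sdg target are assumptions on my part, carried over from the earlier dd test:

```ini
[global]
runtime=300
direct=1
iodepth=1
bs=256K
size=100G
numjobs=1
time_based

; One job per benchmarked pattern; repeat with rw=read/randread/write.
[randwrite-job]
rw=randwrite
filename=/dev/sdg
```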

KB/s                        read   randread    write   randwrite
local                     179957      36262   179933       66752
iSCSI  redundancy(3x)      51553      51303    17826       15984
       redundancy(4:2)     43166      42775    20370       14234
SBD    redundancy(3x)      51112      51106    17515       20311


I'm not quite sure why randwrite is better than randread via local
access.

Thanks,
Hu



Re: [sheepdog] Redundancy policy via iSCSI

2015-02-06 Thread Hitoshi Mitake
At Wed, 04 Feb 2015 11:24:21 +0900,
Hitoshi Mitake wrote:
> 
> At Tue, 3 Feb 2015 17:17:42 +0800,
> hujianyang wrote:
> > 
> > Hi Saeki,
> > 
> > On 2015/2/3 16:53, Saeki Masaki wrote:
> > > Hi Hu,
> > > 
> > > Sheepdog has a mechanism that does not place objects in the same
> > > zone_id. Can you try changing the zone id on each node?
> >    Id   Host:Port          V-Nodes   Zone
> >     0   130.1.0.147:7000       128      0
> >     1   130.1.0.148:7000       128      0
> >     2   130.1.0.149:7000       128      0
> > > 
> > > Best Regards, Saeki.
> > > 
> > 
> > Good suggestions~!
> > 
> > Seems OK now. But write performance is too slow in my environment.
> 
> 1.1MB/s seems too slow; how about changing the input file from
> /dev/random to /dev/zero? I'd also like to know the performance of the
> default backing store of tgt (using a file as the iSCSI target) in your
> environment.
> 
> Thanks,
> Hitoshi

BTW, I have a pending patchset for parallelizing the iSCSI PDU
send/recv of tgtd:
https://github.com/mitake/tgt/commits/iscsi-pdu-rxtx-mt

You can activate the feature with the new -T option:
$ tgtd -T 16

It is still half-baked, but in some cases it can improve performance
of iSCSI + sheepdog.

Thanks,
Hitoshi

> 
> > 
> > Thank you very much!
> > Hu
> > 
> > 


Re: [sheepdog] Redundancy policy via iSCSI

2015-02-03 Thread Hitoshi Mitake
At Tue, 3 Feb 2015 17:17:42 +0800,
hujianyang wrote:
> 
> Hi Saeki,
> 
> On 2015/2/3 16:53, Saeki Masaki wrote:
> > Hi Hu,
> > 
> > Sheepdog has a mechanism that does not place objects in the same
> > zone_id. Can you try changing the zone id on each node?
>    Id   Host:Port          V-Nodes   Zone
>     0   130.1.0.147:7000       128      0
>     1   130.1.0.148:7000       128      0
>     2   130.1.0.149:7000       128      0
> > 
> > Best Regards, Saeki.
> > 
> 
> Good suggestions~!
> 
> Seems OK now. But write performance is too slow in my environment.

1.1MB/s seems too slow; how about changing the input file from
/dev/random to /dev/zero? I'd also like to know the performance of the
default backing store of tgt (using a file as the iSCSI target) in your
environment.
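Putting the suggestions from this thread together, the comparison runs might look like the sketch below; /dev/sdg is the initiator-side device from the original report, and the oflag=sync variant is taken from the diagnostic questions elsewhere in the thread:

```shell
# Baseline from the report: random data through the page cache.
dd if=/dev/random of=/dev/sdg bs=2M

# Cheaper input source, to rule out /dev/random as the bottleneck.
dd if=/dev/zero of=/dev/sdg bs=2M

# Synchronous writes, to observe raw target write latency.
dd if=/dev/zero of=/dev/sdg bs=2M oflag=sync
```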

Thanks,
Hitoshi

> 
> Thank you very much!
> Hu
> 
> 


Re: [sheepdog] Redundancy policy via iSCSI

2015-02-03 Thread hujianyang
On 2015/2/3 17:19, Bastian Scholz wrote:
> On 2015-02-03 09:54, hujianyang wrote:
>> I'm not clear on this. Just run "sheep /mnt/store/0 -z 0 -p 7000"
>> on each host.
> 
> I guess this is your problem. Sheep replicates across different zones;
> since you have only one zone defined, sheep doesn't replicate the data,
> it only distributes the data across your nodes.
> 
> Try a different zone for each host, or leave out -z XXX entirely (sheep
> generates a zone id by itself).
> 
> Cheers
> 
> Bastian
> 
> 

Yes, the problem has been fixed this way.

Thank you all.



Re: [sheepdog] Redundancy policy via iSCSI

2015-02-03 Thread Bastian Scholz

On 2015-02-03 09:54, hujianyang wrote:

I'm not clear on this. Just run "sheep /mnt/store/0 -z 0 -p 7000"
on each host.


I guess this is your problem. Sheep replicates across different zones;
since you have only one zone defined, sheep doesn't replicate the data,
it only distributes the data across your nodes.

Try a different zone for each host, or leave out -z XXX entirely (sheep
generates a zone id by itself).

Cheers

Bastian
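Concretely, reusing the sheep invocation quoted above, giving each host its own zone id would look like this (store path and port are from the original report):

```shell
# On 130.1.0.147
sheep /mnt/store/0 -z 0 -p 7000

# On 130.1.0.148
sheep /mnt/store/0 -z 1 -p 7000

# On 130.1.0.149
sheep /mnt/store/0 -z 2 -p 7000

# Alternatively, omit -z everywhere and let sheep pick a zone id itself.
```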




Re: [sheepdog] Redundancy policy via iSCSI

2015-02-03 Thread hujianyang
Hi Saeki,

On 2015/2/3 16:53, Saeki Masaki wrote:
> Hi Hu,
> 
> Sheepdog has a mechanism that does not place objects in the same
> zone_id. Can you try changing the zone id on each node?
   Id   Host:Port          V-Nodes   Zone
    0   130.1.0.147:7000       128      0
    1   130.1.0.148:7000       128      0
    2   130.1.0.149:7000       128      0
> 
> Best Regards, Saeki.
> 

Good suggestions~!

Seems OK now. But write performance is too slow in my environment.

Thank you very much!
Hu




Re: [sheepdog] Redundancy policy via iSCSI

2015-02-03 Thread Saeki Masaki

Hi Hu,

Sheepdog has a mechanism that does not place objects in the same zone_id.
Can you try changing the zone id on each node?

   Id   Host:Port          V-Nodes   Zone
    0   130.1.0.147:7000       128      0
    1   130.1.0.148:7000       128      0
    2   130.1.0.149:7000       128      0


Best Regards, Saeki.

On 2015/02/03 17:09, Hitoshi Mitake wrote:

At Tue, 03 Feb 2015 17:02:24 +0900,
Hitoshi Mitake wrote:



Hi Hu,
Thanks for your report!

At Tue, 3 Feb 2015 15:52:12 +0800,
hujianyang wrote:


Hi Hitoshi,

Sorry to disturb you.

I'm testing the redundancy policy of sheepdog via iSCSI. I think
if I create a 1G v-disk, the total space cost of this device
should be 3*1G under a 3 copies policy. But after testing, I
find the cost of this device is only 1G. It seems no additional
copy is created.

I don't know what happened. I'd like to show my configuration
and hope you can take some time to help me. Many thanks!


linux-rme9:/mnt # dog cluster info
Cluster status: running, auto-recovery enabled

Cluster created at Tue Feb  3 23:07:13 2015

Epoch Time   Version [Host:Port:V-Nodes,,,]
2015-02-03 23:07:13  1 [130.1.0.147:7000:128, 130.1.0.148:7000:128, 
130.1.0.149:7000:128]
linux-rme9:/mnt # dog node list
   Id   Host:Port          V-Nodes   Zone
    0   130.1.0.147:7000       128      0
    1   130.1.0.148:7000       128      0
    2   130.1.0.149:7000       128      0
linux-rme9:/mnt # dog vdi list
   Name   Id    Size    Used   Shared  Creation time     VDI id  Copies  Tag  Block Size Shift
   Hu0     0  1.0 GB  1.0 GB  0.0 MB  2015-02-03 23:12  6e7762       3                     22
linux-rme9:/mnt # dog node info
Id       Size    Used    Avail  Use%
 0     261 GB  368 MB   260 GB    0%
 1     261 GB  336 MB   261 GB    0%
 2     261 GB  320 MB   261 GB    0%
Total  783 GB  1.0 GB   782 GB    0%

Total virtual image size1.0 GB

linux-rme9:/mnt # tgtadm --op show --mode target
Target 1: iqn.2015.01.org.sheepdog
 System information:
 Driver: iscsi
 State: ready
 I_T nexus information:
 I_T nexus: 3
 Initiator: iqn.1996-04.de.suse:01:23a8f73738e7 alias: Fs-Server
 Connection: 0
 IP Address: 130.1.0.10
 LUN information:
 LUN: 0
 Type: controller
 SCSI ID: IET 0001
 SCSI SN: beaf10
 Size: 0 MB, Block size: 1
 Online: Yes
 Removable media: No
 Prevent removal: No
 Readonly: No
 SWP: No
 Thin-provisioning: No
 Backing store type: null
 Backing store path: None
 Backing store flags:
 LUN: 1
 Type: disk
 SCSI ID: IET 00010001
 SCSI SN: beaf11
 Size: 1074 MB, Block size: 512
 Online: Yes
 Removable media: No
 Prevent removal: No
 Readonly: No
 SWP: No
 Thin-provisioning: No
 Backing store type: sheepdog
 Backing store path: tcp:130.1.0.147:7000:Hu0
 Backing store flags:
 Account information:
 ACL information:
 ALL


Client:
  # iscsiadm -m node --targetname iqn.2015.01.org.sheepdog --portal 
130.1.0.147:3260 --rescan
Rescanning session [sid: 4, target: iqn.2015.01.org.sheepdog, portal: 
130.1.0.147,3260]
  # dd if=/dev/random of=/dev/sdg bs=2M
dd: writing `/dev/sdg': No space left on device
0+13611539 records in
0+13611538 records out
1073741824 bytes (1.1 GB) copied, 956.511 s, 1.1 MB/s


Hmm, seems strange. For diagnosing, I have some questions:

1. Can you see any error messages in log files of sheep?
2. Could you provide lists of obj/ directories of sheep servers?
3. Is this reproducible even if you make a file system on the iSCSI
target and put data on the file system?
4. Is this reproducible even if you append oflag=sync to the options of dd?


Additionally, could you provide the options of sheep?

Thanks,
Hitoshi



Thanks,
Hitoshi



Thanks!
Hu



Re: [sheepdog] Redundancy policy via iSCSI

2015-02-03 Thread hujianyang
Hi Hitoshi,

Thanks for your help!

Try to answer your questions.

On 2015/2/3 16:02, Hitoshi Mitake wrote:
> 
> Hmm, seems strange. For diagnosing, I have some questions:
> 
> 1. Can you see any error messages in log files of sheep?

log while performing dd on a formatted filesystem:

Feb 04 00:16:05   INFO [main] rx_main(830) req=0x7faf6c0008b0, fd=289, 
client=127.0.0.1:57302, op=DEL_VDI, data=(not string)
Feb 04 00:16:05   INFO [main] run_vid_gc(2147) all members of the family (root: 
6e7762) are deleted
Feb 04 00:16:19   INFO [main] cluster_release_vdi_main(1431) node: IPv4 
ip:130.1.0.147 port:7000 is unlocking VDI (type: shared): 6e7762
Feb 04 00:16:19  ERROR [main] vdi_unlock(717) no vdi state entry of 6e7762 found
Feb 04 00:16:41   INFO [main] rx_main(830) req=0x1b7fbf0, fd=221, 
client=127.0.0.1:57306, op=NEW_VDI, data=(not string)
Feb 04 00:16:41   INFO [main] post_cluster_new_vdi(133) req->vdi.base_vdi_id: 
0, rsp->vdi.vdi_id: 6e7762
Feb 04 00:16:41   INFO [main] tx_main(882) req=0x1b7fbf0, fd=221, 
client=127.0.0.1:57306, op=NEW_VDI, result=00
Feb 04 00:17:21   INFO [main] cluster_lock_vdi_main(1408) node: IPv4 
ip:130.1.0.147 port:7000 is locking VDI (type: shared): 6e776

> 2. Could you provide lists of obj/ directories of sheep servers?

original case: dd if=/dev/random of=/dev/sdg bs=2M

130.1.0.147

linux-rme9:/mnt/store/0/obj # ls
006e7762  006e7762003d  006e77620064  006e7762008a  
006e776200b2  006e776200db
006e77620003  006e77620044  006e77620068  006e7762008b  
006e776200b5  006e776200dd
006e77620008  006e77620046  006e7762006a  006e7762008c  
006e776200b6  006e776200e0
006e77620010  006e77620047  006e7762006c  006e7762008f  
006e776200bc  006e776200e5
006e77620012  006e77620048  006e7762006f  006e77620091  
006e776200bd  006e776200e8
006e7762001b  006e7762004a  006e77620073  006e77620094  
006e776200c0  006e776200ea
006e7762001c  006e7762004b  006e77620074  006e77620095  
006e776200c3  006e776200f2
006e77620020  006e7762004e  006e77620078  006e7762009a  
006e776200c4  006e776200f4
006e77620021  006e77620054  006e7762007c  006e7762009e  
006e776200c5  006e776200f5
006e77620023  006e77620056  006e7762007d  006e776200a4  
006e776200c7  006e776200f6
006e77620029  006e77620057  006e77620080  006e776200a6  
006e776200ca  006e776200f8
006e7762002c  006e77620058  006e77620081  006e776200a7  
006e776200d2  006e776200fd
006e77620034  006e7762005a  006e77620085  006e776200aa  
006e776200d3  806e7762
006e77620035  006e7762005f  006e77620086  006e776200ac  
006e776200d6
006e77620037  006e77620060  006e77620087  006e776200ad  
006e776200d7
006e77620039  006e77620063  006e77620088  006e776200b0  
006e776200d8

130.1.0.148

linux-2hp8:/mnt/store/0/obj # ls
006e77620001  006e7762001a  006e7762004c  006e7762007a  
006e776200a5  006e776200d4
006e77620002  006e7762001e  006e7762004d  006e7762007b  
006e776200ab  006e776200d9
006e77620007  006e77620024  006e77620051  006e7762007f  
006e776200b1  006e776200da
006e77620009  006e77620025  006e77620055  006e77620082  
006e776200b4  006e776200de
006e7762000a  006e77620026  006e77620059  006e77620084  
006e776200b9  006e776200df
006e7762000b  006e7762002a  006e7762005d  006e77620089  
006e776200ba  006e776200e2
006e7762000d  006e7762002f  006e7762005e  006e7762008e  
006e776200bb  006e776200e7
006e7762000e  006e77620032  006e77620061  006e77620090  
006e776200be  006e776200e9
006e7762000f  006e77620033  006e77620067  006e77620092  
006e776200c1  006e776200ee
006e77620011  006e77620038  006e77620069  006e77620093  
006e776200c2  006e776200ef
006e77620013  006e7762003b  006e7762006b  006e77620097  
006e776200c8  006e776200f0
006e77620014  006e7762003e  006e7762006e  006e77620099  
006e776200c9  006e776200f1
006e77620016  006e77620045  006e77620071  006e7762009d  
006e776200cb  006e776200fa
006e77620017  006e77620049  006e77620077  006e776200a3  
006e776200cc  006e776200fc

130.1.0.149

linux-cjck:/mnt/store/0/obj # ls
006e77620004  006e7762002e  006e77620053  006e7762008d  
006e776200b7  006e776200e6
006e77620005  006e77620030  006e7762005b  006e77620096  
006e776200b8  006e776200eb
006e77620006  006e77620031  006e7762005c  006e77620098  
006e776200bf  006e776200ec
006e7762000c  006e7

Re: [sheepdog] Redundancy policy via iSCSI

2015-02-03 Thread Hitoshi Mitake
At Tue, 03 Feb 2015 17:02:24 +0900,
Hitoshi Mitake wrote:
> 
> 
> Hi Hu,
> Thanks for your report!
> 
> At Tue, 3 Feb 2015 15:52:12 +0800,
> hujianyang wrote:
> > 
> > Hi Hitoshi,
> > 
> > Sorry to disturb you.
> > 
> > I'm testing the redundancy policy of sheepdog via iSCSI. I think
> > if I create a 1G v-disk, the total space cost of this device
> > should be 3*1G under a 3 copies policy. But after testing, I
> > find the cost of this device is only 1G. It seems no additional
> > copy is created.
> > 
> > I don't know what happened. I'd like to show my configuration
> > and hope you can take some time to help me. Many thanks!
> > 
> > 
> > linux-rme9:/mnt # dog cluster info
> > Cluster status: running, auto-recovery enabled
> > 
> > Cluster created at Tue Feb  3 23:07:13 2015
> > 
> > Epoch Time   Version [Host:Port:V-Nodes,,,]
> > 2015-02-03 23:07:13  1 [130.1.0.147:7000:128, 130.1.0.148:7000:128, 
> > 130.1.0.149:7000:128]
> > linux-rme9:/mnt # dog node list
> >   Id   Host:Port V-Nodes   Zone
> >0   130.1.0.147:7000 128  0
> >1   130.1.0.148:7000 128  0
> >2   130.1.0.149:7000 128  0
> > linux-rme9:/mnt # dog vdi list
> >   Name   Id    Size    Used   Shared  Creation time     VDI id  Copies  Tag  Block Size Shift
> >   Hu0     0  1.0 GB  1.0 GB  0.0 MB  2015-02-03 23:12  6e7762       3                     22
> > linux-rme9:/mnt # dog node info
> > Id       Size    Used    Avail  Use%
> >  0     261 GB  368 MB   260 GB    0%
> >  1     261 GB  336 MB   261 GB    0%
> >  2     261 GB  320 MB   261 GB    0%
> > Total  783 GB  1.0 GB   782 GB    0%
> > 
> > Total virtual image size1.0 GB
> > 
> > linux-rme9:/mnt # tgtadm --op show --mode target
> > Target 1: iqn.2015.01.org.sheepdog
> > System information:
> > Driver: iscsi
> > State: ready
> > I_T nexus information:
> > I_T nexus: 3
> > Initiator: iqn.1996-04.de.suse:01:23a8f73738e7 alias: Fs-Server
> > Connection: 0
> > IP Address: 130.1.0.10
> > LUN information:
> > LUN: 0
> > Type: controller
> > SCSI ID: IET 0001
> > SCSI SN: beaf10
> > Size: 0 MB, Block size: 1
> > Online: Yes
> > Removable media: No
> > Prevent removal: No
> > Readonly: No
> > SWP: No
> > Thin-provisioning: No
> > Backing store type: null
> > Backing store path: None
> > Backing store flags:
> > LUN: 1
> > Type: disk
> > SCSI ID: IET 00010001
> > SCSI SN: beaf11
> > Size: 1074 MB, Block size: 512
> > Online: Yes
> > Removable media: No
> > Prevent removal: No
> > Readonly: No
> > SWP: No
> > Thin-provisioning: No
> > Backing store type: sheepdog
> > Backing store path: tcp:130.1.0.147:7000:Hu0
> > Backing store flags:
> > Account information:
> > ACL information:
> > ALL
> > 
> > 
> > Client:
> >  # iscsiadm -m node --targetname iqn.2015.01.org.sheepdog --portal 
> > 130.1.0.147:3260 --rescan
> > Rescanning session [sid: 4, target: iqn.2015.01.org.sheepdog, portal: 
> > 130.1.0.147,3260]
> >  # dd if=/dev/random of=/dev/sdg bs=2M
> > dd: writing `/dev/sdg': No space left on device
> > 0+13611539 records in
> > 0+13611538 records out
> > 1073741824 bytes (1.1 GB) copied, 956.511 s, 1.1 MB/s
> 
> Hmm, seems strange. For diagnosing, I have some questions:
> 
> 1. Can you see any error messages in log files of sheep?
> 2. Could you provide lists of obj/ directories of sheep servers?
> 3. Is this reproducible even if you make a file system on the iSCSI
>    target and put data on the file system?
> 4. Is this reproducible even if you append oflag=sync to the options of dd?

Additionally, could you provide the options of sheep?

Thanks,
Hitoshi

> 
> Thanks,
> Hitoshi
> 
> > 
> > Thanks!
> > Hu
> > 


Re: [sheepdog] Redundancy policy via iSCSI

2015-02-03 Thread Hitoshi Mitake

Hi Hu,
Thanks for your report!

At Tue, 3 Feb 2015 15:52:12 +0800,
hujianyang wrote:
> 
> Hi Hitoshi,
> 
> Sorry to disturb you.
> 
> I'm testing the redundancy policy of sheepdog via iSCSI. I think
> if I create a 1G v-disk, the total space cost of this device
> should be 3*1G under a 3 copies policy. But after testing, I
> find the cost of this device is only 1G. It seems no additional
> copy is created.
> 
> I don't know what happened. I'd like to show my configuration
> and hope you can take some time to help me. Many thanks!
> 
> 
> linux-rme9:/mnt # dog cluster info
> Cluster status: running, auto-recovery enabled
> 
> Cluster created at Tue Feb  3 23:07:13 2015
> 
> Epoch Time   Version [Host:Port:V-Nodes,,,]
> 2015-02-03 23:07:13  1 [130.1.0.147:7000:128, 130.1.0.148:7000:128, 
> 130.1.0.149:7000:128]
> linux-rme9:/mnt # dog node list
>   Id   Host:Port V-Nodes   Zone
>0   130.1.0.147:7000   128  0
>1   130.1.0.148:7000   128  0
>2   130.1.0.149:7000   128  0
> linux-rme9:/mnt # dog vdi list
>   Name   Id    Size    Used   Shared  Creation time     VDI id  Copies  Tag  Block Size Shift
>   Hu0     0  1.0 GB  1.0 GB  0.0 MB  2015-02-03 23:12  6e7762       3                     22
> linux-rme9:/mnt # dog node info
> Id       Size    Used    Avail  Use%
>  0     261 GB  368 MB   260 GB    0%
>  1     261 GB  336 MB   261 GB    0%
>  2     261 GB  320 MB   261 GB    0%
> Total  783 GB  1.0 GB   782 GB    0%
> 
> Total virtual image size  1.0 GB
> 
> linux-rme9:/mnt # tgtadm --op show --mode target
> Target 1: iqn.2015.01.org.sheepdog
> System information:
> Driver: iscsi
> State: ready
> I_T nexus information:
> I_T nexus: 3
> Initiator: iqn.1996-04.de.suse:01:23a8f73738e7 alias: Fs-Server
> Connection: 0
> IP Address: 130.1.0.10
> LUN information:
> LUN: 0
> Type: controller
> SCSI ID: IET 0001
> SCSI SN: beaf10
> Size: 0 MB, Block size: 1
> Online: Yes
> Removable media: No
> Prevent removal: No
> Readonly: No
> SWP: No
> Thin-provisioning: No
> Backing store type: null
> Backing store path: None
> Backing store flags:
> LUN: 1
> Type: disk
> SCSI ID: IET 00010001
> SCSI SN: beaf11
> Size: 1074 MB, Block size: 512
> Online: Yes
> Removable media: No
> Prevent removal: No
> Readonly: No
> SWP: No
> Thin-provisioning: No
> Backing store type: sheepdog
> Backing store path: tcp:130.1.0.147:7000:Hu0
> Backing store flags:
> Account information:
> ACL information:
> ALL
> 
> 
> Client:
>  # iscsiadm -m node --targetname iqn.2015.01.org.sheepdog --portal 
> 130.1.0.147:3260 --rescan
> Rescanning session [sid: 4, target: iqn.2015.01.org.sheepdog, portal: 
> 130.1.0.147,3260]
>  # dd if=/dev/random of=/dev/sdg bs=2M
> dd: writing `/dev/sdg': No space left on device
> 0+13611539 records in
> 0+13611538 records out
> 1073741824 bytes (1.1 GB) copied, 956.511 s, 1.1 MB/s

Hmm, seems strange. For diagnosing, I have some questions:

1. Can you see any error messages in log files of sheep?
2. Could you provide lists of obj/ directories of sheep servers?
3. Is this reproducible even if you make a file system on the iSCSI
   target and put data on the file system?
4. Is this reproducible even if you append oflag=sync to the options of dd?
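As a side note, the space expectation in the report above (a 1G vdi costing 3*1G under a 3 copies policy) is easy to sketch numerically. This is only an illustration of the arithmetic, not sheepdog code; the function names are mine, and 4:2 refers to the erasure-coded redundancy policy benchmarked elsewhere in the thread:

```python
def replicated_cost(vdi_size_gb, copies):
    """Full replication stores `copies` complete copies of the data."""
    return vdi_size_gb * copies

def erasure_cost(vdi_size_gb, data_strips, parity_strips):
    """Erasure coding (d:p) stores d data strips plus p parity strips
    for every d strips of user data."""
    return vdi_size_gb * (data_strips + parity_strips) / data_strips

print(replicated_cost(1, 3))   # 1G vdi, 3 copies -> 3G raw
print(erasure_cost(1, 4, 2))   # 1G vdi, 4:2      -> 1.5G raw
```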

Thanks,
Hitoshi

> 
> Thanks!
> Hu
> 