Re: [ceph-users] Read Stalls with Multiple OSD Servers

2016-08-03 Thread Christoph Adomeit

-- 
Christoph Adomeit
GATWORKS GmbH
Reststrauch 191
41199 Moenchengladbach
Registered office: Moenchengladbach
Commercial register: Amtsgericht Moenchengladbach, HRB 6303
Managing directors:
Christoph Adomeit, Hans Wilhelm Terstappen

christoph.adom...@gatworks.de  Internet solutions at their finest
Phone +49 2166 9149-32  Fax +49 2166 9149-10


Re: [ceph-users] Read Stalls with Multiple OSD Servers

2016-08-02 Thread Helander, Thomas
Hi David,

There’s a good amount of backstory to our configuration, but I’m happy to 
report I found the source of my problem.

We were applying some “optimizations” for our 10GbE via sysctl, including 
disabling net.ipv4.tcp_sack. Re-enabling net.ipv4.tcp_sack resolved the issue.
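
For anyone who hits the same thing, a rough sketch of checking and re-enabling SACK on a node (assuming the old tuning was applied through sysctl.conf or a drop-in file; the file name below is just an example):

  # check the current value (1 = enabled, 0 = disabled)
  sysctl net.ipv4.tcp_sack

  # re-enable it at runtime
  sysctl -w net.ipv4.tcp_sack=1

  # remove or override the old "optimization" so the change survives a reboot,
  # e.g. set "net.ipv4.tcp_sack = 1" in /etc/sysctl.conf or /etc/sysctl.d/99-network-tuning.conf,
  # then reload:
  sysctl --system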

Thanks,
Tom


Re: [ceph-users] Read Stalls with Multiple OSD Servers

2016-08-01 Thread David Turner
Why are you running RAID6 OSDs?  Ceph's strength is having many individual OSDs that can 
fail and be replaced.  With your processors and RAM, you should be running these disks as 
individual OSDs (rough sketch below); that would make much better use of your dual-processor 
setup.  Ceph is sized at roughly one core per OSD, so the extra cores are more or less wasted 
in the storage node.  With only two storage nodes you also can't take advantage of many of 
Ceph's benefits.  From here, your setup looks better suited to a Gluster cluster than a Ceph 
cluster, though I don't know what your requirements are.
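
As a rough sketch (ceph-deploy and the host/device names below are placeholders, and the 
controller would first have to expose the drives as individual devices rather than a RAID6 
volume), one OSD per disk would look something like:

  # from the admin node, zap and create one OSD per physical disk
  ceph-deploy disk zap storage01:sdb
  ceph-deploy osd create storage01:sdb
  # ...repeat for sdc, sdd, etc., and for the second storage node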



David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943





Re: [ceph-users] Read Stalls with Multiple OSD Servers

2016-08-01 Thread Helander, Thomas
Hi David,

Thanks for the quick response and suggestion. I do have just a basic network 
config (one network, no VLANs) and am able to ping between the storage servers 
using hostnames and IPs.

Thanks,
Tom



Re: [ceph-users] Read Stalls with Multiple OSD Servers

2016-08-01 Thread David Turner
This could be explained by your osds not being able to communicate with each 
other.  We have 2 vlans between our storage nodes, the public and private 
networks for ceph to use.  We added 2 new nodes in a new rack on new switches 
and as soon as we added a single osd for one of them to the cluster, the 
peering never finished and we had a lot of blocked requests that never went 
away.

In testing we found that the rest of the cluster could not communicate with 
these nodes on the private vlan and after fixing the network switch config, 
everything worked perfectly for adding in the 2 new nodes.
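
When that happens, the stuck peering and blocked requests are visible with something like:

  ceph health detail           # lists slow/blocked requests and the osds involved
  ceph pg dump_stuck inactive
  ceph pg dump_stuck unclean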

If you are using a basic network configuration with only one network and/or VLAN, then this 
is likely not your issue.  But to be sure, test pinging between your nodes on every IP they 
have.
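
A quick way to do that from each node (the addresses below are placeholders for your own 
public/cluster IPs):

  for ip in 192.168.2.11 192.168.2.21 192.168.2.22; do ping -c 3 $ip; done

  # if your nc build supports -z, also confirm the osd ports are reachable
  # (osds listen on 6800-7300 by default)
  nc -zv 192.168.2.21 6800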



David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943






[ceph-users] Read Stalls with Multiple OSD Servers

2016-08-01 Thread Helander, Thomas
Hi,

I'm running a three server cluster (one monitor, two OSD) and am having a 
problem where after adding the second OSD server, my read rate drops 
significantly and eventually the reads stall (writes are improved as expected). 
Attached is a log of the rados benchmarks for the two configurations and below 
is my hardware configuration. I'm not using replicas (capacity is more 
important than uptime for our use case) and am using a single 10GbE network. 
The pool (rbd) is configured with 128 placement groups.
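
(For reference, that pool configuration is roughly equivalent to:

  ceph osd pool create rbd 128 128   # 128 placement groups
  ceph osd pool set rbd size 1       # single copy, no replicas

though the ceph.conf further down sets the same values through the pool defaults.)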

I've checked the CPU utilization of the ceph-osd processes and they all hover 
around 10% until the stall. After the stall, the CPU usage is 0% and the disks 
all show zero operations via iostat. Iperf reports 9.9Gb/s between the monitor 
and OSD servers.
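
(The checks described here are along the lines of the following; the hostname is a placeholder:

  top -p $(pgrep -d, ceph-osd)   # CPU usage of the ceph-osd processes
  iostat -x 5                    # per-disk operation counts
  iperf -s                       # on the monitor host
  iperf -c mon-host              # from each OSD server
)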

I'm looking for any advice/help on how to identify the source of this issue as 
my attempts so far have proven fruitless...

Monitor server:
2x E5-2680V3
32GB DDR4
2x 4TB HDD in RAID1 on an Avago/LSI 3108 with Cachevault, configured as 
write-back
10GbE

OSD servers:
2x E5-2680V3
128GB DDR4
2x 8+2 RAID6 using 8TB SAS12 drives on an Avago/LSI 9380 controller with 
Cachevault, configured as write-back.
- Each RAID6 is an OSD
10GbE

Thanks,

Tom Helander

KLA-Tencor
One Technology Dr | M/S 5-2042R | Milpitas, CA | 95035

CONFIDENTIALITY NOTICE: This e-mail transmission, and any documents, files or 
previous e-mail messages attached to it, may contain confidential information. 
If you are not the intended recipient, or a person responsible for delivering 
it to the intended recipient, you are hereby notified that any disclosure, 
copying, distribution or use of any of the information contained in or attached 
to this message is STRICTLY PROHIBITED. If you have received this transmission 
in error, please immediately notify us by reply e-mail at 
thomas.helan...@kla-tencor.com or by 
telephone at (408) 875-7819, and destroy the original transmission and its 
attachments without reading them or saving them to disk. Thank you.

[root@control01 system_initialize]# cat /etc/ceph/ceph.conf 
[global]
  fsid = 9501b36d-0ac0-44ec-b740-9906d092bac2
  mon initial members = control01
  mon host = 192.168.2.11:6789
  auth client required = cephx
  auth cluster required = cephx
  auth service required = cephx
  cluster network = 192.168.2.0/23
  osd crush chooseleaf type = 1
  osd pool default min size = 1
  osd pool default pg num = 128
  osd pool default size = 1
  public network = 192.168.2.0/23

[root@control01 system_initialize]# ceph -s
cluster 9501b36d-0ac0-44ec-b740-9906d092bac2
 health HEALTH_OK
 monmap e1: 1 mons at {control01=192.168.2.11:6789/0}
election epoch 2, quorum 0 control01
 osdmap e40: 2 osds: 2 up, 2 in
flags sortbitwise
  pgmap v1826: 128 pgs, 1 pools, 0 bytes data, 0 objects
102 MB used, 116 TB / 116 TB avail
 128 active+clean
 
[root@control01 system_initialize]# ceph osd tree
ID WEIGHT    TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 116.41660 root default 
-2 116.41660 host storage01   
 0  58.20830 osd.0   up  1.0  1.0 
 1  58.20830 osd.1   up  1.0  1.0
 
[root@control01 system_initialize]# rados -p rbd bench 60 write --no-cleanup
Maintaining 16 concurrent writes of 4194304 bytes for up to 60 seconds
Object prefix: benchmark_data_control01.adc_16939
  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
0   0 0 0 0 0 - 0
1  16   180   164   655.951   656 0.0712128 0.0943403
2  16   323   307   613.933   572 0.0132957 0.0882554
3  16   421   405   539.946   392 0.0121464  0.105845
    4  16   503   487   486.951   328 0.00786829  0.117621
5  16   650   634   507.149   588 0.0344602  0.125399
6  16   730   714   475.952   320 0.0465193  0.133839
7  16   804   788   450.241   296 0.0704427  0.133889
8  16   952   936   467.954   592  0.103373  0.135459
9  16  1078  1062   471.956   504  0.172915  0.134309
   10  16  1166  1150   459.956   352 0.0232164  0.136015
   11  16  1282  1266   460.32   464  0.226732  0.137086
   12  16  1378  1362   453.956   384  0.273416  0.140061
   13  16  1504  1488   457.802   504 0.0169217  0.138367
   14  16  1597  1581   451.671   372 0.0261604  0.140748
   15  16  1693  1677   447.157   384   0.30309  0.142337
   16  16  1774  1758   439.458   324 0.0165604  0.142555
   17  16  1913  1897   446.31