Re: [ceph-users] Ceph pool resize

2017-02-07 Thread Vikhyat Umrao
On Tue, Feb 7, 2017 at 12:15 PM, Patrick McGarry wrote:

> Moving this to ceph-user
>
> On Mon, Feb 6, 2017 at 3:51 PM, nigel davies  wrote:
> > Hey,
> >
> > I am helping to run a small two-node ceph cluster.
> >
> > We have recently bought a 3rd storage node and management want to
> > increase the replication from two to three.
> >
> > As soon as I changed the pool size from 2 to 3, the cluster goes into
> > warning.
>

Can you please share the output of the command below in a pastebin:

$ceph osd dump | grep -i pool

along with the decompiled crush map (crushmap.txt):

$ceph osd getcrushmap -o /tmp/crushmap
$crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
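For reference, the size change itself is usually just the following (a
sketch; replace <pool-name> with your pool, and note that min_size is a
separate setting worth checking as well):

$ceph osd pool get <pool-name> size
$ceph osd pool set <pool-name> size 3
$ceph osd pool get <pool-name> min_size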


> >
> >  health HEALTH_WARN
> > 512 pgs degraded
> > 512 pgs stuck unclean
> > 512 pgs undersized
> > recovery 5560/19162 objects degraded (29.016%)
> > election epoch 50, quorum 0,1
> >  osdmap e243: 20 osds: 20 up, 20 in
> > flags sortbitwise
> >   pgmap v79260: 2624 pgs, 3 pools, 26873 MB data, 6801 objects
> > 54518 MB used, 55808 GB / 55862 GB avail
> > 5560/19162 objects degraded (29.016%)
> > 2112 active+clean
> >  512 active+undersized+degraded
> >
> > The cluster is not recovering by itself; any help on this would be
> > greatly appreciated.
> >
> >
>
>
>
> --
>
> Best Regards,
>
> Patrick McGarry
> Director Ceph Community || Red Hat
> http://ceph.com  ||  http://community.redhat.com
> @scuttlemonkey || @ceph
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rgw geo-replication

2015-04-24 Thread Vikhyat Umrao


On 04/24/2015 05:17 PM, GuangYang wrote:

Hi cephers,
Recently I have been investigating the geo-replication of rgw. From the example
at [1], it looks like if we want to do data geo-replication between us-east and
us-west, we will need to build *one* (super) RADOS cluster which spans us-east
and us-west, and only deploy two different radosgw instances. Is my understanding
correct here?
You can do that, but it is not recommended; I think the documentation says it
is much better to have two clusters, each with its own radosgw servers.

https://ceph.com/docs/master/radosgw/federated-config/#background

1. You may deploy a single Ceph Storage Cluster with a federated
architecture if you have low latency network connections (this isn't
recommended).


2. You may also deploy one Ceph Storage Cluster per region with a
separate set of pools for each zone (typical).


3. You may also deploy a separate Ceph Storage Cluster for each zone if 
your requirements and resources warrant this level of redundancy.
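For option 2, the per-gateway section in ceph.conf ends up looking roughly
like this (the region/zone/pool names here are only illustrative; the
federated-config document above has the authoritative walkthrough):

[client.radosgw.us-east-1]
    host = <gateway-host>
    rgw region = us
    rgw region root pool = .us.rgw.root
    rgw zone = us-east
    rgw zone root pool = .us-east.rgw.root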


Regards,
Vikhyat


If that is the case, is there any reason preventing us from deploying two
completely isolated clusters (not only rgw, but also mon and osd) and
replicating data between them?

[1] 
https://ceph.com/docs/master/radosgw/federated-config/#multi-site-data-replication


Thanks,
Guang   


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph site is very slow

2015-04-16 Thread Vikhyat Umrao

I hope this will help you: http://docs.ceph.com/docs/master/

Regards,
Vikhyat
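
P.S. If ceph.com itself stays unreachable, you can also repoint your package
sources at the EU mirror; on a Debian/Ubuntu host that is roughly the
following (adjust the file name and repo path for your distro and release):

sudo sed -i 's|http://ceph.com|http://eu.ceph.com|g' /etc/apt/sources.list.d/ceph.list
sudo apt-get update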

On 04/16/2015 02:39 PM, unixkeeper wrote:

Is it still under DDoS attack?
Is there a mirror site where I can get the doc guide?
Thanks a lot



On Wed, Apr 15, 2015 at 11:32 PM, Gregory Farnum g...@gregs42.com wrote:


People are working on it but I understand there was/is a DoS
attack going on. :/
-Greg

On Wed, Apr 15, 2015 at 1:50 AM Ignazio Cassano ignaziocass...@gmail.com wrote:

Many thanks

2015-04-15 10:44 GMT+02:00 Wido den Hollander w...@42on.com:

On 04/15/2015 10:20 AM, Ignazio Cassano wrote:
 Hi all,
 why is ceph.com so slow?

Not known right now. But you can try eu.ceph.com for your packages and
downloads.

 It is impossible to download the files needed to install ceph.
 Regards
 Ignazio






--
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OSD replacement

2015-04-14 Thread Vikhyat Umrao

Hi,

I hope you are following this:
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual

After removing the OSD successfully, run the following command:

# ceph-deploy --overwrite-conf osd create osd-host:device-path --zap-disk

It will give the new OSD the same id as the old one had for that disk.
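
For completeness, the removal half from that document boils down to roughly
the following (a sketch; replace <id> with the failed OSD's id and follow
the doc for your exact version):

# ceph osd out osd.<id>
# service ceph stop osd.<id>     (run on the OSD's host)
# ceph osd crush remove osd.<id>
# ceph auth del osd.<id>
# ceph osd rm osd.<id>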

Regards,
Vikhyat


On 04/14/2015 05:54 PM, Corey Kovacs wrote:
I am fairly new to ceph and so far things are going great. That said,
when I try to replace a failed OSD, I can't seem to get it to use the
same OSD id#. I have gotten to the point where a "ceph osd create" does
use the correct id#, but when I try to use ceph-deploy to instantiate
the replacement, I get a working OSD which uses the next highest number.


My setup is


18 nodes
12 OSD's each
216 total OSD's
Ceph 0.80.7
ceph-deploy 1.15.22( also tried 1.15.11)
RHEL 6.6

All of the docs I've read say that if what has happened to me does
occur, then it's likely the original OSD references are not quite
cleared out. So my questions are:


1. How do I track down all traces of the old OSD?
2. Can someone point me to a known good set of instructions for using
ceph-deploy to replace an OSD using the same ID?

3. Is using the same ID a deprecated idea?


Thanks

-C




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph Performance vs PG counts

2015-02-10 Thread Vikhyat Umrao

Hi,

Just a heads up, in case you are not aware of this tool:
http://ceph.com/pgcalc/
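
As a rough rule of thumb (this is essentially what the calculator works out
for you), total PGs per pool is about (number of OSDs x 100) / replica count,
rounded up to the nearest power of two. For example, 24 OSDs with size 3
gives (24 x 100) / 3 = 800, which rounds up to 1024.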

Regards,
Vikhyat

On 02/11/2015 09:11 AM, Sumit Gaur wrote:

Hi,
I am not sure why PG numbers are not given much importance in the ceph
documents; I am seeing huge variation in performance numbers when
changing the PG count.

Just an example

Without SSD:
36 OSD HDD = PG count 2048 gives me random write (1024K block size)
performance of 550 MBps


With SSD:
6 SSD for journals + 24 OSD HDD = PG count 2048 gives me random write
(1024K block size) performance of 250 MBps

If I change it to
6 SSD for journals + 24 OSD HDD = PG count 512, I get random write
(1024K block size) performance of 700 MBps.


The variation with PG count makes the SSD setup look bad in the numbers.
I am a bit confused by this behaviour.


Thanks
sumit




On Mon, Feb 9, 2015 at 11:36 AM, Gregory Farnum g...@gregs42.com wrote:


On Sun, Feb 8, 2015 at 6:00 PM, Sumit Gaur sumitkg...@gmail.com wrote:
 Hi
 I have installed a 6-node ceph cluster and am doing a performance
 benchmark for it using Nova VMs. What I have observed is that FIO
 random write reports around 250 MBps for 1M block size and 4096 PGs,
 and 650 MBps for 1M block size and 2048 PGs. Can somebody let me know
 if I am missing any ceph architecture point here? As per my
 understanding, PG numbers are mainly involved in calculating the hash
 and should not affect performance so much.

PGs are also serialization points within the codebase, so depending on
how you're testing you can run into contention if you have multiple
objects within a single PG that you're trying to write to at once.
This isn't normally a problem, but for a single benchmark run the
random collisions can become noticeable.
-Greg






___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vikhyat Umrao

Hello,

Your OSDs do not have weights; please assign some weight to your ceph
cluster OSDs, as Udo said in his last comment.


osd crush reweight <name> <float[0.0-]>   change <name>'s weight to
<weight> in crush map

sudo ceph osd crush reweight 0.0095 osd.0 to osd.5.
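
(For context: CRUSH weights conventionally track the OSD size in TiB, so a
10GB device works out to about 10/1024 = 0.0098, which is presumably where a
value like 0.0095 comes from; an 8GB device would only be about 0.0078.)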

Regards,
Vikhyat

On 02/10/2015 06:11 PM, B L wrote:

Hello Udo,

Thanks for your answer .. 2 questions here:

1- Does what you say mean that I have to remove my drive devices (8GB
each) and add new ones with at least 10GB?
2- Shall I manually re-weight after disk creation and preparation
using this command (ceph osd reweight osd.2 1.0), or will things work
automatically without too much fuss when the disk drives are 10GB or
larger?


Beanos


On Feb 10, 2015, at 2:26 PM, Udo Lembke ulem...@polarzone.de wrote:


Hi,
you will get further trouble, because your weights are not correct.

You need a weight of at least 0.01 for each OSD. This means your OSDs must
be 10GB or greater!


Udo

On 10.02.2015 12:22, B L wrote:

Hi Vickie,

My OSD tree looks like this:

ceph@ceph-node3:/home/ubuntu$ ceph osd tree
# id    weight  type name               up/down reweight
-1      0       root default
-2      0           host ceph-node1
0       0               osd.0           up      1
1       0               osd.1           up      1
-3      0           host ceph-node3
2       0               osd.2           up      1
3       0               osd.3           up      1
-4      0           host ceph-node2
4       0               osd.4           up      1
5       0               osd.5           up      1










___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vikhyat Umrao

Oh, I swapped the positions of the osd name and the weight; it should be:

ceph osd crush reweight osd.0 0.0095   (and so on for osd.1 to osd.5)
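
If it helps, a small shell loop to apply that weight to all six OSDs (adjust
the ids and the value for your setup):

for i in 0 1 2 3 4 5; do ceph osd crush reweight osd.$i 0.0095; done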

Regards,
Vikhyat

On 02/10/2015 07:31 PM, B L wrote:

Thanks Vikhyat,

As suggested ..

ceph@ceph-node1:/home/ubuntu$ ceph osd crush reweight 0.0095 osd.0

Invalid command:  osd.0 doesn't represent a float
osd crush reweight <name> <float[0.0-]> :  change <name>'s weight to
<weight> in crush map

Error EINVAL: invalid command

What do you think?


On Feb 10, 2015, at 3:18 PM, Vikhyat Umrao vum...@redhat.com wrote:


sudo ceph osd crush reweight 0.0095 osd.0 to osd.5




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] [rbd] Ceph RBD kernel client using with cephx

2015-02-09 Thread Vikhyat Umrao

Hi,

While using the rbd kernel client with cephx, an admin user without the
admin keyring was not able to map an rbd image to a block device, which is
the expected workflow.

But the issue is that once I unmap the rbd image without the admin keyring,
it allows the image to be unmapped. As per my understanding this should not
be the case; it should not be allowed and should give an error, as it did
while mapping.

Is this normal behaviour, or am I missing something? Maybe it needs a fix
(bug)?
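
For comparison, mapping with an explicit non-admin user (assuming that user
has the required caps) would look something like this:

$ sudo rbd map testcephx --id dell-per620-1 --keyring /etc/ceph/ceph.client.dell-per620-1.keyring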




[ceph@dell-per620-1 ceph]$ ls -l /etc/ceph/
total 16
-rw-r--r--. 1 root root  63 Feb  9 22:30 ceph.client.admin.keyring
-rw-r--r--. 1 root root  71 Feb  9 22:23 ceph.client.dell-per620-1.keyring
-rw-r--r--. 1 root root 467 Feb  9 22:22 ceph.conf
-rwxr-xr-x. 1 root root  92 Oct 15 01:03 rbdmap
[ceph@dell-per620-1 ceph]$


[ceph@dell-per620-1 ceph]$ sudo mv /etc/ceph/ceph.client.admin.keyring 
/tmp/.

[ceph@dell-per620-1 ceph]$ ls -l /etc/ceph/
total 12
-rw-r--r--. 1 root root  71 Feb  9 22:23 ceph.client.dell-per620-1.keyring
-rw-r--r--. 1 root root 467 Feb  9 22:22 ceph.conf
-rwxr-xr-x. 1 root root  92 Oct 15 01:03 rbdmap
[ceph@dell-per620-1 ceph]$

[ceph@dell-per620-1 ceph]$ sudo rbd map testcephx
rbd: add failed: (22) Invalid argument

[ceph@dell-per620-1 ceph]$ sudo dmesg
[437447.308705] libceph: no secret set (for auth_x protocol)
[437447.308761] libceph: error -22 on auth protocol 2 init
[437447.308809] libceph: client4954 fsid 
d57d909f-8adf-46aa-8cc6-3168974df332


[ceph@dell-per620-1 ceph]$ sudo mv /tmp/ceph.client.admin.keyring /etc/ceph/
[ceph@dell-per620-1 ceph]$ ls -l /etc/ceph/
total 16
-rw-r--r--. 1 root root  63 Feb  9 22:30 ceph.client.admin.keyring
-rw-r--r--. 1 root root  71 Feb  9 22:23 ceph.client.dell-per620-1.keyring
-rw-r--r--. 1 root root 467 Feb  9 22:22 ceph.conf
-rwxr-xr-x. 1 root root  92 Oct 15 01:03 rbdmap

[ceph@dell-per620-1 ceph]$ sudo rbd map testcephx

[ceph@dell-per620-1 ceph]$ sudo rbd showmapped
id pool image     snap device
0  rbd  testcephx -    /dev/rbd0

[ceph@dell-per620-1 ceph]$ sudo dmesg
[437447.308705] libceph: no secret set (for auth_x protocol)
[437447.308761] libceph: error -22 on auth protocol 2 init
[437447.308809] libceph: client4954 fsid 
d57d909f-8adf-46aa-8cc6-3168974df332
[437496.444701] libceph: client4961 fsid 
d57d909f-8adf-46aa-8cc6-3168974df332

[437496.447833] libceph: mon1 10.65.200.118:6789 session established
[437496.482913]  rbd0: unknown partition table
[437496.483037] rbd: rbd0: added with size 0x800
[ceph@dell-per620-1 ceph]$

[ceph@dell-per620-1 ceph]$ sudo mv /etc/ceph/ceph.client.admin.keyring 
/tmp/.

[ceph@dell-per620-1 ceph]$ ls -l /etc/ceph/
total 12
-rw-r--r--. 1 root root  71 Feb  9 22:23 ceph.client.dell-per620-1.keyring
-rw-r--r--. 1 root root 467 Feb  9 22:22 ceph.conf
-rwxr-xr-x. 1 root root  92 Oct 15 01:03 rbdmap

[ceph@dell-per620-1 ceph]$ sudo rbd unmap /dev/rbd/rbd/testcephx
--- As we can see here, it has allowed unmapping the rbd image
without the keyring.


[ceph@dell-per620-1 ceph]$ sudo rbd showmapped --- no mapped image

-

Regards,
Vikhyat











___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com