Re: [ceph-users] application not enabled on pool

2017-08-04 Thread Roger Brown
Ah, yes. This cluster has had all the versions of Luminous on it. It
started with Kraken and went through every Luminous release candidate to
date.
So I guess I'll just do the `ceph osd pool application enable` commands and
be done with it.
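
Something along these lines, I believe (a sketch; extend the loop with the
rest of the pools from the health warning, all of which are rgw pools in my
case):

$ for pool in .rgw.root default.rgw.control default.rgw.gc \
      default.rgw.log default.rgw.buckets.data; do
      ceph osd pool application enable "$pool" rgw
  done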

I appreciate your assistance.

Roger


On Fri, Aug 4, 2017 at 4:32 PM Gregory Farnum  wrote:

> Roger, was this a test cluster that was already running Luminous? The
> auto-assignment logic won't work in that case (it's already got the
> CEPH_RELEASE_LUMINOUS feature set which we're using to run it).
>
> I'm not sure if there's a good way to do that upgrade that's worth the
> effort.
> -Greg
>
> On Fri, Aug 4, 2017 at 9:21 AM Gregory Farnum  wrote:
>
>> Yes. https://github.com/ceph/ceph/blob/master/src/mon/OSDMonitor.cc#L1069
>>
>> On Fri, Aug 4, 2017 at 9:14 AM David Turner 
>> wrote:
>>
>>> Should they be auto-marked if you upgraded an existing cluster to
>>> Luminous?
>>>
>>> On Fri, Aug 4, 2017 at 12:13 PM Gregory Farnum 
>>> wrote:
>>>
 All those pools should have been auto-marked as owned by rgw though. We
 do have a ticket around that (http://tracker.ceph.com/issues/20891)
 but so far it's just confusing.
 -Greg

 On Fri, Aug 4, 2017 at 9:07 AM Roger Brown 
 wrote:

> Got it, thanks!
>
> On Fri, Aug 4, 2017 at 9:48 AM David Turner 
> wrote:
>
>> In the 12.1.2 release notes it stated...
>>
>>   Pools are now expected to be associated with the application using
>> them.
>>   Upon completing the upgrade to Luminous, the cluster will attempt
>> to associate
>>   existing pools to known applications (i.e. CephFS, RBD, and RGW).
>> In-use pools
>>   that are not associated to an application will generate a health
>> warning. Any
>>   unassociated pools can be manually associated using the new
>>   "ceph osd pool application enable" command. For more details see
>>   "Associate Pool to Application" in the documentation.
>>
>> It is always a good idea to read the release notes before upgrading
>> to a new version of Ceph.
>>
>> On Fri, Aug 4, 2017 at 10:29 AM Roger Brown 
>> wrote:
>>
>>> Is this something new in Luminous 12.1.2, or did I break something?
>>> Stuff still seems to function despite the warnings.
>>>
>>> $ ceph health detail
>>> 
>>> POOL_APP_NOT_ENABLED application not enabled on 14 pool(s)
>>> application not enabled on pool 'default.rgw.buckets.non-ec'
>>> application not enabled on pool 'default.rgw.control'
>>> application not enabled on pool 'default.rgw.data.root'
>>> application not enabled on pool 'default.rgw.gc'
>>> application not enabled on pool 'default.rgw.lc'
>>> application not enabled on pool 'default.rgw.log'
>>> application not enabled on pool 'default.rgw.users.uid'
>>> application not enabled on pool 'default.rgw.users.email'
>>> application not enabled on pool 'default.rgw.users.keys'
>>> application not enabled on pool 'default.rgw.buckets.index'
>>> application not enabled on pool 'default.rgw.users.swift'
>>> application not enabled on pool '.rgw.root'
>>> application not enabled on pool 'default.rgw.reshard'
>>> application not enabled on pool 'default.rgw.buckets.data'
>>> use 'ceph osd pool application enable <pool-name> <app-name>',
>>> where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom
>>> applications.
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] application not enabled on pool

2017-08-04 Thread Gregory Farnum
Roger, was this a test cluster that was already running Luminous? The
auto-assignment logic won't work in that case (it's already got the
CEPH_RELEASE_LUMINOUS feature set which we're using to run it).

I'm not sure if there's a good way to do that upgrade that's worth the
effort.
-Greg

On Fri, Aug 4, 2017 at 9:21 AM Gregory Farnum  wrote:

> Yes. https://github.com/ceph/ceph/blob/master/src/mon/OSDMonitor.cc#L1069
>
> On Fri, Aug 4, 2017 at 9:14 AM David Turner  wrote:
>
>> Should they be auto-marked if you upgraded an existing cluster to
>> Luminous?
>>
>> On Fri, Aug 4, 2017 at 12:13 PM Gregory Farnum 
>> wrote:
>>
>>> All those pools should have been auto-marked as owned by rgw though. We
>>> do have a ticket around that (http://tracker.ceph.com/issues/20891) but
>>> so far it's just confusing.
>>> -Greg
>>>
>>> On Fri, Aug 4, 2017 at 9:07 AM Roger Brown 
>>> wrote:
>>>
 Got it, thanks!

 On Fri, Aug 4, 2017 at 9:48 AM David Turner 
 wrote:

> In the 12.1.2 release notes it stated...
>
>   Pools are now expected to be associated with the application using
> them.
>   Upon completing the upgrade to Luminous, the cluster will attempt to
> associate
>   existing pools to known applications (i.e. CephFS, RBD, and RGW).
> In-use pools
>   that are not associated to an application will generate a health
> warning. Any
>   unassociated pools can be manually associated using the new
>   "ceph osd pool application enable" command. For more details see
>   "Associate Pool to Application" in the documentation.
>
> It is always a good idea to read the release notes before upgrading to
> a new version of Ceph.
>
> On Fri, Aug 4, 2017 at 10:29 AM Roger Brown 
> wrote:
>
>> Is this something new in Luminous 12.1.2, or did I break something?
>> Stuff still seems to function despite the warnings.
>>
>> $ ceph health detail
>> 
>> POOL_APP_NOT_ENABLED application not enabled on 14 pool(s)
>> application not enabled on pool 'default.rgw.buckets.non-ec'
>> application not enabled on pool 'default.rgw.control'
>> application not enabled on pool 'default.rgw.data.root'
>> application not enabled on pool 'default.rgw.gc'
>> application not enabled on pool 'default.rgw.lc'
>> application not enabled on pool 'default.rgw.log'
>> application not enabled on pool 'default.rgw.users.uid'
>> application not enabled on pool 'default.rgw.users.email'
>> application not enabled on pool 'default.rgw.users.keys'
>> application not enabled on pool 'default.rgw.buckets.index'
>> application not enabled on pool 'default.rgw.users.swift'
>> application not enabled on pool '.rgw.root'
>> application not enabled on pool 'default.rgw.reshard'
>> application not enabled on pool 'default.rgw.buckets.data'
>> use 'ceph osd pool application enable <pool-name> <app-name>',
>> where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom
>> applications.
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
> ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

>>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] broken parent/child relationship

2017-08-04 Thread Shawn Edwards
I have a child rbd that doesn't acknowledge its parent. This is with
Kraken (11.2.0).

The misbehaving child was 'flatten'ed from its parent, but now I can't
remove the snapshot because it thinks it still has a child.

root@tyr-ceph-mon0:~# rbd snap ls
tyr-p0/51774a43-8d67-4d6d-9711-d0b1e4e6b5e9_delete
SNAPID NAME                                 SIZE
  2530 c20a31c5-fd88-4104-8579-a6b3cd723f2b 1000 GB
root@tyr-ceph-mon0:~# rbd children
tyr-p0/51774a43-8d67-4d6d-9711-d0b1e4e6b5e9_delete@c20a31c5-fd88-4104-8579-a6b3cd723f2b
tyr-p0/a56eae5f-fd35-4299-bcdc-65839a00f14c
root@tyr-ceph-mon0:~# rbd flatten tyr-p0/a56eae5f-fd35-4299-bcdc-65839a00f14c
Image flatten: 0% complete...failed.
rbd: flatten error: (22) Invalid argument
2017-08-04 08:33:09.719796 7f5bfb7d53c0 -1 librbd::Operations: image
has no parent
root@tyr-ceph-mon0:~# rbd snap unprotect
tyr-p0/51774a43-8d67-4d6d-9711-d0b1e4e6b5e9_delete@c20a31c5-fd88-4104-8579-a6b3cd723f2b
2017-08-04 08:34:20.649532 7f91f5ffb700 -1
librbd::SnapshotUnprotectRequest: cannot unprotect: at least 1
child(ren) [1d0bce6194cfc3] in pool 'tyr-p0'
2017-08-04 08:34:20.649545 7f91f5ffb700 -1
librbd::SnapshotUnprotectRequest: encountered error: (16) Device or
resource busy
2017-08-04 08:34:20.649550 7f91f5ffb700 -1
librbd::SnapshotUnprotectRequest: 0x55d69346da40
should_complete_error: ret_val=-16
2017-08-04 08:34:20.651800 7f91f5ffb700 -1
librbd::SnapshotUnprotectRequest: 0x55d69346da40
should_complete_error: ret_val=-16
rbd: unprotecting snap failed: (16) Device or resource busy
root@tyr-ceph-mon0:~# rbd info tyr-p0/a56eae5f-fd35-4299-bcdc-65839a00f14c
rbd image 'a56eae5f-fd35-4299-bcdc-65839a00f14c':
size 1000 GB in 256000 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.1d0bce6194cfc3
format: 2
features: layering
flags:
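
If anyone has a pointer for where to dig next: as far as I understand it,
the parent/child links for format 2 images are tracked in the rbd_children
object in the pool, so dumping its omap should show what the cluster still
thinks is a child of that snapshot (a sketch; I may be wrong about the
object name):

root@tyr-ceph-mon0:~# rados -p tyr-p0 listomapvals rbd_children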
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph activities at LCA

2017-08-04 Thread Leonardo Vaz
Dear Cephers,

As most of you know, the deadline for submitting talks for LCA (Linux Conf
Australia) is this Saturday (Aug 4), and we would like to know if anyone
here is planning to participate in the conference and present talks on Ceph.

I was just talking with Sage, and besides the talks submitted to the main
LCA program we may have a Ceph Miniconf at the conference.

Kindest regards,

Leo

-- 
Leonardo Vaz
Ceph Community Manager
Open Source and Standards Team
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Pg inconsistent / export_files error -5

2017-08-04 Thread Marc Roos
 
I am still on 12.1.1; it is still a test 3-node cluster, nothing much 
happening. The 2nd node had some issues a while ago: I had an osd.8 that 
didn't want to start, so I replaced it.



-Original Message-
From: David Turner [mailto:drakonst...@gmail.com] 
Sent: vrijdag 4 augustus 2017 17:52
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Pg inconsistent / export_files error -5

It _should_ be enough. What happened in your cluster recently? Power 
outage, OSD failures, upgrade, added new hardware, any changes at all? 
What is your Ceph version?

On Fri, Aug 4, 2017 at 11:22 AM Marc Roos  
wrote:



I have got a placement group inconsistency, and saw a manual where
you can export and import this on another osd. But I am getting an
export error on every osd.

What does this export_files error -5 actually mean? I thought 3 copies
should be enough to secure your data.


> PG_DAMAGED Possible data damage: 1 pg inconsistent
>pg 17.36 is active+clean+inconsistent, acting [9,0,12]


> 2017-08-04 05:39:51.534489 7f2f623d6700 -1 log_channel(cluster) log
[ERR] : 17.36 soid
17:6ca1f70a:::rbd_data.1f114174b0dc51.0974:4: failed to pick
suitable object info
> 2017-08-04 05:41:12.715393 7f2f623d6700 -1 log_channel(cluster) log
[ERR] : 17.36 deep-scrub 3 errors
> 2017-08-04 15:21:12.445799 7f2f623d6700 -1 log_channel(cluster) log
[ERR] : 17.36 soid
17:6ca1f70a:::rbd_data.1f114174b0dc51.0974:4: failed to pick
suitable object info
> 2017-08-04 15:22:35.646635 7f2f623d6700 -1 log_channel(cluster) log
[ERR] : 17.36 repair 3 errors, 0 fixed

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --pgid 17.36
--op export --file /tmp/recover.17.36

...
Read #17:6c9f811c:::rbd_data.1b42f52ae8944a.1a32:head#
Read #17:6ca035fc:::rbd_data.1fff61238e1f29.b31a:head#
Read #17:6ca0b4f8:::rbd_data.1fff61238e1f29.6fcc:head#
Read #17:6ca0ffbc:::rbd_data.1fff61238e1f29.a214:head#
Read #17:6ca10b29:::rbd_data.1fff61238e1f29.9923:head#
Read #17:6ca11ab9:::rbd_data.1fa8ef2ae8944a.11b4:head#
Read #17:6ca13bed:::rbd_data.1f114174b0dc51.02c6:head#
Read #17:6ca1a791:::rbd_data.1fff61238e1f29.f101:head#
Read #17:6ca1f70a:::rbd_data.1f114174b0dc51.0974:4#
export_files error -5
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs increase max file size

2017-08-04 Thread Brady Deetz
https://www.spinics.net/lists/ceph-users/msg36285.html
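
Short version from that thread (a sketch; assumes your filesystem is named
cephfs, and the value is in bytes):

$ ceph fs set cephfs max_file_size 21990232555520   # 20 TB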

On Aug 4, 2017 8:28 AM, "Rhian Resnick"  wrote:

> Morning,
>
>
> We ran into an issue with the default max file size of a cephfs file. Is
> it possible to increase this value to 20 TB from 1 TB without recreating
> the file system?
>
>
> Rhian Resnick
>
> Assistant Director Middleware and HPC
>
> Office of Information Technology
>
>
> Florida Atlantic University
>
> 777 Glades Road, CM22, Rm 173B
>
> Boca Raton, FL 33431
>
> Phone 561.297.2647
>
> Fax 561.297.0222
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] application not enabled on pool

2017-08-04 Thread Gregory Farnum
Yes. https://github.com/ceph/ceph/blob/master/src/mon/OSDMonitor.cc#L1069

On Fri, Aug 4, 2017 at 9:14 AM David Turner  wrote:

> Should they be auto-marked if you upgraded an existing cluster to Luminous?
>
> On Fri, Aug 4, 2017 at 12:13 PM Gregory Farnum  wrote:
>
>> All those pools should have been auto-marked as owned by rgw though. We
>> do have a ticket around that (http://tracker.ceph.com/issues/20891) but
>> so far it's just confusing.
>> -Greg
>>
>> On Fri, Aug 4, 2017 at 9:07 AM Roger Brown  wrote:
>>
>>> Got it, thanks!
>>>
>>> On Fri, Aug 4, 2017 at 9:48 AM David Turner 
>>> wrote:
>>>
 In the 12.1.2 release notes it stated...

   Pools are now expected to be associated with the application using
 them.
   Upon completing the upgrade to Luminous, the cluster will attempt to
 associate
   existing pools to known applications (i.e. CephFS, RBD, and RGW).
 In-use pools
   that are not associated to an application will generate a health
 warning. Any
   unassociated pools can be manually associated using the new
   "ceph osd pool application enable" command. For more details see
   "Associate Pool to Application" in the documentation.

 It is always a good idea to read the release notes before upgrading to
 a new version of Ceph.

 On Fri, Aug 4, 2017 at 10:29 AM Roger Brown 
 wrote:

> Is this something new in Luminous 12.1.2, or did I break something?
> Stuff still seems to function despite the warnings.
>
> $ ceph health detail
> 
> POOL_APP_NOT_ENABLED application not enabled on 14 pool(s)
> application not enabled on pool 'default.rgw.buckets.non-ec'
> application not enabled on pool 'default.rgw.control'
> application not enabled on pool 'default.rgw.data.root'
> application not enabled on pool 'default.rgw.gc'
> application not enabled on pool 'default.rgw.lc'
> application not enabled on pool 'default.rgw.log'
> application not enabled on pool 'default.rgw.users.uid'
> application not enabled on pool 'default.rgw.users.email'
> application not enabled on pool 'default.rgw.users.keys'
> application not enabled on pool 'default.rgw.buckets.index'
> application not enabled on pool 'default.rgw.users.swift'
> application not enabled on pool '.rgw.root'
> application not enabled on pool 'default.rgw.reshard'
> application not enabled on pool 'default.rgw.buckets.data'
> use 'ceph osd pool application enable <pool-name> <app-name>',
> where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom
> applications.
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
 ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] application not enabled on pool

2017-08-04 Thread David Turner
Should they be auto-marked if you upgraded an existing cluster to Luminous?

On Fri, Aug 4, 2017 at 12:13 PM Gregory Farnum  wrote:

> All those pools should have been auto-marked as owned by rgw though. We do
> have a ticket around that (http://tracker.ceph.com/issues/20891) but so
> far it's just confusing.
> -Greg
>
> On Fri, Aug 4, 2017 at 9:07 AM Roger Brown  wrote:
>
>> Got it, thanks!
>>
>> On Fri, Aug 4, 2017 at 9:48 AM David Turner 
>> wrote:
>>
>>> In the 12.1.2 release notes it stated...
>>>
>>>   Pools are now expected to be associated with the application using
>>> them.
>>>   Upon completing the upgrade to Luminous, the cluster will attempt to
>>> associate
>>>   existing pools to known applications (i.e. CephFS, RBD, and RGW).
>>> In-use pools
>>>   that are not associated to an application will generate a health
>>> warning. Any
>>>   unassociated pools can be manually associated using the new
>>>   "ceph osd pool application enable" command. For more details see
>>>   "Associate Pool to Application" in the documentation.
>>>
>>> It is always a good idea to read the release notes before upgrading to a
>>> new version of Ceph.
>>>
>>> On Fri, Aug 4, 2017 at 10:29 AM Roger Brown 
>>> wrote:
>>>
 Is this something new in Luminous 12.1.2, or did I break something?
 Stuff still seems to function despite the warnings.

 $ ceph health detail
 
 POOL_APP_NOT_ENABLED application not enabled on 14 pool(s)
 application not enabled on pool 'default.rgw.buckets.non-ec'
 application not enabled on pool 'default.rgw.control'
 application not enabled on pool 'default.rgw.data.root'
 application not enabled on pool 'default.rgw.gc'
 application not enabled on pool 'default.rgw.lc'
 application not enabled on pool 'default.rgw.log'
 application not enabled on pool 'default.rgw.users.uid'
 application not enabled on pool 'default.rgw.users.email'
 application not enabled on pool 'default.rgw.users.keys'
 application not enabled on pool 'default.rgw.buckets.index'
 application not enabled on pool 'default.rgw.users.swift'
 application not enabled on pool '.rgw.root'
 application not enabled on pool 'default.rgw.reshard'
 application not enabled on pool 'default.rgw.buckets.data'
 use 'ceph osd pool application enable <pool-name> <app-name>',
 where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom
 applications.

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

>>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] application not enabled on pool

2017-08-04 Thread Gregory Farnum
All those pools should have been auto-marked as owned by rgw though. We do
have a ticket around that (http://tracker.ceph.com/issues/20891) but so far
it's just confusing.
-Greg
On Fri, Aug 4, 2017 at 9:07 AM Roger Brown  wrote:

> Got it, thanks!
>
> On Fri, Aug 4, 2017 at 9:48 AM David Turner  wrote:
>
>> In the 12.1.2 release notes it stated...
>>
>>   Pools are now expected to be associated with the application using
>> them.
>>   Upon completing the upgrade to Luminous, the cluster will attempt to
>> associate
>>   existing pools to known applications (i.e. CephFS, RBD, and RGW).
>> In-use pools
>>   that are not associated to an application will generate a health
>> warning. Any
>>   unassociated pools can be manually associated using the new
>>   "ceph osd pool application enable" command. For more details see
>>   "Associate Pool to Application" in the documentation.
>>
>> It is always a good idea to read the release notes before upgrading to a
>> new version of Ceph.
>>
>> On Fri, Aug 4, 2017 at 10:29 AM Roger Brown 
>> wrote:
>>
>>> Is this something new in Luminous 12.1.2, or did I break something?
>>> Stuff still seems to function despite the warnings.
>>>
>>> $ ceph health detail
>>> 
>>> POOL_APP_NOT_ENABLED application not enabled on 14 pool(s)
>>> application not enabled on pool 'default.rgw.buckets.non-ec'
>>> application not enabled on pool 'default.rgw.control'
>>> application not enabled on pool 'default.rgw.data.root'
>>> application not enabled on pool 'default.rgw.gc'
>>> application not enabled on pool 'default.rgw.lc'
>>> application not enabled on pool 'default.rgw.log'
>>> application not enabled on pool 'default.rgw.users.uid'
>>> application not enabled on pool 'default.rgw.users.email'
>>> application not enabled on pool 'default.rgw.users.keys'
>>> application not enabled on pool 'default.rgw.buckets.index'
>>> application not enabled on pool 'default.rgw.users.swift'
>>> application not enabled on pool '.rgw.root'
>>> application not enabled on pool 'default.rgw.reshard'
>>> application not enabled on pool 'default.rgw.buckets.data'
>>> use 'ceph osd pool application enable <pool-name> <app-name>', where
>>> <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] application not enabled on pool

2017-08-04 Thread Roger Brown
Got it, thanks!

On Fri, Aug 4, 2017 at 9:48 AM David Turner  wrote:

> In the 12.1.2 release notes it stated...
>
>   Pools are now expected to be associated with the application using them.
>   Upon completing the upgrade to Luminous, the cluster will attempt to
> associate
>   existing pools to known applications (i.e. CephFS, RBD, and RGW). In-use
> pools
>   that are not associated to an application will generate a health
> warning. Any
>   unassociated pools can be manually associated using the new
>   "ceph osd pool application enable" command. For more details see
>   "Associate Pool to Application" in the documentation.
>
> It is always a good idea to read the release notes before upgrading to a
> new version of Ceph.
>
> On Fri, Aug 4, 2017 at 10:29 AM Roger Brown  wrote:
>
>> Is this something new in Luminous 12.1.2, or did I break something? Stuff
>> still seems to function despite the warnings.
>>
>> $ ceph health detail
>> 
>> POOL_APP_NOT_ENABLED application not enabled on 14 pool(s)
>> application not enabled on pool 'default.rgw.buckets.non-ec'
>> application not enabled on pool 'default.rgw.control'
>> application not enabled on pool 'default.rgw.data.root'
>> application not enabled on pool 'default.rgw.gc'
>> application not enabled on pool 'default.rgw.lc'
>> application not enabled on pool 'default.rgw.log'
>> application not enabled on pool 'default.rgw.users.uid'
>> application not enabled on pool 'default.rgw.users.email'
>> application not enabled on pool 'default.rgw.users.keys'
>> application not enabled on pool 'default.rgw.buckets.index'
>> application not enabled on pool 'default.rgw.users.swift'
>> application not enabled on pool '.rgw.root'
>> application not enabled on pool 'default.rgw.reshard'
>> application not enabled on pool 'default.rgw.buckets.data'
>> use 'ceph osd pool application enable <pool-name> <app-name>', where
>> <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Pg inconsistent / export_files error -5

2017-08-04 Thread David Turner
It _should_ be enough. What happened in your cluster recently? Power
outage, OSD failures, upgrade, added new hardware, any changes at all? What
is your Ceph version?
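
For what it's worth, error -5 is EIO, i.e. an I/O error reading one of the
object copies. Errno values like this can be decoded quickly (a sketch):

$ python -c 'import os; print(os.strerror(5))'
Input/output error

You can also list the objects flagged by the scrub directly, something like:

$ rados list-inconsistent-obj 17.36 --format=json-pretty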

On Fri, Aug 4, 2017 at 11:22 AM Marc Roos  wrote:

>
> I have got a placement group inconsistency, and saw a manual where
> you can export and import this on another osd. But I am getting an
> export error on every osd.
>
> What does this export_files error -5 actually mean? I thought 3 copies
> should be enough to secure your data.
>
>
> > PG_DAMAGED Possible data damage: 1 pg inconsistent
> >pg 17.36 is active+clean+inconsistent, acting [9,0,12]
>
>
> > 2017-08-04 05:39:51.534489 7f2f623d6700 -1 log_channel(cluster) log
> [ERR] : 17.36 soid
> 17:6ca1f70a:::rbd_data.1f114174b0dc51.0974:4: failed to pick
> suitable object info
> > 2017-08-04 05:41:12.715393 7f2f623d6700 -1 log_channel(cluster) log
> [ERR] : 17.36 deep-scrub 3 errors
> > 2017-08-04 15:21:12.445799 7f2f623d6700 -1 log_channel(cluster) log
> [ERR] : 17.36 soid
> 17:6ca1f70a:::rbd_data.1f114174b0dc51.0974:4: failed to pick
> suitable object info
> > 2017-08-04 15:22:35.646635 7f2f623d6700 -1 log_channel(cluster) log
> [ERR] : 17.36 repair 3 errors, 0 fixed
>
> ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --pgid 17.36
> --op export --file /tmp/recover.17.36
>
> ...
> Read #17:6c9f811c:::rbd_data.1b42f52ae8944a.1a32:head#
> Read #17:6ca035fc:::rbd_data.1fff61238e1f29.b31a:head#
> Read #17:6ca0b4f8:::rbd_data.1fff61238e1f29.6fcc:head#
> Read #17:6ca0ffbc:::rbd_data.1fff61238e1f29.a214:head#
> Read #17:6ca10b29:::rbd_data.1fff61238e1f29.9923:head#
> Read #17:6ca11ab9:::rbd_data.1fa8ef2ae8944a.11b4:head#
> Read #17:6ca13bed:::rbd_data.1f114174b0dc51.02c6:head#
> Read #17:6ca1a791:::rbd_data.1fff61238e1f29.f101:head#
> Read #17:6ca1f70a:::rbd_data.1f114174b0dc51.0974:4#
> export_files error -5
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] application not enabled on pool

2017-08-04 Thread David Turner
In the 12.1.2 release notes it stated...

  Pools are now expected to be associated with the application using them.
  Upon completing the upgrade to Luminous, the cluster will attempt to
associate
  existing pools to known applications (i.e. CephFS, RBD, and RGW). In-use
pools
  that are not associated to an application will generate a health warning.
Any
  unassociated pools can be manually associated using the new
  "ceph osd pool application enable" command. For more details see
  "Associate Pool to Application" in the documentation.

It is always a good idea to read the release notes before upgrading to a
new version of Ceph.
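
Once enabled, the association can be checked per pool with something like
(a sketch; I believe the matching query command is):

$ ceph osd pool application get default.rgw.buckets.data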

On Fri, Aug 4, 2017 at 10:29 AM Roger Brown  wrote:

> Is this something new in Luminous 12.1.2, or did I break something? Stuff
> still seems to function despite the warnings.
>
> $ ceph health detail
> 
> POOL_APP_NOT_ENABLED application not enabled on 14 pool(s)
> application not enabled on pool 'default.rgw.buckets.non-ec'
> application not enabled on pool 'default.rgw.control'
> application not enabled on pool 'default.rgw.data.root'
> application not enabled on pool 'default.rgw.gc'
> application not enabled on pool 'default.rgw.lc'
> application not enabled on pool 'default.rgw.log'
> application not enabled on pool 'default.rgw.users.uid'
> application not enabled on pool 'default.rgw.users.email'
> application not enabled on pool 'default.rgw.users.keys'
> application not enabled on pool 'default.rgw.buckets.index'
> application not enabled on pool 'default.rgw.users.swift'
> application not enabled on pool '.rgw.root'
> application not enabled on pool 'default.rgw.reshard'
> application not enabled on pool 'default.rgw.buckets.data'
> use 'ceph osd pool application enable <pool-name> <app-name>', where
> <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Pg inconsistent / export_files error -5

2017-08-04 Thread Marc Roos

I have got a placement group inconsistency, and saw a manual where 
you can export and import this on another osd. But I am getting an 
export error on every osd. 

What does this export_files error -5 actually mean? I thought 3 copies 
should be enough to secure your data.


> PG_DAMAGED Possible data damage: 1 pg inconsistent
>pg 17.36 is active+clean+inconsistent, acting [9,0,12]


> 2017-08-04 05:39:51.534489 7f2f623d6700 -1 log_channel(cluster) log 
[ERR] : 17.36 soid 
17:6ca1f70a:::rbd_data.1f114174b0dc51.0974:4: failed to pick 
suitable object info
> 2017-08-04 05:41:12.715393 7f2f623d6700 -1 log_channel(cluster) log 
[ERR] : 17.36 deep-scrub 3 errors
> 2017-08-04 15:21:12.445799 7f2f623d6700 -1 log_channel(cluster) log 
[ERR] : 17.36 soid 
17:6ca1f70a:::rbd_data.1f114174b0dc51.0974:4: failed to pick 
suitable object info
> 2017-08-04 15:22:35.646635 7f2f623d6700 -1 log_channel(cluster) log 
[ERR] : 17.36 repair 3 errors, 0 fixed

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --pgid 17.36 
--op export --file /tmp/recover.17.36

...
Read #17:6c9f811c:::rbd_data.1b42f52ae8944a.1a32:head#
Read #17:6ca035fc:::rbd_data.1fff61238e1f29.b31a:head#
Read #17:6ca0b4f8:::rbd_data.1fff61238e1f29.6fcc:head#
Read #17:6ca0ffbc:::rbd_data.1fff61238e1f29.a214:head#
Read #17:6ca10b29:::rbd_data.1fff61238e1f29.9923:head#
Read #17:6ca11ab9:::rbd_data.1fa8ef2ae8944a.11b4:head#
Read #17:6ca13bed:::rbd_data.1f114174b0dc51.02c6:head#
Read #17:6ca1a791:::rbd_data.1fff61238e1f29.f101:head#
Read #17:6ca1f70a:::rbd_data.1f114174b0dc51.0974:4#
export_files error -5
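
Once I have a clean export, the plan would be the mirror image on the 
target osd, with its daemon stopped (a sketch; ceph-X stands in for the 
target osd's data directory):

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-X --op import 
--file /tmp/recover.17.36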
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] application not enabled on pool

2017-08-04 Thread Roger Brown
Is this something new in Luminous 12.1.2, or did I break something? Stuff
still seems to function despite the warnings.

$ ceph health detail

POOL_APP_NOT_ENABLED application not enabled on 14 pool(s)
application not enabled on pool 'default.rgw.buckets.non-ec'
application not enabled on pool 'default.rgw.control'
application not enabled on pool 'default.rgw.data.root'
application not enabled on pool 'default.rgw.gc'
application not enabled on pool 'default.rgw.lc'
application not enabled on pool 'default.rgw.log'
application not enabled on pool 'default.rgw.users.uid'
application not enabled on pool 'default.rgw.users.email'
application not enabled on pool 'default.rgw.users.keys'
application not enabled on pool 'default.rgw.buckets.index'
application not enabled on pool 'default.rgw.users.swift'
application not enabled on pool '.rgw.root'
application not enabled on pool 'default.rgw.reshard'
application not enabled on pool 'default.rgw.buckets.data'
use 'ceph osd pool application enable <pool-name> <app-name>', where
<app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs increase max file size

2017-08-04 Thread Roger Brown
Woops, nvm my last. My eyes deceived me.


On Fri, Aug 4, 2017 at 8:21 AM Roger Brown  wrote:

> Did you really mean to say "increase this value to 20 TB from 1 TB"?
>
>
> On Fri, Aug 4, 2017 at 7:28 AM Rhian Resnick  wrote:
>
>> Morning,
>>
>>
>> We ran into an issue with the default max file size of a cephfs file. Is
>> it possible to increase this value to 20 TB from 1 TB without recreating
>> the file system?
>>
>>
>> Rhian Resnick
>>
>> Assistant Director Middleware and HPC
>>
>> Office of Information Technology
>>
>>
>> Florida Atlantic University
>>
>> 777 Glades Road, CM22, Rm 173B
>>
>> Boca Raton, FL 33431
>>
>> Phone 561.297.2647
>>
>> Fax 561.297.0222
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs increase max file size

2017-08-04 Thread Roger Brown
Did you really mean to say "increase this value to 20 TB from 1 TB"?


On Fri, Aug 4, 2017 at 7:28 AM Rhian Resnick  wrote:

> Morning,
>
>
> We ran into an issue with the default max file size of a cephfs file. Is
> it possible to increase this value to 20 TB from 1 TB without recreating
> the file system?
>
>
> Rhian Resnick
>
> Assistant Director Middleware and HPC
>
> Office of Information Technology
>
>
> Florida Atlantic University
>
> 777 Glades Road, CM22, Rm 173B
>
> Boca Raton, FL 33431
>
> Phone 561.297.2647
>
> Fax 561.297.0222
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] cephfs increase max file size

2017-08-04 Thread Rhian Resnick
Morning,


We ran into an issue with the default max file size of a cephfs file. Is it 
possible to increase this value to 20 TB from 1 TB without recreating the file 
system?


Rhian Resnick

Assistant Director Middleware and HPC

Office of Information Technology


Florida Atlantic University

777 Glades Road, CM22, Rm 173B

Boca Raton, FL 33431

Phone 561.297.2647

Fax 561.297.0222

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Rados lib object clone api

2017-08-04 Thread Muthusamy Muthiah
Thank you Greg,

I will look into it, and I hope self-managed and pool snapshots will work
for erasure pools also; we predominantly use erasure coding.
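
While looking into it, pool snapshots at least are easy to experiment with
from the CLI before wiring up the API (a sketch, assuming a scratch pool
named testpool):

$ rados -p testpool put obj1 /tmp/data
$ rados -p testpool mksnap before-change
$ rados -p testpool put obj1 /tmp/data2
$ rados -p testpool rollback obj1 before-change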

Thanks,
Muthu

On Wednesday, 2 August 2017, Gregory Farnum  wrote:

> On Tue, Aug 1, 2017 at 8:29 AM Muthusamy Muthiah
> <muthiah.muthus...@gmail.com> wrote:
>
>> Hi,
>>
>> Is there a librados API to clone objects?
>>
>> I was able to see options in the radosgw API to copy an object and in
>> rbd to clone images, but I could not find similar options in the native
>> librados library to clone an object.
>>
>> It would be good if you can point be to right document if it is possible.
>>
>> Thanks,
>> Muthu
>>
>
>
> There's not much librados documentation in general, but librados.h
> (ceph/src/include/rados/librados.h) has pretty good header docs, and if
> you search for "snap" you will find a lot there. You can also search the
> lists and docs, or check out my talk at the OpenStack Boston Ceph sessions
> for more background on self-managed and pool snapshots.
> -Greg
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] expanding cluster with minimal impact

2017-08-04 Thread bruno.canning
Hi Laszlo,

I've used Dan's script to deploy 9 storage nodes (36 x 6TB data disks/node) 
into our dev cluster as practice for deployment into our production cluster.

The script performs very well. In general, disruption to a cluster (e.g. impact 
on client I/O) is minimised by osd_max_backfills, which takes a default value of 
1 if not defined in ceph.conf. I did find that with 324 OSDs, 60s was too short 
a time period for one reweight run to complete before the next run started, but 
this is configurable.

The command I ran was:
/usr/local/bin/ceph-gentle-reweight -o osd.78,...,osd.401 -b 0 -d 0.01 -t 5.458 
-l 100 -p dteam -i 300 -r

With 85TB of data on 342TB of capacity that was grown to 2286TB, the process 
took 57h to get to 25% of target, 87h to get to 50%, 110h to get to 75% and 
128h to complete.
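
If you need to throttle recovery further while a reweight is in flight, the
backfill limit can also be changed on the fly (a sketch):

ceph tell osd.* injectargs '--osd-max-backfills 1'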

Best wishes,
Bruno


Bruno Canning
LHC Data Store System Administrator
Scientific Computing Department
STFC Rutherford Appleton Laboratory
Harwell Oxford
Didcot
OX11 0QX
Tel. +44 ((0)1235) 446621


-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Dan 
van der Ster
Sent: 04 August 2017 08:58
To: Laszlo Budai
Cc: ceph-users
Subject: Re: [ceph-users] expanding cluster with minimal impact

Hi Laszlo,

The script defaults are what we used to do a large intervention (the default 
delta weight is 0.01). For our clusters, going any faster becomes disruptive, 
but this really depends on your cluster size and activity.

BTW, in case it wasn't clear, to use this script for adding capacity you need 
to add the new OSDs to your cluster with an initial crush weight of 0.0:

osd crush initial weight = 0
osd crush update on start = true

-- Dan



On Thu, Aug 3, 2017 at 8:12 PM, Laszlo Budai  wrote:
> Dear all,
>
> I need to expand a ceph cluster with minimal impact. Reading previous 
> threads on this topic from the list I've found the ceph-gentle-reweight 
> script 
> (https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight) 
> created by Dan van der Ster (Thank you Dan for sharing the script with us!).
>
> I've done some experiments, and it looks promising, but the parameters 
> need to be set properly. Did any of you test this script before? What is 
> the recommended delta_weight to be used? From the default parameters of 
> the script I can see that the default delta weight is .5% of the target 
> weight, which means 200 reweighting cycles. I have experimented with a 
> reweight ratio of 5% while running a fio test on a client. The results 
> were OK (I mean no slow requests), but my test cluster was a very small one.
>
> If any of you has done some larger experiments with this script I 
> would be really interested to read about your results.
>
> Thank you!
> Laszlo
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] expanding cluster with minimal impact

2017-08-04 Thread Dan van der Ster
Hi Laszlo,

The script defaults are what we used to do a large intervention (the
default delta weight is 0.01). For our clusters, going any faster
becomes disruptive, but this really depends on your cluster size and
activity.

BTW, in case it wasn't clear, to use this script for adding capacity
you need to add the new OSDs to your cluster with an initial crush
weight of 0.0:

osd crush initial weight = 0
osd crush update on start = true
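
Those two lines go in the [osd] section of ceph.conf on the new hosts (an
assumption; adjust to wherever you keep osd options). The script then raises
each OSD's crush weight in small increments, roughly equivalent to repeating,
with a growing weight, something like:

ceph osd crush reweight osd.78 0.01

until the target weight is reached.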

-- Dan



On Thu, Aug 3, 2017 at 8:12 PM, Laszlo Budai  wrote:
> Dear all,
>
> I need to expand a ceph cluster with minimal impact. Reading previous
> threads on this topic from the list I've found the ceph-gentle-reweight
> script
> (https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight)
> created by Dan van der Ster (Thank you Dan for sharing the script with us!).
>
> I've done some experiments, and it looks promising, but the parameters need
> to be set properly. Did any of you test this script before? What is the
> recommended delta_weight to be used? From the default parameters of the
> script I can see that the default delta weight is .5% of the target weight,
> which means 200 reweighting cycles. I have experimented with a reweight
> ratio of 5% while running a fio test on a client. The results were OK (I
> mean no slow requests), but my test cluster was a very small one.
>
> If any of you has done some larger experiments with this script I would be
> really interested to read about your results.
>
> Thank you!
> Laszlo
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com