>>Of course, I always have to ask the use-case behind mapping the same image on
>>multiple hosts. Perhaps CephFS would be a better fit if you are trying to
>>serve out a filesystem?
Hi Jason,
Currently I'm sharing RBD images between multiple webserver VMs with OCFS2 on
top.
They have old kern
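A minimal sketch of that kind of setup, assuming made-up pool/image names and an image restricted to the layering feature so older kernel clients can map it:
# hypothetical example only
rbd create --size 102400 --image-feature layering rbd/webshare
rbd map rbd/webshare        # run on each webserver VM
mkfs.ocfs2 /dev/rbd0        # run once, from a single host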
On 06/30/17 05:21, Sage Weil wrote:
> We're having a series of problems with the valgrind included in xenial[1]
> that have led us to restrict all valgrind tests to centos nodes. At the
> same time, we're also seeing spurious ENOSPC errors from btrfs on both
> centos and xenial kernels[2], makin
Hi all.
I'm trying to deploy OpenStack with Ceph Kraken BlueStore OSDs.
The deploy went well, but when I execute ceph osd tree I can see the wrong weight on
the BlueStore disks.
ceph osd tree | tail
-3 0.91849 host krk-str02
23 0.00980 osd.23 up 1.0 1.0
24 0.90869
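If the weight really is wrong (it should normally reflect the device size in TiB), it can be corrected manually; the OSD id and value below are only an illustration:
# example only: give osd.23 a weight matching its actual capacity
ceph osd crush reweight osd.23 0.90970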
I am trying to get radosgw to act like Swift does when it's provisioned with
OpenStack. I did end up figuring out how to make it work properly for my
use case, and it would be helpful if it was documented somewhere.
For anyone curious how to make radosgw function more similar to the way
swift does,
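The poster's actual settings are cut off above; purely as a guess at the kind of ceph.conf options usually involved when fronting radosgw's Swift API with Keystone (values are placeholders, not the poster's configuration):
# hypothetical sketch, not the documented answer from this thread
[client.rgw.gateway]
rgw_keystone_url = http://keystone.example:5000
rgw_keystone_accepted_roles = Member, admin
rgw_swift_account_in_url = true    # include AUTH_<tenant> in Swift URLs, as Swift does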
On 17-06-23 17:13, Abhishek L wrote:
* *CRUSH weights* can now be optimized to
maintain a *near-perfect distribution of data* across OSDs.
It would be great to get some information on how to use this feature.
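As a starting point only (the exact interface may differ in the final Luminous release), the optimization is driven by the new mgr balancer module, roughly along these lines:
# hedged sketch of the mgr balancer in CRUSH-compat mode
ceph mgr module enable balancer
ceph balancer mode crush-compat
ceph balancer on
ceph balancer status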
v12.1.0 Luminous RC released
BlueStore:
The new BlueStore backend for ceph-osd is now stable and the new
default for newly created OSDs.
[global]
fsid = a737f8ad-b959-4d44-ada7-2ed6a2b8802b
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.148.189,192.168.148.5,192.168.148.43
auth_cl
> On 30 June 2017 at 13:35, Малков Петр Викторович wrote:
>
>
> v12.1.0 Luminous RC released
> BlueStore:
> The new BlueStore backend for ceph-osd is now stable and the new
> default for newly created OSDs.
>
> [global]
> fsid = a737f8ad-b959-4d44-ada7-2ed6a2b8802b
> mon_initial_members = cep
Hello,
I have RGW multisite setup on Jewel and I would like to turn off data
replication there so that only metadata (users, created buckets, etc)
would be synced but not the data.
Is it possible to make such a setup?
From: Alex Gorbachev [mailto:a...@iss-integration.com]
Sent: 30 June 2017 03:54
To: Ceph Users ; n...@fisk.me.uk
Subject: Re: [ceph-users] Kernel mounted RBD's hanging
On Thu, Jun 29, 2017 at 10:30 AM Nick Fisk <n...@fisk.me.uk> wrote:
Hi All,
Putting out a call for help to see if
> -Original Message-
> From: Ilya Dryomov [mailto:idryo...@gmail.com]
> Sent: 29 June 2017 18:54
> To: Nick Fisk
> Cc: Ceph Users
> Subject: Re: [ceph-users] Kernel mounted RBD's hanging
>
> On Thu, Jun 29, 2017 at 6:22 PM, Nick Fisk wrote:
> >> -Original Message-
> >> From: I
On 2017-06-29 16:30, Nick Fisk wrote:
> Hi All,
>
> Putting out a call for help to see if anyone can shed some light on this.
>
> Configuration:
> Ceph cluster presenting RBD's->XFS->NFS->ESXi
> Running 10.2.7 on the OSD's and 4.11 kernel on the NFS gateways in a
> pacemaker cluster
> Both OSD's
On Fri, Jun 30, 2017 at 2:14 PM, Nick Fisk wrote:
>
>
>> -Original Message-
>> From: Ilya Dryomov [mailto:idryo...@gmail.com]
>> Sent: 29 June 2017 18:54
>> To: Nick Fisk
>> Cc: Ceph Users
>> Subject: Re: [ceph-users] Kernel mounted RBD's hanging
>>
>> On Thu, Jun 29, 2017 at 6:22 PM, Ni
Hello!
Are there any recommendations for how many PGs to allocate to a CephFS
meta-data pool?
Assuming a simple case of a cluster with 512 PGs, to be distributed
across the FS data and metadata pools, how would you make the split?
Thanks,
Riccardo
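There is no single right answer, but as an illustration of a common split (numbers are examples, not a recommendation): the metadata pool is small and gets a modest PG count, leaving most of the budget for data.
# example split of a 512-PG budget
ceph osd pool create cephfs_data 448
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data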
Hi!
I'd like to set up new OSDs with bluestore: the real data ("block") on a
spinning disk, and DB+WAL on an SSD partition.
But I do not use ceph-deploy, and never used ceph-disk (I set up the filestore
OSDs manually).
Google tells me that ceph-disk does not (yet) support splitting the component
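In case it helps, a very rough sketch of a hand-built BlueStore OSD with the DB and WAL on SSD partitions; device names and the OSD id are placeholders, and the usual keyring/auth/activation steps are omitted:
# hypothetical sketch only; adjust devices and add the normal auth steps
OSD_ID=$(ceph osd create)
mkdir -p /var/lib/ceph/osd/ceph-$OSD_ID
ln -s /dev/sdb /var/lib/ceph/osd/ceph-$OSD_ID/block       # data on the spinning disk
ln -s /dev/sdc1 /var/lib/ceph/osd/ceph-$OSD_ID/block.db   # RocksDB on the SSD
ln -s /dev/sdc2 /var/lib/ceph/osd/ceph-$OSD_ID/block.wal  # WAL on the SSD
ceph-osd --mkfs -i $OSD_ID --osd-objectstore bluestore --mkkey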
Hi Sage,
On 06/30/2017 05:21 AM, Sage Weil wrote:
> The easiest thing is to
>
> 1/ Stop testing filestore+btrfs for luminous onward. We've recommended
> against btrfs for a long time and are moving toward bluestore anyway.
Searching the documentation for "btrfs" does not really give a user an
On 06/29/2017 08:16 PM, donglifec...@gmail.com wrote:
zhiqiang, Josn
what about the async recovery feature? I didn't see any update on
github recently, will it be further developed?
Yes, post-luminous at this point.
On Fri, Jun 30, 2017 at 8:31 AM, Martin Emrich wrote:
> Hi!
>
> I’d like to set up new OSDs with bluestore: the real data (“block”) on a
> spinning disk, and DB+WAL on an SSD partition.
>
> But I do not use ceph-deploy, and never used ceph-disk (I set up the
> filestore OSDs manually).
On Fri, 30 Jun 2017, Lenz Grimmer said:
> > 1/ Stop testing filestore+btrfs for luminous onward. We've recommended
> > against btrfs for a long time and are moving toward bluestore anyway.
>
> Searching the documentation for "btrfs" does not really give a user any
> clue that the use of Btrfs is
I actually don't see either of these as issues with just flat out saying
that Btrfs will not be supported in Luminous. It's a full new release and
it sounds like it is no longer a relevant Filestore backend in Luminous.
People can either plan to migrate their OSDs to Bluestore once they reach
Lumi
On Fri, Jun 30, 2017 at 8:12 AM Nick Fisk wrote:
> *From:* Alex Gorbachev [mailto:a...@iss-integration.com]
> *Sent:* 30 June 2017 03:54
> *To:* Ceph Users ; n...@fisk.me.uk
> *Subject:* Re: [ceph-users] Kernel mounted RBD's hanging
>
> On Thu, Jun 29, 2017 at 10:30 AM Nick Fisk wrot
On Fri, 30 Jun 2017 16:29:43 + David Turner wrote:
> I actually don't see either of these as issues with just flat out saying
> that Btrfs will not be supported in Luminous. It's a full new release and
> it sounds like it is no longer a relevant Filestore backend in Luminous.
> People can eit
On Fri, 30 Jun 2017, Lenz Grimmer wrote:
> Hi Sage,
>
> On 06/30/2017 05:21 AM, Sage Weil wrote:
>
> > The easiest thing is to
> >
> > 1/ Stop testing filestore+btrfs for luminous onward. We've recommended
> > against btrfs for a long time and are moving toward bluestore anyway.
>
> Searching
On 30/06/2017 18:48, Sage Weil wrote:
> We can, however,
>
> - prominently feature this in the luminous release notes, and
> - require the 'enable experimental unrecoverable data corrupting features =
> btrfs' in order to use it, so that users are explicitly opting in to
> luminous+btrfs territ
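For context, the opt-in Sage describes would presumably look something like this in ceph.conf (a sketch of the proposal, not a shipped setting):
# proposed opt-in for btrfs on Luminous (proposal only)
[osd]
enable experimental unrecoverable data corrupting features = btrfs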
> On 30 June 2017 at 18:48, Sage Weil wrote:
>
>
> On Fri, 30 Jun 2017, Lenz Grimmer wrote:
> > Hi Sage,
> >
> > On 06/30/2017 05:21 AM, Sage Weil wrote:
> >
> > > The easiest thing is to
> > >
> > > 1/ Stop testing filestore+btrfs for luminous onward. We've recommended
> > > against btrf
On Fri, Jun 30, 2017 at 4:49 AM, Henrik Korkuc wrote:
> Hello,
>
> I have RGW multisite setup on Jewel and I would like to turn off data
> replication there so that only metadata (users, created buckets, etc) would
> be synced but not the data.
>
>
FWIW, not in jewel, but in kraken the zone info
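The rest of that answer is cut off; as a rough illustration of the per-zone sync control that appeared around Kraken (flag names and semantics should be verified against your version), the idea is roughly:
# hypothetical sketch: stop pulling data from peer zones; metadata sync,
# which always comes from the metadata master, continues
radosgw-admin zone modify --rgw-zone=secondary --sync-from-all=false
radosgw-admin period update --commit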
Hello,
My question is about the stream security of connections between Ceph services.
I've read that the connection is verified by private keys and signed packets,
but my question is whether those packets are encrypted in any way to defeat packet
sniffers, because I want to know if it can be used through the internet wi
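For reference, cephx can sign messages for integrity, but it does not encrypt the payload on the wire; the signing knobs look like this in ceph.conf:
# cephx signs messages (integrity/authenticity) but does not cipher them
[global]
cephx require signatures = true
cephx cluster require signatures = true
cephx service require signatures = true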
Which part of ceph are you looking at using through the Internet? RGW,
multi-site, multi-datacenter crush maps, etc?
On Fri, Jun 30, 2017 at 2:28 PM Daniel Carrasco
wrote:
> Hello,
>
> My question is about steam security of connections between ceph services.
> I've read that connection is verif
Mainly FUSE clients; the other daemons (MDS, OSD and MON) will be on a private
network. And maybe one day I'll try to create a multi-site cluster.
Greetings!!
On 30 Jun 2017, 8:33 PM, "David Turner" wrote:
Which part of ceph are you looking at using through the Internet? RGW,
multi-site, mul
Hey folks:
I was wondering if the community can provide any advice — over time and
due to some external issues, we have managed to accumulate thousands of
snapshots of RBD images, which are now in need of cleaning up. I have recently
attempted to roll through a “for" loop to perform a “
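The loop itself is cut off above; purely as an illustration of batch cleanup (pool and image names are placeholders):
# rough sketch; expect the OSDs to spend a long time trimming afterwards
POOL=rbd
for img in $(rbd ls "$POOL"); do
    rbd snap purge "$POOL/$img"    # removes every snapshot of the image
done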
So you will have all of your cluster servers in the same location, but then
use ceph-fuse to the cluster from clients across the Internet that are
mounting a CephFS volume?
That will not work. All Ceph clients need to be able to communicate with
the Ceph cluster on the public_network specified in ceph.conf.
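For example (the subnet is only an illustration, reusing the addressing quoted earlier in this digest):
# public_network must be reachable by every client, including ceph-fuse ones
[global]
public_network = 192.168.148.0/24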
When you delete a snapshot, Ceph places the removed snapshot into a list in
the OSD map and places the objects in the snapshot into a snap_trim_q.
Once those 2 things are done, the RBD command returns and you are moving
onto the next snapshot. The snap_trim_q is an n^2 operation (like all
deletes
"This same behavior can be seen when deleting an RBD that has 100,000
objects vs 200,000 objects, it takes twice as long"
Correction, it will take a squared amount of time, but that's not really
the most important part of the response.
On Fri, Jun 30, 2017 at 4:24 PM David Turner wrote:
> When
On Wed, Jun 21, 2017 at 6:57 AM Andras Pataki
wrote:
> Hi cephers,
>
> I noticed something I don't understand about ceph's behavior when adding
> an OSD. When I start with a clean cluster (all PG's active+clean) and add
> an OSD (via ceph-deploy for example), the crush map gets updated and PGs
>
On Fri, Jun 30, 2017 at 1:24 PM, David Turner wrote:
> When you delete a snapshot, Ceph places the removed snapshot into a list in
> the OSD map and places the objects in the snapshot into a snap_trim_q. Once
> those 2 things are done, the RBD command returns and you are moving onto the
> next sn
That comes from using Ceph. I've just done lots of deleting of large
amounts of data and paid attention to how long things took to delete. If
you don't believe me, I gave steps that you responded to to duplicate it.
I haven't asked a Ceph dev or bothered to look through the code, but every
time I
On Fri, Jun 30, 2017 at 2:07 PM, David Turner wrote:
> That comes from using Ceph. I've just done lots of deleting of large
> amounts of data and paid attention to how long things took to delete. If
> you don't believe me, I gave steps that you responded to to duplicate it. I
> haven't asked a
I'll concede that I cannot duplicate this in Jewel. When I was seeing
this, it was in Hammer and I was 100% able to duplicate it with empty RBDs,
RBDs filled with /dev/zero, and RBDs filled with /dev/random. I could
duplicate an n^2 time difference in `time rbd rm test`. We mapped it all
the way
I would use the calculator at ceph.com and just set it for "all in one".
http://ceph.com/pgcalc/
On Fri, Jun 30, 2017 at 6:45 AM Riccardo Murri
wrote:
> Hello!
>
> Are there any recommendations for how many PGs to allocate to a CephFS
> meta-data pool?
>
> Assuming a simple case of a cluster with 512
Hi all
I have two OSDs that are asserting, see https://pastebin.com/raw/xmDPg84a
I am running Kraken 11.2.0 and am kinda blocked by this. Anything I try
to do with these OSDs results in an abort.
I need to recover a down PG from one of the OSDs, and even the PG export
segfaults.
I'm at my wits
Hello,
I am getting the below error and I am unable to get it resolved even after
starting and stopping the OSDs. All the OSDs seem to be up.
How do I repair the OSDs or fix them manually? I am using CephFS. But oddly,
ceph df is showing 100% used (which is showing in KB). But the pool
ceph status
ceph osd tree
Is your meta pool on ssds instead of the same root and osds as the rest of
the cluster?
On Fri, Jun 30, 2017, 9:29 PM Deepak Naidu wrote:
> Hello,
>
>
>
> I am getting the below error and I am unable to get them resolved even
> after starting and stopping the OSD’s. Al
OK, I fixed the issue. But this is very weird, so I will list the steps to make it easy
for others to check when there is a similar issue.
1) I had created a rack-aware OSD tree
2) I have SATA OSDs and NVMe OSDs
3) I created a rack-aware policy for both the SATA and NVMe OSDs
4) NVME OSD was use
OK, so it looks like it's Ceph crushmap behavior:
http://docs.ceph.com/docs/master/rados/operations/crush-map/
--
Deepak
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Deepak
Naidu
Sent: Friday, June 30, 2017 7:06 PM
To: David Turner; ceph-users@lists.ceph.com
Subject: Re:
Sorry for the spam, but here is a clearer way of doing a custom crushmap:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-April/038835.html
--
Deepak
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Deepak
Naidu
Sent: Friday, June 30, 2017 7:22 PM
To: David Turner; ceph-u
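To make the crushmap approach discussed above more concrete, a hedged sketch of putting SATA and NVMe OSDs under separate roots with their own rules (bucket, rule and pool names are placeholders; on pre-Luminous releases the pool option is crush_ruleset):
# hypothetical commands matching the kind of layout described above
ceph osd crush add-bucket sata root
ceph osd crush add-bucket nvme root
ceph osd crush rule create-simple sata_rule sata rack
ceph osd crush rule create-simple nvme_rule nvme rack
ceph osd pool set cephfs_metadata crush_ruleset 1    # assuming nvme_rule got ruleset 1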