Could this be made modular for other front ends as well? We really
like using nginx for load balancing, and it is also capable of reloading
its config after modifications.
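For context, the kind of nginx setup we'd hope to keep using is roughly
this (hostnames and ports below are placeholders, not a recommendation):

upstream radosgw {
    server rgw1.example.com:7480;
    server rgw2.example.com:7480;
}
server {
    listen 80;
    location / {
        proxy_pass http://radosgw;
        proxy_set_header Host $host;
    }
}

and we pick up config changes with 'nginx -s reload' without dropping
established connections.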
On Tue, Jul 11, 2017, 4:36 PM Sage Weil wrote:
> On Tue, 11 Jul 2017, Wido den Hollander wrote:
> > > On 11 July 201
On Wed, Jul 12, 2017 at 12:58 AM, David Turner wrote:
> I haven't seen any release notes for 10.2.8 yet. Is there a document
> somewhere stating what's in the release?
https://github.com/ceph/ceph/pull/16274 for now, although it should
make it into the master doc tree soon.
>
> On Mon, Jul 10, 2
I'm seeing a problem with OSD startup on our CentOS 7.3 / Jewel 10.2.7
cluster.
Each storage node has 24 HDD OSDs with journals on NVMe, and 6 SSD OSDs;
all 30 OSDs are set up with dmcrypt.
What I see is that on reboot not all of the OSDs start up successfully;
usually ~24 out of 30 start.
Manua
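In case it helps, the usual manual workaround with Jewel's ceph-disk looks
something like this; /dev/sdc1 is just a placeholder for the data partition
of one of the failed OSDs:

ceph-disk activate-all        # scan and activate every prepared OSD partition
ceph-disk activate /dev/sdc1  # or activate a single OSD by its data partition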
Thank you Richard, that mostly worked for me.
But I notice that when I switch from FastCGI to Civetweb, the
S3-style subdomains (e.g., bucket-name.domain-name.com) stop working,
and I haven't been able to figure out why on my own.
- ceph.conf excerpt:
[client.radosgw.gateway]
host
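For reference, the setting usually involved here is rgw dns name, since
with Civetweb there are no Apache rewrite rules mapping bucket subdomains;
a hedged sketch, with domain-name.com standing in for the real domain:

[client.radosgw.gateway]
rgw dns name = domain-name.com
rgw frontends = civetweb port=7480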
On Tue, 11 Jul 2017, Wido den Hollander wrote:
> > On 11 July 2017 at 17:03, Sage Weil wrote:
> >
> >
> > Hi all,
> >
> > Luminous features a new 'service map' that lets rgw's (and rgw nfs
> > gateways and iscsi gateways and rbd mirror daemons and ...) advertise
> > themselves to the cluster
Hello Mark,
Perhaps something like
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 1.fs1 \
    --op export --file /tmp/test
could help you get your PG back.
I have never used the above command myself; I know it from a post on this
mailing list, so I recommend reading more about it before running it.
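For completeness, the matching import on a target OSD would presumably be
the following (again untested, paths are placeholders, and the OSD must be
stopped first):

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-1 \
    --op import --file /tmp/test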
On Tue, Jul 11, 2017 at 8:36 PM, Wido den Hollander wrote:
>
>> On 11 July 2017 at 17:03, Sage Weil wrote:
>>
>>
>> Hi all,
>>
>> Luminous features a new 'service map' that lets rgw's (and rgw nfs
>> gateways and iscsi gateways and rbd mirror daemons and ...) advertise
>> themselves to the clust
> On 11 July 2017 at 17:03, Sage Weil wrote:
>
>
> Hi all,
>
> Luminous features a new 'service map' that lets rgw's (and rgw nfs
> gateways and iscsi gateways and rbd mirror daemons and ...) advertise
> themselves to the cluster along with some metadata (like the addresses
> they are bind
Hi,
I would have loved to join, but it's a bit short notice to travel from the
Netherlands :-)
Wido
> On 10 July 2017 at 9:39, Robert Sander wrote:
>
>
> Hi,
>
> https://www.meetup.com/de-DE/Ceph-Berlin/events/240812906/
>
> Come join us for an introduction into Ceph and DESY including
> On 10 July 2017 at 2:06, Chris Apsey wrote:
>
>
> All,
>
> Had a fairly substantial network interruption that knocked out about
> ~270 osds:
>
> health HEALTH_ERR
> [...]
> 273/384 in osds are down
> noup,nodown,noout flag(s) set
> monmap
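With noup set, OSDs cannot mark themselves back up after restarting.
Assuming the network is stable again, the usual next step would be
something like:

ceph osd unset noup
ceph osd unset nodown
ceph osd unset noout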
Hi Richard,
Thanks for the explanation, that makes perfect sense. I had missed the
difference between ceph osd reweight and ceph osd crush reweight; I need
to study that more.
Is there a way to get ceph to prioritise fixing degraded objects over fixing
misplaced ones?
--
Eino Tuominen
(Trimmed lots of good related content.)
The upcoming HAProxy 1.8 has landed further patches improving hot
restarts/reloads of HAProxy, which previously led to a brief window
during which new connections were not serviced. Lots of other approaches
have been seen, including delaying TCP SYNs momentarily
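As I understand the 1.8 approach (hedged, from the patches rather than
production use): run HAProxy in master-worker mode and let the new process
take over the listening sockets via the stats socket, roughly:

# in haproxy.cfg:
global
    stats socket /var/run/haproxy.sock mode 600 level admin expose-fd listeners

# reload; the new process fetches the listening FDs from the old one:
haproxy -W -f /etc/haproxy/haproxy.cfg -x /var/run/haproxy.sock -sf $(pidof haproxy)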
On Thu, Jul 6, 2017 at 8:47 PM, Blair Bethwaite
wrote:
> Hi all,
>
> Are there any "official" plans to have Ceph events co-hosted with OpenStack
> Summit Sydney, like in Boston?
>
> The call for presentations closes in a week. The Forum will be organised
> throughout September and (I think) that i
Hi Bruno,
We have similar types of nodes, and minimal configuration is required
(RHEL7-derived OS). Install the device-mapper-multipath package (or
equivalent), configure /etc/multipath.conf and enable 'multipathd'. If
working correctly, the command 'multipath -ll' should output multipath
devices and co
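A minimal /etc/multipath.conf sketch along those lines (blacklists and
device sections will need tuning for your hardware):

defaults {
    user_friendly_names yes
    find_multipaths yes
}

then:

systemctl enable --now multipathd
multipath -ll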
On Tue, Jul 11, 2017 at 7:48 AM, wrote:
> Hi All,
>
>
>
> And further to my last email, does anyone have any experience of using
> ceph-deploy with storage configured via multipath, please?
>
>
>
> Currently, we deploy new OSDs with:
>
> ceph-deploy disk zap ceph-sn1.example.com:sdb
>
> ceph-depl
I wasn't using multipath, but the same findings may apply. When
deploying a cluster recently with Jewel I was trying various permutations.
ceph-deploy does accept the full /dev/sdX path without issue.
However, I really wanted to use more deterministic hardware-related
paths, so as to b
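For example (the by-path name below is hypothetical; use whatever
'ls -l /dev/disk/by-path' shows on your host):

ceph-deploy disk zap ceph-sn1.example.com:/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0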
On Tue, Jul 11, 2017 at 7:36 AM, John Spray wrote:
> On Tue, Jul 11, 2017 at 3:23 PM, Webert de Souza Lima
> wrote:
>> Hello,
>>
>> today I got a MDS respawn with the following message:
>>
>> 2017-07-11 07:07:55.397645 7ffb7a1d7700 1 mds.b handle_mds_map i
>> (10.0.1.2:6822/28190) dne in the mds
On 11/07/17 17:08, Roger Brown wrote:
> What are some options for migrating from Apache/FastCGI to Civetweb for
> RadosGW object gateway *without* breaking other websites on the domain?
>
> I found documentation on how to migrate the object gateway to Civetweb
> (http://docs.ceph.com/docs/luminous
On Tue, 11 Jul 2017, Dan van der Ster wrote:
> On Tue, Jul 11, 2017 at 5:40 PM, Sage Weil wrote:
> > On Tue, 11 Jul 2017, Haomai Wang wrote:
> >> On Tue, Jul 11, 2017 at 11:11 PM, Sage Weil wrote:
> >> > On Tue, 11 Jul 2017, Sage Weil wrote:
> >> >> Hi all,
> >> >>
> >> >> Luminous features a new
None of your RGW pools would require a cache tier. Your volumes for
OpenStack would need a cache tier. I use Erasure Coding for my data
volumes in VMs as well as for CephFS. I don't use Erasure Coding for
system volumes in VMs; I wanted to avoid the increased latency that it
would impose on the
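For reference, the usual pre-Luminous pattern for putting RBD on erasure
coding is a writeback cache tier in front of the EC pool; a sketch with
hypothetical pool names and PG counts:

ceph osd pool create volumes-ec 1024 1024 erasure
ceph osd pool create volumes-cache 128
ceph osd tier add volumes-ec volumes-cache
ceph osd tier cache-mode volumes-cache writeback
ceph osd tier set-overlay volumes-ec volumes-cache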
What are some options for migrating from Apache/FastCGI to Civetweb for
the RadosGW object gateway *without* breaking other websites on the domain?
I found documentation on how to migrate the object gateway to Civetweb (
http://docs.ceph.com/docs/luminous/install/install-ceph-gateway/#migrating-from-apa
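One approach I'm considering (an untested guess): bind Civetweb to a
non-80 port and leave the existing Apache in place for the other sites,
proxying only the gateway vhost to it, e.g. in ceph.conf:

[client.radosgw.gateway]
rgw frontends = civetweb port=7480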
On Tue, Jul 11, 2017 at 5:40 PM, Sage Weil wrote:
> On Tue, 11 Jul 2017, Haomai Wang wrote:
>> On Tue, Jul 11, 2017 at 11:11 PM, Sage Weil wrote:
>> > On Tue, 11 Jul 2017, Sage Weil wrote:
>> >> Hi all,
>> >>
>> >> Luminous features a new 'service map' that lets rgw's (and rgw nfs
>> >> gateways
On Tue, 11 Jul 2017, Haomai Wang wrote:
> On Tue, Jul 11, 2017 at 11:11 PM, Sage Weil wrote:
> > On Tue, 11 Jul 2017, Sage Weil wrote:
> >> Hi all,
> >>
> >> Luminous features a new 'service map' that lets rgw's (and rgw nfs
> >> gateways and iscsi gateways and rbd mirror daemons and ...) advertis
Hi,
On 07/07/17 13:03, David Turner wrote:
> So many of your questions depends on what your cluster is used for. We
> don't even know rbd or cephfs from what you said and that still isn't
> enough to fully answer your questions. I have a much smaller 3 node
> cluster using Erasure coding for rbds
On Tue, Jul 11, 2017 at 11:11 PM, Sage Weil wrote:
> On Tue, 11 Jul 2017, Sage Weil wrote:
>> Hi all,
>>
>> Luminous features a new 'service map' that lets rgw's (and rgw nfs
>> gateways and iscsi gateways and rbd mirror daemons and ...) advertise
>> themselves to the cluster along with some metad
On Tue, 11 Jul 2017, Sage Weil wrote:
> Hi all,
>
> Luminous features a new 'service map' that lets rgw's (and rgw nfs
> gateways and iscsi gateways and rbd mirror daemons and ...) advertise
> themselves to the cluster along with some metadata (like the addresses
> they are binding to and the s
Hi all,
Luminous features a new 'service map' that lets rgw's (and rgw nfs
gateways and iscsi gateways and rbd mirror daemons and ...) advertise
themselves to the cluster along with some metadata (like the addresses
they are binding to and the services they provide).
It should be pretty straightforward
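If I have the CLI right, the resulting map can then be inspected with:

ceph service dump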
I haven't seen any release notes for 10.2.8 yet. Is there a document
somewhere stating what's in the release?
On Mon, Jul 10, 2017 at 1:41 AM Henrik Korkuc wrote:
> On 17-07-10 08:29, Christian Balzer wrote:
> > Hello,
> >
> > so this morning I was greeted with the availability of 10.2.8 for bo
Hi All,
And further to my last email, does anyone have any experience of using
ceph-deploy with storage configured via multipath, please?
Currently, we deploy new OSDs with:
ceph-deploy disk zap ceph-sn1.example.com:sdb
ceph-deploy --overwrite-conf config pull ceph-sn1.example.com
ceph-deploy
Hi All,
I'd like to know if anyone has experience of configuring multipath on
Ceph storage nodes, please, and how best to go about it.
We have a number of Dell PowerEdge R630 servers, each of which is fitted
with two SAS 12G HBA cards and each of which has two associated Dell
Thanks John,
I got this in the mds log too:
2017-07-11 07:10:06.293219 7f1836837700 1 mds.beacon.b _send skipping
beacon, heartbeat map not healthy
2017-07-11 07:10:08.330979 7f183b942700 1 heartbeat_map is_healthy
'MDSRank' had timed out after 15
but that respawn happened 2 minutes after I go
On Tue, Jul 11, 2017 at 3:23 PM, Webert de Souza Lima
wrote:
> Hello,
>
> today I got a MDS respawn with the following message:
>
> 2017-07-11 07:07:55.397645 7ffb7a1d7700 1 mds.b handle_mds_map i
> (10.0.1.2:6822/28190) dne in the mdsmap, respawning myself
"dne in the mdsmap" is what an MDS say
Hello,
Today I got an MDS respawn with the following message:
2017-07-11 07:07:55.397645 7ffb7a1d7700 1 mds.b handle_mds_map i
(10.0.1.2:6822/28190) dne in the mdsmap, respawning myself
It happened 3 times within 5 minutes. After that, the MDS took 50 minutes
to recover.
I can't find what exactly
First of all, your disk removal process needs tuning. "ceph osd out" sets
the disk reweight to 0 but NOT the crush weight; this is why you're seeing
misplaced objects after removing the OSD: the crush weights have changed
(even though the reweight meant that disk currently held no data). Use
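The drain-first sequence usually looks something like this (IDs are
placeholders; wait for rebalancing to finish between steps):

ceph osd crush reweight osd.12 0   # data migrates away exactly once
ceph osd out 12
systemctl stop ceph-osd@12
ceph osd crush remove osd.12
ceph auth del osd.12
ceph osd rm 12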
Hi all,
One more example:
osd.109 down out weight 0 up_from 306818 up_thru 397714 down_at 397717
last_clean_interval [306031,306809) 130.232.243.80:6814/4733
192.168.70.113:6814/4733 192.168.70.113:6815/4733 130.232.243.80:6815/4733
exists cabdfaec-eb39-4e5a-8012-9bade04c5e03
root@ceph-osd
Dear all,
I have to create several VMs in order to use them as MONs on my cluster.
All my Ceph clients are CentOS, but I'm thinking about creating all the
monitors using Ubuntu, because it seems lighter.
Is this a matter of taste? Or is there something I should know before
going with a mixed-OS cluster?
Is it possible to change the CephFS metadata pool? I would like to lower
the number of PGs, and thought about just making a new pool, copying the
pool over, and then renaming them. But I guess CephFS works with the pool
ID, not the name? How can this best be done?
Thanks