Hello,
I am wondering if there are people out there that still use "old
fashioned" CRON scripts to check Ceph's health, monitor it, and receive email
alerts.
If there are, do you mind sharing your implementation?
Probably something similar to this:
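For instance (just a sketch -- the schedule, script path and alert address
are only placeholders):
*/15 * * * * /usr/local/bin/ceph-health-mail.sh
and the script itself:
#!/bin/bash
# mail an alert whenever the cluster is not HEALTH_OK
STATUS=$(ceph health 2>&1)
if [ "$STATUS" != "HEALTH_OK" ]; then
    echo "$STATUS" | mail -s "Ceph health alert on $(hostname)" admin@example.com
fi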
Hello,
I would like to see people's opinions about memory configurations.
Would you prefer 2x8GB over 1x16GB or the opposite?
In addition, what are the latest memory recommendations? Should we
keep the rule of thumb of 1GB per TB,
or have things changed now with BlueStore?
I am planning
Excellent! Good to know that the behavior is intentional!
Thanks a lot John for the feedback!
Best regards,
G.
On Thu, Mar 1, 2018 at 12:03 PM, Georgios Dimitrakakis
<gior...@acmac.uoc.gr> wrote:
I have recently updated to Luminous (12.2.4) and I have noticed that
using "ceph -w" only produces an initial output like the one below but
never gets updated afterwards. Is this a feature? I was used to the old
behavior that was constantly
producing info.
Here is what I get initially
on this ML details about MGR caps being incorrect for OSDs and
MONs after a Jewel to Luminous upgrade. The output of a ceph auth list
command should help you find out if it’s the case.
Are your ceph daemons still running? What does a ceph daemon
mon.$(hostname -s) quorum_status give you from a MON server?
JC
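For example, a quick diagnosis sketch to run on a MON host (the caps in the
last line are the ones the Luminous docs list for OSD keys, so double-check
them against your release before changing anything):
ceph auth list
ceph daemon mon.$(hostname -s) quorum_status
# if an OSD key turned out to be missing the Luminous-era caps:
ceph auth caps osd.0 mon 'allow profile osd' mgr 'allow profile osd' osd 'allow *'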
On Feb 28, 2018, at
] Failed to connect to host:controller
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory
/tmp/tmpPQ895t
[ceph_deploy][ERROR ] RuntimeError: Failed to connect any mon
On Wed, Feb 28, 2018 at 5:21 PM, Georgios Dimitrakakis
<gior...@acmac.uoc.gr> wrote:
All,
I have updated my test ceph cluster from Jewel (10.2.10) to Luminous
(12.2.4) using CentOS packages.
I have updated all packages, restarted all services in the proper
order, but I get a warning that the Manager Daemon doesn't exist.
Here is the output:
# ceph -s
cluster:
id:
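A sketch of manually creating the missing ceph-mgr daemon (hostname and paths
are examples; a recent ceph-deploy can also do it with "ceph-deploy mgr create
<node>"):
NAME=$(hostname -s)
mkdir -p /var/lib/ceph/mgr/ceph-$NAME
ceph auth get-or-create mgr.$NAME mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
    -o /var/lib/ceph/mgr/ceph-$NAME/keyring
chown -R ceph:ceph /var/lib/ceph/mgr/ceph-$NAME
systemctl enable --now ceph-mgr@$NAME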
on the host .. so the pool really blocks all
requests
On 02/24/2018 01:45 PM, Georgios Dimitrakakis wrote:
The pool will not actually go read-only. All read and write
requests
will block until both OSDs are back up. If I were you, I would use
min_size=2 and change it to 1 temporarily if needed to do
where down time is not an option.
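For reference, the temporary change described above is just the following
(the pool name is an example; remember to raise it back once the OSDs have
recovered):
ceph osd pool set rbd min_size 1
# ... maintenance / recovery ...
ceph osd pool set rbd min_size 2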
On Thu, Feb 22, 2018, 5:31 PM Georgios Dimitrakakis wrote:
All right! Thank you very much Jack!
The way I understand this is that it's not necessarily a bad
thing. I
mean as long as it doesn't harm any data or
cause any other issue.
Unfortunately my
like to know if it's better to go with min_size=2 or not.
Regards,
G.
If min_size == size, a single OSD failure will place your pool read
only
On 02/22/2018 11:06 PM, Georgios Dimitrakakis wrote:
Dear all,
I would like to know if there are additional risks when running CEPH
with "Min Size" equal to "Replicated Size" for a given pool.
What are the drawbacks and what could go wrong in such a scenario?
Best regards,
G.
if they lose data because you
have warned them to have 3 replicas. If they don't sign it, then tell
them you will no longer manage Ceph for them. Hopefully they wake up
and make everyone's job easier by purchasing a third server.
On Thu, Nov 16, 2017 at 9:26 AM Georgios Dimitrakakis wrote:
Thank you all
ect?
G.
On 16 November 2017 at 14:46, Caspar Smit
<caspars...@supernas.eu> wrote:
2017-11-16 14:43 GMT+01:00 Wido den Hollander <w...@42on.com>:
> > On 16 November 2017 at 14:40, Georgios Dimitrakakis <
> gior...@acmac.uoc.gr> wrote:
p does that copy all its data
from the only available copy to the rest of the unaffected disks, which will
consequently end in having again two copies on two different hosts?
Best,
G.
2017-11-16 14:05 GMT+01:00 Georgios Dimitrakakis :
Dear cephers,
I have an emergency on a rather small ceph cluster.
My cluster consists of 2 OSD nodes with 10 x 4TB disks each and 3
monitor nodes.
The version of ceph running is Firefly v.0.80.9
(b5a67f0e1d15385bc0d60a6da6e7fc810bde6047)
The cluster originally was built with "Replicated
are Engineer, Concurrent Computer Corporation
On Sat, Apr 1, 2017 at 4:47 AM Georgios Dimitrakakis wrote:
Hi,
just to provide some more feedback on this one and what I've done
to
solve it, although not sure if this is the most "elegant"
solution.
I have added manually to /et
that I've seen myself too. After migrating to a standard
filesystem layout (i.e. no LVM) the issue disappeared.
Regards,
Tom
-
FROM: ceph-users on behalf of Georgios Dimitrakakis
SENT: Thursday, March 23, 2017 10:21:34 PM
TO: ceph-us...@ceph.com
SUBJECT: [ceph-users] CentOS7 Mounting Problem
Hello Ceph community!
I would like some help with a new CEPH installation.
I have installed Jewel on CentOS7 and after the reboot my OSDs are not
mounted automatically and as a consequence ceph is not operating
normally...
What can I do?
Could you please help me solve the problem?
Regards,
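One possible first step, as a sketch (assuming the OSDs were prepared with
ceph-disk on GPT partitions): ask ceph-disk to activate everything it can find
and then check the mounts.
ceph-disk activate-all
df -h | grep /var/lib/ceph/osd
systemctl status ceph-osd@0    # example OSD id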
To close, I would like to thank all the people who contributed
their knowledge to my problem, although the final decision was not to try
any sort of recovery since the effort required would have been
tremendous with ambiguous results (to say the least).
Jason, Ilya, Brad, David, George,
On 13 Aug 2016, at 03:19, Bill Sharer wrote:
If all the system disk does is handle the o/s (i.e. osd journals are
on dedicated or osd drives as well), no problem. Just rebuild the
system and copy the ceph.conf back in when you re-install ceph.
Keep a spare copy of your
spect you should
be able to see how the rbd image was prefixed/named at the time of
the delete.
HTH,
Brad
If you go through your OSDs and look for the directories for PG
index 20, you might find some fragments from the deleted volume, but
it's a long shot...
On Aug 8, 2016, at 4:39 PM, Georg
ou may be able to find the rbd objects.
On Mon, Aug 8, 2016 at 7:28 PM, Georgios Dimitrakakis wrote:
Dear all,
I would like your help with an emergency issue but first let me
describe our environment.
Our environment consists of 2 OSD nodes with 10x 2TB HDDs each and 3 MON
nodes (2 of them are the OSD nodes as well), all with ceph version 0.80.9
(b5a67f0e1d15385bc0d60a6da6e7fc810bde6047)
All,
I was wondering if anyone has integrated his CEPH installation with
Zenoss monitoring software and is willing to share his knowledge.
Best regards,
George
Jan,
this is very handy to know! Thanks for sharing with us!
People, do you believe that it would be nice to have a place where we
can gather either good practices or problem resolutions or tips from the
community? We could have a voting system and those with the most votes
(or above a
Pavel,
unfortunately there isn't a way to rename a pool using its ID, as I
have learned myself the hard way since I faced the exact same issue a
few months ago.
It would be a good idea for developers to also include a way to
manipulate (rename, delete, etc.) pools using the ID, which is
cheers
jc
--
SWITCH
Jens-Christian Fischer, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 15 71
jens-christian.fisc...@switch.ch
http://www.switch.ch
http://www.switch.ch/stories
On 26.05.2015, at 19:12, Georgios Dimitrakakis
gior
Jens-Christian,
how did you test that? Did you just try to write to them
simultaneously? Any other tests that one can perform to verify that?
In our installation we have a VM with 30 RBD volumes mounted which are
all exported via NFS to other VMs.
No one has complained for the moment but
(EMBARGOED CVE-2015-3456 qemu-kvm: qemu: floppy disk controller
flaw [rhel-6.6.z])
HTH.
Cheers,
Brad
-
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Tue, May 19, 2015 at 3:47 PM, Georgios Dimitrakakis wrote:
Erik
On Tue, 19 May 2015 13:45:50 +0300, Georgios Dimitrakakis wrote:
Hi!
The QEMU Venom vulnerability (http://venom.crowdstrike.com/) got my
attention and I would
like to know what you people are doing in order to have the latest
patched QEMU version
working with Ceph RBD?
In my case I am using the qemu
:33 PM, Georgios Dimitrakakis wrote:
I am trying to build the packages manually and I was wondering:
is the flag --enable-rbd enough to have full Ceph functionality?
Does anybody know what other flags I should include in order to
have the same
functionality as the original CentOS package plus
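As a sketch of such a manual build (only --enable-rbd comes from the question
above; the other flags are just examples of what distro packages typically
enable -- compare with the .spec file of the original package):
./configure --prefix=/usr --enable-rbd --enable-kvm --enable-spice
make -j"$(nproc)"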
/7Server/en/RHEV/SRPMS/
On May 19, 2015 2:47 PM, Georgios Dimitrakakis wrote:
Erik,
are you talking about the ones here:
http://ftp.redhat.com/redhat/linux/enterprise/6Server/en/RHEV/SRPMS/
???
From what I see the version is rather old (0.12.1.2-2.448).
How can one verify that it has been
please excuse any typos.
On May 11, 2015 5:32 AM, Georgios Dimitrakakis wrote:
Hi Robert,
just to make sure I got it correctly:
Do you mean that the /etc/mtab entries are completely ignored and,
no matter what the order
of the /dev/sdX device is, Ceph will still correctly mount the
osd/ceph-X
://www.rsyslog.com;] start
May 11 11:54:56 srv-lab-ceph-node-01 rsyslogd: rsyslogd's groupid
changed to 103
May 11 11:54:57 srv-lab-ceph-node-01 rsyslogd: rsyslogd's userid
changed to 100
Sorry for the noise, guys. Georgios, in any case, thanks for helping.
2015-05-10 12:44 GMT+03:00 Georgios Dimitrakakis
gior...@acmac.uoc.gr:
Timofey,
maybe your best chance is to connect directly to the server and see
what is
going on.
Then you can try to debug why the problem occurred. If you don't want
to wait
until
disks between servers (if you take the journals with it). It's magic!
But I think I just gave away the secret.
Robert LeBlanc
Sent from a mobile device, please excuse any typos.
On May 7, 2015 5:16 AM, Georgios Dimitrakakis wrote:
Indeed it is not necessary to have any OSD entries
cluster and can test what happens
with clients if the crushmap is injected like that.
2015-05-10 8:23 GMT+03:00 Georgios Dimitrakakis
gior...@acmac.uoc.gr:
Hi Timofey,
assuming that you have more than one OSD host and that the replication
factor is equal to (or less than) the number of hosts, why don't you just
change the crushmap to host replication?
You just need to change the default CRUSHmap rule from
step chooseleaf firstn 0 type osd
to
step chooseleaf firstn 0 type host
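In practice that edit can be applied like this (a sketch; the file names are
just examples):
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt: change "type osd" to "type host" in the chooseleaf step
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new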
Indeed it is not necessary to have any OSD entries in the Ceph.conf
file
but what happens in the event of a disk failure resulting in changing
the mount device?
From what I can see, OSDs are mounted from entries in /etc/mtab
(I am on CentOS 6.6)
like this:
/dev/sdj1
susceptible to errors.
Best regards,
George
Can you try
ceph osd pool rename new-name
On Tue, May 5, 2015 at 12:43 PM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
Hi all!
Somehow I have a pool without a name...
$ ceph osd lspools
3 data,4 metadata,5 rbd,6 .rgw,7 .rgw.control,8 .rgw.gc,9 .log,10
.intent-log,11 .usage,12 .users,13 .users.email,14 .users.swift,15
.users.uid,16 .rgw.root,17 .rgw.buckets.index,18 .rgw.buckets,19
.rgw.buckets.extra,20
Hi!
Do you by any chance have your OSDs placed on a local directory path
rather than on an otherwise unused physical disk?
If I remember correctly from a similar setup that I had performed in
the past, the ceph df command accounts for the entire disk and not just
for the OSD data directory. I am
Hi all!
I had a CEPH Cluster with 10 OSDs, all of them in one node.
Since the cluster was built from the beginning with just one OSD node,
the crushmap had as a default
the replication to be on OSDs.
Here is the relevant part from my crushmap:
# rules
rule replicated_ruleset {
Indeed it is!
Thanks!
George
Thanks, that's quite helpful.
On 16 March 2015 at 08:29, Loic Dachary wrote:
Hi Ceph,
In an attempt to clarify which Ceph release is stable, LTS or
development, a new page was added to the documentation:
http://ceph.com/docs/master/releases/ It is a matrix
I believe that this was the action that solved my problems, not quite
confident though :-(
Thanks a lot to everyone who spent some time to deal with my problem!
All the best,
George
On Sat, 14 Mar 2015, Georgios Dimitrakakis wrote:
Sage,
correct me if I am wrong, but this is for when you
have some more VMs/servers/clients on the 192.*
network...?
On 14 March 2015 at 19:38, Georgios Dimitrakakis wrote:
Andrija,
I have two cards!
One on 15.12.* and one on 192.*
Obviously the 15.12.* is the external network (real public IP
address, e.g. used to access the node via SSH).
That's why I
Hi all!!
What is the meaning of public_network in ceph.conf?
Is it the network that the OSDs use to talk and transfer data?
I have two nodes with two IP addresses each. One for internal network
192.168.1.0/24
and one external 15.12.6.*
I see the following in my logs:
osd.0 is down since
network) and
thus
speed up.
If, e.g., the replica count on a pool is 3, that means each 1GB of data
written to some particular OSD will generate 3 x 1GB of more writes,
to the replicas... - which ideally will take place over separate
NICs
to speed up things...
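For illustration, the two networks are typically declared like this in
ceph.conf (the addresses are only examples based on the ranges mentioned in
this thread):
[global]
public_network = 15.12.6.0/24
cluster_network = 192.168.1.0/24
public_network carries the client/MON traffic and cluster_network the
OSD-to-OSD replication; if cluster_network is left unset, everything simply
goes over the public network.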
On 14 March 2015 at 17:43, Georgios
= yy
[mon.zz]
mon_addr = x.x.x.x:6789
host = zz
On 14 March 2015 at 19:14, Georgios Dimitrakakis wrote:
I thought that it was easy but apparently it's not!
I have the following in my conf file:
mon_host = 192.168.1.100,192.168.1.101,192.168.1.102
public_network =
cards in servers, then you may use the first 1G for
client traffic, and the second 1G for OSD-to-OSD replication...
best
On 14 March 2015 at 19:33, Georgios Dimitrakakis wrote:
Andrija,
Thanks for your help!
In my case I just have one 192.* network, so should I put that for
both?
Besides monitors do I
; run 'new' to
create a new cluster
Regards,
George
Hi,
I think ceph-deploy mon add (instead of create) is what you should be
using.
Cheers
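A sketch of that (the hostname is an example; run it from the directory that
already holds the cluster's ceph.conf and keyrings, i.e. where ceph-deploy was
originally run):
ceph-deploy mon add mon2
ceph quorum_status --format json-pretty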
On 13/03/2015 22:25, Georgios Dimitrakakis wrote:
On an already available cluster I've tried to add a new monitor!
I have used ceph-deploy mon
March 2015 at 23:03, Georgios Dimitrakakis wrote:
Not a firewall problem!! Firewall is disabled ...
Loic, I've tried mon create because of this:
http://ceph.com/docs/v0.80.5/start/quick-ceph-deploy/#adding-monitors
Should I first create and then add?? What is the proper order???
Should
On Sat, 14 Mar 2015, Georgios Dimitrakakis wrote:
This is the message that is flooding the ceph-mon.log now:
2015-03-14 08:16:39.286823 7f9f6920b700 1
mon.fu@0(electing).elector(1) init, last seen epoch 1
2015-03-14 08:16:42.736674
...and provision new MONs or
OSDs,
etc.
Message:
[ceph_deploy][ERROR ] RuntimeError: mon keyring not found; run new to
create a new cluster...
...means (if I'm not mistaken) that you are running ceph-deploy from
NOT the original folder...
On 13 March 2015 at 23:03, Georgios Dimitrakakis wrote:
Not a firewall
Yes Sage!
Priority is to fix things!
Right now I don't have a healthy monitor!
Can I remove all of them and add the first one from scratch?
What would that mean about the data??
Best,
George
On Sat, 14 Mar 2015, Georgios Dimitrakakis wrote:
This is the message that is flooding the ceph
?
Robert LeBlanc
Sent from a mobile device, please excuse any typos.
On Mar 12, 2015 7:39 PM, Georgios Dimitrakakis wrote:
Hi Robert!
Thanks for the feedback! I am aware of the fact that the number of
the monitors should be odd
but this is a very basic setup just to test CEPH functionality
Hi all!
I have updated from 0.80.8 to 0.80.9 and every time I try to restart
a CEPH monitor a strange monitor appears!
Here is the output:
#/etc/init.d/ceph restart mon
=== mon.master ===
=== mon.master ===
Stopping Ceph mon.master on master...kill 10766...done
=== mon.master ===
I forgot to say that the monitors form a quorum and the cluster's
health is OK
so there aren't any serious troubles other than the annoying message.
Best,
George
Hi Italo,
Check the S3 Bucket OPS at:
http://ceph.com/docs/master/radosgw/s3/bucketops/
or use any of the examples provided in Python
(http://ceph.com/docs/master/radosgw/s3/python/) or PHP
(http://ceph.com/docs/master/radosgw/s3/php/) or JAVA
Daniel,
on CentOS the logrotate script was not invoked correctly because it
was called everywhere as radosgw:
e.g.
service radosgw reload >/dev/null or
initctl reload radosgw cluster=$cluster id=$id 2>/dev/null || :
but there isn't any radosgw service!
I had to change it into ceph-radosgw
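For reference, the kind of edit that amounts to (the logrotate file path is
an assumption -- check what your ceph-radosgw package actually installs under
/etc/logrotate.d/):
sed -i 's/service radosgw reload/service ceph-radosgw reload/' /etc/logrotate.d/radosgw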
://ceph.com/docs/master/start/quick-start-preflight/#open-required-ports
John
On Sat, Feb 7, 2015 at 4:33 PM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
Hi all!
I am integrating my OpenStack Cluster with CEPH in order to be able
to
provide volumes for the instances!
I have managed to perform all
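For anyone hitting the same thing, the ports that John's link refers to can
be opened like this (a sketch assuming firewalld; with plain iptables the
equivalent rules apply):
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --reload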
Hi Christian,
On Fri, 30 Jan 2015 01:22:53 +0200 Georgios Dimitrakakis wrote:
Urged by a previous post by Mike Winfield where he suffered a leveldb
loss
I would like to know which files are critical for CEPH operation and
must
be backed-up regularly and how are you people doing it?
Any points much appreciated!
Regards,
G.
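As a minimal sketch of such a backup (the destination path is only an
example): capture the cluster maps, the auth database and the config files.
DEST=/backup/ceph/$(date +%F)
mkdir -p "$DEST"
ceph mon getmap -o "$DEST/monmap"
ceph osd getcrushmap -o "$DEST/crushmap"
ceph auth export > "$DEST/auth.keyring"
cp -a /etc/ceph "$DEST/"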
at 3:58 PM, Georgios Dimitrakakis wrote:
Hi Craig!
For the moment I have only one node with 10 OSDs.
I want to add a second one with 10 more OSDs.
Each OSD in every node is a 4TB SATA drive. No SSD disks!
The data are approximately 40GB and I will do my best to have zero
or at least very very
in would be the best time to switch back to host level
replication. The more data you have, the more painful that change
will become.
On Sun, Jan 18, 2015 at 10:09 AM, Georgios Dimitrakakis wrote:
Hi Jiri,
thanks for the feedback.
My main concern is if it's better to add each OSD one-by-one and
wait
-cluster
Jiri
On 15/01/2015 06:36, Georgios Dimitrakakis wrote:
Hi all!
I would like to expand our CEPH Cluster and add a second OSD node.
In this node I will have ten 4TB disks dedicated to CEPH.
What is the proper way of putting them in the already available
CEPH node?
I guess
Hi all!
I would like to expand our CEPH Cluster and add a second OSD node.
In this node I will have ten 4TB disks dedicated to CEPH.
What is the proper way of putting them in the already available CEPH
node?
I guess that the first thing to do is to prepare them with ceph-deploy
and mark
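As a sketch of that procedure (ceph-deploy era; the hostname and device names
are examples): add the new node's disks one at a time and let recovery finish
in between.
ceph-deploy install osdnode2
ceph-deploy osd prepare osdnode2:/dev/sdb
ceph-deploy osd activate osdnode2:/dev/sdb1
ceph -s    # wait for the cluster to settle before preparing the next disk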
of the external IP.
In my case, I only have Apache bound to the internal interface. My
load balancer has an external and internal IP, and I'm able to talk to
it on both interfaces.
On Mon, Dec 15, 2014 at 2:00 PM, Georgios Dimitrakakis wrote:
Hi all!
I have a single CEPH node which has two network interfaces.
One is configured to be accessed directly by the internet (153.*) and
the other one is configured on an internal LAN (192.*)
For the moment radosgw is listening on the external (internet)
interface.
Can I configure
Thanks,
Yehuda
On Fri, Dec 12, 2014 at 5:59 AM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
How silly of me!!!
I've just noticed that the file isn't writable by Apache!
I'll be back with the logs...
G.
I'd be more than happy to provide you all the info but for some
unknown
to build.
Yehuda
On Thu, Dec 11, 2014 at 12:03 PM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
Hi again!
I have installed and enabled the development branch repositories as
described here:
http://ceph.com/docs/master/install/get-packages/#add-ceph-development
and when I try to update
the
dash character that you were using cannot be used safely in that
context. Maybe tilde ('~') could work.
Yehuda
On Fri, Dec 12, 2014 at 2:41 AM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
Dear Yehuda,
I have installed the patched version as you can see:
$ radosgw --version
ceph
This is very silly of me...
The file wasn't writable by Apache.
I am writing it down for future reference.
G.
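For the record, the fix amounts to something like this (the user/group assume
radosgw running behind Apache as the "apache" user on CentOS; adjust to your
setup):
touch /var/log/ceph/radosgw.log
chown apache:apache /var/log/ceph/radosgw.log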
that I see are to fix the client library, and/or
to
modify the character to one that does not require escaping. Sadly
the
dash character that you were using cannot be used safely in that
context. Maybe tilde ('~') could work.
Yehuda
On Fri, Dec 12, 2014 at 2:41 AM, Georgios Dimitrakakis
to the repositories?
Regards,
George
On Mon, 08 Dec 2014 19:47:59 +0200, Georgios Dimitrakakis wrote:
at 8:38 AM, Yehuda Sadeh yeh...@redhat.com
wrote:
I don't think it has been fixed recently. I'm looking at it now, and
not sure why it hasn't triggered before in other areas.
Yehuda
On Thu, Dec 11, 2014 at 5:55 AM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
This issue seems very similar
, Dec 11, 2014 at 12:03 PM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
Hi again!
I have installed and enabled the development branch repositories as
described here:
http://ceph.com/docs/master/install/get-packages/#add-ceph-development
and when I try to update the ceph-radosgw package I get
I've just created issue #10271
Best,
George
On Fri, 5 Dec 2014 09:30:45 -0800, Yehuda Sadeh wrote:
It looks like a bug. Can you open an issue on tracker.ceph.com,
describing what you see?
Thanks,
Yehuda
On Fri, Dec 5, 2014 at 7:17 AM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote
Hi all!
I am using AWS SDK JS v.2.0.29 to perform a multipart upload into
Radosgw with ceph version 0.80.7
(6c0127fcb58008793d3c8b62d925bc91963672a3) and I am getting a 403 error.
I believe that the id which is sent in all requests and has been
urlencoded by the aws-sdk-js doesn't match
For example, if I try to perform the same multipart upload on an older
version, ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60),
I can see the upload ID in the apache log as:
PUT
/test/.dat?partNumber=25&uploadId=I3yihBFZmHx9CCqtcDjr8d-RhgfX8NW
HTTP/1.1 200 - -
It would be nice to see where and how uploadId
is being calculated...
Thanks,
George
Hi all!
I have a CEPH installation with radosgw and the radosgw.log in the
/var/log/ceph directory is empty.
In the ceph.conf I have
log file = /var/log/ceph/radosgw.log
debug ms = 1
debug rgw = 20
under the: [client.radosgw.gateway]
Any ideas?
Best,
George
Hi!
On CentOS 6.6 I have installed CEPH and ceph-radosgw
When I try to (re)start the ceph-radosgw service I am getting the
following:
# service ceph-radosgw restart
Stopping radosgw instance(s)...[ OK ]
Starting radosgw instance(s)...
/usr/bin/dirname: extra
I was thinking the same thing for the following implementation:
I would like to have an RBD volume mounted and accessible at the same
time by different VMs (using OCFS2).
Therefore I was also thinking that I had to put VMs on the internal
CEPH network by adding a second NIC and plugging that
be set to 1
-
so that the cluster would still work with at least one PG being up.
After I changed the min_size to 1, the cluster sorted itself out.
Try doing this for your pools.
Andrei
-
FROM: Georgios Dimitrakakis
TO: ceph-users@lists.ceph.com
SENT: Saturday, 29