with HTML vs plain text in Gmail (the ceph list doesn't allow HTML email).
On Sat, Feb 9, 2013 at 6:21 PM, John Axel Eriksson j...@insane.se wrote:
This sounds very much like what we've been experiencing. Actually,
come to think of it, about a month ago I enabled more logging
when one osd
Three times now, twice on one osd and once on another, we've had the osd
crash. Restarting it wouldn't help; it would crash with the same
error. The only way I found to get it up again was to reformat both
the journal disk and the disk ceph is using for storage, basically
recreating the osd.
Guess I'll have to keep looking for answers, maybe someone else on the
list knows more?
Thanks!
On Sat, Feb 9, 2013 at 5:41 PM, Gregory Farnum g...@inktank.com wrote:
On Saturday, February 9, 2013 at 6:23 AM, John Axel Eriksson wrote:
Three times now, twice on one osd, once on another we've had
Is there something in the manual about how to limit the rights of a user
I created? We want to create an account that has no rights to create
buckets, but I cannot find a fitting manual section for that.
Thank you very much
Regards
Philipp
From: John Axel Eriksson [mailto:j...@insane.se]
Sent:
I'm a Mac and Linux user so yeah, I'm a bit interested even though we
haven't cared much for the
file system part of ceph so far (we're mostly using the RGW).
On Tue, Oct 30, 2012 at 5:13 PM, Sage Weil s...@inktank.com wrote:
It is probably a relatively straightforward porting job (fixing up
I'm worried that data deleted in radosgw wasn't actually deleted from
disk/cluster.
Here's the output using df:
/dev/xvdf 1000G 779G 185G 81% /var/lib/ceph/osd/ceph-0
That disk is quite full. Now for ceph -s I get:
health HEALTH_OK
[lines removed]
pgmap v256604: 2304 pgs: 2304
at 10:07 AM, John Axel Eriksson j...@insane.se wrote:
I'm running 0.48.1. Wow, I had no idea that was the case. I guess
everything that's been deleted up until today can be removed
using radosgw-admin temp remove --date=2012-10-09... am I correct in
assuming this only removes garbage (e.g. deleted objects)?
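For the record, the invocation would look something like the sketch below; the date is just an example and I haven't verified the flags beyond what's in the docs for my version:

```
# purge objects deleted before the given date from radosgw's
# temp/garbage pool (run on a node with admin access to the cluster)
radosgw-admin temp remove --date=2012-10-09
```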
Hopefully the stability is better than it was before, and inline compression
is always great!
-Nick
John Axel Eriksson 09/17/12 5:26 PM
Our use of Ceph started pretty recently (this summer). We only use
rados together with the radosgw. We moved from another distributed
storage
but bad decisions,
misrepresentation of features and somewhat sparse documentation. By the
way, Ceph has improved its docs a lot but could still use some
work.
-John
On Tue, Sep 18, 2012 at 9:47 AM, Plaetinck, Dieter die...@vimeo.com wrote:
On Tue, 18 Sep 2012 01:26:03 +0200, John Axel Eriksson j...@insane.se wrote:
the Riak CS. It all looks
great on paper; however, with our experience of
Riak Luwak, which also looked great on paper, we
wouldn't even dare to consider it.
I can't wait until the day we get rid of it entirely.
Best,
Xiaopong
On 09/18/2012 10:34 PM, John Axel Eriksson wrote:
Our use of Ceph started pretty recently (this summer). We only use
rados together with the radosgw. We moved from another distributed
storage solution that had failed us more than once and we lost data.
Since the old system had an http interface (not S3 compatible though)
we looked around for
I found somewhere that it's supposed to be
/var/lib/ceph/radosgw/ceph-$id. OK, in my case I guess that would mean
/var/lib/ceph/radosgw/ceph-client.radosgw.gateway. Would that be
correct? I need to store the keyring in that directory, for
example, and I want to use the defaults.
John
So I first upgraded the mon, then went ahead and upgraded one of the
osds, which crashed and keeps crashing, probably while trying to
upgrade the filestore.
How should I proceed? The FS is btrfs; one mon, two osds. This is the conf
for the osds:
[osd]
osd data = /srv/osd.$id
osd journal
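For comparison, a fleshed-out version of that [osd] section might look like the sketch below; the journal path and size here are assumptions for illustration, not my actual values:

```
[osd]
    osd data = /srv/osd.$id
    ; assumed journal location and size; point this at your journal disk
    osd journal = /srv/osd.$id.journal
    osd journal size = 1000   ; MB
```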
It seems this was caused by problems with the underlying filesystem. I
was able to solve it by rebooting; not sure what the
problem was, but there were errors in dmesg about it (using btrfs here).
On Tue, Jul 3, 2012 at 10:00 AM, John Axel Eriksson j...@insane.se wrote:
So I first upgraded the mon
I guess this has been asked before, I'm just new to the list and
wondered whether it's possible to do
rolling upgrades of mons, osds and radosgw? We will soon be in the
process of migrating from our current
storage solution to Ceph/RGW. We will only use the object storage,
actually mainly the
Currently we're running a test cluster with 1 mon, 1 radosgw and 2
osds. RGW runs on the same host as the mon, while
the osds reside on two different servers. We have thought of maybe
running more than 1 osd on each storage server, where
the osds use different disks of course. Is this something
I asked a similar question in a previous email but I didn't get any
satisfying answers. What exactly does cephx auth secure?
From the wiki I just get "this makes your cluster more secure". Well,
from what? If I run on an internal network accessible only
by a few trusted people, what does cephx auth
On 06/12/2012 11:31 AM, John Axel Eriksson wrote:
I asked a similar question in a previous email but I didn't get any
satisfying answers. What exactly does cephx auth secure?
I wanted to get back on your e-mail from yesterday, but you beat me to it!
:)
From the wiki I just get
- will that data be copied to another OSD if one
fails, do I have to manually do anything, can I read the data as long
as there is one copy somewhere?
john
On Tue, Jun 12, 2012 at 6:56 PM, Sage Weil s...@inktank.com wrote:
A few things that Wido missed:
On Tue, 12 Jun 2012, John Axel Eriksson wrote:
Oh sorry, I don't think I was clear on the auth question. What I meant
was whether the admin.keyring and keys for the osds are really necessary
in a private ceph cluster.
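In case it's useful: my understanding (please correct me if wrong) is that on a fully trusted private network you can turn cephx off entirely in ceph.conf, along these lines with the old-style `auth supported` option:

```
[global]
    ; disable authentication; only sensible when every host
    ; and user on the network is trusted
    auth supported = none
```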
On Mon, Jun 11, 2012 at 2:40 PM, Wido den Hollander w...@widodh.nl wrote:
Hi,
On 06/11/2012 02:32 PM, John Axel Eriksson wrote:
interface available right now (apart from
writing one ourselves, but we'd much rather use something that is
tested and works; we're a small company).
On Mon, Jun 11, 2012 at 2:51 PM, Wido den Hollander w...@widodh.nl wrote:
On 06/11/2012 02:41 PM, John Axel Eriksson wrote:
Oh sorry. I don't think I
Hi I'm new to the list.
We've been looking at Ceph as a possible replacement for our current
distributed storage system. In particular
we're interested in the object storage, so I started researching the
radosgw. It did take me some time to get set up,
and the docs/wiki are missing lots of
:14 AM, Yehuda Sadeh yeh...@inktank.com wrote:
On Thu, Jun 7, 2012 at 12:03 AM, John Axel Eriksson j...@insane.se wrote:
Hi I'm new to the list.
We've been looking at Ceph as a possible replacement for our current
distributed storage system. In particular
we're interested in the object storage
Thanks Tommi!
Do you recommend btrfs or perhaps xfs for osds etc?
On Thu, Jun 7, 2012 at 6:33 PM, Tommi Virtanen t...@inktank.com wrote:
On Thu, Jun 7, 2012 at 3:16 AM, John Axel Eriksson j...@insane.se wrote:
In general I really like ceph but it does seem like it has a few rough
edges