Hi Hong,
That's interesting; for Mr. Bazli and me, we ended up with the MDS stuck in (up:replay)
and a flapping ceph-mds daemon, but then again we are using version 0.72.2.
That said, the trigger point seems similar in our case as well, namely the
following line:
-38 2014-03-20 20:08:44.495565
Luke,
I'm not sure what a flapping ceph-mds daemon means, but when I connected to the
MDS after this happened, there was no longer any ceph-mds process when I ran one
daemon. When I ran three, one was left but wasn't doing much. I didn't
record the logs, but the behavior was very similar in 0.72.
Hello,
I plan to set up a Ceph cluster for a small hosting company. The aim is
to have customers' data (website and mail folders) in a distributed cluster,
and then to set up different servers (web, SMTP, POP and IMAP) accessing
the cluster data.
The goals are:
* Store all data replicated
Hi Hong,
How's the client now? Is it able to mount the filesystem now? It looks
similar to our case: http://www.spinics.net/lists/ceph-devel/msg18395.html
However, you will need to collect some logs to confirm this.
Thanks.
From: hjcho616 [mailto:hjcho...@yahoo.com]
Sent: Friday, March 21,
If I want to dedicate a whole disk to an OSD, can I just use something like
/dev/sdb instead of /dev/sdb1? Is there any negative impact on performance?
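For reference, a sketch of how a whole disk is typically handed to an OSD with ceph-deploy from that era (the host and device names below are placeholders, not from this thread). Given an unpartitioned device, ceph-deploy partitions it itself, creating data and journal partitions, so there should be no performance difference versus partitioning by hand:

```
# Hypothetical host/device names. Wipe the disk first:
ceph-deploy disk zap osd-node1:sdb
# Whole-disk prepare: ceph-deploy creates the data and journal partitions:
ceph-deploy osd prepare osd-node1:sdb
```

The main gain of passing the whole device is simply that the partitioning is done for you.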
Hi All,
I have a Ceph cluster with four nodes, including the admin node. I have
integrated it with OpenStack.
Now, if I create volumes or boot a VM from a volume, the block storage is
created in Ceph.
One of the OpenStack nodes acts as a Ceph client. Now I want to remove Ceph
from OpenStack.
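For context, undoing the Cinder side of such an integration usually means reverting the RBD settings in the OpenStack configs; a hedged sketch of the cinder.conf lines typically involved (the pool, user, and UUID values below are common defaults, not taken from this message, so adjust to the actual deployment):

```
# /etc/cinder/cinder.conf -- RBD backend settings to remove or revert
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = <uuid>
```

Glance and Nova have analogous RBD settings, and the client keyrings under /etc/ceph on the OpenStack node can be removed once nothing references them.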
Hi all,
I uploaded a file through the Swift API, then manually deleted it from the “current”
directory on the secondary OSD; why can't the object be recovered?
If I delete it on the primary OSD, the object is deleted directly from the pool
.rgw.bucket, and it can't be recovered from the secondary OSD.
Do
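For context, a replica removed behind Ceph's back is only noticed when the placement group is compared against its peers; a hedged sketch of forcing that check with the standard admin commands (the object name and PG id below are placeholders):

```
# Find which PG holds the object (placeholder names):
ceph osd map .rgw.bucket <object-name>
# Deep scrub compares the replicas and flags the missing copy:
ceph pg deep-scrub <pgid>
# Repair copies the authoritative replica back:
ceph pg repair <pgid>
```

Until a scrub (or a read of that object) runs, Ceph has no reason to believe the replica is gone, which is likely why nothing is recovered automatically.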
Well, it looks like Ted is talking about his own project, which uses XtreemFS.
On Fri, Mar 21, 2014 at 6:17 PM, Loic Dachary l...@dachary.org wrote:
Hi Ted,
Thanks for reaching out: it is nice to see more companies developing
solutions around Ceph. Could you tell me what version you are using?
Cheers
As many of you may have noticed, Inktank has firmed up plans for a
Ceph Day event in Boston on Jun 10th:
http://www.inktank.com/cephdays/
As always, I want to keep these events heavy on community
participation (rather than Inktank doing all the talking). If you are
interested in sharing your
On 03/20/2014 07:03 PM, Dmitry Borodaenko wrote:
On Thu, Mar 20, 2014 at 3:43 PM, Josh Durgin josh.dur...@inktank.com wrote:
On 03/20/2014 02:07 PM, Dmitry Borodaenko wrote:
The patch series that implemented the clone operation for RBD-backed
ephemeral volumes in Nova did not make it into
On Fri, Mar 21, 2014 at 6:25 AM, Gudu, Diana (SCC) diana.g...@kit.edu wrote:
Hi everyone,
I was wondering if the Copy Part feature of the S3 API is supported or
whether there is a plan to implement this in the near future.
There wasn't any plan; we haven't seen any demand, as you're the first one
Hi Ted,
Sorry if I misunderstood your initial message: I did not realize it was
marketing for the competition. My role in Cloudwatt is exclusively focused on
Ceph.
Cheers
On 21/03/2014 12:22, Ted wrote:
Hi Loic,
Actually, we have developed a product competitive with Ceph that goes well
On 03/21/2014 04:20 PM, Loic Dachary wrote:
Hi Ted,
Sorry if I misunderstood your initial message: I did not realize it
was marketing for the competition.
Dear Loic,
I wanted to reach out to you with this exciting money transfer
opportunity that I believe your bank account could really
This development release includes two key features: erasure coding and
cache tiering. A huge amount of code was merged for this release, and
several additional weeks were spent stabilizing the code base; it is
now ready to be tested by a broader user base.
This is *not*