On Fri, Nov 15, 2013 at 12:08 AM, Gautam Saxena wrote:
>
> 1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/
> )
>
We are now running this - basically an intermediate/gateway node that
maps Ceph RBD images, mounts them, and exports them over NFS.
http://waipeng.wordpress.com/2013/11/
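A minimal sketch of such a gateway, assuming an image called rbd/nfsvol and
an export path of /export/nfsvol (both placeholders, not from the post above):

  rbd map rbd/nfsvol                    # shows up as e.g. /dev/rbd0
  mkfs.xfs /dev/rbd0                    # first use only
  mkdir -p /export/nfsvol
  mount /dev/rbd0 /export/nfsvol
  echo "/export/nfsvol *(rw,sync,no_root_squash)" >> /etc/exports
  exportfs -ra                          # with the NFS server already running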
On 11/14/2013 05:08 AM, Dan Van Der Ster wrote:
> Hi,
> We’re trying the same, on SLC. We tried rbdmap but it seems to have some
> ubuntu-isms which cause errors.
> We also tried with rc.local, and you can map and mount easily, but at
> shutdown we’re seeing the still-mapped images blocking a machine from shutting down
On 11/14/2013 12:29 PM, Alfredo Deza wrote:
> On Thu, Nov 14, 2013 at 12:23 PM, Peter Matulis
> wrote:
>> On 11/13/2013 08:29 AM, Alfredo Deza wrote:
>>> * Automatic SSH key copying/generation for hosts that do not have keys
>>> setup when using `ceph-deploy new`
>>
>> How does it know what public key
Hi Ceph,
I submitted a request for a Ceph Stand / Booth / Tables @ FOSDEM 2014 and
received an ack today.
https://fosdem.org/2014/news/2013-09-17-call-for-participation-part-two/
If anyone knows people close to the organization who are willing to advocate
for it, that would be great :-) Th
On 15/11/2013 8:57 AM, Dane Elwell wrote:
[2] - I realise the dangers/stupidity of a replica size of 0, but some of the
data we wish
to store just isn’t /that/ important.
We've been thinking of this too. The application is storing boot-images, ISOs, local
repository mirrors etc where recovery
Hello,
We’ll be going into production with our Ceph cluster shortly and I’m looking
for some advice on the number of PGs per pool we should be using.
We have 216 OSDs totalling 588TB of storage. We intend to have several
pools, with not all pools sharing the same replica count - so, som
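The usual rule of thumb from the docs is (number of OSDs x 100) / replica
count, rounded up to the next power of two. For this cluster that works out
roughly as follows (pool name is a placeholder):

  # 216 OSDs x 100 / 3 replicas = 7200 -> round up to 8192 PGs
  ceph osd pool create vols 8192 8192
  ceph osd pool set vols size 3   # pools with other replica counts get their own math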
Forwarding to the list.
-- Forwarded message --
From: M. Piscaer
Date: Thu, Nov 14, 2013 at 7:22 PM
Subject: Re: [ceph-users] locking on mds
To: John Spray
Yes, after upgrading to ceph 0.72 on all nodes, the problem got fixed.
Thanks Yanzheng, Greg and John for the help.
Now
I've been using CephFS for a meager 40TB store of video clips for editing,
from Dumpling to Emperor, and (fingers crossed) so far I haven't had any
problems. The only disruption I've seen is that the metadata server will
crash every couple of days, and one of the standby MDSes will pick up. The
repla
On 2013-11-14 19:59, Dimitri Maziuk wrote:
CephFS is in fact one of Ceph's big selling points.
IMO the issue is more that since it's not supported, the enterprise
sector won't touch it.
On 11/14/2013 01:49 PM, James Pearce wrote:
> On 2013-11-14 16:08, Gautam Saxena wrote:
>> 3) create a large CentOS 6.4 VM (e.g. 15 TB, 1 TB for OS using EXT4,
>> remaining 14 TB using either EXT4 or XFS) on RBD and then install
>> NFS and SAMBA on it.
> I'm looking at the same issue and (FWIW) h
On 2013-11-14 19:42, Alfredo Deza wrote:
- osd activate reports failure, but actually succeeds. On the target, the
last entry in syslog is an error on fd0, so maybe some VMware rescan issue
I would need some logs here. How does it report a failure?
Thanks, I'll post the logs in the morning
On 2013-11-14 16:08, Gautam Saxena wrote:
I've recently accepted the fact CephFS is not stable...SAMBA no
longer working...
Alternatives
1) nfs over rbd...
2) nfs-ganesha for ceph...
3) create a large CentOS 6.4 VM (e.g. 15 TB, 1 TB for OS using EXT4,
remaining 14 TB using either EXT4 or XFS)
On Thu, Nov 14, 2013 at 2:35 PM, James Pearce wrote:
> On 2013-11-14 17:14, Alfredo Deza wrote:
>>
>> I think that is just some leftover mishandled connection from the
>> library that ceph-deploy uses to connect remotely and can be ignored.
>>
>> Could you share the complete log that got you to this point?
On 2013-11-14 17:14, Alfredo Deza wrote:
I think that is just some leftover mishandled connection from the
library that ceph-deploy uses to connect remotely and can be ignored.
Could you share the complete log that got you to this point? I
believe
this bit is coming at the very end, right?
On Thu, Nov 14, 2013 at 6:00 AM, Haomai Wang wrote:
>> We are using the nova fork by Josh Durgin
>> https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd - are there
>> more patches that need to be integrated?
> I hope I can release or push commits to this branch that contain live-migration,
>
Argh, premature email sending!
On Thu, Nov 14, 2013 at 12:28 PM, Berant Lemmenes wrote:
>
> On Thu, Nov 14, 2013 at 12:11 PM, Alfredo Deza
> wrote:
>
>> This is just one of the items that caught my attention:
>>
>> 2013-10-29 15:56:03,982 [ceph11][ERROR ] 2013-10-29 15:56:06.303009
>> 7feaafe8e7
On Thu, Nov 14, 2013 at 12:23 PM, Peter Matulis
wrote:
> On 11/13/2013 08:29 AM, Alfredo Deza wrote:
>> Hi All,
>>
>> I'm happy to announce a new release of ceph-deploy, the easy
>> deployment tool for Ceph.
>>
>> The only two (very important) changes made for this release are:
>>
>> * Automatic SSH key copying/generation for hosts that do not have keys
>> setup when using `ceph-deploy new`
On Thu, Nov 14, 2013 at 12:11 PM, Alfredo Deza wrote:
>
>
> From your logs, it looks like you have a lot of errors from attempting
> to get monitors running and I can't see how/where ceph-deploy can
> be causing them. Are those errors known issues for you? Were they
> fixed or are those part of th
On 11/13/2013 08:29 AM, Alfredo Deza wrote:
> Hi All,
>
> I'm happy to announce a new release of ceph-deploy, the easy
> deployment tool for Ceph.
>
> The only two (very important) changes made for this release are:
>
> * Automatic SSH key copying/generation for hosts that do not have keys
> setup when using `ceph-deploy new`
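For reference, the announced behaviour means key setup now happens during the
initial invocation itself, e.g. (hostnames are placeholders):

  ceph-deploy new mon0-0 mon0-1 mon0-2  # generates/copies SSH keys for hosts lacking them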
Hi,
That is awesome. We just started hacking ceph-deploy for Gentoo, and that will
definitely help us all.
We will test the code and look forward to giving feedback.
Thanks to the Gentoo folk @ceph ;-)
Best regards
Philipp v. Strobl-Albeg
On 11.11.2013 at 21:19, Mikaël Cluseau wrote:
> On 11/
On Thu, Nov 14, 2013 at 11:56 AM, James Pearce wrote:
> Sorry, should have included some output as well...,
>
> $ ceph-deploy mon create mon0-0
> ...
> Exception in thread Thread-5 (most likely raised during interpreter
> shutdown):
> Traceback (most recent call last):
> File "/usr/lib/python2.7/
On Wed, Nov 13, 2013 at 8:49 AM, Berant Lemmenes wrote:
>
> On Wed, Nov 13, 2013 at 8:05 AM, Alfredo Deza
> wrote:
>>
>>
>> Oh, and I just noticed you did say Ubuntu, so sysvinit should have
>> *not* been enabled at all, this should've been
>> Upstart all along.
>>
>> Again, logs would be super useful
This is just a complicated version of the following test:
with one file
for n in N:
    pick one of two clients
    on that client:
        open the file, increment a value, close the file
        print the value after it was incremented
    check that the incremented value is equal to n+1
It seems like whe
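A rough shell rendering of that test, assuming a shared CephFS mount at
/mnt/cephfs and two clients reachable as client-a and client-b (the original
script was PHP):

  echo 0 > /mnt/cephfs/counter
  for n in $(seq 1 100); do
      # pick one of two clients at random
      host=$( [ $((RANDOM % 2)) -eq 0 ] && echo client-a || echo client-b )
      # open the file, increment the value, write it back, print the new value
      got=$(ssh "$host" 'v=$(cat /mnt/cephfs/counter); echo $((v+1)) | tee /mnt/cephfs/counter')
      [ "$got" -eq "$n" ] || echo "mismatch at iteration $n: got $got"
  done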
Sorry, should have included some output as well...,
$ ceph-deploy mon create mon0-0
...
Exception in thread Thread-5 (most likely raised during interpreter
shutdown):
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in
__bootstrap_inner
File
"/usr/lib/py
On Thu, Nov 14, 2013 at 11:39 AM, James Pearce wrote:
> On 2013-11-13 13:29, Alfredo Deza wrote:
>>
>>
>> I'm happy to announce a new release of ceph-deploy, the easy
>> deployment tool for Ceph.
>>
>
> Hi, is 1.3.2 known to work with the 0.72 release? With Ubuntu 13.04 clean
> builds it seems to be failing even to generate keys
Thank you all for the info. Any chance this may make it into mainline?
Thanks,
Dinu
On Nov 14, 2013, at 4:27 PM, Jens-Christian Fischer
wrote:
>> On Thu, Nov 14, 2013 at 9:12 PM, Jens-Christian Fischer
>> wrote:
>> We have migration working partially - it works through Horizon (to a random
On 2013-11-13 13:29, Alfredo Deza wrote:
I'm happy to announce a new release of ceph-deploy, the easy
deployment tool for Ceph.
Hi, is 1.3.2 known to work with the 0.72 release? With Ubuntu 13.04 clean
builds it seems to be failing even to generate keys. I've also seen the
breaks mentioned in
This just happened to me yesterday:
90 32GB VM root images stored in one 3TB RBD/XFS volume.
Mounting this volume gave an error, "structure needs cleaning", and it wouldn't mount.
Ran xfs_check.
Ran xfs_repair.
Mounted the rbd volume, no error.
Booted all 90 VM root images.
Have not used EXT4 on OSDs or
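The recovery sequence there boils down to something like this (device and
mount point assumed):

  umount /mnt/rbdvol           # if still (partially) mounted
  xfs_check /dev/rbd0          # read-only inspection first
  xfs_repair /dev/rbd0
  mount /dev/rbd0 /mnt/rbdvol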
[ Adding list back. ]
I don't know php; I'm not sure what it means for "count" to drop to
zero in your script. What erroneous behavior do you believe the
filesystem is displaying?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Thu, Nov 14, 2013 at 12:04 AM, M. Piscaer wrote:
I've recently accepted the fact CephFS is not stable enough for production
based on 1) recent discussion this week with Inktank engineers, 2)
discovery that the documentation now explicitly states this all over the
place (http://eu.ceph.com/docs/wip-3060/cephfs/) and 3) a reading of the
recent bug
Search the docs or mailing list archives for a discussion of the CRUSH
tunables; you can see that's the problem here because CRUSH is only
mapping the PG to one OSD instead of two. :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Thu, Nov 14, 2013 at 4:40 AM, Ugis wrote:
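For reference, one way to inspect and adjust the tunables Greg mentions; note
that changing them can trigger significant data movement, so test first:

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt   # tunable settings appear at the top
  ceph osd crush tunables optimal             # or bobtail/legacy, per the docs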
Hi,
I have a webcluster setup where the persistence timeout on the load
balancers is 0. To share the sessions I use ceph version 0.56.7, as you
can see in the diagram.
[truncated ASCII diagram: Internet feeding down into the load balancers]
> On Thu, Nov 14, 2013 at 9:12 PM, Jens-Christian Fischer
> wrote:
> We have migration working partially - it works through Horizon (to a random
> host) and sometimes through the CLI.
>
> random host? Do you mean cold-migration? Live-migration needs a specified
> destination host.
I have ju
On Thu, Nov 14, 2013 at 9:12 PM, Jens-Christian Fischer <
jens-christian.fisc...@switch.ch> wrote:
> We have migration working partially - it works through Horizon (to a
> random host) and sometimes through the CLI.
>
random host? Do you mean cold-migration? Live-migration needs a specified
destination host.
We have migration working partially - it works through Horizon (to a random
host) and sometimes through the CLI.
We are using the nova fork by Josh Durgin
https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd - are there more
patches that need to be integrated?
cheers
jc
--
SWITCH
Je
Hi,
Got 1 PG which won't recover and I don't know how to fix it.
ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
# ceph health detail
HEALTH_WARN 1 pgs stuck unclean; recovery 23/4687006 degraded
(0.000%); 6 near full osd(s)
pg 4.2ab is stuck unclean for 172566.764239, current state
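A few diagnostics that would usually come next here (a sketch; the near-full
OSDs may themselves be what is blocking recovery):

  ceph pg 4.2ab query                 # shows the up/acting sets and recovery state
  ceph pg dump_stuck unclean
  ceph osd reweight-by-utilization    # relieve the near full osd(s)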
On Wed, Nov 13, 2013 at 8:30 PM, Xuan Bai wrote:
> Hi All,
>
> I am testing installing a ceph cluster with ceph-deploy 1.3.2, and I get a python
> error when executing "ceph-deploy disk list".
> Here is my output:
>
> [root@ceph-02 my-cluster]# ceph-deploy disk list ceph-02
> [ceph_deploy.cli][INFO ] Invoke
Yes, we still need a patch to make live-migration work.
On Thu, Nov 14, 2013 at 8:12 PM, Matt Thompson wrote:
> Hi Dinu,
>
> Initial tests for me using Ubuntu's 2013.2-0ubuntu1~cloud0 indicate that
> live migrations do not work when using libvirt_images_type=rbd ("Live
> migration can not be used without shared storage.")
Hi Dinu,
Initial tests for me using Ubuntu's 2013.2-0ubuntu1~cloud0 indicate that
live migrations do not work when using libvirt_images_type=rbd ("Live
migration can not be used without shared storage.") I'll need to dig
through LP to see if this has since been addressed.
On a side note, live mi
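For context, the relevant nova.conf settings with the patched branch look
roughly like this; only libvirt_images_type appears above, the other flag
names are from memory of the Havana-era patches, so verify against the fork:

  libvirt_images_type=rbd
  libvirt_images_rbd_pool=vms
  libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf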
On 2013-11-14 03:20, Youd, Douglas wrote:
> Hey All,
>
> Having a crack at installing Ceph with Chef. Running into an issue after
> following the guides.
>
> Get the following error when I run the initial sudo chef-client on the
> nodes.
>
> [2013-11-14T10:02:55+08:00] ERROR
On 11/13/2013 10:13 PM, Shain Miley wrote:
Hello,
I am creating a 250 TB rbd image that may grow to 500 or 600 TB over the next
year or so. I initially formatted the image using ext4 as shown in the rbd
quick start guide, however I have found several examples across the internet
that show rb
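Whichever filesystem wins out, creating and mapping an image that size looks
roughly like this (image name is a placeholder; --size is in MB with this
era's rbd tool):

  rbd create backup --size $((250 * 1024 * 1024))   # 250 TB expressed in MB
  rbd map backup                                    # appears as e.g. /dev/rbd0
  mkfs.xfs /dev/rbd0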
On 14.11.2013 11:24, Gregory Farnum wrote:
Yes, you've run across an issue. It's identified and both the
preventive fix and the resolver tool are in testing.
See:
The thread "Emperor upgrade bug 6761"
http://tracker.ceph.com/issues/6761#note-19
(Forgive my brevity; it's late here. :)
-Greg
T
Brevity forgiven, thanks for the pointer to the issue :-)
On 14 November 2013 10:24, Gregory Farnum wrote:
> Yes, you've run across an issue. It's identified and both the
> preventive fix and the resolver tool are in testing.
> See:
> The thread "Emperor upgrade bug 6761"
> http://tracker.ceph.
Yes, you've run across an issue. It's identified and both the
preventive fix and the resolver tool are in testing.
See:
The thread "Emperor upgrade bug 6761"
http://tracker.ceph.com/issues/6761#note-19
(Forgive my brevity; it's late here. :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
Hi Sam,
On 11/12/2013 09:46 PM, Samuel Just wrote:
I think we removed the experimental warning in cuttlefish. It
probably wouldn't hurt to do it in bobtail particularly if you test it
extensively on a test cluster first. However, we didn't do extensive
testing on it until cuttlefish. I would
Hi,
Since the upgrade to 0.72 I've been experiencing an issue with a number of
volumes. It seems to occur on both the librbd clients as well as the krbd
clients. From the kernel client, when trying to access one of these
seemingly corrupted images, it causes a panic and you end up with an
assertion
Hi,
We’re trying the same, on SLC. We tried rbdmap but it seems to have some
ubuntu-isms which cause errors.
We also tried with rc.local, and you can map and mount easily, but at shutdown
we’re seeing the still-mapped images blocking a machine from shutting down
(libceph connection refused error
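A crude workaround for the shutdown hang is to unmap while the network is
still up, e.g. from an init script ordered before networking stops (paths and
device assumed):

  umount /mnt/rbdvol
  rbd unmap /dev/rbd0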