that.
Regards,
Jimmy Tang
--
Senior Software Engineer, Digital Repository of Ireland (DRI)
High Performance Research Computing, IS Services
Lloyd Building, Trinity College Dublin, Dublin 2, Ireland.
http://www.tchpc.tcd.ie/ | jt...@tchpc.tcd.ie
Tel: +353-1-896-3847
--
To unsubscribe from this list: send
of RBDs
in certain cases.
Regards,
Jimmy Tang
On 9 Dec 2012, at 18:22, Noah Watkins wrote:
On Sun, Dec 9, 2012 at 10:05 AM, Gregory Farnum g...@inktank.com wrote:
Oooh, very nice! Do you have a list of the dependencies that you actually
needed to install?
I can put that together. They were boost, gperf, fuse4x, cryptopp. I
think
On 24 Nov 2012, at 16:42, Gregory Farnum wrote:
On Thursday, November 22, 2012 at 4:33 AM, Jimmy Tang wrote:
Hi All,
Is it possible at this point in time to set up some form of tiering of
storage pools in Ceph by modifying the CRUSH map? For example, I want to have
my most recently used
using an LRU policy?
Regards,
Jimmy Tang
On 14 Nov 2012, at 16:14, Sage Weil wrote:
Appending the codename to the version string is something we did with
argonaut (0.48argonaut) just to make it obvious to users which stable
version they are on.
How do people feel about that? Is it worthwhile? Useless? Ugly?
We can
On 18 Oct 2012, at 10:47, Tommi Virtanen wrote:
That's the async replication for disaster recovery feature that has
been mentioned every now and then.
You could build it as read from one cluster, write to another
yourself; the client libraries are perfectly able to talk to two
clusters
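That do-it-yourself approach can be sketched with the stock rados CLI; the pool and object names below are made up, and the per-cluster config paths are assumptions:

```shell
# Read an object out of cluster A and write it into cluster B.
# "-" tells rados to use stdout/stdin for the object data.
rados -c /etc/ceph/clusterA.conf -p mypool get myobject - \
  | rados -c /etc/ceph/clusterB.conf -p mypool put myobject -
```

Looping that over the output of "rados -p mypool ls" would give a crude one-way copy of a whole pool.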
both the client and server side being ported, I started
looking at the dependencies needed, but I can't dedicate much time to it myself.
Just from a workflow point of view for some use cases having a native OSX
client or a fuse client on OSX would be nice.
Regards,
Jimmy Tang
On 18 Oct 2012, at 17:47, Tommi Virtanen wrote:
On Thu, Oct 18, 2012 at 7:40 AM, Jimmy Tang jt...@tchpc.tcd.ie wrote:
What I actually meant to ask was, is it possible to copy objects or pools
from one ceph cluster to another (for disaster recovery reasons) and if this
feature is planned
Hi All,
Given that there is the capability of running two clusters on the same hosts
(monitors) are there plans to add cluster to cluster features?
e.g. would it be possible to mount two separate ceph clusters from one host?
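As a sketch, mounting two clusters from one host should mostly be a matter of pointing each mount at its own config file; the paths and mount points here are just assumed examples:

```shell
# Each ceph-fuse instance reads its own monitor list from its own conf.
ceph-fuse -c /etc/ceph/clusterA.conf /mnt/cephA
ceph-fuse -c /etc/ceph/clusterB.conf /mnt/cephB
```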
Jimmy.
On 17 Oct 2012, at 20:35, Tommi Virtanen wrote:
You can
What I actually meant to ask was, is it possible to copy objects or pools from
one ceph cluster to another (for disaster recovery reasons) and if this feature
is planned or even considered?
Jimmy.
structure their repos with a
stable and a testing repo.
Regards,
Jimmy Tang
Hi Sage,
On 28 Sep 2012, at 16:16, Sage Weil wrote:
Ah, good point. You can do that today with the debian repos, but we
lumped all the rpms into ceph.com/rpms. Currently
debian          -> symlink to debian-argonaut (latest stable)
debian-argonaut -> stable
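So a machine can pin a particular stable series with a sources.list entry along these lines (the distro codename here is an assumption):

```
deb http://ceph.com/debian-argonaut/ squeeze main
```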
Hi Tommi.
On 6 Sep 2012, at 21:31, Tommi Virtanen wrote:
On Thu, Sep 6, 2012 at 11:51 AM, Jimmy Tang jt...@tchpc.tcd.ie wrote:
Also, the ceph osd setcrushmap... command doesn't show up when ceph
--help is run in the 0.51 release; however, it is documented on the
wiki as far as I recall. It'd
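For reference, the usual round trip for inspecting and editing the CRUSH map looks like this (the file names are arbitrary):

```shell
ceph osd getcrushmap -o map.bin    # fetch the compiled map
crushtool -d map.bin -o map.txt    # decompile to editable text
$EDITOR map.txt                    # make changes
crushtool -c map.txt -o map.new    # recompile
ceph osd setcrushmap -i map.new    # inject the new map
```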
On Fri, Sep 07, 2012 at 09:22:34AM -0700, Gregory Farnum wrote:
On Fri, Sep 7, 2012 at 9:03 AM, Jimmy Tang jt...@tchpc.tcd.ie wrote:
Hi All,
I know that cephfs has the option of picking which pools to use; will
ceph-fuse be gaining this feature at any point in the future
On Thu, Sep 06, 2012 at 01:31:15PM -0700, Tommi Virtanen wrote:
On Thu, Sep 6, 2012 at 11:51 AM, Jimmy Tang jt...@tchpc.tcd.ie wrote:
Also, the ceph osd setcrushmap... command doesn't show up when ceph
--help is run in the 0.51 release; however, it is documented on the
wiki as far as I recall
step take default
step choose firstn 0 type osd
step emit
}
Jimmy
if the applications emitted
all the available commands, it would make experimenting much nicer and
more fun.
Thanks,
Jimmy
Hi François,
On 18 Jul 2012, at 14:58, François Charlier wrote:
Hi,
I'm currently working on writing a Puppet module for Ceph.
As after some research I found no existing module, I'll start from
scratch, but I would be glad to hear from people who might already have
started working on this
Hi Greg
On Fri, Jul 6, 2012 at 5:38 PM, Gregory Farnum g...@inktank.com wrote:
Do you have more in the log? It looks like it's being instructed to
shut down before it's fully come up (thus the error in the Objecter
http://tracker.newdream.net/issues/2740, but is not the root cause),
but I
/libcls_rbd.so*
%{_libdir}/rados-classes/libcls_rgw.so*
+/sbin/ceph-disk-activate
+/sbin/ceph-disk-prepare
#
%files fuse
I could of course ignore
Regards,
Jimmy Tang
this.
--- end dump of recent events ---
Regards,
Jimmy Tang