[ceph-users] Fwd: Re: Experiences with Ceph at the June'14 issue of USENIX ;login:

2014-06-03 Thread Constantinos Venetsanopoulos
Forwarding to ceph-users since the thread started there,
so that we have everything in a single place.


 Original Message 
Subject:    Re: Experiences with Ceph at the June'14 issue of USENIX ;login:
Date:   Tue, 03 Jun 2014 12:12:12 +0300
From:   Constantinos Venetsanopoulos 
To: Robin H. Johnson , ceph-de...@vger.kernel.org



Hello Robin,

On 6/3/14, Robin H. Johnson wrote:
> On Mon, Jun 02, 2014 at 09:32:19PM +0300,  Filippos Giannakos wrote:
>> As you may already know, we have been using Ceph for quite some time now to back
>> the ~okeanos [1] public cloud service, which is powered by Synnefo [2].
> (Background info for other readers: Synnefo is a cloud layer on top of
> Ganeti).
>
>> In the article we describe our storage needs, how we use Ceph and how it has
>> worked so far. I hope you enjoy reading it.
> Are you just using the existing kernel RBD mapping for Ganeti running
> KVM, or did you implement the pieces for Ganeti to use the QEMU
> userspace RBD driver?

None of the above. From the Ceph project we use only RADOS,
which we access via an Archipelago [1] backend driver that uses
librados from userspace.
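
(For anyone wondering what "using librados from userspace" looks like in
practice, here is a minimal, illustrative sketch using the Python rados
bindings; the conffile path, pool name and object name below are just
placeholders, not what Archipelago actually uses:)

    import rados

    # Talk to RADOS directly from userspace, no kernel rbd involved.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('mypool')   # placeholder pool name
        try:
            ioctx.write_full('demo-object', b'written via userspace librados')
            print(ioctx.read('demo-object'))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()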

We integrate Archipelago with Ganeti through the Archipelago ExtStorage
provider.
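
(In case ExtStorage is new to some readers: a Ganeti ExtStorage provider is
essentially a directory of small executables (create, attach, detach, remove,
grow, verify) that Ganeti calls with the volume details in environment
variables, and "attach" prints the block device path on stdout. The toy
sketch below only shows the shape of such a script; the environment variable
name and the mapping tool are illustrative, so please check the Ganeti
ExtStorage documentation rather than relying on them:)

    #!/usr/bin/env python
    # Toy ExtStorage "attach" script: map a volume and print its device path.
    import os
    import subprocess
    import sys

    vol_name = os.environ['VOL_NAME']   # volume name as passed by Ganeti (assumed)
    # 'my-storage-tool' stands in for whatever actually maps the volume.
    dev = subprocess.check_output(['my-storage-tool', 'map', vol_name]).strip()
    sys.stdout.write(dev.decode() + '\n')   # Ganeti reads the device path from stdout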

> I've got both Ceph & Ganeti clusters already, but am reluctant to marry
> the two sets of functionality because the kernel RBD driver still seemed
> to perform so much worse than the Qemu userspace RBD driver, and Ganeti
> still hasn't implemented the userspace mapping pieces :-(
>

Ganeti has supported accessing RADOS from userspace (via the qemu-rbd
driver) since version 2.10; the current stable is 2.11. Not only that,
but starting with v2.13 (not released yet), you will be able to configure
the access method per disk, e.g. the first disk of an instance can be
kernel backed and the second userspace backed. So, I'd suggest
you give it a try and see how it goes :)

Thanks,
Constantinos


[1] https://www.synnefo.org/docs/archipelago/latest/



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Largest Production Ceph Cluster

2014-04-03 Thread Constantinos Venetsanopoulos

Hello everybody,

can anybody comment on the largest number of
production VMs running on top of Ceph?

Thanks,
Constantinos

On 04/01/2014 09:47 PM, Jeremy Hanmer wrote:

Our (DreamHost's) largest cluster is roughly the same size as yours,
~3PB on just shy of 1100 OSDs currently.  The architecture's quite
similar too, except we have "separate" 10G front-end and back-end
networks with a partial spine-leaf architecture using 40G
interconnects.  I say "separate" because the networks only exist in
the logical space; they aren't separated among different bits of
network gear.  Another thing of note is that Ceph will refuse to
automatically mark out an entire rack of OSDs at once unless you tweak
the mon_osd_down_out_subtree_limit config option (which defaults to
'rack').  That actually saved us recently when we suffered a couple of
switch crashes one weekend.
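
(A hedged aside for readers who want to check that setting on their own
cluster: the monitor's current value can be read off its admin socket. The
socket path below is just a placeholder for whatever your mon uses:)

    import json
    import subprocess

    SOCK = '/var/run/ceph/ceph-mon.a.asok'   # placeholder admin socket path

    # Dump the running config from the monitor and pick out the option.
    out = subprocess.check_output(
        ['ceph', '--admin-daemon', SOCK, 'config', 'show'])
    cfg = json.loads(out.decode())
    print(cfg.get('mon_osd_down_out_subtree_limit'))   # 'rack' by default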


On Tue, Apr 1, 2014 at 7:18 AM, Dan Van Der Ster wrote:

Hi,

On 1 Apr 2014 at 15:59:07, Andrey Korolyov (and...@xdel.ru) wrote:

On 04/01/2014 03:51 PM, Robert Sander wrote:

On 01.04.2014 13:38, Karol Kozubal wrote:


I am curious to know what is the largest known ceph production
deployment?

I would assume it is the CERN installation.

Have a look at the slides from Frankfurt Ceph Day:

http://www.slideshare.net/Inktank_Ceph/scaling-ceph-at-cern

Regards


Just curious how the CERN guys built the network topology to prevent
possible cluster splits, because a split down the middle would cause huge
downtime: even a relatively short split would be enough for the remaining
MON majority to mark half of those 1k OSDs down.


The mons are distributed around the data centre, across N switches.
The OSDs are across a few switches -- actually, we could use CRUSH rules to
replicate across switches but didn't do so because of an (unconfirmed) fear
that the uplinks would become a bottleneck.
So a switch or routing outage scenario is clearly a point of failure where
some PGs could become stale, but we've been lucky enough not to suffer from
that yet.

BTW, this 3PB cluster was built to test the scalability of Ceph's
implementation, not because we have 3PB of data to store in Ceph today (most
of the results of those tests are discussed in that presentation). And we
are currently partitioning this cluster down into a smaller production
instance for Cinder and other instances for ongoing tests.

BTW#2, I don't think the CERN cluster is the largest. Isn't DreamHost's
bigger?

Cheers, Dan

-- Dan van der Ster || Data & Storage Services || CERN IT Department --

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] First Ceph Athens Meetup!

2014-03-13 Thread Constantinos Venetsanopoulos

Hi Loic,

thanks a lot.

Constantinos

On 03/12/2014 06:52 PM, Loic Dachary wrote:

Hi Constantinos,

I've added it to https://wiki.ceph.com/Community/Meetups . Feel free to update 
it if I made a mistake ;-)

Cheers

On 12/03/2014 17:40, Constantinos Venetsanopoulos wrote:

Hello everybody,

we are happy to invite you to the first Ceph Athens meetup:

http://www.meetup.com/Ceph-Athens

on March 18th, 19:30, taking place on the 4th floor of the
GRNET [1] HQ offices.

We'll be happy to have Steve Starbuck of Inktank with us, who
will introduce Ceph. Also, Vangelis Koukis from the Synnefo team
will present how Ceph is being used to back GRNET’s large-scale,
production, public cloud service called “~okeanos” [2].

So, if you want to learn more about Ceph, discuss or ask questions,
feel free to join us!

See you all there,
Constantinos


P.S.: Please let us know if you're coming, by joining the meetup
at the link above.

[1] http://www.grnet.gr/en
[2] http://okeanos.grnet.gr






___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] First Ceph Athens Meetup!

2014-03-12 Thread Constantinos Venetsanopoulos
Hello everybody,

we are happy to invite you to the first Ceph Athens meetup:

http://www.meetup.com/Ceph-Athens

on March 18th, 19:30, taking place on the 4th floor of the
GRNET [1] HQ offices.

We'll be happy to have Steve Starbuck of Inktank with us, who
will introduce Ceph. Also, Vangelis Koukis from the Synnefo team
will present how Ceph is being used to back GRNET’s large-scale,
production, public cloud service called “~okeanos” [2].

So, if you want to learn more about Ceph, discuss or ask questions,
feel free to join us!

See you all there,
Constantinos


P.S.: Please let us know if you're coming, by joining the meetup
at the link above.

[1] http://www.grnet.gr/en
[2] http://okeanos.grnet.gr






___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Synnefo + Ceph @ FOSDEM'14

2014-01-31 Thread Constantinos Venetsanopoulos

Hey Loic,

On 01/30/2014 07:48 PM, Loic Dachary wrote:

Hi Constantinos,

Count me in https://fosdem.org/2014/schedule/event/virtiaas02/ :-)


Great!



If you're in Brussels tomorrow (Friday), you're welcome to join the Ceph meetup 
http://www.meetup.com/Ceph-Brussels/ !


Unfortunately, I'm not making it to Brussels this year, but
Vangelis and Stratos (cc:ed) from the team will be there.

So, I guess they will be happy to join the meetup if they
arrive early enough, which I think they will.

Cheers,
Constantinos


Cheers

On 30/01/2014 12:51, Constantinos Venetsanopoulos wrote:

Hello everybody,

in case you haven't noticed already, this weekend at FOSDEM'14,
we will be presenting the Synnefo stack and its integration with
Google Ganeti, Archipelago and RADOS to provide advanced,
unified cloud storage with unique features.

You can find the official announcement here:
http://synnefo-software.blogspot.com/2014/01/synnefo-fosdem-2014.html

The talk will include a live demo in a large-scale environment.
So, if you want to see RADOS powering cool new stuff (e.g.,
Dropbox-like syncing services), or want to learn more about
Synnefo and/or Ganeti, feel free to join us at the talk.

Hope to see you in Brussels,
Constantinos

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Synnefo + Ceph @ FOSDEM'14

2014-01-30 Thread Constantinos Venetsanopoulos

Hello everybody,

in case you haven't noticed already, this weekend at FOSDEM'14,
we will be presenting the Synnefo stack and its integration with
Google Ganeti, Archipelago and RADOS to provide advanced,
unified cloud storage with unique features.

You can find the official announcement here:
http://synnefo-software.blogspot.com/2014/01/synnefo-fosdem-2014.html

The talk will include a live demo in a large-scale environment.
So, if you want to see RADOS powering cool new stuff (e.g.,
Dropbox-like syncing services), or want to learn more about
Synnefo and/or Ganeti, feel free to join us at the talk.

Hope to see you in Brussels,
Constantinos

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and open source cloud software: Path of least resistance

2013-06-24 Thread Constantinos Venetsanopoulos

Hi Jens,

On 6/17/13 05:02 AM, Jens Kristian Søgaard wrote:

Hi Stratos,


you might want to take a look at Synnefo. [1]


I did take a look at it earlier, but decided not to test it.

Mainly I was deterred because I found the documentation a bit lacking. 
I opened up the section on File Storage and found that there were only 
chapter titles, but no actual content. Perhaps I was too quick to 
dismiss it.




Thanks for your interest in our work with Synnefo.

It seems you are referring to the empty sections of the Administrator's
Guide. If so, what you're saying is true: the project is in very
active development, so we are mostly focusing on the Installation Guide
right now, which we always try to keep up to date with the latest commits:

http://www.synnefo.org/docs/synnefo/latest/quick-install-admin-guide.html

Perhaps you were a bit too quick to dismiss it.
If you start playing around with Ganeti for VM management, I think
you'll love its simplicity and reliability. Then, Synnefo is a nice way
of providing cloud interfaces on top of Ganeti VMs, and also adding the
cloud storage part.

A more practical problem for me was that my test equipment 
consists of a single server (besides the Ceph cluster). As far as I 
understood the docs, there was a bug that makes it impossible to run 
Synnefo on a single server (to be fixed in the next version)?




This has been completely overhauled in Synnefo 0.14, which will be out by
next week, allowing any combination of components to coexist on a single
node, with arbitrary setting of URL prefixes for each. If you're feeling
adventurous, please find 0.14~rc4 packages for Squeeze at apt.dev.grnet.gr;
we've also uploaded the latest version of the docs at
http://docs.synnefo.org.


Regarding my goals, I read through the installation guide and it 
recommends setting up an NFS server on one of the servers to serve 
images to the rest. This is what I wanted to avoid. Is that optional 
and/or could it be replaced with Ceph?




We have integrated the storage service ("Pithos") with the compute
service, as the Image repository. Pithos has pluggable storage drivers,
through which it stores files as collections of content-addressable blocks.
One driver uses NFS, storing objects as distinct files on a shared
directory; another uses RADOS, storing objects as RADOS objects. Our
production used to run on NFS, and we're now transitioning to using RADOS
exclusively. Currently, we use both drivers simultaneously: incoming file
chunks are stored both in RADOS and in the NFS share. Eventually, we'll
just unplug the NFS driver when we're ready to go RADOS-only.

In your case, you can start with Pithos being RADOS-only, although the
Installation Guide continues to refer to NFS for simplicity.
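
To make the content-addressable idea a bit more concrete, here is a rough,
illustrative sketch of such a block store on top of RADOS. This is not the
actual Pithos driver; the pool name, the hash choice and the class itself
are placeholders for the real thing:

    import hashlib
    import rados

    class RadosBlockStore(object):
        """Toy content-addressable block store: each block is stored as a
        RADOS object named after the hash of its content, so identical
        blocks are stored only once."""

        def __init__(self, pool, conffile='/etc/ceph/ceph.conf'):
            self.cluster = rados.Rados(conffile=conffile)
            self.cluster.connect()
            self.ioctx = self.cluster.open_ioctx(pool)

        def put_block(self, data):
            name = hashlib.sha256(data).hexdigest()
            self.ioctx.write_full(name, data)   # rewriting identical data changes nothing
            return name

        def get_block(self, name):
            size, _ = self.ioctx.stat(name)     # read the whole object, whatever its size
            return self.ioctx.read(name, length=size)

        def close(self):
            self.ioctx.close()
            self.cluster.shutdown()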


At the moment Ganeti only supports the in-kernel RBD driver, although
support for the qemu-rbd driver should be implemented soon. Using the


Hmm, I wanted to avoid using the in-kernel RBD driver, as I figured it 
would lead to various problems. Is it not a problem in practice?




Our demo installation at http://www.synnefo.org ["Try it out"] uses the
in-kernel RBD driver for the "rbd" storage option. We haven't encountered
any significant problems. Furthermore, AFAIK, Ganeti will also support
choosing between the in-kernel or qemu-rbd userspace driver when
spawning a VM in one of its next versions, so Synnefo will then also support
that, out-of-the-box.

I was thinking it would be wisest to stay with the distribution 
kernel, but I guess you swap it out for a later version?




For our custom storage layer (Archipelago, see below) we require a newer
kernel than the one that comes with Squeeze, so we run 3.2 from
squeeze-backports; everything has been going smoothly so far.

The rbds for all my existing VMs would probably have to be converted 
back from format 2 to format 1, right?




If you plan to use the in-kernel rbd driver, it seems so:
http://ceph.com/docs/next/man/8/rbd/#parameters

I can't comment on this because we only run rbd as an option in the demo
environment, with the in-kernel driver. For our production, we're running
a custom storage layer (Archipelago) which does thin provisioning of
volumes from Pithos files and accesses the underlying Pithos objects
directly, no matter which driver (RADOS or NFS) you use.
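
For what it's worth, if such a conversion is needed, the usual approach I've
seen is export and re-import; a rough, untested sketch (pool and image names
are placeholders, and depending on your rbd version the flag is
--image-format or the older --format):

    import subprocess

    def convert_to_format1(pool, src, dst, tmpfile='/tmp/rbd-convert.raw'):
        # Export the existing image to a flat file, then re-import it as format 1.
        subprocess.check_call(['rbd', 'export',
                               '{0}/{1}'.format(pool, src), tmpfile])
        subprocess.check_call(['rbd', 'import', '--image-format', '1',
                               tmpfile, '{0}/{1}'.format(pool, dst)])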

Thanks again for your interest,
Constantinos


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Using Ceph and CloudStack? Let us know!

2013-05-18 Thread Constantinos Venetsanopoulos

Hello Patrick and Ceph users,

On 5/16/13 17:02, Patrick McGarry wrote:

Of course,
we'd love to hear about anything you're working on.  So, if you have
notes to share about Ceph with other cloud flavors, massive storage
clusters, or custom work, we'd treasure them appropriately.


as you already know from an older post on the Ceph blog [1], we have been
evaluating RADOS for use in our public cloud service [2], powered by the
open source cloud software Synnefo [3]. When we wrote that post, we had
already fully integrated RADOS in Synnefo (VM disks, Images, Files) and
we were in the process of moving everything into production.

Indeed, we are now happy to inform you that the deployment of RADOS
into our production environment has been completed successfully. Since
last month [4], our users have been storing their files and images on
RADOS, and they also have the option of spawning VMs with their disks on
RADOS, in seconds, thanks to thin cloning.

We are currently experimenting with thin disk snapshotting and hope to
have the functionality integrated in one of the next Synnefo versions.
At the same time, we are expanding our production RADOS cluster as
demand rises, with a plan to hit 1PB of raw storage.

Keep up the good work,
Kind Regards,
Constantinos


[1] http://ceph.com/community/ceph-comes-to-synnefo-and-ganeti/
[2] http://okeanos.grnet.gr
[3] http://synnefo.org
[4] https://okeanos.grnet.gr/blog/2013/04/04/introducing-archipelago/


Feel free to just reply to this email, send a message to
commun...@inktank.com, message 'scuttlemonkey' on irc.oftc.net, or tie
a note to our ip-over-carrier-pigeon network.  Thanks, and happy
Ceph-ing.


Best Regards,

Patrick McGarry
Director, Community || Inktank

http://ceph.com  ||  http://inktank.com
@scuttlemonkey || @ceph || @inktank


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Better RADOS support in Ganeti @ GSoC 2013

2013-05-01 Thread Constantinos Venetsanopoulos

Hello everybody,

I'm sending this here in case someone from the list is interested.
Ganeti [1] is a mentoring organization in this year's Google Summer
of Code, and one of the proposed ideas is:

"Better support for RADOS/Ceph in Ganeti"

Please see here:
http://code.google.com/p/ganeti/wiki/SummerOfCode2013Ideas

Currently, Ganeti supports VM disks inside RADOS natively, using
the rbd kernel driver and the rbd tools. The idea's primary target
is to also support the qemu-rbd driver, so that everything happens
in userspace.
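
(To give a flavour of what the current kernel-based path looks like, here is
a rough sketch that creates and maps an image with the rbd tool from Python;
the pool and image names are placeholders:)

    import subprocess

    def create_and_map(pool, image, size_mb):
        # Create the image with the rbd tool (size is in MB) ...
        subprocess.check_call(['rbd', 'create', '--size', str(size_mb),
                               '{0}/{1}'.format(pool, image)])
        # ... then map it through the in-kernel driver; 'rbd map' prints
        # the resulting block device, e.g. /dev/rbd0.
        dev = subprocess.check_output(['rbd', 'map',
                                       '{0}/{1}'.format(pool, image)]).strip()
        return dev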

Kind Regards,
Constantinos


[1] http://code.google.com/p/ganeti/

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com