Re: [ceph-users] Ceph and Redhat Enterprise Virtualization (RHEV)

2015-07-14 Thread Neil Levine
RHEV does not formally support Ceph yet. Future versions are looking to
include Cinder support, which will allow you to hook Ceph in.
Your Red Hat RHEV contacts can give you an indication of the timeline for
this.

Neil

On Tue, Jul 14, 2015 at 10:43 AM, Peter Michael Calum pe...@tdc.dk wrote:

  Hi,

 Does anyone know if it is possible to use Ceph storage in Red Hat
 Enterprise Virtualization (RHEV), and connect it as a data domain in the
 Red Hat Enterprise Virtualization Manager (RHEVM)?

 My RHEV Manager and hypervisors are all on the latest version, RHEV 6.5.

 Thanks,
 Peter Calum
 TDC




Re: [ceph-users] Transfering files from NFS to ceph + RGW

2015-07-08 Thread Neil Levine
There is an initial prototype of an NFS layer on top of RGW using Ganesha.
Yehuda can probably give an update on its status. The use case for it is
exactly as you describe: to allow you to migrate data off NFS shares to the
S3 object store. It's not going to be high-performance or feature-rich, but
hopefully it will do enough to allow data migration.
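
In the meantime, any plain S3 client pointed at the RGW endpoint can do the
bulk copy. A minimal sketch, assuming s3cmd is already configured with RGW
credentials, and using a made-up bucket name:

    # create a target bucket, then walk the NFS mount and PUT every
    # file as an object, preserving relative paths as object keys
    s3cmd mb s3://migrated-nfs
    s3cmd sync /mnt/nfs-share/ s3://migrated-nfs/

sync skips objects that already exist with a matching size/checksum, so an
interrupted run can simply be restarted.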

On Wed, Jul 8, 2015 at 7:01 PM, Somnath Roy somnath@sandisk.com wrote:

  Hi,

 We are planning to build a Ceph cluster with RGW/S3 as the interface for
 user access. We have petabytes of data in NFS shares that need to be moved
 to the Ceph cluster, and that's why I need your valuable input on how to do
 that efficiently. I am sure this is a common problem that RGW users in the
 Ceph community have faced and resolved :-)

 I can think of the following approach.



 Since the data needs to be accessed later with RGW/S3, we have to write
 an application that PUTs the existing files as objects into the cluster
 over the RGW S3 interface.



 Is there any alternative approach?

 There are existing RADOS tools that can take files as input and store them
 in a cluster, but unfortunately RGW will probably not be able to
 understand those objects.

 IMO, there should be a channel whereby these rados utilities could store
 objects in the .rgw.data pool in a form RGW can read back. That would
 solve a lot of the data migration problem (?).

 Also, this blueprint (
 https://wiki.ceph.com/Planning/Blueprints/Infernalis/RGW%3A_NFS) of
 Yehuda’s is probably trying to solve a similar problem…



 Anyway, please share your thoughts and let me know if anybody already has
 a workaround for this.



 Thanks & Regards

 Somnath





Re: [ceph-users] Is CephFS ready for production?

2015-05-11 Thread Neil Levine
We are still laying the foundations for eventual VMware integration and
indeed the Red Hat acquisition has made this more real now.

The first step is iSCSI support, and work is ongoing in the kernel to get HA
iSCSI working with LIO and kRBD. See the blueprint and CDS sessions with
Mike Christie for an update. Love it or hate it, iSCSI is still the
standard protocol supported by ESX et al., and it will carry the initial QA
burden.

Neil
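
For anyone who wants to experiment before the HA work lands, the basic
non-HA shape is already testable: map an RBD image with the kernel client
and hand the block device to LIO. A rough sketch (pool, image and IQN names
are made up, and targetcli syntax varies a little between versions):

    rbd create rbd/esx-lun0 --size 102400      # 100 GB image
    rbd map rbd/esx-lun0                       # shows up as e.g. /dev/rbd0
    targetcli /backstores/block create name=lun0 dev=/dev/rbd0
    targetcli /iscsi create iqn.2015-05.com.example:esx-lun0
    targetcli /iscsi/iqn.2015-05.com.example:esx-lun0/tpg1/luns create /backstores/block/lun0

A single gateway like this is exactly the bottleneck and single point of
failure that the LIO/kRBD work above is meant to remove.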


On Tue, May 5, 2015 at 9:10 AM, Michal Kozanecki mkozane...@evertz.com
wrote:

 This is what I found from 2014 - slide 7


 https://www.openstack.org/assets/presentation-media/inktank-demo-theater.pptx

 Cheers,

 Michal Kozanecki | Linux Administrator | E: mkozane...@evertz.com

 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
 Ray Sun
 Sent: April-24-15 10:44 PM
 To: Gregory Farnum
 Cc: ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] Is CephFS ready for production?

 I think this is what I saw at the 2013 Hong Kong summit. At least in the
 Ceph Enterprise version.


 Best Regards
 -- Ray

 On Sat, Apr 25, 2015 at 12:36 AM, Gregory Farnum g...@gregs42.com wrote:
 I think the VMWare plugin was going to be contracted out by the business
 people, and it was never going to be upstream anyway -- I've not heard
 anything since then but you'd need to ask them I think.
 -Greg

 On Fri, Apr 24, 2015 at 7:17 AM Marc m...@shoowin.de wrote:
 On 22/04/2015 16:04, Gregory Farnum wrote:
  On Tue, Apr 21, 2015 at 9:53 PM, Mohamed Pakkeer mdfakk...@gmail.com
 wrote:
  Hi sage,
 
  When can we expect the fully functional fsck for CephFS? Can we get it in
  the next major release? Is there any roadmap or time frame for the fully
  functional fsck release?
  We're working on it as fast as we can, and it'll be done when it's
  done. ;) More seriously, I'm still holding out a waning hope that
  we'll have the forward scrub portion ready for Infernalis and then
  we'll see how long it takes to assemble a working repair tool from
  that.
 
  On Wed, Apr 22, 2015 at 2:20 AM, Marc m...@shoowin.de wrote:
  Hi everyone,
 
  I am curious about the current state of the roadmap as well. Alongside
 the
  already asked question Re vmware support, where are we at with cephfs'
  multiMDS stability and dynamic subtree partitioning?
  Zheng has fixed a ton of bugs in these areas over the last year, but
  both features are farther down the roadmap since we don't think we
  need them for the earliest production users.
  -Greg

 Thanks for letting us know! Due to the Red Hat acquisition the ICE
 roadmap seems to have disappeared. Is a VMware driver still being worked
 on? With VMware being closed source and all, I imagine this lies mostly
 within the domain of VMware Inc., correct? Having iSCSI proxies as
 mediators is rather clunky...

 (And yes I am actively working on trying to get the interested parties
 to strongly look into KVM, but they have become very comfortable with
 VMware vsphere enterprise plus...)


 Thanks and have a nice weekend!


Re: [ceph-users] Ovirt

2014-05-07 Thread Neil Levine
We were actually talking to Red Hat about oVirt support before the
acquisition. It's on the To Do list but no dates yet.
Of course, someone from the community is welcome to step up and do the work.

Neil

On Wed, May 7, 2014 at 9:49 AM, Nathan Stratton nat...@robotics.net wrote:
 Now that everyone will be one big happy family, any news on Ceph support
 for oVirt?


 nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
 www.broadsoft.com



Re: [ceph-users] Help -Ceph deployment in Single node Like Devstack

2014-05-07 Thread Neil Levine
Loic's micro-osd.sh script is as close to a single push-button install as it gets:

http://dachary.org/?p=2374

Not exactly a production cluster but it at least allows you to start
experimenting on the CLI.

Neil
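
For the curious, the whole trick fits in a couple of dozen lines. A
condensed, untested sketch in the spirit of Loic's script (paths, and some
option spellings that vary by release, are illustrative; see his post for
the real thing):

    DIR=/tmp/micro-ceph
    mkdir -p $DIR/mon $DIR/osd
    cat > $DIR/ceph.conf <<EOF
    [global]
    fsid = $(uuidgen)
    mon host = 127.0.0.1
    auth cluster required = none
    auth service required = none
    auth client required = none
    osd pool default size = 1        # one OSD, no replicas
    osd crush chooseleaf type = 0    # allow everything on one host
    mon data = $DIR/mon
    osd data = $DIR/osd
    osd journal = $DIR/osd/journal
    EOF
    export CEPH_ARGS="--conf $DIR/ceph.conf"
    ceph-mon --mkfs -i a && ceph-mon -i a
    ceph osd create                          # allocates osd.0
    ceph-osd -i 0 --mkfs --mkjournal
    ceph osd crush add osd.0 1 root=default host=localhost
    ceph-osd -i 0
    ceph -s                                  # should head towards HEALTH_OK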

On Wed, May 7, 2014 at 7:56 PM, Patrick McGarry patr...@inktank.com wrote:
 Hey,

 Sorry for the delay, I have been traveling in Asia.  This question
 should probably go to the ceph-user list (cc'd).

 Right now there is no single push-button deployment for Ceph like
 devstack (that I'm aware of)...but we have several options in terms of
 orchestration and deployment (including our own ceph-deploy featured
 in the docs).

 A good place to see the package options is http://ceph.com/get

 Sorry I couldn't give you an exact answer, but I think Ceph is pretty
 approachable in terms of deployment for experimentation.  Hope that
 helps.



 Best Regards,

 Patrick McGarry
 Director, Community || Inktank
 http://ceph.com  ||  http://inktank.com
 @scuttlemonkey || @ceph || @inktank


 On Wed, Apr 30, 2014 at 2:05 AM, Pandiyan M maestropa...@gmail.com wrote:

 Hi,

 I am looking for a simple Ceph installation like devstack (which, for
 OpenStack, is one package that contains everything). It should support
 Ceph and Puppet and run everything Ceph does as a whole. Help me out?

 Thanks in Advance !!
 --
 PANDIYAN MUTHURAMAN

 Mobile : + 91 9600-963-436   (Personal)
   +91 7259-031-872  (Official)





Re: [ceph-users] Red Hat to acquire Inktank

2014-05-01 Thread Neil Levine
There are a few things that need to be tidied up and we will need to
liaise with our new Red Hat colleagues around license choice, but I
believe the code is in relatively good shape to be open sourced thanks
to the team who've been working on it (Yan, Gregory, John and Dan).
It's important to me that when we open it up everything is ready to be
hacked on straight away with good documentation but I'd also like to
share some of the options for where I think we can take it. There is a
lot of potential :-)

Neil

On Wed, Apr 30, 2014 at 8:23 AM, Mark Nelson mark.nel...@inktank.com wrote:
 On 04/30/2014 10:19 AM, Loic Dachary wrote:

 Hi Sage,

 Congratulations, this is good news.

 On 30/04/2014 14:18, Sage Weil wrote:

 One important change that will take place involves Inktank's product
 strategy, in which some add-on software we have developed is proprietary.
 In contrast, Red Hat favors a pure open source model. That means that
 Calamari, the monitoring and diagnostics tool that Inktank has developed
 as part of the Inktank Ceph Enterprise product, will soon be open
 sourced.


 I'm glad to hear that this acquisition puts an end to the proprietary
 software created by the Inktank Ceph developers. And I assume they are also
 happy about the change :-)


 I for one am excited about an open source Calamari! :)


 Cheers






Re: [ceph-users] ceph-deploy fails to generate keys

2014-04-07 Thread Neil Levine
Is SELinux enabled?
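
A quick way to rule out the usual suspects on each node (the monitor port
below is the default, 6789):

    getenforce                  # 'Enforcing' means SELinux may be interfering
    setenforce 0                # temporarily go permissive to test
    iptables -L -v -n           # look for REJECT/DROP rules
    netstat -lnt | grep 6789    # is the monitor actually listening?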

On Mon, Apr 7, 2014 at 12:50 AM, Diedrich Ehlerding
diedrich.ehlerd...@ts.fujitsu.com wrote:
 [monitors do not start properly with ceph-deploy]
 Brian Chandler:

  thank you for your response, however:
  Including iptables? CentOS/RedHat default to iptables enabled and
  closed.
 
  iptables -Lvn to be 100% sure.
  hvrrzceph1:~ # iptables -Lvn
  iptables: No chain/target/match by that name.
  hvrrzceph1:~ #
 
 Ergh, my mistake: iptables -L -v -n



 hvrrzceph1:~ # iptables -L -v -n
 Chain INPUT (policy ACCEPT 8739 packets, 476K bytes)
  pkts bytes target prot opt in out source
 destination

 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
  pkts bytes target prot opt in out source
 destination

 Chain OUTPUT (policy ACCEPT 6270 packets, 505K bytes)
  pkts bytes target prot opt in out source
 destination
 hvrrzceph1:~ #

 The servers do not run any firewall, and they are connected to the
 same switch. ssh login works over three networks (one to be used as
 admin network, one as public network, and another one as cluster
 network).

 Any hint is appreciated ...

 Diedrich
 --
 Diedrich Ehlerding, Fujitsu Technology Solutions GmbH,
 FTS CE SC PSIS W, Hildesheimer Str 25, D-30880 Laatzen
 Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
 Firmenangaben: http://de.ts.fujitsu.com/imprint.html




Re: [ceph-users] Performance issues running vmfs on top of Ceph

2014-02-04 Thread Neil Levine
Also, how are you accessing Ceph - is it using the TGT iSCSI package?


On Tue, Feb 4, 2014 at 10:10 AM, Mark Nelson mark.nel...@inktank.com wrote:

 On 02/04/2014 11:55 AM, Maciej Bonin wrote:

 Hello guys,

 We're testing running an ESXi HV on top of a Ceph backend and we're
 getting abysmal performance when using VMFS. Has anyone else tried this
 successfully? Any advice?
 Would be really thankful for any hints.


 Hi!

 I don't have a ton of experience with esxi, but if you can do some rados
 bench or smalliobenchfs tests, that might help give you an idea if the
 problem is Ceph (or lower), or more related to something higher up closer
 to ESXi.  Can you describe a little more what you are seeing and what you
 expect?

 Thanks,
 Mark
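
For reference, a baseline run of the kind Mark suggests only takes a few
commands; a sketch using a throwaway pool (the pool name and PG count are
arbitrary):

    ceph osd pool create bench 128
    rados bench -p bench 60 write --no-cleanup
    rados bench -p bench 60 seq        # read back what the write phase left
    rados -p bench cleanup
    ceph osd pool delete bench bench --yes-i-really-really-mean-it

If those numbers look healthy, the problem is more likely in the iSCSI or
VMFS layers above Ceph.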



 Regards,
 Maciej Bonin
 Systems Engineer | M247 Limited
 M247.com  Connected with our Customers
 Contact us today to discuss your hosting and connectivity requirements
 ISO 27001 | ISO 9001 | Deloitte Technology Fast 50 | Deloitte Technology
 Fast 500 EMEA | Sunday Times Tech Track 100
 M247 Ltd, registered in England & Wales #4968341. 1 Ball Green, Cobra
 Court, Manchester, M32 0QT

 ISO 27001 Data Protection Classification: A - Public




Re: [ceph-users] Very frustrated with Ceph!

2013-11-05 Thread Neil Levine
In the Debian world, purge does both a removal of the package and a cleanup
of its files, so it might be good to keep semantic consistency here?
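
For the archives, the order that works is package purge first, data second;
a short sketch with a made-up hostname:

    ceph-deploy purge node1       # uninstall the ceph packages
    ceph-deploy purgedata node1   # now safe: remove /etc/ceph and /var/lib/ceph
    ceph-deploy forgetkeys        # drop the locally cached keyrings

Running purgedata while ceph is still installed is what triggers the mess
described below.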


On Tue, Nov 5, 2013 at 1:11 AM, Sage Weil s...@newdream.net wrote:

 Purgedata is only meant to be run *after* the package is uninstalled.  We
 should make it do a check to enforce that.  Otherwise we run into these
 problems...


 Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote:

 On 05/11/13 06:37, Alfredo Deza wrote:

  On Mon, Nov 4, 2013 at 12:25 PM, Gruher, Joseph R
  joseph.r.gru...@intel.com wrote:

  Could these problems be caused by running a purgedata but not a purge?


  It could be, I am not clear on what the expectation was for just doing
  purgedata without a purge.

  Purgedata removes /etc/ceph but without the purge ceph is still installed,
  then ceph-deploy install detects ceph as already installed and does not
  (re)create /etc/ceph?


  ceph-deploy will not create directories for you, that is left to the
  ceph install process, and just to be clear, the latest ceph-deploy
  version (1.3) does not remove /etc/ceph, just the contents.


 Yeah, however purgedata is removing /var/lib/ceph, which means after
 running purgedata you need to either run purge then install or manually
 recreate the various working directories under /var/lib/ceph before
 attempting any mon. mds or osd creation.

 Maybe purgedata should actually leave those top level dirs under
 /var/lib/ceph?

 regards

 Mark




Re: [ceph-users] Inktank Ceph Enterprise Launch

2013-10-30 Thread Neil Levine
Loic, Don,

From an Inktank perspective, we're keen to see a wide variety of ecosystem
tools built around Ceph, especially from developers outside of Inktank. We
welcome open source or other commercial tools that people want to build and
if a more compelling or popular community tool comes along, we'd definitely
look to have our engineers work on it and potentially package it up for our
customers either as a complement or alternative to Calamari. The more
diverse innovation we have in the Ceph community, the better for everyone.

We may very well open source Calamari in the medium term but right now
there are some very specific things that our customers and partners want us
to do with the product, which don't lend themselves well to an open source
project. The most important aspect here is that we are absolutely committed
to ensuring every piece of storage functionality we develop sits in Ceph
and that the tools just provide a specific experience, not a different set
of capabilities.

In terms of a foundation, we definitely acknowledge the benefits this will
bring and share the same vision. We should have a proper community
conversation about what this may look like in practice - perhaps during the
next developer summit?

Neil



On Wed, Oct 30, 2013 at 11:03 AM, Don Talton (dotalton)
dotal...@cisco.com wrote:

 I actually started a django app (no code pushed yet) for this purpose. I
 guessed that Inktank might come out with a commercial offering and thought
 a FOSS dashboard would be a good thing for the community too.

 https://github.com/dontalton/kraken

 I'd much rather contribute to a Inktank-backed dashboard if it were FOSS,
 than start a new project.


  -Original Message-
  From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
  boun...@lists.ceph.com] On Behalf Of Loic Dachary
  Sent: Wednesday, October 30, 2013 10:29 AM
  To: Patrick McGarry; ceph-users@lists.ceph.com; Ceph Devel
  Subject: Re: [ceph-users] Inktank Ceph Enterprise Launch
 
  Hi Patrick,
 
  I wish Inktank was able to base its strategy and income on Free Software,
  like Red Hat does, for instance. In addition, as long as Inktank employs
  the majority of Ceph developers, publishing Calamari as proprietary
  software is a conflict of interest. Should someone from the community
  bootstrap a Free Software alternative to Calamari, it will compete with
  it. And should Inktank employees participate in the development of this
  alternative, it would be against the best interest of Inktank. If that
  were not true, there would be no reason to publish Calamari as
  proprietary software in the first place.

  Please reconsider your decision to publish Calamari as proprietary
  software.

  Now is probably the right time to call for the creation of a Ceph
  foundation.
 
  Cheers
 
  On 30/10/2013 18:01, Patrick McGarry wrote:
   Salutations Ceph-ers,
  
   As many of you have noticed, Inktank has taken the wraps off the
   latest and greatest magic for enterprise customers.  Wanted to share a
   few thoughts from a community perspective on Ceph.com and answer any
   questions/concerns folks might have.
  
   http://ceph.com/community/new-inktank-ceph-enterprise-builds-on-what-makes-ceph-great/
  
   Just to reiterate, there will be no changes/limitations to Ceph.  All
   Inktank contributions to Ceph will continue to be open source and
   useable.  If you have any questions feel free to direct them my way.
   Thanks.
  
  
   Best Regards,
  
   Patrick McGarry
   Director, Community || Inktank
   http://ceph.com  ||  http://inktank.com @scuttlemonkey || @ceph ||
   @inktank
  
 
  --
  Loïc Dachary, Artisan Logiciel Libre



Re: [ceph-users] Install without ceph-deploy

2013-09-07 Thread Neil Levine
We're planning on changing the docs to offer a quick-install guide, focused
around ceph-deploy, and a long-install guide which goes through all the
steps in more detail. The latter should help users wanting to customize
puppet/chef scripts etc.

NB mkcephfs is deprecated now. It was just a shell script and interacted
with the same commands that ceph-deploy now does.


On Sat, Sep 7, 2013 at 8:47 AM, Oliver Daudey oli...@xs4all.nl wrote:

 Hey Timofey,

 On za, 2013-09-07 at 18:52 +0400, Timofey Koolin wrote:
  Is anywhere documentation for manual install/modify ceph cluster
  WITHOUT ceph-deploy?

 There used to be a 5 minute quick start guide on the Ceph
 documentation-site, explaining how to do it using the mkcephfs-tool,
 but that seems to have been removed in favor of the new ceph-deploy
 method.  You basically have to write your own ceph.conf, prepare all
 of the data-dirs and mount-points on the nodes manually, make sure that
 password-less SSH to all nodes is OK and then run it.  It will read your
 conf-file and create the cluster as described within that file.  If any
 part fails, for example if you forgot to create a required directory on
 one of the nodes, you have to clean up what it has already done manually
 and try again.  I understand why they deprecated it in favor of
 ceph-deploy and you wouldn't want to create a new cluster completely by
 hand, unless for learning. :-)

 Descriptions on how to add or remove components to/from an existing
 cluster manually are still on the documentation-site.  For example:
 http://ceph.com/docs/master/rados/operations/add-or-rm-mons/
 http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
 With this info, you should be able to manually build a new cluster, if
 you'd want to.  Start with a fresh monitor, then add the rest.



Regards,

   Oliver
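
As a taste of what doing it by hand involves, bootstrapping the first
monitor boils down to something like the following (a sketch; the fsid and
address are placeholders, and the docs linked above are authoritative):

    uuidgen                                  # use the output as the fsid below
    ceph-authtool --create-keyring /tmp/mon.keyring --gen-key \
        -n mon. --cap mon 'allow *'
    monmaptool --create --add a 192.168.0.10 --fsid <fsid> /tmp/monmap
    mkdir -p /var/lib/ceph/mon/ceph-a
    ceph-mon --mkfs -i a --monmap /tmp/monmap --keyring /tmp/mon.keyring
    service ceph start mon.a

After that it's OSDs one at a time, per the add-or-rm-osds page above.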



Re: [ceph-users] Limitations of Ceph

2013-08-27 Thread Neil Levine
On Tue, Aug 27, 2013 at 10:19 AM, Sage Weil s...@inktank.com wrote:

 Hi Guido!

 On Tue, 27 Aug 2013, Guido Winkelmann wrote:
  Hi,
 
  I have been running a small Ceph cluster for experimentation for a
  while, and now my employer has asked me to do a little talk about my
  findings, and one important part is, of course, going to be practical
  limitations of Ceph.

  Here is my list so far:

  - Ceph is not supported by VMware ESX. That may change in the future,
  but seeing how VMware is now owned by EMC, they might make it a
  political decision to not support Ceph.
  Apparently, you can import an RBD volume on a Linux server and then
  re-export it to a VMware host as an iSCSI target, but doing so would
  introduce a bottleneck and a single point of failure, which kind of
  defeats the purpose of having a Ceph cluster in the first place.

 It will be a challenge to make ESX natively support RBD as RBD is open
 source (ESX is proprietary), ESX is (I think) based on a *BSD kernel, and
 VMWare just announced a possibly competitive product.  Inktank is doing
 what it can.

To add some context to this, my current understanding is that VMware do
provide mechanisms to add plugins to ESX but a formal partnership is needed
for those plugins to be signed & certified. As such the challenge is more
commercial than technical. Inktank are in conversations with VMware but if
you are interested in seeing support, please tell your VMware account rep
and let us know so we can demonstrate the customer demand for this.

VMware partner with multiple storage companies (as evidenced by the number
of storage vendors at VMWorld this week) so the fact that they have
launched vSAN and are owned by EMC is not a commercial barrier. The ESX
business unit want to sell as many licenses as possible and so a good
storage ecosystem is critical to them.

On the Windows side, as Sage said, watch this space. :-)

Neil


Re: [ceph-users] Which Kernel and QEMU/libvirt version do you recommend on Ubuntu 12.04 and CentOS?

2013-08-01 Thread Neil Levine
Yes, use the async one. We will be getting rid of the non-async one soon.

The default qemu packages in 6.4 don't link to librbd, whereas these
custom packages have been built explicitly with the link.

Hoping to announce some positive news soon about why this should become
much simpler in 6.5 and up...

N
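
An easy way to check which flavour is installed is to ask the binaries
themselves (the qemu-kvm path below is the usual RHEL 6 one):

    ldd /usr/libexec/qemu-kvm | grep librbd   # linked against librbd at all?
    qemu-img --help | grep rbd                # is rbd in the supported formats?

If neither shows anything, the build has no RBD support.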



On Thu, Aug 1, 2013 at 9:09 AM, Da Chun ng...@qq.com wrote:


 Neil,

 What's the difference between your custom qemu packages and the official
 ones?

 There are two kinds of packages in it:
 qemu-kvm-0.12.1.2-2.355.el6.2.cuttlefish.async.x86_64.rpm
 qemu-kvm-0.12.1.2-2.355.el6.2.cuttlefish.x86_64.rpm

 What's the difference between them? Does the async version support aio
 flush?

 -- Original --
 From: Neil Levine neil.lev...@inktank.com
 Date: Thu, Aug 1, 2013 11:13 AM
 To: Da Chun ng...@qq.com
 Cc: ceph-users ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] Which Kernel and QEMU/libvirt version do you
 recommend on Ubuntu 12.04 and CentOS?

 Yes, default version should work.

 Neil


 On Wed, Jul 31, 2013 at 7:11 PM, Da Chun ng...@qq.com wrote:

 Thanks! Neil and Wido.

 Neil, what about the libvirt version on CentOS 6.4? Just use the official
 release?

 -- Original --
 From: Neil Levine neil.lev...@inktank.com
 Date: Thu, Aug 1, 2013 05:53 AM
 To: Da Chun ng...@qq.com
 Cc: ceph-users ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] Which Kernel and QEMU/libvirt version do you
 recommend on Ubuntu 12.04 and CentOS?

 For CentOS 6.4, we have custom qemu packages available at
 http://ceph.com/packages/ceph-extras/rpm/centos6 which will provide RBD
 support.
 You will need to install a newer kernel than the one which ships by
 default (2.6.32) to use the cephfs or krbd drivers. Any version above 3.x
 should be sufficient.

 For Ubuntu 12.04, as per Wido's comments, use the Ubuntu Cloud Archive to
 get the latest version of all necessary packages.

 N



 On Wed, Jul 31, 2013 at 7:18 AM, Da Chun ng...@qq.com wrote:

 Hi List,

 I want to deploy two ceph clusters on ubuntu 12.04 and centos 6.4
 separately, and test cephfs, krbd, and librbd.

 Which Kernel and QEMU/libvirt version do you recommend? Any specific
 patches which I should apply manually?

 Thanks for your time!




Re: [ceph-users] Which Kernel and QEMU/libvirt version do you recommend on Ubuntu 12.04 and CentOS?

2013-07-31 Thread Neil Levine
Yes, default version should work.

Neil


On Wed, Jul 31, 2013 at 7:11 PM, Da Chun ng...@qq.com wrote:

 Thanks! Neil and Wido.

 Neil, what about the libvirt version on CentOS 6.4? Just use the official
 release?

 -- Original --
 From: Neil Levine neil.lev...@inktank.com
 Date: Thu, Aug 1, 2013 05:53 AM
 To: Da Chun ng...@qq.com
 Cc: ceph-users ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] Which Kernel and QEMU/libvirt version do you
 recommend on Ubuntu 12.04 and CentOS?

 For CentOS 6.4, we have custom qemu packages available at
 http://ceph.com/packages/ceph-extras/rpm/centos6 which will provide RBD
 support.
 You will need to install a newer kernel than the one which ships by
 default (2.6.32) to use the cephfs or krbd drivers. Any version above 3.x
 should be sufficient.

 For Ubuntu 12.04, as per Wido's comments, use the Ubuntu Cloud Archive to
 get the latest version of all necessary packages.

 N



 On Wed, Jul 31, 2013 at 7:18 AM, Da Chun ng...@qq.com wrote:

 Hi List,

 I want to deploy two ceph clusters on ubuntu 12.04 and centos 6.4
 separately, and test cephfs, krbd, and librbd.

 Which Kernel and QEMU/libvirt version do you recommend? Any specific
 patches which I should apply manually?

 Thanks for your time!




Re: [ceph-users] Problem about capacity when mounting using CephFS?

2013-07-16 Thread Neil Levine
This seems like a good feature to have. I've created
http://tracker.ceph.com/issues/5642

N


On Tue, Jul 16, 2013 at 8:05 AM, Greg Chavez greg.cha...@gmail.com wrote:

 This is interesting.  So there are no built-in ceph commands that can
 calculate your usable space?  It just so happened that I was going to
 try and figure that out today (new Openstack block cluster, 20TB total
 capacity) by skimming through the documentation.  I figured that there
 had to be a command that would do this.  Blast and gadzooks.
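
You can get close with a couple of commands and a little arithmetic,
though; a sketch, assuming the default pool names:

    rados df                         # raw usage per pool plus totals
    ceph osd pool get data size      # replica count for the 'data' pool

Usable space is then roughly the raw free space divided by that size value:
166TB of raw capacity with size=3 gives about 55TB usable, before CRUSH
imbalance and filesystem overhead.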

 On Tue, Jul 16, 2013 at 10:37 AM, Ta Ba Tuan tua...@vccloud.vn wrote:
 
  Thank Sage,
 
  tuantaba
 
 
  On 07/16/2013 09:24 PM, Sage Weil wrote:
 
  On Tue, 16 Jul 2013, Ta Ba Tuan wrote:
 
  Thanks, Sage.
  I worry about the capacity returned when mounting CephFS:
  when the disk is full, will capacity show 50% or 100% used?
 
  100%.
 
  sage
 
 
  On 07/16/2013 11:01 AM, Sage Weil wrote:
 
  On Tue, 16 Jul 2013, Ta Ba Tuan wrote:
 
  Hi everyone.

  I have 83 OSDs, and every OSD has the same 2TB (capacity summary is
  166TB). I'm using replica count 3 for the pools ('data', 'metadata').

  But when mounting the Ceph filesystem from somewhere (using: mount -t ceph
  Monitor_IP:/ /ceph -o name=admin,secret=xx)
  the capacity summary shows 160TB? I used replica count 3 and I think
  it should return 160TB/3 = ~53TB?

  Filesystem       Size  Used  Avail  Use%  Mounted on
  192.168.32.90:/  160T  500G  156T   1%    /tmp/ceph_mount
 
  Please, explain this  help me?
 
  statfs/df show the raw capacity of the cluster, not the usable capacity.
  How much data you can store is a (potentially) complex function of your
  CRUSH rules and replication layout.  If you store 1TB, you'll notice the
  available space will go down by about 2TB (if you're using the default
  2x).
 
  sage
 
 
 



 --
 \*..+.-
 --Greg Chavez
 +//..;};


Re: [ceph-users] ceph-deploy Intended Purpose

2013-07-12 Thread Neil Levine
It's the default tool for getting something up and running quickly for
tests/PoC. If you don't want to make too many custom configuration changes
and are happy with SSH access to all boxes, then it's fine, but if you want
more granular control then we advise you use something like Chef or Puppet.
There are example Chef configs in the GitHub repo.

Neil
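
For a test/PoC cluster the whole flow is only a handful of commands; a
rough sketch with made-up hostnames and disks:

    ceph-deploy new mon1                 # writes ceph.conf and a new fsid
    ceph-deploy install mon1 osd1 osd2
    ceph-deploy mon create mon1
    ceph-deploy gatherkeys mon1
    ceph-deploy osd create osd1:/dev/sdb osd2:/dev/sdb

For anything long-lived, encoding the equivalent steps in Chef or Puppet is
the better investment.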


On Fri, Jul 12, 2013 at 5:54 AM, Edward Huyer erh...@rit.edu wrote:

 I’m working on deploying a multi-machine (possibly as many as 7) Ceph
 (0.61.4) cluster for experimentation.  I’m trying to deploy using ceph-deploy
 on Ubuntu, but it seems…flaky.  For instance, I tried to deploy additional
 monitors and ran into the bug(?) where the additional monitors don’t work
 if you don’t have “public network” defined in ceph.conf, but by the time I
 found that bit of info I had already blown up the cluster.


 So my question is, is ceph-deploy the preferred method for deploying
 larger clusters, particularly in production, or is it a quick-and-dirty
 get-something-going-to-play-with tool and manual configuration is preferred
 for “real” clusters?  I’ve seen documentation suggesting it’s not intended
 for use in real clusters, but a lot of other documentation seems to assume
 it’s the default deploy tool.


 -

 Edward Huyer

 School of Interactive Games and Media

 Golisano 70-2373

 152 Lomb Memorial Drive

 Rochester, NY 14623

 585-475-6651

 erh...@rit.edu




Re: [ceph-users] Cors support

2013-06-21 Thread Neil Levine
Yes!


On Fri, Jun 21, 2013 at 8:49 AM, Fabio - NS3 srl fa...@ns3.it wrote:

 Hi,

 Is there support for CORS in Ceph 0.61.4?

 thanks,
 Fabio


Re: [ceph-users] Web Management Interface

2013-05-14 Thread Neil Levine
Yes.

Inktank are looking to create a web management interface. Hoping to
have a first version ready over the summer.

Neil

On Tue, May 14, 2013 at 11:25 AM, Campbell, Bill
bcampb...@axcess-financial.com wrote:
 Hello,
 I was wondering if there were any plans in the near future for some sort of 
 Web-based management interface for Ceph clusters?


 Bill Campbell
 Infrastructure Architect

 Axcess Financial Services, Inc.
 7755 Montgomery Rd., Suite 400
 Cincinnati, OH 45236



Re: [ceph-users] e release

2013-05-10 Thread Neil Levine
Please don't make me say Enteroctopus over and over again in sales
meetings. :-) Something simple and easy please!

N

On Fri, May 10, 2013 at 7:51 PM, John Wilkins john.wilk...@inktank.com wrote:
 As long as we have a picture. Enteroctopus is giant, which implies
 large scale and is what we're about.  I just like Enope, because they
 are bio-luminescent.
 http://en.wikipedia.org/wiki/Sparkling_Enope_Squid  The pictures are
 kind of cool too.

 On Fri, May 10, 2013 at 11:47 AM, Yehuda Sadeh yeh...@inktank.com wrote:
 On Fri, May 10, 2013 at 11:31 AM, Sage Weil s...@inktank.com wrote:
 We need a cephalopod name that starts with 'e', and trolling through
 taxonomies seems like a great thing to crowdsource.  :)  So far I've found
 a few latin names, but the main problem is that I can't find a single
 large list of species with the common names listed.  Wikipedia's taxonomy
 seems the best so far, but it's still a lot of browsing required as
 cephalopoda is a huge class.

 The only common name I've found is elbow (elbow squid), but elbow is not a
 very fun name.

 Suggestions welcome!

 When we voted on the theme, I expanded the cephalopods category to
 also include generic marine creatures for this specific reason. We
 can always choose some non-cephalopod creature if the options don't
 feel right (e.g., Eel).

 Yehuda


 elbow (elbow squid)
  
 https://www.google.com/search?q=enteroctopussource=lnmstbm=ischsa=Xei=pzuNUd37McnjigLfu4D4Dwved=0CAoQ_AUoAQbiw=1916bih=1082#tbm=ischsa=1q=elbow+squidoq=elbow+squidgs_l=img.3..0j0i24.80753.82074.2.82218.11.8.0.3.3.0.72.416.8.8.0...0.0...1c.1.12.img.U2rs4lakA-Abav=on.2,or.r_cp.r_qf.bvm=bv.46340616,d.cGEfp=aa2ea750bee51b45biw=1916bih=1082
  http://en.wikipedia.org/wiki/Bigfin_squid
  
 http://news.nationalgeographic.com/news/2008/11/081124-giant-squid-magnapinna.html

 enteroctopus (giant octopus)
  http://en.wikipedia.org/wiki/Giant_octopus
  http://eol.org/pages/61628/overview
  
 https://www.google.com/search?q=enteroctopussource=lnmstbm=ischsa=Xei=pzuNUd37McnjigLfu4D4Dwved=0CAoQ_AUoAQbiw=1916bih=1082

 elegent or elegans (sepia elegans, elegent cuttlefish)
  http://en.wikipedia.org/wiki/File:Sepia_elegans.jpg
  http://en.wikipedia.org/wiki/Sepia_(genus) (see sepia elegans, elegant 
 cuttlefish)
   it's another cuttlefish, though, so, meh.

 eledone
  http://eol.org/pages/51263/overview
  
 https://www.google.com/search?q=enteroctopussource=lnmstbm=ischsa=Xei=pzuNUd37McnjigLfu4D4Dwved=0CAoQ_AUoAQbiw=1916bih=1082#tbm=ischsa=1q=eledoneoq=eledonegs_l=img.3..0l3j0i24.15244.15821.0.15909.7.6.0.0.0.0.145.442.5j1.6.0...0.0...1c.1.12.img.VazRyuNNsiQbav=on.2,or.r_cp.r_qf.bvm=bv.46340616,d.cGEfp=aa2ea750bee51b45biw=1916bih=1082

 euaxoctopus
  http://eol.org/pages/49675/overview

 exannulatus (octopus exannulatus)
  http://eol.org/pages/491114/overview




 --
 John Wilkins
 Senior Technical Writer
 Inktank
 john.wilk...@inktank.com
 (415) 425-9599
 http://inktank.com


Re: [ceph-users] Using Ceph as Storage for VMware

2013-05-09 Thread Neil Levine
Leen,

Do you mean you get LIO working with RBD directly? Or are you just
re-exporting a kernel mounted volume?

Neil

On Thu, May 9, 2013 at 11:58 PM, Leen Besselink l...@consolejunkie.net wrote:
 On Thu, May 09, 2013 at 11:51:32PM +0100, Neil Levine wrote:
 Jared,

 As Weiguo says you will need to use a gateway to present a Ceph block
 device (RBD) in a format VMware understands. We've contributed the
 relevant code to the TGT iSCSI target (see blog:
 http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/) and though
 we haven't done a massive amount of testing on it, I'd love to get
 some feedback on it. We will be putting more effort into it this cycle
 (including producing a package).
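
For anyone wanting to try that TGT support, the rbd backing store is
selected per LUN; a hedged sketch (target ID, IQN, pool and image names are
all illustrative, and flags may differ between tgt versions):

    tgtadm --lld iscsi --op new --mode target --tid 1 \
        -T iqn.2013-05.com.example:rbd-lun
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
        --bstype rbd --backing-store rbd/test-image
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL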


 We also have a legacy virtualization setup we are thinking of using with Ceph
 and iSCSI. However, we also ended up at LIO, because LIO supports the iSCSI
 extensions which are needed for clustering.

 stgt doesn't yet support all the needed extensions as far as I can see.

 There seems to be exactly one person sporadically working on improving stgt
 in this area.

 If you have a VMware account rep, be sure to ask him to file support
 for Ceph as a customer request with the product teams while we
 continue knock on VMware's door :-)

 Neil

 On Thu, May 9, 2013 at 11:30 PM, w sun ws...@hotmail.com wrote:
  RBD is not supported by VMware/vSphere. You will need to build an
  NFS/iSCSI/FC gateway to support VMware. Here is a post from someone who
  has been trying this; you may have to contact them directly for status:
 
  http://ceph.com/community/ceph-over-fibre-for-vmware/
 
  --weiguo
 
  
  To: ceph-users@lists.ceph.com
  From: jaredda...@shelterinsurance.com
  Date: Thu, 9 May 2013 17:25:02 -0500
  Subject: [ceph-users] Using Ceph as Storage for VMware
 
 
  I am investigating using Ceph as a storage target for virtual servers in
  VMware.  We have 3 servers packed with hard drives ready for the proof of
  concept.  I am looking for some direction.  Is this a valid use for Ceph?
  If so, has anybody accomplished this?  Are there any documents on how to
  set this up?  Should I use RBD, NFS, etc.?  Any help would be greatly
  appreciated.
 
 
  Thank You,
 
  JD
 
 


Re: [ceph-users] EPEL packages for QEMU-KVM with rbd support?

2013-05-06 Thread Neil Levine
We can't host a modified qemu with rbd package in EPEL itself because
it would conflict with the version of qemu that ships by default,
breaking EPEL's policy.

However, we will be building and hosting a qemu package for RH6.3, 6.4
on ceph.com very soon and getting a package into a CentOS repo.

Neil


On Mon, May 6, 2013 at 6:06 PM, Barry O'Rourke Barry.O'rou...@ed.ac.uk wrote:
 Hi,

 I built a modified version of the fc17 package that I picked up from
 koji [1]. That might not be ideal for you as fc17 uses systemd rather
 than init, we use an in-house configuration management system which
 handles service start-up so it's not an issue for us.

 I'd be interested to hear how others install qemu on el6 derivatives,
 especially those of you running newer versions.

 Cheers,

 Barry

 1. http://koji.fedoraproject.org/koji/packageinfo?packageID=3685

 On Mon, 2013-05-06 at 16:58 +, w sun wrote:
 Does anyone know if there are RPM packages for EPEL 6-8 ? I have heard
 they have been built but could not find them in the latest 6-8 repo.


 Thanks. --weiguo



 --
 The University of Edinburgh is a charitable body, registered in
 Scotland, with registration number SC005336.



Re: [ceph-users] using Ceph FS as OpenStack Glance's backend

2013-03-19 Thread Neil Levine
I think object storage (using the Swift-compatible Ceph Object
Gateway) is the preferred mechanism for a Glance backend.

Neil

On Tue, Mar 19, 2013 at 6:49 AM, Patrick McGarry patr...@inktank.com wrote:
 Any reason you have chosen to use CephFS here instead of RBD for
 direct integration with Glance?

 http://ceph.com/docs/master/rbd/rbd-openstack/


 Best Regards,


 Patrick McGarry
 Director, Community || Inktank

 http://ceph.com  ||  http://inktank.com
 @scuttlemonkey || @ceph || @inktank


 On Tue, Mar 19, 2013 at 6:12 AM, Li, Chen chen...@intel.com wrote:
 I’m trying to use Ceph FS as glance's backend.

 I have mounted CephFS on the Glance machine and edited
 /etc/glance/glance-api.conf to use the mounted directory.

 But when I upload an image as I used to, I get this error:



 Request returned failure status.

 None

 HTTPServiceUnavailable (HTTP 503)



 If I change back to a native directory, the image uploads successfully.



 Anyone know why?



 Thanks.

 -chen




Re: [ceph-users] using Ceph FS as OpenStack Glance's backend

2013-03-19 Thread Neil Levine
..to be more precise, I should have said: object storage has been the
preferred mechanism of late in OpenStack, but RBD makes more sense due
to the copy-on-write facility. Either way, the Ceph Object Gateway or
Ceph RBD currently makes more sense than CephFS.

neil
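
For reference, pointing Glance straight at RBD is only a few lines in
glance-api.conf (a sketch of the options from that era; the pool and user
names are examples, and the client needs a matching ceph keyring):

    default_store = rbd
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_user = glance
    rbd_store_pool = images
    rbd_store_chunk_size = 8

The full walkthrough is at http://ceph.com/docs/master/rbd/rbd-openstack/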

On Tue, Mar 19, 2013 at 12:48 PM, Neil Levine neil.lev...@inktank.com wrote:
 I think object storage (using the Swift-compatible Ceph Object
 Gateway) is the preferred mechanism for a Glance backend.

 Neil

 On Tue, Mar 19, 2013 at 6:49 AM, Patrick McGarry patr...@inktank.com wrote:
 Any reason you have chosen to use CephFS here instead of RBD for
 direct integration with Glance?

 http://ceph.com/docs/master/rbd/rbd-openstack/


 Best Regards,


 Patrick McGarry
 Director, Community || Inktank

 http://ceph.com  ||  http://inktank.com
 @scuttlemonkey || @ceph || @inktank


 On Tue, Mar 19, 2013 at 6:12 AM, Li, Chen chen...@intel.com wrote:
 I’m trying to use Ceph FS as glance's backend.

 I have mounted CephFS on the Glance machine and edited
 /etc/glance/glance-api.conf to use the mounted directory.

 But when I upload an image as I used to, I get this error:



 Request returned failure status.

 None

 HTTPServiceUnavailable (HTTP 503)



 If I change back to a native directory, the image uploads successfully.



 Anyone know why?



 Thanks.

 -chen




Re: [ceph-users] CloudStack interaction with Ceph/RBD

2013-03-15 Thread Neil Levine
Fantastic diagram, Wido, many thanks for doing this. Ross is in the
process of setting up a community wiki so hopefully we can host this
diagram there.

Neil

On Fri, Mar 15, 2013 at 6:22 AM, Wido den Hollander w...@42on.com wrote:
 Hi,

 In the last couple of months I got several questions from people who asked
 how the Ceph integration with CloudStack works internally.

 CloudStack is being developed rapidly lately and the documentation about how
 it all works internally isn't always up to date.

 I made a small diagram which explains how CloudStack communicates with the
 Ceph cluster: http://ubuntuone.com/6RPLncpt0i3OKuZErfiFHg

 To be clear:
 - The CloudStack management server never contacts the Ceph cluster
 - Only the hypervisors contact the Ceph cluster

 The storage subsystem in CloudStack is undergoing a major overhaul right now and
 those changes will hit 4.2. It will mainly change code and not so much the
 workflow.

 Hope this makes it all a bit more clear.

 --
 Wido den Hollander
 42on B.V.

 Phone: +31 (0)20 700 9902
 Skype: contact42on