Re: [ceph-users] Red Hat to acquire Inktank

2014-05-01 Thread Martin B Nielsen
First off, congrats to inktank!

I'm sure that with Redhat backing the project it will see even quicker
development.

My only worry is support for future non-RHEL platforms; like many others
we've built our ceph stack around Ubuntu, and I'm just hoping support won't
deteriorate into something that is only built/tested around CentOS/Redhat
(ie. moving the I+C from Ubuntu to be on CentOS/Redhat only ->
http://ceph.com/docs/master/start/os-recommendations/ and keeping just a
basic build test for all other distros). I fear a political decision to
only have those extra tests on CentOS/Redhat will 'force' people to run it
on CentOS/Redhat eventually.

Cheers,
Martin



On Wed, Apr 30, 2014 at 2:18 PM, Sage Weil  wrote:

> Today we are announcing some very big news: Red Hat is acquiring Inktank.
> We are very excited about what this means for Ceph, the community, the
> team, our partners, and our customers. Ceph has come a long way in the ten
> years since the first line of code was written, particularly over the
> last two years that Inktank has been focused on its development. The fifty
> members of the Inktank team, our partners, and the hundreds of other
> contributors have done amazing work in bringing us to where we are today.
>
> We believe that, as part of Red Hat, the Inktank team will be able to
> build a better quality Ceph storage platform that will benefit the entire
> ecosystem. Red Hat brings a broad base of expertise in building and
> delivering hardened software stacks as well as a wealth of resources that
> will help Ceph become the transformative and ubiquitous storage platform
> that we always believed it could be.
>
> For existing Inktank customers, this is going to mean turning a reliable
> and robust storage system into something that delivers even more value. In
> particular, joining forces with the Red Hat team will improve our ability
> to address problems at all layers of the storage stack, including in the
> kernel. We naturally recognize that many customers and users have built
> platforms based on other Linux distributions. We will continue to support
> these installations while we determine how to provide the best customer
> experience moving forward and how the next iteration of the enterprise
> Ceph product will be structured. In the meantime, our team remains
> committed to keeping Ceph an open, multiplatform project that works in any
> environment where it makes sense, including other Linux distributions and
> non-Linux operating systems.
>
> Red Hat is one of only a handful of companies that I trust to steward the
> Ceph project. When we started Inktank two years ago, our goal was to build
> the business by making Ceph successful as a broad-based, collaborative
> open source project with a vibrant user, developer, and commercial
> community. Red Hat shares this vision. They are passionate about open
> source, and have demonstrated that they are strong and fair stewards with
> other critical projects (like KVM). Red Hat intends to administer the Ceph
> trademark in a manner that protects the ecosystem as a whole and creates a
> level playing field where everyone is held to the same standards of use.
> Similarly, policies like "upstream first" ensure that bug fixes and
> improvements that go into Ceph-derived products are always shared with the
> community to streamline development and benefit all members of the
> ecosystem.
>
> One important change that will take place involves Inktank's product
> strategy, in which some add-on software we have developed is proprietary.
> In contrast, Red Hat favors a pure open source model. That means that
> Calamari, the monitoring and diagnostics tool that Inktank has developed
> as part of the Inktank Ceph Enterprise product, will soon be open sourced.
>
> This is a big step forward for the Ceph community. Very little will change
> on day one as it will take some time to integrate the Inktank business and
> for any significant changes to happen with our engineering activities.
> However, we are very excited about what is coming next for Ceph and are
> looking forward to this new chapter.
>
> I'd like to thank everyone who has helped Ceph get to where we are today:
> the amazing research group at UCSC where it began, DreamHost for
> supporting us for so many years, the incredible Inktank team, and the many
> contributors and users that have helped shape the system. We continue to
> believe that robust, scalable, and completely open storage platforms like
> Ceph will transform a storage industry that is still dominated by
> proprietary systems. Let's make it happen!
>
> sage


Re: [ceph-users] Red Hat to acquire Inktank

2014-05-01 Thread Mark Kirkwood

On 01/05/14 18:26, Wido den Hollander wrote:

Ok, thanks for the information. Just something that comes up in my mind:

- Repository location and access
- Documentation efforts for non-RHEL platforms
- Support for non-RHEL platforms

I'm confident that RedHat will make Ceph bigger and better, but I'm just
a bit worried about how that will happen and what will happen to the
existing community.



Yes, I want to add a big +1 to that. Currently, working with Ceph on 
Ubuntu (especially) is really pleasant - I'd sure like it to stay that 
way! I'm really not looking forward to hearing things like "Well, we 
prioritize the Redhat platform bullshit bullshit" when something is 
clearly busted on Ubuntu and Debian...


In addition, I'm somewhat nervous that an "Enterprise Ceph" is in the 
works... which would make life a misery for those of us (the majority, I 
suspect) who want to use the community edition (the whole point of open 
source, in my view) and will *never* compromise and buy the "Enterprise" 
version.


regards

Mark


Re: [ceph-users] Red Hat to acquire Inktank

2014-05-01 Thread Stuart Longland
On 01/05/14 17:49, Martin B Nielsen wrote:
> I fear a political decision to only have those extra tests on
> Centos/Redhat will 'force' people to run it on Centos/Redhat
> eventually.

To be fair, I haven't witnessed any pro-RedHat agenda regarding other
technologies like KVM, libvirt and spice.  The latter (spice) was a
little broken for me under Ubuntu, but I've used it on Gentoo without
issue so I figure this is more an Ubuntu thing than a non-RedHat thing.

If Red Hat were to pull an Oracle on us, it wouldn't do any good.  Just
look where it got OpenOffice. :-)

Regards,
-- 
Stuart Longland
Systems Engineer
 _ ___
\  /|_) |   T: +61 7 3535 9619
 \/ | \ | 38b Douglas StreetF: +61 7 3535 9699
   SYSTEMSMilton QLD 4064   http://www.vrt.com.au


Re: [ceph-users] Red Hat to acquire Inktank

2014-05-01 Thread Danny Luhde-Thompson
Congratulations!  From reading the Red Hat announcement, you get the
impression they will want to push Gluster for files and focus on Ceph for
block/object.  As someone who is very excited about CephFS and keen on it
becoming supported this year, I hope it doesn't become de-prioritised in
favour of some other new work.

Best regards,

Danny



On 30 April 2014 13:18, Sage Weil  wrote:

> Today we are announcing some very big news: Red Hat is acquiring Inktank.
> We are very excited about what this means for Ceph, the community, the
> team, our partners, and our customers. Ceph has come a long way in the ten
> years since the first line of code was written, particularly over the
> last two years that Inktank has been focused on its development. The fifty
> members of the Inktank team, our partners, and the hundreds of other
> contributors have done amazing work in bringing us to where we are today.
>
> We believe that, as part of Red Hat, the Inktank team will be able to
> build a better quality Ceph storage platform that will benefit the entire
> ecosystem. Red Hat brings a broad base of expertise in building and
> delivering hardened software stacks as well as a wealth of resources that
> will help Ceph become the transformative and ubiquitous storage platform
> that we always believed it could be.
>
> For existing Inktank customers, this is going to mean turning a reliable
> and robust storage system into something that delivers even more value. In
> particular, joining forces with the Red Hat team will improve our ability
> to address problems at all layers of the storage stack, including in the
> kernel. We naturally recognize that many customers and users have built
> platforms based on other Linux distributions. We will continue to support
> these installations while we determine how to provide the best customer
> experience moving forward and how the next iteration of the enterprise
> Ceph product will be structured. In the meantime, our team remains
> committed to keeping Ceph an open, multiplatform project that works in any
> environment where it makes sense, including other Linux distributions and
> non-Linux operating systems.
>
> Red Hat is one of only a handful of companies that I trust to steward the
> Ceph project. When we started Inktank two years ago, our goal was to build
> the business by making Ceph successful as a broad-based, collaborative
> open source project with a vibrant user, developer, and commercial
> community. Red Hat shares this vision. They are passionate about open
> source, and have demonstrated that they are strong and fair stewards with
> other critical projects (like KVM). Red Hat intends to administer the Ceph
> trademark in a manner that protects the ecosystem as a whole and creates a
> level playing field where everyone is held to the same standards of use.
> Similarly, policies like "upstream first" ensure that bug fixes and
> improvements that go into Ceph-derived products are always shared with the
> community to streamline development and benefit all members of the
> ecosystem.
>
> One important change that will take place involves Inktank's product
> strategy, in which some add-on software we have developed is proprietary.
> In contrast, Red Hat favors a pure open source model. That means that
> Calamari, the monitoring and diagnostics tool that Inktank has developed
> as part of the Inktank Ceph Enterprise product, will soon be open sourced.
>
> This is a big step forward for the Ceph community. Very little will change
> on day one as it will take some time to integrate the Inktank business and
> for any significant changes to happen with our engineering activities.
> However, we are very excited about what is coming next for Ceph and are
> looking forward to this new chapter.
>
> I'd like to thank everyone who has helped Ceph get to where we are today:
> the amazing research group at UCSC where it began, DreamHost for
> supporting us for so many years, the incredible Inktank team, and the many
> contributors and users that have helped shape the system. We continue to
> believe that robust, scalable, and completely open storage platforms like
> Ceph will transform a storage industry that is still dominated by
> proprietary systems. Let's make it happen!
>
> sage



-- 
Mean Trading Systems LLP
http://www.meantradingsystems.com


Re: [ceph-users] Red Hat to acquire Inktank

2014-05-01 Thread Cedric Lemarchand
What huge news! Big congrats to the Ceph team, without forgetting
all the volunteers who helped along the way.

Keep up the amazing work; Ceph is going to be a revolution for
storage, and that's great.

Follow-up responses inline.

On 01/05/2014 08:26, Wido den Hollander wrote:
> On 04/30/2014 10:46 PM, Patrick McGarry wrote:
>> Hey Danny (and Wido),
>>
>> WRT the foundation I'm sure you can see why it has been on hold for
>> the last few weeks.  However, this is not signifying the death of the
>> effort.  Both Sage and I still feel that this is a discussion worth
>> having.  However, the discussion hasn't happened yet, so it's far too
>> early to be able to say anything beyond that.
> Ok, thanks for the information. Just something that comes up in my mind:
>
> - Repository location and access
> - Documentation efforts for non-RHEL platforms
> - Support for non-RHEL platforms
Indeed, everyone more comfortable using Ceph on ".deb" platforms is
thinking along those lines at the moment.
> I'm confident that RedHat will make Ceph bigger and better, but I'm
> just a bit worried about how that will happen and what will happen to
> the existing community. 
From what I have seen, the existing community is strong, and for now I
don't see what could change that, so I remain optimistic ;-). I also read
that Ubuntu will provide enterprise support on the 14.04 LTS, so we will
see how it evolves.

I think RedHat's interest in Ceph is very market oriented: it gives their
current and potential enterprise customers more confidence and assurance,
which makes RedHat more attractive. They have just added another string
to their bow ... and it's a win/win from my point of view.

Well, just my 2 cents.

Cheers

--
Cédric



[ceph-users] how to modify the osd map

2014-05-01 Thread ??
Hello, my cluster hit some errors two days ago, and now some pgs are in the
incomplete state.
For example, in the current osdmap, pg 49.6 maps to [35,29,0], but osds 35, 29
and 0 do not have any data at all for pg 49.6. I checked the entire cluster and
found that osd.42 has the complete data for the pg.
I think that if I can modify the osdmap and map pg 49.6 to osd.42, the pg can
maybe be fixed, but I don't know how to modify the osdmap. Can you help me?
Thank you.

I have used "ceph osd getmap > osdmap.dat" to get the newest osdmap.
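As a starting point, I think osdmaptool can at least show how the pg
currently maps before I try any change (filenames here are only examples):

  osdmaptool osdmap.dat --print | head    # dump epoch, pools and flags
  osdmaptool osdmap.dat --test-map-pg 49.6    # show which osds pg 49.6 maps to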


Re: [ceph-users] "ceph-deploy osd activate" error: AttributeError: 'module' object has no attribute 'logger' exception

2014-05-01 Thread Alfredo Deza
This is already marked as urgent and I am working on it. A point
release should be coming up as soon as possible.

I apologize for the bug.

The workaround would be to use 1.4.0 until the new version comes out.
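For example, if ceph-deploy was installed via pip, pinning back should be
as simple as this (a sketch assuming a pip-based install; adjust for your
package manager):

  sudo pip install ceph-deploy==1.4.0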

On Wed, Apr 30, 2014 at 11:01 PM, Mike Dawson  wrote:
> Victor,
>
> This is a verified issue reported earlier today:
>
> http://tracker.ceph.com/issues/8260
>
> Cheers,
> Mike
>
>
>
> On 4/30/2014 3:10 PM, Victor Bayon wrote:
>>
>> Hi all,
>> I am following the "quick-ceph-deploy" tutorial [1], and I am getting an
>> exception when running "ceph-deploy osd activate". See below [2].
>> I am following the quick tutorial step by step, except that
>> "ceph-deploy mon create-initial" does not seem to gather the keys and I
>> have to execute
>>
>> manually with
>>
>> ceph-deploy gatherkeys node01
>>
>> I am following the same configuration with
>> - one admin node (myhost)
>> - 1 monitoring node (node01)
>> - 2 osd (node02, node03)
>>
>>
>> I am in Ubuntu Server 12.04 LTS (precise) and using ceph "emperor"
>>
>>
>> Any help greatly appreciated
>>
>> Many thanks
>>
>> Best regards
>>
>> /V
>>
>> [1] http://ceph.com/docs/master/start/quick-ceph-deploy/
>> [2] Error:
>> ceph@myhost:~/cluster$ ceph-deploy osd activate node02:/var/local/osd0
>> node03:/var/local/osd1
>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>> /home/ceph/.cephdeploy.conf
>> [ceph_deploy.cli][INFO  ] Invoked (1.5.0): /usr/bin/ceph-deploy osd
>> activate node02:/var/local/osd0 node03:/var/local/osd1
>> [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
>> node02:/var/local/osd0: node03:/var/local/osd1:
>> [node02][DEBUG ] connected to host: node02
>> [node02][DEBUG ] detect platform information from remote host
>> [node02][DEBUG ] detect machine type
>> [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
>> [ceph_deploy.osd][DEBUG ] activating host node02 disk /var/local/osd0
>> [ceph_deploy.osd][DEBUG ] will use init type: upstart
>> [node02][INFO  ] Running command: sudo ceph-disk-activate --mark-init
>> upstart --mount /var/local/osd0
>> [node02][WARNIN] got latest monmap
>> [node02][WARNIN] 2014-04-30 19:36:30.268882 7f506fd07780 -1 journal
>> FileJournal::_open: disabling aio for non-block journal.  Use
>> journal_force_aio to force use of aio anyway
>> [node02][WARNIN] 2014-04-30 19:36:30.298239 7f506fd07780 -1 journal
>> FileJournal::_open: disabling aio for non-block journal.  Use
>> journal_force_aio to force use of aio anyway
>> [node02][WARNIN] 2014-04-30 19:36:30.301091 7f506fd07780 -1
>> filestore(/var/local/osd0) could not find 23c2fcde/osd_superblock/0//-1
>> in index: (2) No such file or directory
>> [node02][WARNIN] 2014-04-30 19:36:30.307474 7f506fd07780 -1 created
>> object store /var/local/osd0 journal /var/local/osd0/journal for osd.0
>> fsid 76de3b72-44e3-47eb-8bd7-2b5b6e3666eb
>> [node02][WARNIN] 2014-04-30 19:36:30.307512 7f506fd07780 -1 auth: error
>> reading file: /var/local/osd0/keyring: can't open
>> /var/local/osd0/keyring: (2) No such file or directory
>> [node02][WARNIN] 2014-04-30 19:36:30.307547 7f506fd07780 -1 created new
>> key in keyring /var/local/osd0/keyring
>> [node02][WARNIN] added key for osd.0
>> Traceback (most recent call last):
>>   File "/usr/bin/ceph-deploy", line 21, in <module>
>>     sys.exit(main())
>>   File "/usr/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py",
>> line 62, in newfunc
>>     return f(*a, **kw)
>>   File "/usr/lib/python2.7/dist-packages/ceph_deploy/cli.py", line 147,
>> in main
>>     return args.func(args)
>>   File "/usr/lib/python2.7/dist-packages/ceph_deploy/osd.py", line 532,
>> in osd
>>     activate(args, cfg)
>>   File "/usr/lib/python2.7/dist-packages/ceph_deploy/osd.py", line 338,
>> in activate
>>     catch_osd_errors(distro.conn, distro.logger, args)
>> AttributeError: 'module' object has no attribute 'logger'
>>
>>


Re: [ceph-users] Red Hat to acquire Inktank

2014-05-01 Thread Loic Dachary
Hi Patrick,

It would be great to start a thread on that topic.

Cheers

On 30/04/2014 22:46, Patrick McGarry wrote:
> Hey Danny (and Wido),
> 
> WRT the foundation I'm sure you can see why it has been on hold for
> the last few weeks.  However, this is not signifying the death of the
> effort.  Both Sage and I still feel that this is a discussion worth
> having.  However, the discussion hasn't happened yet, so it's far too
> early to be able to say anything beyond that.
> 
> Once we get all of the acquisition stuff settled we'll start having
> those conversations again.  There are obvious pros and cons to both
> sides, so the outcome is far from obvious.  I will definitely let you
> all know as soon as there is information to be had.
> 
> Sorry I couldn't be more informative, but we're still looking at it!
> 
> 
> 
> 
> Best Regards,
> 
> Patrick McGarry
> Director, Community || Inktank
> http://ceph.com  ||  http://inktank.com
> @scuttlemonkey || @ceph || @inktank
> 
> 
> On Wed, Apr 30, 2014 at 4:34 PM, Danny Al-Gaaf  
> wrote:
>> Am 30.04.2014 14:18, schrieb Sage Weil:
>>> Today we are announcing some very big news: Red Hat is acquiring Inktank.
>>> We are very excited about what this means for Ceph, the community, the
>>> team, our partners, and our customers. Ceph has come a long way in the ten
>>> years since the first line of code was written, particularly over the
>>> last two years that Inktank has been focused on its development. The fifty
>>> members of the Inktank team, our partners, and the hundreds of other
>>> contributors have done amazing work in bringing us to where we are today.
>>>
>>> We believe that, as part of Red Hat, the Inktank team will be able to
>>> build a better quality Ceph storage platform that will benefit the entire
>>> ecosystem. Red Hat brings a broad base of expertise in building and
>>> delivering hardened software stacks as well as a wealth of resources that
>>> will help Ceph become the transformative and ubiquitous storage platform
>>> that we always believed it could be.
>>
>> What does that mean to the idea/plans to move Ceph to a Foundation? Are
>> they canceled?
>>
>> Danny
>>
>>

-- 
Loïc Dachary, Artisan Logiciel Libre





[ceph-users] v0.67.8 released

2014-05-01 Thread Sage Weil
This Dumpling point release fixes several non-critical issues since 
v0.67.7.  The most notable bug fixes are an auth fix in librbd (observed 
as an occasional crash from KVM), an improvement in network failure 
detection with the monitor, and fixes for several hard-to-hit OSD crashes 
or hangs.

We recommend that all users upgrade at their convenience.

Upgrading
---------

* The 'rbd ls' command now returns success with an empty list when a pool
  does not store any rbd images.  Previously it would return an ENOENT 
  error.

* Ceph will now issue a health warning if the 'mon osd down out
  interval' config option is set to zero.  This warning can be
  disabled by adding 'mon warn on osd down out interval zero = false'
  to ceph.conf.
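  As a minimal illustration, the entry would look like this in ceph.conf
  (shown under [mon] here; [global] should work as well):

    [mon]
        mon warn on osd down out interval zero = false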

Notable Changes
---

* all: improve keepalive detection of failed monitor connections (#7888, 
  Sage Weil)
* ceph-fuse, libcephfs: pin inodes during readahead, fixing rare crash 
  (#7867, Sage Weil)
* librbd: make cache writeback a bit less aggressive (Sage Weil)
* librbd: make symlink for qemu to detect librbd in RPM (#7293, Josh 
  Durgin)
* mon: allow 'hashpspool' pool flag to be set and unset (Loic Dachary)
* mon: commit paxos state only after entire quorum acks, fixing rare race 
  where prior round state is readable (#7736, Sage Weil)
* mon: make elections and timeouts a bit more robust (#7212, Sage Weil)
* mon: prevent extreme pool split operations (Greg Farnum)
* mon: wait for quorum for get_version requests to close rare pool 
  creation race (#7997, Sage Weil)
* mon: warn on 'mon osd down out interval = 0' (#7784, Joao Luis)
* msgr: fix byte-order for auth challenge, fixing auth errors on 
  big-endian clients (#7977, Dan Mick)
* msgr: fix occasional crash in authentication code (usually triggered by 
  librbd) (#6840, Josh Durgin)
* msgr: fix rebind() race (#6992, Xihui He)
* osd: avoid timeouts during slow PG deletion (#6528, Samuel Just)
* osd: fix bug in pool listing during recovery (#6633, Samuel Just)
* osd: fix queue limits, fixing recovery stalls (#7706, Samuel Just)
* osd: fix rare peering crashes (#6722, #6910, Samuel Just)
* osd: fix rare recovery hang (#6681, Samuel Just)
* osd: improve error handling on journal errors (#7738, Sage Weil)
* osd: reduce load on the monitor from OSDMap subscriptions (Greg Farnum)
* osd: retry GetLog on peer osd startup, fixing some rare peering stalls 
  (#6909, Samuel Just)
* osd: reset journal state on remount to fix occasional crash on OSD 
  startup (#8019, Sage Weil)
* osd: share maps with peers more aggressively (Greg Farnum)
* rbd: make it harder to delete an rbd image that is currently in use 
  (#7076, Ilya Dryomov)
* rgw: deny writes to secondary zone by non-system users (#6678, Yehuda 
  Sadeh)
* rgw: don't log system requests in the usage log (#6889, Yehuda Sadeh)
* rgw: fix bucket recreation (#6951, Yehuda Sadeh)
* rgw: fix Swift range response (#7099, Julien Calvet, Yehuda Sadeh)
* rgw: fix URL escaping (#8202, Yehuda Sadeh)
* rgw: fix whitespace trimming in http headers (#7543, Yehuda Sadeh)
* rgw: make multi-object deletion idempotent (#7346, Yehuda Sadeh)

Getting Ceph
------------

* Git at git://github.com/ceph/ceph.git
* Tarball at http://ceph.com/download/ceph-0.67.8.tar.gz
* For packages, see http://ceph.com/docs/master/install/get-packages
* For ceph-deploy, see http://ceph.com/docs/master/install/install-ceph-deploy



[ceph-users] Infiniband: was: Red Hat to acquire Inktank

2014-05-01 Thread Matt W. Benjamin
Hi,

The XioMessenger work provides native support for Infiniband, if I understand
you correctly.

Early testing is now possible, on a codebase pulled up to Firefly.  See 
discussion
from earlier this week.

Regards,

Matt

- "Gandalf Corvotempesta"  wrote:

> 2014-04-30 14:18 GMT+02:00 Sage Weil :
> > Today we are announcing some very big news: Red Hat is acquiring
> Inktank.
> 
> Great news.
> Any chance of getting native Infiniband support in ceph like in
> GlusterFS?

-- 
Matt Benjamin
The Linux Box
206 South Fifth Ave. Suite 150
Ann Arbor, MI  48104

http://linuxbox.com

tel.  734-761-4689 
fax.  734-769-8938 
cel.  734-216-5309 


Re: [ceph-users] Red Hat to acquire Inktank

2014-05-01 Thread Matt W. Benjamin
Hi,

Sure, that's planned for integration in Giant (see Blueprints).

Matt

- "Gandalf Corvotempesta"  wrote:

> 2014-05-01 0:11 GMT+02:00 Mark Nelson:
> > Usable is such a vague word.  I imagine it's testable after a
> fashion. :D
> 
> Ok, but I'd prefer "official" support, with IB integrated in the main
> ceph repo

-- 
Matt Benjamin
The Linux Box
206 South Fifth Ave. Suite 150
Ann Arbor, MI  48104

http://linuxbox.com

tel.  734-761-4689 
fax.  734-769-8938 
cel.  734-216-5309 


Re: [ceph-users] Red Hat to acquire Inktank

2014-05-01 Thread Matt W. Benjamin
Hi,

I should have been more careful.  Our efforts are aimed at Giant.  We're
serious about meeting delivery targets.  There's lots of shakedown, and
of course further integration work, still to go.

Regards,

Matt

- "Gandalf Corvotempesta"  wrote:

> 2014-05-01 0:20 GMT+02:00 Matt W. Benjamin :
> > Hi,
> >
> > Sure, that's planned for integration in Giant (see Blueprints).
> 
> Great. Any ETA? Firefly was planned for February :)

-- 
Matt Benjamin
The Linux Box
206 South Fifth Ave. Suite 150
Ann Arbor, MI  48104

http://linuxbox.com

tel.  734-761-4689 
fax.  734-769-8938 
cel.  734-216-5309 


Re: [ceph-users] "ceph-deploy osd activate" error: AttributeError: 'module' object has no attribute 'logger' exception

2014-05-01 Thread Alfredo Deza
Victor, ceph-deploy v1.5.1 has been released with a fix that should
take care of your problem.

On Thu, May 1, 2014 at 7:32 AM, Alfredo Deza  wrote:
> This is already marked as urgent and I am working on it. A point
> release should be coming up as soon as possible.
>
> I apologize for the bug.
>
> The workaround would be to use 1.4.0 until the new version comes out.
>
> On Wed, Apr 30, 2014 at 11:01 PM, Mike Dawson  
> wrote:
>> Victor,
>>
>> This is a verified issue reported earlier today:
>>
>> http://tracker.ceph.com/issues/8260
>>
>> Cheers,
>> Mike
>>
>>
>>
>> On 4/30/2014 3:10 PM, Victor Bayon wrote:
>>>
>>> Hi all,
>>> I am following the "quick-ceph-deploy" tutorial [1], and I am getting an
>>> exception when running "ceph-deploy osd activate". See below [2].
>>> I am following the quick tutorial step by step, except that
>>> "ceph-deploy mon create-initial" does not seem to gather the keys and I
>>> have to execute
>>>
>>> manually with
>>>
>>> ceph-deploy gatherkeys node01
>>>
>>> I am following the same configuration with
>>> - one admin node (myhost)
>>> - 1 monitoring node (node01)
>>> - 2 osd (node02, node03)
>>>
>>>
>>> I am in Ubuntu Server 12.04 LTS (precise) and using ceph "emperor"
>>>
>>>
>>> Any help greatly appreciated
>>>
>>> Many thanks
>>>
>>> Best regards
>>>
>>> /V
>>>
>>> [1] http://ceph.com/docs/master/start/quick-ceph-deploy/
>>> [2] Error:
>>> ceph@myhost:~/cluster$ ceph-deploy osd activate node02:/var/local/osd0
>>> node03:/var/local/osd1
>>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>>> /home/ceph/.cephdeploy.conf
>>> [ceph_deploy.cli][INFO  ] Invoked (1.5.0): /usr/bin/ceph-deploy osd
>>> activate node02:/var/local/osd0 node03:/var/local/osd1
>>> [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
>>> node02:/var/local/osd0: node03:/var/local/osd1:
>>> [node02][DEBUG ] connected to host: node02
>>> [node02][DEBUG ] detect platform information from remote host
>>> [node02][DEBUG ] detect machine type
>>> [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
>>> [ceph_deploy.osd][DEBUG ] activating host node02 disk /var/local/osd0
>>> [ceph_deploy.osd][DEBUG ] will use init type: upstart
>>> [node02][INFO  ] Running command: sudo ceph-disk-activate --mark-init
>>> upstart --mount /var/local/osd0
>>> [node02][WARNIN] got latest monmap
>>> [node02][WARNIN] 2014-04-30 19:36:30.268882 7f506fd07780 -1 journal
>>> FileJournal::_open: disabling aio for non-block journal.  Use
>>> journal_force_aio to force use of aio anyway
>>> [node02][WARNIN] 2014-04-30 19:36:30.298239 7f506fd07780 -1 journal
>>> FileJournal::_open: disabling aio for non-block journal.  Use
>>> journal_force_aio to force use of aio anyway
>>> [node02][WARNIN] 2014-04-30 19:36:30.301091 7f506fd07780 -1
>>> filestore(/var/local/osd0) could not find 23c2fcde/osd_superblock/0//-1
>>> in index: (2) No such file or directory
>>> [node02][WARNIN] 2014-04-30 19:36:30.307474 7f506fd07780 -1 created
>>> object store /var/local/osd0 journal /var/local/osd0/journal for osd.0
>>> fsid 76de3b72-44e3-47eb-8bd7-2b5b6e3666eb
>>> [node02][WARNIN] 2014-04-30 19:36:30.307512 7f506fd07780 -1 auth: error
>>> reading file: /var/local/osd0/keyring: can't open
>>> /var/local/osd0/keyring: (2) No such file or directory
>>> [node02][WARNIN] 2014-04-30 19:36:30.307547 7f506fd07780 -1 created new
>>> key in keyring /var/local/osd0/keyring
>>> [node02][WARNIN] added key for osd.0
>>> Traceback (most recent call last):
>>>   File "/usr/bin/ceph-deploy", line 21, in <module>
>>>     sys.exit(main())
>>>   File "/usr/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py",
>>> line 62, in newfunc
>>>     return f(*a, **kw)
>>>   File "/usr/lib/python2.7/dist-packages/ceph_deploy/cli.py", line 147,
>>> in main
>>>     return args.func(args)
>>>   File "/usr/lib/python2.7/dist-packages/ceph_deploy/osd.py", line 532,
>>> in osd
>>>     activate(args, cfg)
>>>   File "/usr/lib/python2.7/dist-packages/ceph_deploy/osd.py", line 338,
>>> in activate
>>>     catch_osd_errors(distro.conn, distro.logger, args)
>>> AttributeError: 'module' object has no attribute 'logger'
>>>
>>>


Re: [ceph-users] [ANN] ceph-deploy 1.5.0 released!

2014-05-01 Thread Alfredo Deza
A minor issue found while attempting to activate OSDs has just been
fixed and released as ceph-deploy v1.5.1.

Even if you haven't encountered this particular issue I do recommend
an upgrade anyway.
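
For example (a sketch assuming a pip-based install; adjust for apt or yum):

  sudo pip install -U ceph-deploy
  ceph-deploy --version   # should now report 1.5.1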


Thanks!


Alfredo

On Mon, Apr 28, 2014 at 3:17 PM, Alfredo Deza  wrote:
> Hi All,
>
> There is a new release of ceph-deploy, the easy deployment tool for Ceph.
>
> This release comes with a few bug fixes and a few features:
>
> * implement `osd list`
> * add a status check on OSDs when deploying
> * sync local mirrors to remote hosts when installing
> * support flags and options set in cephdeploy.conf
>
> The full list of changes and fixes is documented at:
>
> http://ceph.com/ceph-deploy/docs/changelog.html#id1
>
> Make sure you update!


[ceph-users] cannot revert lost objects

2014-05-01 Thread kevin horan
I have an issue very similar to this thread: 
http://article.gmane.org/gmane.comp.file-systems.ceph.user/3197. I have 
19 unfound objects that are part of a VM image that I have already 
recovered from backup. If I query pg 4.30 (the one with the unfound 
objects), it says it is still querying osd.8, looking for the unfound 
objects. Because of this, when I run:


# ceph pg 4.30 mark_unfound_lost revert
Error EINVAL: pg has 19 unfound objects but we haven't probed all 
sources, not marking lost


It refuses to remove them. It has been "querying" osd.8 for almost 2 
days now, and there is only 200GB on it, so I don't see why it would 
take so long. So how can I force it to either stop querying, or revert 
the unfound objects?


Here is how I got into this state. I have only 6 OSDs total, 3 on one 
host (vashti) and 3 on another (zadok). I set the noout flag so I could 
reboot zadok. Zadok was down for 2 minutes. When it came up ceph began 
recovering the objects that had not been replicated yet. Before recovery 
finished, osd.6, on vashti, died (IO errors on disk, whole drive 
un-recoverable). Since osd.6 had objects that had not yet had a chance 
to replicate to any OSD on zadok, they were lost. I cannot recover 
anything further from osd.6.



Here is the output of "ceph pg 4.30 query":

{ "state": "active+recovering+degraded+remapped",
  "epoch": 20364,
  "up": [
2,
0],
  "acting": [
1,
2],
  "info": { "pgid": "4.30",
  "last_update": "20364'10377395",
  "last_complete": "0'0",
  "log_tail": "20161'10325373",
  "last_user_version": 10377395,
  "last_backfill": "MAX",
  "purged_snaps": "[1~7,10~4]",
  "history": { "epoch_created": 386,
  "last_epoch_started": 20323,
  "last_epoch_clean": 20161,
  "last_epoch_split": 0,
  "same_up_since": 20322,
  "same_interval_since": 20322,
  "same_primary_since": 20311,
  "last_scrub": "20118'10315975",
  "last_scrub_stamp": "2014-04-29 11:54:57.358096",
  "last_deep_scrub": "20050'10061396",
  "last_deep_scrub_stamp": "2014-04-24 11:39:40.313745",
  "last_clean_scrub_stamp": "2014-04-29 11:54:57.358096"},
  "stats": { "version": "20364'10377395",
  "reported_seq": "17957416",
  "reported_epoch": "20364",
  "state": "active+recovering+degraded+remapped",
  "last_fresh": "2014-05-01 10:00:51.210564",
  "last_change": "2014-05-01 09:03:31.708198",
  "last_active": "2014-05-01 10:00:51.210564",
  "last_clean": "2014-04-29 16:14:12.127562",
  "last_became_active": "0.00",
  "last_unstale": "2014-05-01 10:00:51.210564",
  "mapping_epoch": 20317,
  "log_start": "20161'10325373",
  "ondisk_log_start": "20161'10325373",
  "created": 386,
  "last_epoch_clean": 20161,
  "parent": "0.0",
  "parent_split_bits": 0,
  "last_scrub": "20118'10315975",
  "last_scrub_stamp": "2014-04-29 11:54:57.358096",
  "last_deep_scrub": "20050'10061396",
  "last_deep_scrub_stamp": "2014-04-24 11:39:40.313745",
  "last_clean_scrub_stamp": "2014-04-29 11:54:57.358096",
  "log_size": 52022,
  "ondisk_log_size": 52022,
  "stats_invalid": "0",
  "stat_sum": { "num_bytes": 9078859264,
  "num_objects": 2598,
  "num_object_clones": 360,
  "num_object_copies": 0,
  "num_objects_missing_on_primary": 0,
  "num_objects_degraded": 0,
  "num_objects_unfound": 0,
  "num_read": 703887,
  "num_read_kb": 164523202,
  "num_write": 8785487,
  "num_write_kb": 69327327,
  "num_scrub_errors": 0,
  "num_shallow_scrub_errors": 0,
  "num_deep_scrub_errors": 0,
  "num_objects_recovered": 24428,
  "num_bytes_recovered": 93261249024,
  "num_keys_recovered": 0},
  "stat_cat_sum": {},
  "up": [
2,
0],
  "acting": [
1,
2]},
  "empty": 0,
  "dne": 0,
  "incomplete": 0,
  "last_epoch_started": 20323},
  "recovery_state": [
{ "name": "Started\/Primary\/Active",
  "enter_time": "2014-05-01 09:03:30.557244",
  "might_have_unfound": [
{ "osd": 0,
  "status": "already probed"},
{ "osd": 2,
  "status": "already probed"},
{ "osd": 6,
  "status": "osd is down"},
{ "osd": 8,
  "status": "querying"}],
  "recovery_progress": { "backfill_target": 2,
  "waiting_on_backfill": 0,
  "last_backfill_started": "0\/\/0\/\/-1",
  "backfill_info": { "begin": "0\/\/0\/\/-1",
  "end": "0\/\/0\/\/-1",
  

Re: [ceph-users] Red Hat to acquire Inktank

2014-05-01 Thread Neil Levine
There are a few things that need to be tidied up, and we will need to
liaise with our new Red Hat colleagues about license choice, but I
believe the code is in relatively good shape to be open sourced, thanks
to the team who've been working on it (Yan, Gregory, John and Dan).
It's important to me that when we open it up, everything is ready to be
hacked on straight away, with good documentation, but I'd also like to
share some of the options for where I think we can take it. There is a
lot of potential :-)

Neil

On Wed, Apr 30, 2014 at 8:23 AM, Mark Nelson  wrote:
> On 04/30/2014 10:19 AM, Loic Dachary wrote:
>>
>> Hi Sage,
>>
>> Congratulations, this is good news.
>>
>> On 30/04/2014 14:18, Sage Weil wrote:
>>
>>> One important change that will take place involves Inktank's product
>>> strategy, in which some add-on software we have developed is proprietary.
>>> In contrast, Red Hat favors a pure open source model. That means that
>>> Calamari, the monitoring and diagnostics tool that Inktank has developed
>>> as part of the Inktank Ceph Enterprise product, will soon be open
>>> sourced.
>>
>>
>> I'm glad to hear that this acquisitions puts an end to the proprietary
>> software created by the Inktank Ceph developers. And I assume they are also
>> happy about the change :-)
>
>
> I for one am excited about an open source Calamari! :)
>
>>
>> Cheers
>>
>>
>>
>>


Re: [ceph-users] cannot revert lost objects

2014-05-01 Thread Craig Lewis

On 5/1/14 10:11 , kevin horan wrote:
Here is how I got into this state. I have only 6 OSDs total, 3 on one 
host (vashti) and 3 on another (zadok). I set the noout flag so I 
could reboot zadok. Zadok was down for 2 minutes. When it came up ceph 
began recovering the objects that had not been replicated yet. Before 
recovery finished, osd.6, on vashti, died (IO errors on disk, whole 
drive un-recoverable). Since osd.6 had objects that had not yet had a 
chance to replicate to any OSD on zadok, they were lost. I cannot 
recover anything further from osd.6.



I'm pretty far out of my element here, but if osd.6 is gone, it might 
help to mark it lost:

ceph osd lost 6
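
If I remember right, newer releases ask for an explicit safety flag before
they will actually do anything, something like:

ceph osd lost 6 --yes-i-really-mean-it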

I had similar issues when I lost some PGs.  I don't think that it 
actually fixed my issue, but marking osds as lost did help Ceph move 
forward.



You could also try deleting the broken RBD image, and see if that helps.



--

*Craig Lewis*
Senior Systems Engineer
Office +1.714.602.1309
Email cle...@centraldesktop.com 

*Central Desktop. Work together in ways you never thought possible.*
Connect with us: Website | Twitter | Facebook | LinkedIn | Blog





Re: [ceph-users] Ceph unstable when upgrading from emperor (v0.72.2) to firefly (v0.80-rc1-16-g2708c3c)

2014-05-01 Thread Gregory Farnum
Well, we haven't finished diagnosing #8232 yet — I think it's actually
different from #6992 — so I can't really tell you what will fix it!
Restarting the hosts might. If not, I'd love for somebody to reproduce
this with "debug ms = 10" enabled so I can get a log and see what's
causing it.
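For anyone willing to try, something like this on the affected hosts should
be enough, either in ceph.conf:

  [osd]
      debug ms = 10

or injected at runtime:

  ceph tell osd.* injectargs '--debug-ms 10'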
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Wed, Apr 30, 2014 at 2:25 PM, Thanh Tran  wrote:
> Hi Gregory,
>
> Sorry for replying late. Issue #8232 references issue #6992, which says
> that issue was resolved 23 days ago. Wasn't the code for that fix
> included correctly?
>
> I restarted one OSD at a time; after that, osds have been flapping as
> described in TROUBLESHOOTING OSDS at
> http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/.
>
> Do I need to reboot the hosts, as Guang Yang reported in issue #8232,
> to resolve this?
>
> Best regards,
> Thanh Tran
>
>
> On Wed, Apr 30, 2014 at 12:53 AM, Gregory Farnum  wrote:
>>
>> Hmm, I think this might actually be another instance of
>> http://tracker.ceph.com/issues/8232, which was just reported
>> yesterday.
>> That said, I think that if you restart one OSD at a time, you should
>> be able to avoid the race condition. It was restarting all of them
>> simultaneously that got you into trouble.
>> -Greg
>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>
>>
>> On Tue, Apr 29, 2014 at 12:25 AM, Thanh Tran  wrote:
>> > Hi,
>> >
>> > I upgraded my ceph from emperor to firefly (v0.80-rc1-16-g2708c3c) and
>> > restarted the whole cluster after finishing the upgrade.
>> > After restarting, ceph began checking and recovering pgs, osds began
>> > randomly going up and down constantly, and some osds crashed and were
>> > marked as down. I tried to start the crashed osds manually, but later
>> > the other osds crashed.
>> >
>> > My cluster has 3 mons and 24 osds running on 3 hosts.
>> > Ceph is upgraded from "deb
>> > http://gitbuilder.ceph.com/ceph-deb-quantal-x86_64-basic/ref/firefly
>> > quantal
>> > main".
>> >
>> > log information of one osd crashed:
>> > -1> 2014-04-28 17:42:36.751933 7f6aacb30700  2 --
>> > 10.76.0.44:6814/192022485 >> 10.76.0.42:6800/5060598 pipe(0x2af92c80
>> > sd=26
>> > :17458 s=1 pgs=0 cs=0 l=0 c=0x77636c60). got newly_acked_seq 226 vs
>> > out_seq
>> > 0
>> >  0> 2014-04-28 17:42:36.752674 7f6ab293a700 -1 msg/Pipe.cc: In
>> > function
>> > 'int Pipe::connect()' thread 7f6ab293a700 time 2014-04-28
>> > 17:42:36.750997
>> > msg/Pipe.cc: 1070: FAILED assert(m)
>> >
>> >  ceph version 0.80-rc1-16-g2708c3c
>> > (2708c3c559d99e6f3b557ee1d223efa3745f655c)
>> >  1: (Pipe::connect()+0x3b61) [0xc18ab1]
>> >  2: (Pipe::writer()+0x65f) [0xc193ef]
>> >  3: (Pipe::Writer::entry()+0xd) [0xc240ad]
>> >  4: (()+0x7e9a) [0x7f6b0a3a7e9a]
>> >  5: (clone()+0x6d) [0x7f6b089523fd]
>> >  NOTE: a copy of the executable, or `objdump -rdS <executable>` is
>> > needed to interpret this.
>> >
>> > --- logging levels ---
>> >0/ 5 none
>> >0/ 1 lockdep
>> >0/ 1 context
>> >1/ 1 crush
>> >1/ 5 mds
>> >1/ 5 mds_balancer
>> >1/ 5 mds_locker
>> >1/ 5 mds_log
>> >1/ 5 mds_log_expire
>> >1/ 5 mds_migrator
>> >0/ 1 buffer
>> >0/ 1 timer
>> >0/ 1 filer
>> >0/ 1 striper
>> >0/ 1 objecter
>> >0/ 5 rados
>> >0/ 5 rbd
>> >0/ 5 journaler
>> >0/ 5 objectcacher
>> >0/ 5 client
>> >0/ 5 osd
>> >0/ 5 optracker
>> >0/ 5 objclass
>> >1/ 3 filestore
>> >1/ 3 keyvaluestore
>> >1/ 3 journal
>> >0/ 5 ms
>> >1/ 5 mon
>> >0/10 monc
>> >1/ 5 paxos
>> >0/ 5 tp
>> >1/ 5 auth
>> >1/ 5 crypto
>> >1/ 1 finisher
>> >1/ 5 heartbeatmap
>> >1/ 5 perfcounter
>> >1/ 5 rgw
>> >1/ 5 javaclient
>> >1/ 5 asok
>> >1/ 1 throttle
>> >   -2/-2 (syslog threshold)
>> >   -1/-1 (stderr threshold)
>> >   max_recent 1
>> >   max_new 1000
>> >   log_file /var/log/ceph/ceph-osd.19.log
>> > --- end dump of recent events ---
>> > 2014-04-28 17:42:36.756085 7f6aacb30700 -1 msg/Pipe.cc: In function 'int
>> > Pipe::connect()' thread 7f6aacb30700 time 2014-04-28 17:42:36.751971
>> > msg/Pipe.cc: 1070: FAILED assert(m)
>> >
>> >  ceph version 0.80-rc1-16-g2708c3c
>> > (2708c3c559d99e6f3b557ee1d223efa3745f655c)
>> >  1: (Pipe::connect()+0x3b61) [0xc18ab1]
>> >  2: (Pipe::writer()+0x65f) [0xc193ef]
>> >  3: (Pipe::Writer::entry()+0xd) [0xc240ad]
>> >  4: (()+0x7e9a) [0x7f6b0a3a7e9a]
>> >  5: (clone()+0x6d) [0x7f6b089523fd]
>> >  NOTE: a copy of the executable, or `objdump -rdS <executable>` is
>> > needed to interpret this.
>> >
>> > 2014-04-28 17:42:36.760152 7f6ab7f54700 -1 msg/Pipe.cc: In function 'int
>> > Pipe::connect()' thread 7f6ab7f54700 time 2014-04-28 17:42:36.753156
>> > msg/Pipe.cc: 1070: FAILED assert(m)
>> >
>> >  ceph version 0.80-rc1-16-g2708c3c
>> > (2708c3c559d99e6f3b557ee1d223efa3745f655c)
>> >  1: (Pipe::connect()+0x3b61) [0xc1

[ceph-users] Ceph User Committee monthly meeting #2 : May 2nd, 2014

2014-05-01 Thread Loic Dachary
Hi Ceph,

This month Ceph User Committee meeting proposed agenda is at

   https://wiki.ceph.com/Community/Meetings#Proposed_topics:

Feel free to add what you would like to discuss.

Date: May 2nd, 2014

Time:

 18:00-19:00 UTC
 14:00-15:00 US-Eastern
 12:00-13:00 US-Mountain
 11:00-12:00 US-Pacific
 20:00-21:00 Europe-Central

Location: irc.oftc.net#ceph

Cheers

-- 
Loïc Dachary, Artisan Logiciel Libre







Re: [ceph-users] Ceph Object Storage front-end?

2014-05-01 Thread Mandell Degerness
You can use librados directly or you can use radosgw, which, I think,
would be pretty much exactly what you are looking for.
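
For a quick feel of the raw librados route, the bundled rados CLI is enough
to stash and fetch blobs (pool and object names here are just examples):

  rados mkpool backups
  rados -p backups put customer-db.dump ./customer-db.dump
  rados -p backups ls
  rados -p backups get customer-db.dump /tmp/customer-db.dump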

On Tue, Apr 29, 2014 at 4:36 PM, Stuart Longland  wrote:
> Hi all,
>
> Is there some kind of web-based or WebDAV-based front-end for accessing
> a Ceph cluster?
>
> Our situation is sometimes we have big blobs that we'd like to stash
> somewhere safe, things like customer database backups, etc.  Things
> other than disk images.
>
> We haven't deployed CephFS at this stage as at the time, running more
> than one MDS was not supported and I'd rather not rely on having
> something that has that single point of failure.
>
> For now I'm just creating an RBD, formatting it as XFS and slopping my data
> into that.  Not ideal, but it works: for me, as I run a Linux
> workstation.  It won't work for the Windows users (who outnumber us
> greatly).
>
> I was thinking something along the lines of a WebDAV or Samba interface,
> which I realise could be done with conventional Apache/Samba atop
> CephFS, but I was wondering if there was something that would do it
> using the librados API?  Something like Ceph Gateway, but without the
> specialised client requirement.
>
> Has anyone seen something along these lines, or am I being too vague?
> Regards,
> --
> Stuart Longland
> Systems Engineer
>  _ ___
> \  /|_) |   T: +61 7 3535 9619
>  \/ | \ | 38b Douglas Street   F: +61 7 3535 9699
>    SYSTEMS  Milton QLD 4064    http://www.vrt.com.au
>
>


Re: [ceph-users] Ceph Object Storage front-end?

2014-05-01 Thread Craig Lewis

On 4/29/14 16:36 , Stuart Longland wrote:

I was thinking something along the lines of a WebDAV or Samba interface,
which I realise could be done with conventional Apache/Samba atop
CephFS, but I was wondering if there was something that would do it
using the librados API?  Something like Ceph Gateway, but without the
specialised client requirement.



If you're concerned about having to write code to deal with RadosGW, 
it's not necessary.


My application code uses the PHP S3 libraries, but most of my 
administrative tools are bash scripts that use s3cmd.  All of my manual 
interaction uses s3cmd.
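
For example, a typical push from one of those scripts looks roughly like
this (the bucket name is made up; point host_base/host_bucket in ~/.s3cfg
at your own gateway instead of Amazon):

  s3cmd --configure
  s3cmd mb s3://db-backups
  s3cmd put customer-db.dump s3://db-backups/
  s3cmd ls s3://db-backups/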



--

*Craig Lewis*
Senior Systems Engineer
Office +1.714.602.1309
Email cle...@centraldesktop.com 

*Central Desktop. Work together in ways you never thought possible.*
Connect with us: Website | Twitter | Facebook | LinkedIn | Blog





Re: [ceph-users] Red Hat to acquire Inktank

2014-05-01 Thread Suresh Sadhu

Congrats to the Inktank team and Sage! ... Hope to see more innovation
and interesting products coming in the future.

Regards
sadhu

-Original Message-
From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Pawel Stefanski
Sent: 30 April 2014 21:38
To: Sage Weil
Cc: ceph-de...@vger.kernel.org; ceph-us...@ceph.com
Subject: Re: [ceph-users] Red Hat to acquire Inktank

On Wed, Apr 30, 2014 at 2:18 PM, Sage Weil  wrote:
> Today we are announcing some very big news: Red Hat is acquiring Inktank.
> We are very excited about what this means for Ceph, the community, the 
> team, our partners, and our customers. Ceph has come a long way in the 
> ten years since the first line of code was written, particularly 
> over the last two years that Inktank has been focused on its 
> development. The fifty members of the Inktank team, our partners, and 
> the hundreds of other contributors have done amazing work in bringing us to 
> where we are today.
>
[...]

Hello!!!

Congratulations! Glad to hear that RH will continue development as OSS and 
will even open source Calamari!
I also admire RH's work on KVM and the Linux kernel, so I'm very excited about this news!

best regards!
--
Pawel


[ceph-users] Fwd: Access denied error

2014-05-01 Thread Punit Dambiwal
Hi Cedric/Yehuda,

I have generated the signature dynamically as described in this documentation:
http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html

using this reference:

http://birkoff.net/blog/amazon-s3-query-string-authentication-using-php/

and my code is like this:







$aws_access_key_id = 'KGXJJGKDM5G7G4CNKC7R';
$aws_secret_key = 'LC7S0twZdhtXA1XxthfMDsj5TgJpeKhZrloWa9WN';
$downloadAudioTimeout = '+20 minutes';
$file_path = "usage";

$http_verb = "GET";
$content_md5 = "";
$content_type = "";
$expires = strtotime($downloadAudioTimeout);
$canonicalizedAmzHeaders = "";

// Note: the resource signed here has to match the path that is actually
// requested; the URL produced below ends up requesting /admin/usage,
// which would likely not match a signature computed over '/usage'.
$canonicalizedResource = '/usage';

$stringToSign = $http_verb . "\n" . $content_md5 . "\n" .
    $content_type . "\n" . $expires . "\n" . $canonicalizedAmzHeaders .
    $canonicalizedResource;

$signature = urlencode(hex2b64(hmacsha1($aws_secret_key,
    utf8_encode($stringToSign))));

$url = "http://gateway.3linux.com/$file_path?AWSAccessKeyId=$aws_access_key_id&Signature=$signature&Expires=$expires";
echo "url=" . $url;

return $url;

function hmacsha1($key, $data)
{
    $blocksize = 64;
    $hashfunc = 'sha1';
    if (strlen($key) > $blocksize)
        $key = pack('H*', $hashfunc($key));
    $key = str_pad($key, $blocksize, chr(0x00));
    $ipad = str_repeat(chr(0x36), $blocksize);
    $opad = str_repeat(chr(0x5c), $blocksize);
    // HMAC-SHA1 built by hand: inner hash of (key^ipad).data, then an
    // outer hash of (key^opad).innerdigest
    $hmac = pack(
        'H*', $hashfunc(
            ($key ^ $opad) . pack(
                'H*', $hashfunc(
                    ($key ^ $ipad) . $data
                )
            )
        )
    );
    return bin2hex($hmac);
}

/*
 * Used to encode a field for Amazon Auth
 * (taken from the Amazon S3 PHP example library)
 */
function hex2b64($str)
{
    $raw = '';
    for ($i = 0; $i < strlen($str); $i += 2) {
        $raw .= chr(hexdec(substr($str, $i, 2)));
    }
    return base64_encode($raw);
}
I got a URL like

http://gateway.3linux.com//admin/usage?AWSAccessKeyId=KGXJJGKDM5G7G4CNKC7R&Signature=ivLXdG9TltSTYEGc5nf%2B5%2B2lyxs%3D&Expires=1398757716

using the above. When I enter it in the browser, I get the same access
denied error.

Could you please check whether there is any issue with this?
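
In case it helps diagnose this, I can also raise the gateway debug level and
compare the StringToSign that radosgw computes against mine; I assume
something like this in ceph.conf on the gateway host (the section name
depends on the setup), then grepping the radosgw log for the string to sign:

  [client.radosgw.gateway]
      debug rgw = 20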

