Re: [ceph-users] How to mount cephfs from fstab

2014-11-24 Thread Alek Paunov

On 24.11.2014 19:08, Erik Logtenberg wrote:

...



So, how do my fellow cephfs-users do this?



I do not use CephFS yet, but there seems to be a workable approach to your 
problem on a systemd-based OS:


http://www.cepheid.org/~jeff/?p=69
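
For reference, the kernel client's fstab entry generally looks like the 
sketch below (the monitor address, mount point and secret file path are 
illustrative):

  # /etc/fstab -- CephFS via the kernel client (illustrative values)
  mon1.example.com:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0

The _netdev option makes the boot sequence wait for the network before 
mounting; on a systemd host you can also add x-systemd.automount to defer 
the mount until first access, or express the same thing as a .mount unit.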

Kind Regards,
Alek






Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph

2013-12-06 Thread Alek Paunov

On 07.12.2013 00:11, Dimitri Maziuk wrote:

On 12/06/2013 04:03 PM, Alek Paunov wrote:


We use only Fedora servers for everything, so I am curious why you have
excluded this option from your research. (CentOS is always problematic
with the new bits of technology.)


A 6-month lifecycle, and having to OS-upgrade your entire data center 3
times a year?

(OK, maybe it's 18 months and once every 9 months.)


Most servers nowadays are re-provisioned even more often than that, and 
every new Fedora release comes with more KVM/libvirt features and 
resolved issues, so the net effect is positive anyway.


Yes, we need some extra testing to follow the cadence, just as we do for 
Ceph upgrades and all the other components.




Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph

2013-12-06 Thread Alek Paunov

On 07.12.2013 01:03, Chris C wrote:

We rely on the stability of RHEL/CentOS as well. We have no patch/upgrade
policy or regulatory directive to do so. Our servers are set and forget.
We circle back for patches/upgrades only for break/fix.


Stability means keeping the ABIs (and, in general, all interfaces and 
conventions) stable. That is very important when, e.g., you intend to 
deploy some old Sybase on these boxes. But how does this type of stability 
help a Ceph/KVM node ... ?




I tried F19 just for the fun of it. We ended up with conflicts trying to
run qemu-kvm with Ceph. I could get one or the other working, but not both.
Our architecture calls for compute and storage to live on the same host to
save on hardware costs.

I also tried to recompile libvirt and qemu-kvm today. I didn't even see
rbd libraries in the source code.



An OSD + libvirt/KVM dual-role node should work just fine with F19/F20. If 
you are interested in Fedora deployments, we could try to resolve these 
issues.
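
For what it's worth, once qemu is built with rbd support, attaching an RBD 
image to a guest is just a libvirt disk definition. A minimal sketch (the 
pool, image, monitor host and secret UUID are all illustrative; the secret 
is created beforehand with virsh secret-define / secret-set-value):

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <!-- illustrative pool/image and monitor address -->
    <source protocol='rbd' name='libvirt-pool/vm1-disk'>
      <host name='mon1.example.com' port='6789'/>
    </source>
    <!-- cephx user and the UUID of a pre-defined libvirt secret -->
    <auth username='libvirt'>
      <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
    <target dev='vda' bus='virtio'/>
  </disk>

If the stock F19/F20 qemu already has rbd enabled (which is what I would 
expect), no rebuild should be needed on the hypervisor side.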


Alek



Re: [ceph-users] Ceph User Committee

2013-11-07 Thread Alek Paunov

Hi Loic,

On 08.11.2013 00:19, Loic Dachary wrote:

On 08/11/2013 04:57, Kyle Bader wrote:

I think this is a great idea. One of the big questions users have is
what kind of hardware they should buy. An easy way for users to publish
information about their setup (hardware, software versions, use-case,
performance) when they have successful deployments would be very valuable.
Maybe a section of the wiki?


It would be interesting to have a site where a Ceph admin can download an
API key/package that could be optionally installed and would report
configuration information to a community API. The admin could then
supplement/correct that base information. Having much of the data
collection automated lowers the barrier to contribution. Bonus
points if this could be extended to SMART data and failed drives, so we
could have a community-generated report similar to the disk population
study Google presented at FAST'07.



Would this be something like 
http://wiki.ceph.com/01Planning/02Blueprints/Firefly/Ceph-Brag ?



It seems that all eyes are looking in the same or very close directions
:-)

Sage initially suggested a wiki page per reference setup: an outlined
overview of the context, specifics (e.g. overridden defaults and the
reasoning behind them), possibly essential notes on some regular
maintenance activities, etc. In summary: the minimal readme or recipe,
enough for an admin to adapt and replicate a proven setup.

Publishing a few concrete deployments in this form doesn't need any
development and would generate a positive effect immediately: "I'm doing a
setup based on {wiki-page} with ... (differences), but ..."

You (Loic) are working on the practical basis for scaling all of this
at large: a convenient ceph-brag tool and online service that collects a
detailed snapshot of the setup as it is visible from a Ceph node.
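
To make the idea concrete, a collection run could be as simple as dumping a
few cluster views plus the per-disk SMART data Kyle mentioned; a rough
sketch (paths and file names are illustrative, smartctl comes from
smartmontools), to be hand-edited before publishing:

  #!/bin/sh
  # Gather a snapshot of the cluster as seen from this node (illustrative).
  out="ceph-brag-$(hostname)-$(date +%Y%m%d)"
  mkdir -p "$out"
  ceph status --format json   > "$out/status.json"
  ceph osd tree --format json > "$out/osd-tree.json"
  ceph osd dump --format json > "$out/osd-dump.json"
  # Per-disk SMART data (Kyle's "bonus points" idea).
  for dev in /dev/sd?; do
      smartctl -a "$dev" > "$out/smart-$(basename "$dev").txt"
  done
  tar czf "$out.tar.gz" "$out"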

Kyle combines the two, saying: apply the collecting tool, then do
handcrafted shaping, linking and annotation before/after publishing.

Personally, I like Kyle's workflow the most - iterations of: tool-based
collection, resulting in a new version in the tool branch; applying fixes
through the web editor and merging the handcrafted-defs branch; then
publishing/communication.

Once the working prototype goes live, various derivatives could be
considered, e.g.:
 * Nice, possibly interactive diagrams (visual documentation) of the
   setup.
 * Standard reports with anchors for referencing in the mails.
 * Side projects generating build and maintenance artifacts for
   various management platforms - ceph-deploy or others (of course
   assuming the private bits are rejoined back).
 * A view/report aimed at extracting the essentials, roughly equivalent
   to the handcrafted Ceph setup recipe for the context.

Regards,
Alek



Re: [ceph-users] Application HA Scalability Via ceph

2013-10-19 Thread Alek Paunov

On 18.10.2013 22:23, Noah Watkins wrote:


As far as constructing scriptable object interfaces (Java, LISP,
etc...) this is certainly possible, and pretty cool :) Currently we
have a development version of Lua support (github.com/ceph/ceph.git
cls-lua), and an LLVM JIT implementation about ready to make public.


Noah, please drop a note to the LuaJIT ML with the idea and the current 
features when the moment is appropriate for testing this branch.


Thanks,
Alek



Re: [ceph-users] Module rbd not found on Ubuntu 13.04

2013-09-15 Thread Alek Paunov

On 11.09.2013 20:05, Prasanna Gholap wrote:


According to the link about AWS, rbd.ko isn't included yet in the Linux AWS
kernel. I'll try to build the kernel manually and proceed with rbd.
Thanks for your help.


If your requirement is a modern Linux (not Ubuntu specifically), you can 
use Fedora (the AMIs are built with the unmodified Fedora kernel, which of 
course includes a recent rbd module):


http://fedoraproject.org/en/get-fedora-options#clouds
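
As a quick sanity check on any kernel (the paths below assume a standard 
distro layout), you can verify whether rbd is available before going down 
the custom-build route:

  # Is the rbd block driver built for the running kernel?
  grep CONFIG_BLK_DEV_RBD "/boot/config-$(uname -r)"
  # Try loading it and confirm it is present.
  modprobe rbd && lsmod | grep rbd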
