Re: [ceph-users] Adding new disk/OSD to ceph cluster

2016-04-11 Thread Eneko Lacunza
Hi Mad, On 09/04/16 at 14:39, Mad Th wrote: We have a 3 node proxmox/ceph cluster ... each with 4 x 4 TB disks. Are you using 3-way replication? I guess you are. :) 1) If we want to add more disks, what are the things that we need to be careful about? Will the following steps automati
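For context, a minimal sketch of the usual ceph-deploy workflow of that era for adding a disk as a new OSD (node and device names are placeholders, not taken from the thread):

    # wipe the new disk's partition table (destructive)
    ceph-deploy disk zap node1:sdf
    # create (prepare + activate) a new OSD on it; the cluster then rebalances
    ceph-deploy osd create node1:sdf
    # watch the rebalance and confirm the new OSD is up and in
    ceph -w
    ceph osd tree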

Re: [ceph-users] How can I monitor current ceph operation at cluster

2016-04-11 Thread nick
Hi, > We're parsing the output of 'ceph daemon osd.N perf dump' for the admin > sockets in /var/run/ceph/ceph-osd.*.asok on each node in our cluster. > We then push that data into carbon-cache/graphite and using grafana for > visualization. which of those values are you using for monitoring? I can
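For reference, the counters in question come straight from each OSD's admin socket; a minimal sketch (OSD id and socket path are examples):

    # query one OSD's perf counters on the host it runs on
    ceph daemon osd.3 perf dump
    # equivalent, addressing the socket file directly
    ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok perf dump

The output is JSON; latency counters such as op_r_latency and op_w_latency (each an avgcount/sum pair) are typical candidates to push into carbon/graphite.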

Re: [ceph-users] Ubuntu xenial and ceph jewel systemd

2016-04-11 Thread James Page
Hi On Sun, 10 Apr 2016 at 16:39 hp cre wrote: > Hello all, > > I was just installing jewel 10.1.0 on ubuntu xenial beta 2. > I got an error when trying to create a mon, about a failure to find the command > 'initctl', which is part of upstart. > Tried to install upstart, then got an error 'com.ubuntu...' not

[ceph-users] [ceph-mds] mds service can not start after shutdown in 10.1.0

2016-04-11 Thread 施柏安
Hi cephers, I was testing CephFS's HA, so I shut down the active MDS server. Then one of the standby MDSs became active. Everything seemed to work properly. But when I booted the MDS server that had been shut down in the test, it couldn't rejoin the cluster automatically. And I used the command 'sudo service ceph-mds start id=0'

Re: [ceph-users] How can I monitor current ceph operation at cluster

2016-04-11 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > nick > Sent: 11 April 2016 08:26 > To: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] How can I monitor current ceph operation at > cluster > > Hi, > > We're parsing the output of 'ceph

Re: [ceph-users] Ubuntu xenial and ceph jewel systemd

2016-04-11 Thread hp cre
Hello James, It's a default install of the xenial server beta 2 release. Created a user, then followed the Ceph installation quick start exactly as written. Ceph-deploy version 1.5.31 was used as follows: 1- ceph-deploy new node1 2- ceph-deploy install --release jewel node1 3- ceph-deploy mon create-in
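For comparison, the standard quick-start sequence for that ceph-deploy release looks roughly like this (single node, placeholder device name); the failure described in this thread happens at the mon creation step:

    ceph-deploy new node1
    ceph-deploy install --release jewel node1
    ceph-deploy mon create-initial      # the step where initctl is wrongly invoked on xenial
    ceph-deploy admin node1
    ceph-deploy osd create node1:sdb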

Re: [ceph-users] Ubuntu xenial and ceph jewel systemd

2016-04-11 Thread James Page
On Mon, 11 Apr 2016 at 10:02 hp cre wrote: > Hello James, > > It's a default install of xenial server beta 2 release. Created a user > then followed the ceph installation quick start exactly as it is. > > Ceph-deploy version 1.5.31 was used as follows > > 1- ceph-deploy new node1 > 2- ceph-deploy

Re: [ceph-users] Ubuntu xenial and ceph jewel systemd

2016-04-11 Thread James Page
It would be handy to get visibility of your deployment log data; I'm not currently able to reproduce your issue deploying ceph using ceph-deploy on a small three-node install running xenial - it's correctly detecting systemd and using systemctl instead of initctl. On Mon, 11 Apr 2016 at 10:18 James

Re: [ceph-users] [ceph-mds] mds service can not start after shutdown in 10.1.0

2016-04-11 Thread John Spray
Is the ID of the MDS service really "0"? Usually people set the ID to the hostname. Check it in /var/lib/ceph/mds John On Mon, Apr 11, 2016 at 9:44 AM, 施柏安 wrote: > Hi cephers, > > I was testing CephFS's HA. So I shutdown the active mds server. > Then the one of standby mds turn to be active.
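A quick way to confirm the real ID and start the daemon with it, as a hedged sketch (directory and ID names are examples):

    # the directory suffix after "ceph-" is the MDS ID
    ls /var/lib/ceph/mds
    # e.g. a directory called ceph-mds-1 means the ID is "mds-1", so either of:
    sudo systemctl start ceph-mds@mds-1      # systemd
    sudo service ceph-mds start id=mds-1     # sysvinit/upstart-style wrapper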

Re: [ceph-users] Ubuntu xenial and ceph jewel systemd

2016-04-11 Thread hp cre
In the process of reproducing it now. I'll attach a full command log On 11 Apr 2016 11:42, "James Page" wrote: It would be handy to get visibility of your deployment log data; I'm not currently able to reproduce your issue deploying ceph using ceph-deploy on a small three node install running xen

Re: [ceph-users] kernel cephfs - slow requests

2016-04-11 Thread Dzianis Kahanovich
Dzianis Kahanovich пишет: > Christian Balzer пишет: > >>> New problem (unsure, but probably not observed in Hammer, but sure in >>> Infernalis): copying large (tens g) files into kernel cephfs (from >>> outside of cluster, iron - non-VM, preempt kernel) - make slow requests >>> on some of OSDs (re

Re: [ceph-users] ceph striping

2016-04-11 Thread Jason Dillaman
In general, RBD "fancy" striping can help under certain workloads where small IO would normally be hitting the same object (e.g. small sequential IO). -- Jason Dillaman - Original Message - > From: "Alwin Antreich" > To: ceph-users@lists.ceph.com > Sent: Thursday, April 7, 2016 2:4
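As a hedged illustration, striping parameters are set at image creation time (pool/image names and sizes are placeholders; fancy striping needs format 2 images):

    # 16 stripes of 64 KiB each per object set, instead of the default 1:1 layout
    rbd create mypool/seqtest --size 10240 --image-format 2 \
        --stripe-unit 65536 --stripe-count 16
    rbd info mypool/seqtest    # shows the stripe unit/count alongside order and object count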

[ceph-users] cephfs Kernel panic

2016-04-11 Thread Simon Ferber
Hi, I'm trying to set up a Ceph cluster on Debian 8.4. Mainly I followed the tutorial at http://adminforge.de/raid/ceph/ceph-cluster-unter-debian-wheezy-installieren/ As far as I can see, the first steps are just working fine. I have two nodes with four OSDs on each node. This is the output of ceph -s

Re: [ceph-users] cephfs Kernel panic

2016-04-11 Thread Ilya Dryomov
On Mon, Apr 11, 2016 at 4:37 PM, Simon Ferber wrote: > Hi, > > I try to setup an ceph cluster on Debian 8.4. Mainly I followed a > tutorial at > http://adminforge.de/raid/ceph/ceph-cluster-unter-debian-wheezy-installieren/ > > As far as I can see, the first steps are just working fine. I have two

Re: [ceph-users] Powercpu and ceph

2016-04-11 Thread Gregory Farnum
Upstream doesn't test Ceph on Power. We built it semi-regularly several years ago but that has fallen by the wayside as well. I think some distros still package it though; and we are fairly careful about endianness and things so it's supposed to work. -Greg On Sunday, April 10, 2016, louis wrote:

[ceph-users] Re: Powercpu and ceph

2016-04-11 Thread louis
Yes, I installed Ceph on a Power server and normal I/O runs fine, but how can I prove it is stable on the Power arch? Use the Ceph test suite? Thanks. Sent from NetEase Mail Master. On 2016-04-11 23:44, Gregory Farnum wrote: Upstream doesn't test Ceph on Power. We bu

Re: [ceph-users] Powercpu and ceph

2016-04-11 Thread Gregory Farnum
If you've got the time to run teuthology/ceph-qa-suite on it, that would be awesome! But really if you've got it running now, you're probably good. You can exercise basically all the riskiest bits by killing some OSDs and then turning them back on once the cluster has finished peering after it mar
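A hedged sketch of that exercise (OSD id is an example; systemd shown, adjust for your init system):

    sudo systemctl stop ceph-osd@5     # mark one OSD down; the cluster re-peers and remaps
    ceph -w                            # watch peering/backfill settle
    sudo systemctl start ceph-osd@5    # bring it back and let recovery finish
    ceph health detail                 # should return to HEALTH_OK afterwards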

[ceph-users] Fwd: Re: Ubuntu xenial and ceph jewel systemd

2016-04-11 Thread hp cre
-- Forwarded message -- From: "hp cre" Date: 11 Apr 2016 15:50 Subject: Re: [ceph-users] Ubuntu xenial and ceph jewel systemd To: "James Page" Cc: Here is exactly what has been done (just started from scratch today): 1- install default xenial beta 2 2- run apt-get update && apt

Re: [ceph-users] OSD activate Error

2016-04-11 Thread Bob R
I'd guess you previously removed an osd.0 but forgot to perform 'ceph auth del osd.0'. 'ceph auth list' might show some other stray certs. Bob On Mon, Apr 4, 2016 at 9:52 PM, wrote: > Hi, > > > > I keep getting this error while try to activate: > > > > [root@mon01 ceph]# ceph-deploy osd prepare
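A hedged sketch of the check and cleanup being suggested (run before retrying the activate step):

    # look for a leftover entry for osd.0 from the earlier removal
    ceph auth list | grep -A3 'osd.0'
    # delete the stale key so the new OSD can register under that id
    ceph auth del osd.0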

Re: [ceph-users] upgraded to Ubuntu 16.04, getting assert failure

2016-04-11 Thread John Spray
On Sun, Apr 10, 2016 at 4:12 AM, Don Waterloo wrote: > I have a 6 osd system (w/ 3 mon, and 3 mds). > it is running cephfs as part of its task. > > i have upgraded the 3 mon nodes to Ubuntu 16.04 and the bundled ceph > 10.1.0-0ubuntu1. > > (upgraded from Ubuntu 15.10 with ceph 0.94.6-0ubuntu0.15.1

[ceph-users] RE: upgraded to Ubuntu 16.04, getting assert failure

2016-04-11 Thread Chad William Seys
Hi Don, I had a similar problem starting a mon. In my case a computer failed and I removed and recreated the 3rd mon on a new computer. It would start but never get added to the other mons' lists. Restarting the other two mons caused them to add the third to their monmap. Go

Re: [ceph-users] Fwd: Re: Ubuntu xenial and ceph jewel systemd

2016-04-11 Thread Peter Sabaini
On 2016-04-11 18:15, hp cre wrote: > -- Forwarded message -- > From: "hp cre" > Date: 11 Apr 2016 15:50 > Subject: Re: [ceph-users] Ubuntu xenial and ceph jewel systemd > To: "James Page" > Cc: > > Here is exactly what has be

Re: [ceph-users] Fwd: Re: Ubuntu xenial and ceph jewel systemd

2016-04-11 Thread hp cre
I wanted to try the latest ceph-deploy. That's why I downloaded this version (31). The latest Ubuntu version is (20). I tried today, at the end of the failed attempt, to uninstall this version and install the one that came with xenial, but whatever I did, it always defaulted to version 31. Maybe someone

Re: [ceph-users] Fwd: Re: Ubuntu xenial and ceph jewel systemd

2016-04-11 Thread James Page
On Mon, 11 Apr 2016 at 21:35 hp cre wrote: > I wanted to try the latest ceph-deploy. Thats why i downloaded this > version (31). Latest ubuntu version is (20). > > I tried today at the end of the failed attempt to uninstall this version > and install the one that came with xenial, but whatever i

Re: [ceph-users] adding cache tier in productive hammer environment

2016-04-11 Thread Oliver Dzombic
Hi, currently in use. Oldest: SSDs: Intel S3510 80GB; HDD: HGST 6TB H3IKNAS600012872SE NAS. Latest: SSDs: Kingston 120 GB SV300; HDDs: HGST 3TB H3IKNAS30003272SE NAS. In future will be in use: SSDs: Samsung SM863 240 GB; HDDs: HGST 3TB H3IKNAS30003272SE NAS and/or Seagate ST2000NM0023 2 TB

Re: [ceph-users] Fwd: Re: Ubuntu xenial and ceph jewel systemd

2016-04-11 Thread hp cre
Hey James, Did you check my steps? What did you do differently that worked for you? Thanks for sharing. On 11 Apr 2016 22:39, "James Page" wrote: > On Mon, 11 Apr 2016 at 21:35 hp cre wrote: > >> I wanted to try the latest ceph-deploy. Thats why i downloaded this >> version (31). Latest ubuntu

Re: [ceph-users] Deprecating ext4 support

2016-04-11 Thread Allen Samuels
RIP ext4. Allen Samuels Software Architect, Fellow, Systems and Software Solutions 2880 Junction Avenue, San Jose, CA 95134 T: +1 408 801 7030| M: +1 408 780 6416 allen.samu...@sandisk.com > -Original Message- > From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel- > ow...@vger.k

Re: [ceph-users] Deprecating ext4 support

2016-04-11 Thread Jan Schermer
RIP Ceph. > On 11 Apr 2016, at 23:42, Allen Samuels wrote: > > RIP ext4. > > > Allen Samuels > Software Architect, Fellow, Systems and Software Solutions > > 2880 Junction Avenue, San Jose, CA 95134 > T: +1 408 801 7030| M: +1 408 780 6416 > allen.samu...@sandisk.com > > >> -Original

Re: [ceph-users] Deprecating ext4 support

2016-04-11 Thread Mark Nelson
On 04/11/2016 04:44 PM, Sage Weil wrote: On Mon, 11 Apr 2016, Sage Weil wrote: Hi, ext4 has never been recommended, but we did test it. After Jewel is out, we would like explicitly recommend *against* ext4 and stop testing it. I should clarify that this is a proposal and solicitation of feed

Re: [ceph-users] Deprecating ext4 support

2016-04-11 Thread Sage Weil
On Mon, 11 Apr 2016, Sage Weil wrote: > Hi, > > ext4 has never been recommended, but we did test it. After Jewel is out, > we would like explicitly recommend *against* ext4 and stop testing it. I should clarify that this is a proposal and solicitation of feedback--we haven't made any decisions

Re: [ceph-users] Deprecating ext4 support

2016-04-11 Thread Michael Hanscho
Hi! How about these findings? https://events.linuxfoundation.org/sites/events/files/slides/AFL%20filesystem%20fuzzing%2C%20Vault%202016.pdf Ext4 seems to be the file system that tested best... (although xfs also survived quite long...) Regards Michael On 2016-04-11 23:44, Sage Weil wrote: > On

[ceph-users] Deprecating ext4 support

2016-04-11 Thread Sage Weil
Hi, ext4 has never been recommended, but we did test it. After Jewel is out, we would like explicitly recommend *against* ext4 and stop testing it. Why: Recently we discovered an issue with the long object name handling that is not fixable without rewriting a significant chunk of FileStores f

[ceph-users] mons die with mon/OSDMonitor.cc: 125: FAILED assert(version >= osdmap.epoch)...

2016-04-11 Thread Eric Hall
Power failure in the data center has left 3 mons unable to start with mon/OSDMonitor.cc: 125: FAILED assert(version >= osdmap.epoch) Have found a similar problem discussed at http://irclogs.ceph.widodh.nl/index.php?date=2015-05-29, but am unsure how to proceed. If I read ceph-kvstore-tool /var/lib
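For reference, the values that assert compares can be read out of the (stopped) mon's store with ceph-kvstore-tool; a hedged sketch using the usual default store path (newer releases also expect the backend type, e.g. leveldb, as the first argument):

    # list the osdmap-related keys the monitor has on disk
    ceph-kvstore-tool /var/lib/ceph/mon/ceph-mon01/store.db list osdmap
    # the committed epoch range the monitor believes it holds
    ceph-kvstore-tool /var/lib/ceph/mon/ceph-mon01/store.db get osdmap first_committed
    ceph-kvstore-tool /var/lib/ceph/mon/ceph-mon01/store.db get osdmap last_committed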

Re: [ceph-users] Deprecating ext4 support

2016-04-11 Thread Shinobu Kinjo
Just to clarify, to prevent any confusion: honestly I've never used ext4 as the underlying filesystem for a Ceph cluster, but according to the wiki [1], ext4 is recommended -; [1] https://en.wikipedia.org/wiki/Ceph_%28software%29 Shinobu - Original Message - From: "Mark Nelson" To: "Sage Wei

Re: [ceph-users] Deprecating ext4 support

2016-04-11 Thread Lionel Bouton
Hi, On 11/04/2016 23:57, Mark Nelson wrote: > [...] > To add to this on the performance side, we stopped doing regular > performance testing on ext4 (and btrfs) sometime back around when ICE > was released to focus specifically on filestore behavior on xfs. > There were some cases at the time

Re: [ceph-users] Deprecating ext4 support

2016-04-11 Thread Christian Balzer
Hello, What a lovely missive to start off my working day... On Mon, 11 Apr 2016 17:39:37 -0400 (EDT) Sage Weil wrote: > Hi, > > ext4 has never been recommended, but we did test it. Patently wrong, as Shinobu just pointed out. Ext4 never was (especially recently) flogged as much as XFS, but it a

Re: [ceph-users] Deprecating ext4 support

2016-04-11 Thread Lindsay Mathieson
On 12/04/2016 9:09 AM, Lionel Bouton wrote: * If the journal is not on a separate partition (SSD), it should definitely be re-created NoCoW to avoid unnecessary fragmentation. From memory : stop OSD, touch journal.new, chattr +C journal.new, dd if=journal of=journal.new (your dd options here for
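Written out as a hedged sketch (OSD stopped first; the path, dd options and the final rename are illustrative assumptions, not quoted from the thread):

    cd /var/lib/ceph/osd/ceph-12      # example OSD data directory on btrfs
    touch journal.new
    chattr +C journal.new             # set NoCoW while the file is still empty
    dd if=journal of=journal.new bs=4M oflag=direct
    mv journal.new journal            # assumed final step before restarting the OSD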

Re: [ceph-users] Deprecating ext4 support

2016-04-11 Thread Lionel Bouton
On 12/04/2016 01:40, Lindsay Mathieson wrote: > On 12/04/2016 9:09 AM, Lionel Bouton wrote: >> * If the journal is not on a separate partition (SSD), it should >> definitely be re-created NoCoW to avoid unnecessary fragmentation. From >> memory : stop OSD, touch journal.new, chattr +C journal.ne

Re: [ceph-users] ceph striping

2016-04-11 Thread Christian Balzer
On Mon, 11 Apr 2016 09:25:35 -0400 (EDT) Jason Dillaman wrote: > In general, RBD "fancy" striping can help under certain workloads where > small IO would normally be hitting the same object (e.g. small > sequential IO). > While the above is very true (especially for single/few clients), I never

Re: [ceph-users] Deprecating ext4 support

2016-04-11 Thread Robin H. Johnson
On Mon, Apr 11, 2016 at 06:49:09PM -0400, Shinobu Kinjo wrote: > Just to clarify to prevent any confusion. > > Honestly I've never used ext4 as underlying filesystem for the Ceph cluster, > but according to wiki [1], ext4 is recommended -; > > [1] https://en.wikipedia.org/wiki/Ceph_%28software%

[ceph-users] Thoughts on proposed hardware configuration.

2016-04-11 Thread Brad Smith
We're looking at implementing a 200+TB, 3 OSD-node Ceph cluster to be accessed as a filesystem from research compute clusters and "data transfer nodes" (from the Science DMZ network model... link

Re: [ceph-users] adding cache tier in productive hammer environment

2016-04-11 Thread Christian Balzer
Hello, On Mon, 11 Apr 2016 22:45:00 +0200 Oliver Dzombic wrote: > Hi, > > currently in use: > > oldest: > > SSDs: Intel S3510 80GB Ouch. As in, not a speed wonder at 110MB/s writes (or 2 HDDs worth), but at least suitable as a journal when it comes to sync writes. But at 45TBW dangerously low

[ceph-users] Mon placement over wide area

2016-04-11 Thread Adrian Saul
We are close to being given approval to deploy a 3.5PB Ceph cluster that will be distributed over every major capital in Australia. The config will be dual sites in each city that will be coupled as HA pairs - 12 sites in total. The vast majority of CRUSH rules will place data either local

Re: [ceph-users] Thoughts on proposed hardware configuration.

2016-04-11 Thread Christian Balzer
Hello, On Mon, 11 Apr 2016 16:57:40 -0700 Brad Smith wrote: > We're looking at implementing a 200+TB, 3 OSD-node Ceph cluster to be That's 72TB in your setup below, and 3 nodes are of course the bare minimum; they're going to perform WORSE than an identical, single, non-replicated node (late

Re: [ceph-users] Deprecating ext4 support

2016-04-11 Thread Sage Weil
On Tue, 12 Apr 2016, Christian Balzer wrote: > > Hello, > > What a lovely missive to start off my working day... > > On Mon, 11 Apr 2016 17:39:37 -0400 (EDT) Sage Weil wrote: > > > Hi, > > > > ext4 has never been recommended, but we did test it. > Patently wrong, as Shinobu just pointed. >

Re: [ceph-users] Mon placement over wide area

2016-04-11 Thread Christian Balzer
Hello (again), On Tue, 12 Apr 2016 00:46:29 + Adrian Saul wrote: > > We are close to being given approval to deploy a 3.5PB Ceph cluster that > will be distributed over every major capital in Australia.The config > will be dual sites in each city that will be coupled as HA pairs - 12 >

Re: [ceph-users] [ceph-mds] mds service can not start after shutdown in 10.1.0

2016-04-11 Thread 施柏安
Hi John, You are right. The id is not '0'. I checked the status of the MDS with the command 'ceph mds dump'. It doesn't show much info for the MDS servers. Is there any command to check the MDS list or health easily? Thanks for your help. ... vagrant@mds-1:~$ sudo ls /var/lib/ceph/mds ceph-mds-1 vagran
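For what it's worth, the usual pre-Jewel commands for a quick MDS overview are (a hedged sketch):

    ceph mds stat      # one-line summary, e.g. which rank is active and who is standby
    ceph mds dump      # full MDS map, including standby daemons
    ceph -s            # overall cluster status, which flags MDS problems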

Re: [ceph-users] Mon placement over wide area

2016-04-11 Thread Adrian Saul
Hello again Christian :) > > We are close to being given approval to deploy a 3.5PB Ceph cluster that > > will be distributed over every major capital in Australia.The config > > will be dual sites in each city that will be coupled as HA pairs - 12 > > sites in total. The vast majority of C

Re: [ceph-users] Deprecating ext4 support

2016-04-11 Thread Shinobu Kinjo
Hi Sage, It may be better to mention that we only update the master documentation, otherwise someone gets confused again [1]. [1] https://en.wikipedia.org/wiki/Ceph_%28software%29 Cheers, Shinobu - Original Message - From: "Sage Weil" To: "Christian Balzer" Cc: ceph-de...@vger.kernel.

Re: [ceph-users] [ceph-mds] mds service can not start after shutdown in 10.1.0

2016-04-11 Thread John Spray
On Tue, Apr 12, 2016 at 2:14 AM, 施柏安 wrote: > Hi John, > > You are right. The id is not '0'. > I checked the status of mds by command 'ceph mds dump'. There is not > showing much info for MDS servers. > Is there any command can check the mds list or health easily? > The info you get in "mds dum

Re: [ceph-users] [Ceph-maintainers] Deprecating ext4 support

2016-04-11 Thread hp cre
As far as I remember, the documentation did say that either filesystem (ext4 or xfs) is OK, except for xattrs, which are better supported on xfs. I would think the best move would be to make xfs the default OSD creation method and put in a warning about ext4 being deprecated in future releases

Re: [ceph-users] Deprecating ext4 support

2016-04-11 Thread Christian Balzer
Hello, On Mon, 11 Apr 2016 21:12:14 -0400 (EDT) Sage Weil wrote: > On Tue, 12 Apr 2016, Christian Balzer wrote: > > > > Hello, > > > > What a lovely missive to start off my working day... > > > > On Mon, 11 Apr 2016 17:39:37 -0400 (EDT) Sage Weil wrote: > > > > > Hi, > > > > > > ext4 has ne

Re: [ceph-users] mons die with mon/OSDMonitor.cc: 125: FAILED assert(version >= osdmap.epoch)...

2016-04-11 Thread Gregory Farnum
On Mon, Apr 11, 2016 at 3:45 PM, Eric Hall wrote: > Power failure in data center has left 3 mons unable to start with > mon/OSDMonitor.cc: 125: FAILED assert(version >= osdmap.epoch) > > Have found simliar problem discussed at > http://irclogs.ceph.widodh.nl/index.php?date=2015-05-29, but am unsur

[ceph-users] ceph breizh meetup

2016-04-11 Thread eric mourgaya
hi, The next Ceph Breizh meetup will be organized at Nantes, on April 19th, in the Suravenir Building at 2 Impasse Vasco de Gama, 44800 Saint-Herblain. Here is the doodle: http://doodle.com/poll/3mxqqgfkn4ttpfib See you soon at Nantes -- Eric Mourgaya, Let's respect the planet! Luttons co

Re: [ceph-users] [Ceph-maintainers] Deprecating ext4 support

2016-04-11 Thread Loic Dachary
Hi Sage, I suspect most people nowadays run tests and develop on ext4. Not supporting ext4 in the future means we'll need to find a convenient way for developers to run tests against the supported file systems. My 2cts :-) On 11/04/2016 23:39, Sage Weil wrote: > Hi, > > ext4 has never been re