Re: [ceph-users] combined ceph roles

2015-02-11 Thread André Gemünd
Hi All,

This would be interesting for us (at least temporarily). Do you think it would
be better to run the mon as a VM on the OSD host, or natively?

Greetings
André

- On 11 Feb 2015 at 20:56, pixelfairy pixelfa...@gmail.com wrote:

> i believe combining mon+osd, up to whatever magic number of monitors you
> want, is common in small(ish) clusters. i also have a 3 node ceph cluster
> at home and am doing mon+osd, but not client; only rbd served to the vm
> hosts. no problem even with my abuses (yanking disks out, shutting down
> nodes, etc.); starting and stopping the whole cluster works fine too.
> 
> On Wed, Feb 11, 2015 at 9:07 AM, Christopher Armstrong < ch...@opdemand.com >
> wrote:
> 
> 
> 
> Thanks for reporting, Nick - I've seen the same thing and thought I was just
> crazy.
> 
> Chris
> 
> On Wed, Feb 11, 2015 at 6:48 AM, Nick Fisk < n...@fisk.me.uk > wrote:
> 
> 
> 
> 
> 
> Hi David,
> 
> 
> 
> I have had a few weird issues when shutting down a node, although I can
> replicate it by doing a “stop ceph-all” as well. It seems that OSD failure
> detection takes a lot longer when a monitor goes down at the same time;
> sometimes I have seen the whole cluster grind to a halt for several minutes
> before it works out what's happened.
> 
> 
> 
> If I stop either role and wait for it to be detected as failed and then do
> the next role, I don’t see the problem. So it might be something to keep in
> mind when doing maintenance.
> 
> 
> 
> Nick
> 
> 
> 
> From: ceph-users [mailto: ceph-users-boun...@lists.ceph.com ] On Behalf Of 
> David
> Graham
> Sent: 10 February 2015 17:07
> To: ceph-us...@ceph.com
> Subject: [ceph-users] combined ceph roles
> 
> 
> 
> 
> Hello, I'm giving thought to a minimal-footprint scenario with full
> redundancy. I realize it isn't ideal -- and may impact overall performance --
> but I am wondering whether the example below would work, be supported, or be
> known to cause issues?
> 
> 
> Example, 3x hosts each running:
> -- OSDs
> -- Mon
> -- Client
> 
> 
> 
> I thought I read a post a while back about Client+OSD on the same host
> possibly being an issue -- but I am having difficulty finding that
> reference.
> 
> 
> I would appreciate if anyone has insight into such a setup,
> 
> thanks!
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
André Gemünd
Fraunhofer-Institute for Algorithms and Scientific Computing
andre.gemu...@scai.fraunhofer.de
Tel: +49 2241 14-2193
/C=DE/O=Fraunhofer/OU=SCAI/OU=People/CN=Andre Gemuend
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] mongodb on top of rbd volumes (through krbd) ?

2015-02-11 Thread Alexandre DERUMIER
Hi,

I'm currently running a big mongodb cluster, around 2TB (sharding +
replication).

I have a lot of problems with mongo replication (replicas going out of sync
and needing to fully re-replicate data between them again and again).


So I thought of using rbd to replicate the storage and keeping only sharding
in mongo (maybe with some kind of shard failover between nodes with corosync).

NODE1       NODE2       NODE3
-----       -----       -----
[shard1]    [shard2]    [shard3]
    |           |           |
    |           |           |
/dev/rbd0   /dev/rbd1   /dev/rbd2


Has anybody already tested this kind of setup with mongo?
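A quick sizing sketch for the layout in the diagram above. The numbers here are illustrative, and 3x replication is my assumption (the post does not state a replication factor):

```shell
# Hypothetical sizing for the 3-shard layout: ~2 TB of data split over
# 3 shards, each shard stored on an RBD image in a 3x-replicated pool.
data_tb=2
shards=3
replicas=3
per_shard_gb=$(( data_tb * 1024 / shards ))
raw_tb=$(( data_tb * replicas ))
echo "per-shard image: ~${per_shard_gb} GB, raw cluster usage: ~${raw_tb} TB"
```

So each node's RBD image would hold roughly a third of the data set, while the cluster pays the full replication cost in raw capacity.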

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph Performance with SSD journal

2015-02-11 Thread Chris Hoy Poy
Hi Sumit, 

A couple questions: 

What brand/model SSD? 

What brand/model HDD? 

Also, how are they connected to the controller/motherboard? Are they sharing
a bus (i.e. a SATA expander)?

RAM?

Also look at the output of "iostat -x" or similar: are the SSDs hitting 100%
utilisation?

I suspect that the 5:1 ratio of HDDs to SSDs is not ideal; you now have 5x
the write IO trying to fit into a single SSD. I'll take a punt on it being a
SATA-connected SSD (most common): 5x ~130 megabytes/second gets very close to
most SATA bus limits. If it's a shared bus, you possibly hit that limit even
earlier (since all that data is now being written out over the bus twice).
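Chris's back-of-envelope numbers can be checked directly. The ~600 MB/s figure for usable SATA 3.0 (6 Gb/s) bandwidth is my assumption, not from the thread:

```shell
# 5 OSD journals writing ~130 MB/s each onto one SATA-attached SSD,
# versus roughly 600 MB/s of usable SATA 3.0 bandwidth.
hdds_per_ssd=5
per_osd_mb=130
sata3_usable_mb=600
total_mb=$(( hdds_per_ssd * per_osd_mb ))
echo "journal write load: ${total_mb} MB/s vs ~${sata3_usable_mb} MB/s bus limit"
```

The aggregate journal load alone already exceeds the bus, before any doubling from a shared expander.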

cheers; 
\Chris 


- Original Message -

From: "Sumit Gaur"  
To: ceph-users@lists.ceph.com 
Sent: Thursday, 12 February, 2015 9:23:35 AM 
Subject: [ceph-users] ceph Performance with SSD journal 

Hi Ceph-Experts,

I have a small ceph architecture-related question.

Blogs and documents suggest that Ceph performs much better if we use a
journal on SSD.

I have made a ceph cluster with 30 HDDs + 6 SSDs across 6 OSD nodes: 5 HDDs
+ 1 SSD on each node, and each SSD has 5 partitions journaling the 5 OSDs on
that node.

Now I ran the same tests as I ran for the all-HDD setup.

What I saw is that the two readings below go in the opposite direction to
what I expected:

1) 4K write IOPS are lower for the SSD setup; not a major difference, but
lower.
2) 1024K read IOPS are lower for the SSD setup than for the HDD setup.

On the other hand, 4K read and 1024K write both have much better numbers for
the SSD setup.

Let me know if I am missing some obvious concept. 

Thanks 
sumit 

___ 
ceph-users mailing list 
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Upgrade 0.80.5 to 0.80.8 --the VM's read request become too slow

2015-02-11 Thread 杨万元
Hello!
We use Ceph + OpenStack in our private cloud. Recently we upgraded our
CentOS 6.5 based cluster from Ceph Emperor to Ceph Firefly.
At first we used the Red Hat EPEL yum repo to upgrade; that Ceph version
was 0.80.5. We upgraded the monitors first, then the OSDs, and the clients
last. When we completed this upgrade, we booted a VM on the cluster and used
fio to test the IO performance. The IO performance was as good as before.
Everything was OK!
Then we upgraded the cluster from 0.80.5 to 0.80.8. When that was complete,
we rebooted the VM to load the newest librbd. After that we again used fio
to test the IO performance. *We found that randwrite and write are as good
as before, but randread and read have become worse: randread's IOPS dropped
from 4000-5000 to 300-400 and the latency got worse, and read's bandwidth
dropped from 430MB/s to 115MB/s*. Then I downgraded the ceph client version
from 0.80.8 to 0.80.5, and the result became normal again.
 So I think it may be something in librbd. I compared the 0.80.8
release notes with 0.80.5 (
http://ceph.com/docs/master/release-notes/#v0-80-8-firefly ), and the only
read-related change I found in 0.80.8 is: "librbd: cap memory utilization
for read requests (Jason Dillaman)". Can anyone explain this?


   *My ceph cluster is 400 OSDs, 5 mons*:
ceph -s
 health HEALTH_OK
 monmap e11: 5 mons at {BJ-M1-Cloud71=
172.28.2.71:6789/0,BJ-M1-Cloud73=172.28.2.73:6789/0,BJ-M2-Cloud80=172.28.2.80:6789/0,BJ-M2-Cloud81=172.28.2.81:6789/0,BJ-M3-Cloud85=172.28.2.85:6789/0},
election epoch 198, quorum 0,1,2,3,4
BJ-M1-Cloud71,BJ-M1-Cloud73,BJ-M2-Cloud80,BJ-M2-Cloud81,BJ-M3-Cloud85
 osdmap e120157: 400 osds: 400 up, 400 in
 pgmap v26161895: 29288 pgs, 6 pools, 20862 GB data, 3014 kobjects
41084 GB used, 323 TB / 363 TB avail
   29288 active+clean
 client io 52640 kB/s rd, 32419 kB/s wr, 5193 op/s


 *The following is my ceph client conf:*
 [global]
 auth_service_required = cephx
 filestore_xattr_use_omap = true
 auth_client_required = cephx
 auth_cluster_required = cephx
 mon_host =
 172.29.204.24,172.29.204.48,172.29.204.55,172.29.204.58,172.29.204.73
 mon_initial_members = ZR-F5-Cloud24, ZR-F6-Cloud48, ZR-F7-Cloud55,
ZR-F8-Cloud58, ZR-F9-Cloud73
 fsid = c01c8e28-304e-47a4-b876-cb93acc2e980
 mon osd full ratio = .85
 mon osd nearfull ratio = .75
 public network = 172.29.204.0/24
 mon warn on legacy crush tunables = false

 [osd]
 osd op threads = 12
 filestore journal writeahead = true
 filestore merge threshold = 40
 filestore split multiple = 8

 [client]
 rbd cache = true
 rbd cache writethrough until flush = false
 rbd cache size = 67108864
 rbd cache max dirty = 50331648
 rbd cache target dirty = 33554432

 [client.cinder]
 admin socket = /var/run/ceph/rbd-$pid.asok



* My VM is 8 cores / 16 GB; the fio scripts we use are: *
 fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=randread -size=60G
-filename=/dev/vdb -name="EBS" -iodepth=32 -runtime=200
 fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=randwrite -size=60G
-filename=/dev/vdb -name="EBS" -iodepth=32 -runtime=200
 fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=read -size=60G
-filename=/dev/vdb -name="EBS" -iodepth=32 -runtime=200
 fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=write -size=60G
-filename=/dev/vdb -name="EBS" -iodepth=32 -runtime=200

 *The following is the io test result*
 ceph client version: 0.80.5
 read:  bw=*430MB*
 write: bw=420MB
 randread:   iops=*4875*   latency=65ms
 randwrite:   iops=6844   latency=46ms

 ceph client version: 0.80.8
 read: bw=*115MB*
 write: bw=480MB
 randread:   iops=*381*   latency=83ms
 randwrite:  iops=4843   latency=68ms
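To put the reported figures in proportion, the percentage drops between client versions work out roughly as follows (simple integer arithmetic on the numbers above):

```shell
# Percentage regressions between client 0.80.5 and 0.80.8, from the fio
# results quoted above (read bw 430->115 MB/s, randread 4875->381 IOPS).
pct_drop() { echo $(( ($1 - $2) * 100 / $1 )); }
echo "read bandwidth drop: $(pct_drop 430 115)%"
echo "randread IOPS drop:  $(pct_drop 4875 381)%"
```

Both read paths lose the large majority of their throughput, while the write paths are unchanged or better, which is consistent with pointing at the read-request change in librbd.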
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph Performance with SSD journal

2015-02-11 Thread Sumit Gaur
Hi Ceph-Experts,

I have a small ceph architecture-related question.

Blogs and documents suggest that Ceph performs much better if we use a
journal on SSD.

I have made a ceph cluster with 30 HDDs + 6 SSDs across 6 OSD nodes: 5 HDDs
+ 1 SSD on each node, and each SSD has 5 partitions journaling the 5 OSDs
on that node.

Now I ran the same tests as I ran for the all-HDD setup.

What I saw is that the two readings below go in the opposite direction to
what I expected:

1) 4K write IOPS are lower for the SSD setup; not a major difference, but
lower.
2) 1024K read IOPS are lower for the SSD setup than for the HDD setup.

On the other hand, 4K read and 1024K write both have much better numbers for
the SSD setup.

Let me know if I am missing some obvious concept.

Thanks
sumit
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] combined ceph roles

2015-02-11 Thread pixelfairy
i believe combining mon+osd, up to whatever magic number of monitors you
want, is common in small(ish) clusters. i also have a 3 node ceph cluster
at home and am doing mon+osd, but not client; only rbd served to the vm
hosts. no problem even with my abuses (yanking disks out, shutting down
nodes, etc.); starting and stopping the whole cluster works fine too.
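The "magic number" of monitors comes down to quorum arithmetic: a strict majority of mons must be up, so n monitors tolerate floor((n-1)/2) failures, which is why odd counts like 3 or 5 are the usual choices. A quick sketch:

```shell
# Monitor quorum: a majority must survive, so n mons tolerate (n-1)/2 failures.
for n in 1 3 5 7; do
  echo "${n} mon(s): quorum needs $(( n / 2 + 1 )), tolerates $(( (n - 1) / 2 )) failure(s)"
done
```

Note that even counts buy nothing: 4 mons tolerate the same single failure as 3, while adding one more way to lose quorum.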

On Wed, Feb 11, 2015 at 9:07 AM, Christopher Armstrong 
wrote:

> Thanks for reporting, Nick - I've seen the same thing and thought I was
> just crazy.
>
> Chris
>
> On Wed, Feb 11, 2015 at 6:48 AM, Nick Fisk  wrote:
>
>> Hi David,
>>
>>
>>
>> I have had a few weird issues when shutting down a node, although I can
>> replicate it by doing a “stop ceph-all” as well. It seems that OSD failure
>> detection takes a lot longer when a monitor goes down at the same time;
>> sometimes I have seen the whole cluster grind to a halt for several minutes
>> before it works out what's happened.
>>
>>
>>
>> If I stop either role and wait for it to be detected as failed and
>> then do the next role, I don’t see the problem. So it might be something to
>> keep in mind when doing maintenance.
>>
>>
>>
>> Nick
>>
>>
>>
>> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
>> Of *David Graham
>> *Sent:* 10 February 2015 17:07
>> *To:* ceph-us...@ceph.com
>> *Subject:* [ceph-users] combined ceph roles
>>
>>
>>
>> Hello, I'm giving thought to a minimal-footprint scenario with full
>> redundancy. I realize it isn't ideal -- and may impact overall performance
>> -- but I am wondering whether the example below would work, be supported,
>> or be known to cause issues?
>>
>> Example, 3x hosts each running:
>> -- OSDs
>> -- Mon
>> -- Client
>>
>> I thought I read a post a while back about Client+OSD on the same host
>> possibly being an issue -- but I am having difficulty finding that
>> reference.
>>
>> I would appreciate if anyone has insight into such a setup,
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] combined ceph roles

2015-02-11 Thread Christopher Armstrong
Thanks for reporting, Nick - I've seen the same thing and thought I was
just crazy.

Chris

On Wed, Feb 11, 2015 at 6:48 AM, Nick Fisk  wrote:

> Hi David,
>
>
>
> I have had a few weird issues when shutting down a node, although I can
> replicate it by doing a “stop ceph-all” as well. It seems that OSD failure
> detection takes a lot longer when a monitor goes down at the same time;
> sometimes I have seen the whole cluster grind to a halt for several minutes
> before it works out what's happened.
>
>
>
> If I stop either role and wait for it to be detected as failed and
> then do the next role, I don’t see the problem. So it might be something to
> keep in mind when doing maintenance.
>
>
>
> Nick
>
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *David Graham
> *Sent:* 10 February 2015 17:07
> *To:* ceph-us...@ceph.com
> *Subject:* [ceph-users] combined ceph roles
>
>
>
> Hello, I'm giving thought to a minimal-footprint scenario with full
> redundancy. I realize it isn't ideal -- and may impact overall performance
> -- but I am wondering whether the example below would work, be supported,
> or be known to cause issues?
>
> Example, 3x hosts each running:
> -- OSDs
> -- Mon
> -- Client
>
> I thought I read a post a while back about Client+OSD on the same host
> possibly being an issue -- but I am having difficulty finding that
> reference.
>
> I would appreciate if anyone has insight into such a setup,
>
> thanks!
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] combined ceph roles

2015-02-11 Thread Stephen Hindle
I saw a similar warning - turns out, it's only an issue if you're using
the kernel driver.  If you're using VMs and access through the library (e.g.
qemu/kvm) you should be ok...


On Tue, Feb 10, 2015 at 10:06 AM, David Graham  wrote:
> Hello, I'm giving thought to a minimal-footprint scenario with full
> redundancy. I realize it isn't ideal -- and may impact overall performance --
> but I am wondering whether the example below would work, be supported, or be
> known to cause issues?
>
> Example, 3x hosts each running:
> -- OSDs
> -- Mon
> -- Client
>
>
> I thought I read a post a while back about Client+OSD on the same host
> possibly being an issue -- but I am having difficulty finding that
> reference.
>
> I would appreciate if anyone has insight into such a setup,
>
> thanks!
>

-- 
The information in this message may be confidential.  It is intended solely 
for
the addressee(s).  If you are not the intended recipient, any disclosure,
copying or distribution of the message, or any action or omission taken by 
you
in reliance on it, is prohibited and may be unlawful.  Please immediately
contact the sender if you have received this message in error.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Call for Ceph Day Speakers (SF + Amsterdam)

2015-02-11 Thread Patrick McGarry
Hey cephers,

The Ceph Day program for this year is already shaping up to be a great
one! Our first two events have been solidified (with several more
getting close), and now we just need awesome speakers to share what
they have been doing with Ceph. Currently we are accepting speakers
for the following Ceph Days:

Ceph Day San Francisco, CA -- 12 Mar 2015
Ceph Day Amsterdam -- 31 Mar 2015

If you are interested in being a Ceph Speaker, please contact me as
soon as possible with the following information:

Name
Company (or "none" if you just wish to represent yourself)
Talk topic
Brief description of material to be covered
Technical level (deeply technical or overview/informational)
Which Ceph Day you want to speak at
If you need help with travel/logistics


If you have any questions, feel free to send them my way. Thanks!


Best Regards,

Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com  ||  http://community.redhat.com
@scuttlemonkey || @ceph
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cache pressure fail

2015-02-11 Thread Gregory Farnum
On Wed, Feb 11, 2015 at 5:30 AM, Dennis Kramer (DT)  wrote:
> After setting the debug level to 2, I can see:
> 2015-02-11 13:36:31.922262 7f0b38294700  2 mds.0.cache check_memory_usage
> total 58516068, rss 57508660, heap 32676, malloc 1227560 mmap 0, baseline
> 39848, buffers 0, max 67108864, 8656261 / 931 inodes have caps, 10367318
> caps, 1.03674 caps per inode
>
> It doesn't look like it has serious memory problems, unless my
> interpretation is wrong of the output.

The MDS currently requests trimming based on simple dentry counts
rather than actual amount of memory in use. That's a configurable; I
believe mds_max_cache_size? It defaults to 100,000. Looks like you've
already increased it to more like 10 million.

You can go to the clients and run the "status" and "dump_cache"
commands on their admin sockets and see if the kernel is holding
references to their inodes, preventing cap releases.
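A rough sanity check of the dentry-count angle, using the figures from this thread. The 100,000 default is as Greg recalls it; verify the exact option name and default on your version:

```shell
# Compare the inodes-with-caps count from the mds check_memory_usage log
# line against the default dentry-count cache limit Greg mentions (~100000).
inodes_with_caps=8656261
default_limit=100000
echo "cache holds ~$(( inodes_with_caps / default_limit ))x the default limit"
```

So even though raw memory use looks fine, the MDS is tracking caps for tens of times more inodes than a default-configured cache would hold, which is exactly the situation that triggers trim requests to clients.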

> It looks like I have the same symptoms as:
> http://tracker.ceph.com/issues/10151
>
> I'm running 0.87 on all my nodes.

That bug is just about whether the health warnings for it show up, and
is resolved in v0.89, so you seeing it is expected.
-Greg
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph vs Hardware RAID: No battery backed cache

2015-02-11 Thread Thomas Güttler



Am 10.02.2015 um 09:08 schrieb Mark Kirkwood:

On 10/02/15 20:40, Thomas Güttler wrote:

Hi,

does the lack of a battery backed cache in Ceph introduce any
disadvantages?

We use PostgreSQL and our servers have UPS.

But I want to survive a power outage, although it is unlikely. But "hope
is not an option ..."



You can certainly make use of adapter cards that have a battery backed cache 
with Ceph - either using RAID as usual or
creating arrays of "RAID 0 of 1 disk" that enable you to use the nice battery 
backed cache + writeback options on the
card and still have a "1 osd mapped to 1 disk" topology.

Without such cards it is still quite possible to have a power-loss-safe setup.
These days (with reasonably modern 3.* kernels) using SATA or SAS plus mount
options that do *not* disable write barriers will leave you with a consistent,
safe state in the event of power loss. You might want to test your SATA disk
of choice to be sure, but SAS should be safe!


Thank you very much for the clarification.

Regards,
  Thomas Güttler
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] combined ceph roles

2015-02-11 Thread Nick Fisk
Hi David,

 

I have had a few weird issues when shutting down a node, although I can
replicate it by doing a “stop ceph-all” as well. It seems that OSD failure
detection takes a lot longer when a monitor goes down at the same time;
sometimes I have seen the whole cluster grind to a halt for several minutes
before it works out what's happened.

 

If I stop either role and wait for it to be detected as failed and then do
the next role, I don’t see the problem. So it might be something to keep in
mind when doing maintenance.

 

Nick

 

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of David 
Graham
Sent: 10 February 2015 17:07
To: ceph-us...@ceph.com
Subject: [ceph-users] combined ceph roles

 

Hello, I'm giving thought to a minimal-footprint scenario with full redundancy.
I realize it isn't ideal -- and may impact overall performance -- but I am
wondering whether the example below would work, be supported, or be known to
cause issues?

Example, 3x hosts each running:
-- OSDs
-- Mon
-- Client



I thought I read a post a while back about Client+OSD on the same host possibly
being an issue -- but I am having difficulty finding that reference.

I would appreciate if anyone has insight into such a setup,

thanks!

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cache pressure fail

2015-02-11 Thread Dennis Kramer (DT)

On Wed, 11 Feb 2015, Wido den Hollander wrote:


On 11-02-15 12:57, Dennis Kramer (DT) wrote:

On Fri, 7 Nov 2014, Gregory Farnum wrote:


Did you upgrade your clients along with the MDS? This warning
indicates the
MDS asked the clients to boot some inodes out of cache and they have
taken
too long to do so.
It might also just mean that you're actively using more inodes at any
given
time than your MDS is configured to keep in memory.
-Greg

How can one verify this? I'm getting the same warnings. I'm curious how
I can check if there are indeed more inodes actively used than my MDS
can keep in memory.



I think that using the admin socket you can query the MDS for how much
Inodes are cached.

$ ceph daemon mds.X help

I don't know the exact syntax from the top of my head, but it should be
something you can fetch there.

And iirc it also prints this ones every X seconds in the MDS log file.

Wido


After setting the debug level to 2, I can see:
2015-02-11 13:36:31.922262 7f0b38294700  2 mds.0.cache check_memory_usage 
total 58516068, rss 57508660, heap 32676, malloc 1227560 mmap 0, baseline 
39848, buffers 0, max 67108864, 8656261 / 931 inodes have caps, 
10367318 caps, 1.03674 caps per inode


It doesn't look like it has serious memory problems, unless my 
interpretation is wrong of the output.


It looks like I have the same symptoms as:
http://tracker.ceph.com/issues/10151

I'm running 0.87 on all my nodes.


Thanks.


On Fri, Nov 7, 2014 at 5:17 AM Daniel Takatori Ohara

wrote:


Hi,

In my cluster, when i execute the command ceph health detail, show me
the
message.

mds0: Many clients (17) failing to respond to cache
pressure(client_count:
)

This message appear when i upgrade the ceph for 0.87 from 0.80.7.

Anyone help me?

Thank's,

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Are EC pools ready for production use ?

2015-02-11 Thread Loic Dachary

Hi Florent,

On 11/02/2015 12:20, Florent B wrote:
> Hi every one,
> 
> My question is simple: are erasure-coded pools in Giant considered
> stable enough to be used in production? (Or is it a feature in
> development, like CephFS?)

They are considered stable and useable in production.

> And what about upgrades to new versions ?

If an erasure coded pool was created using giant, it will be usable with all 
future Ceph versions. There is no upgrade procedure necessary.

Cheers

> 
> Thank you :)
> 

-- 
Loïc Dachary, Artisan Logiciel Libre



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cache pressure fail

2015-02-11 Thread Wido den Hollander
On 11-02-15 12:57, Dennis Kramer (DT) wrote:
> On Fri, 7 Nov 2014, Gregory Farnum wrote:
> 
>> Did you upgrade your clients along with the MDS? This warning
>> indicates the
>> MDS asked the clients to boot some inodes out of cache and they have
>> taken
>> too long to do so.
>> It might also just mean that you're actively using more inodes at any
>> given
>> time than your MDS is configured to keep in memory.
>> -Greg
> How can one verify this? I'm getting the same warnings. I'm curious how
> I can check if there are indeed more inodes actively used than my MDS
> can keep in memory.
> 

I think that using the admin socket you can query the MDS for how much
Inodes are cached.

$ ceph daemon mds.X help

I don't know the exact syntax from the top of my head, but it should be
something you can fetch there.

And iirc it also prints this ones every X seconds in the MDS log file.

Wido

> Thanks.
> 
>> On Fri, Nov 7, 2014 at 5:17 AM Daniel Takatori Ohara
>> 
>> wrote:
>>
>>> Hi,
>>>
>>> In my cluster, when i execute the command ceph health detail, show me
>>> the
>>> message.
>>>
>>> mds0: Many clients (17) failing to respond to cache
>>> pressure(client_count:
>>> )
>>>
>>> This message appear when i upgrade the ceph for 0.87 from 0.80.7.
>>>
>>> Anyone help me?
>>>
>>> Thank's,
>>>
>>> Att.
>>>
>>> ---
>>> Daniel Takatori Ohara.
>>> System Administrator - Lab. of Bioinformatics
>>> Molecular Oncology Center
>>> Instituto Sírio-Libanês de Ensino e Pesquisa
>>> Hospital Sírio-Libanês
>>> Phone: +55 11 3155-0200 (extension 1927)
>>> R: Cel. Nicolau dos Santos, 69
>>> São Paulo-SP. 01308-060
>>> http://www.bioinfo.mochsl.org.br
>>>
>>
> 
> Kramer M.D.
> Infrastructure Engineer
> 
> 
> Nederlands Forensisch Instituut
> Digitale Technologie & Biometrie
> Laan van Ypenburg 6 | 2497 GB | Den Haag
> Postbus 24044 | 2490 AA | Den Haag
> 
> T 070 888 66 46
> M 06 29 62 12 02
> d.kra...@nfi.minvenj.nl / den...@holmes.nl
> PGP publickey: http://www.holmes.nl/dennis.asc
> www.forensischinstituut.nl
> 
> Nederlands Forensisch Instituut. In feiten het beste.
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


-- 
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cache pressure fail

2015-02-11 Thread Dennis Kramer (DT)

On Fri, 7 Nov 2014, Gregory Farnum wrote:


Did you upgrade your clients along with the MDS? This warning indicates the
MDS asked the clients to boot some inodes out of cache and they have taken
too long to do so.
It might also just mean that you're actively using more inodes at any given
time than your MDS is configured to keep in memory.
-Greg
How can one verify this? I'm getting the same warnings. I'm curious how I 
can check if there are indeed more inodes actively used than my MDS can 
keep in memory.


Thanks.


On Fri, Nov 7, 2014 at 5:17 AM Daniel Takatori Ohara 
wrote:


Hi,

In my cluster, when i execute the command ceph health detail, show me the
message.

mds0: Many clients (17) failing to respond to cache pressure(client_count:
)

This message appear when i upgrade the ceph for 0.87 from 0.80.7.

Anyone help me?

Thank's,

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br


Kramer M.D.
Infrastructure Engineer


Nederlands Forensisch Instituut
Digitale Technologie & Biometrie
Laan van Ypenburg 6 | 2497 GB | Den Haag
Postbus 24044 | 2490 AA | Den Haag

T 070 888 66 46
M 06 29 62 12 02
d.kra...@nfi.minvenj.nl / den...@holmes.nl
PGP publickey: http://www.holmes.nl/dennis.asc
www.forensischinstituut.nl

Nederlands Forensisch Instituut. In feiten het beste.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] wider rados namespace support?

2015-02-11 Thread Blair Bethwaite
On 11 February 2015 at 20:43, John Spray  wrote:
> Namespaces in CephFS would become useful in conjunction with limiting
> client authorization by sub-mount -- that way subdirectories could be
> assigned a layout with a particular namespace, and a client could be
> limited to that namespace on the OSD side and that path on the MDS
> side.

Agreed, that was exactly how I imagined it might work. Sounds
particularly useful in the context of a service like Manila.

But as I said, it's pretty important for RBD as well - creating
multiple pools to isolate clients doesn't scale very far at all.

-- 
Cheers,
~Blairo
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] wider rados namespace support?

2015-02-11 Thread John Spray
Namespaces in CephFS would become useful in conjunction with limiting
client authorization by sub-mount -- that way subdirectories could be
assigned a layout with a particular namespace, and a client could be
limited to that namespace on the OSD side and that path on the MDS
side.  So I guess we'd look to support them at the same time as
improving the MDS authorization stuff in general -- recently the MDS
capability string format was updated to be a bit more expressive but
we're not enforcing most of it yet.

John

On Wed, Feb 11, 2015 at 3:54 AM, Blair Bethwaite
 wrote:
> Just came across this in the docs:
> "Currently (i.e., firefly), namespaces are only useful for
> applications written on top of librados. Ceph clients such as block
> device, object storage and file system do not currently support this
> feature."
>
> Then found:
> https://wiki.ceph.com/Planning/Sideboard/rbd%3A_namespace_support
>
> Is there any progress or plans to address this (particularly for rbd
> clients but also cephfs)?
>
> --
> Cheers,
> ~Blairo
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-11 Thread B L
Thank you Vickie .. and thanks to the ceph community for showing continued 
support 

Best of luck to all !


> On Feb 11, 2015, at 3:58 AM, Vickie ch  wrote:
> 
> Hi 
> The weight reflects the space or capacity of the disks.
> For example, the weight of a 100G OSD disk is 0.100 (100G/1T).
> 
> 
> Best wishes,
> Vickie
> 
> 2015-02-10 22:25 GMT+08:00 B L  >:
> Thanks for everyone!!
> 
> After applying the re-weighting command (ceph osd crush reweight osd.0 
> 0.0095), my cluster is getting healthy now :))
> 
> But I have one question: what if I have hundreds of OSDs, shall I do the
> re-weighting on each device, or is there some way to make this happen
> automatically? In other words, why would I need to do the weighting in the
> first place?
> 
> 
> 
> 
>> On Feb 10, 2015, at 4:00 PM, Vikhyat Umrao > > wrote:
>> 
>> Oh, I mixed up the positions of the osd name and the weight:
>> 
>> ceph osd crush reweight osd.0 0.0095  and so on ..
>> 
>> Regards,
>> Vikhyat
>> 
>> On 02/10/2015 07:31 PM, B L wrote:
>>> Thanks Vikhyat,
>>> 
>>> As suggested .. 
>>> 
>>> ceph@ceph-node1:/home/ubuntu$ ceph osd crush reweight 0.0095 osd.0
>>> 
>>> Invalid command:  osd.0 doesn't represent a float
>>> osd crush reweight   :  change 's weight to 
>>>  in crush map
>>> Error EINVAL: invalid command
>>> 
>>> What do you think
>>> 
>>> 
 On Feb 10, 2015, at 3:18 PM, Vikhyat Umrao >>> > wrote:
 
 sudo ceph osd crush reweight 0.0095 osd.0 to osd.5
>>> 
>> 
> 
> 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
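On the automation question in this thread: the rule of thumb Vickie gives (weight = disk size expressed in TB, so 100G -> 0.100) is easy to script. A sketch, with the reweight loop left commented out since it touches a live cluster, and with the OSD ids and sizes made up for illustration:

```shell
# Compute a CRUSH weight from a disk size in GB (100G -> 0.100, per the thread).
crush_weight() {
  awk -v gb="$1" 'BEGIN { printf "%.3f\n", gb / 1000 }'
}

crush_weight 100     # prints 0.100
crush_weight 4000    # prints 4.000

# A bulk reweight over many OSDs could then look like (not run here):
# for id in $(seq 0 399); do
#   ceph osd crush reweight "osd.${id}" "$(crush_weight 4000)"
# done
```

In practice the deployment tools set an initial weight from the disk size at OSD creation, so manual reweighting is mostly needed when the initial weight was wrong (as in this thread) or when deliberately rebalancing.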