Re: [openstack-dev] [Cinder][Ceph][Kingbird][Tricircle][Smaug]Build a multisite disaster recovery "stack"

2016-05-09 Thread Sébastien Han
a joint Ubernetes/Tricircle session in OSCON, you are > also welcomed to check it out. > > On Mon, May 9, 2016 at 2:14 AM, Sébastien Han > wrote: > >> Thanks for raising this. However we have "good" reasons to not talk >> about Smaug and Tricircle. >> Tho

Re: [openstack-dev] [Cinder][Ceph][Kingbird][Tricircle][Smaug]Build a multisite disaster recovery "stack"

2016-05-08 Thread Sébastien Han
We really aim for a basic approach, so we start small with Kingbird in order to address our first multi-site use case. Perhaps in the future we will need Smaug and Tricircle but we are not there yet. We will start contributing to Kingbird pretty soon and see how that goes. Thanks! -- Regards, Séb

Re: [ceph-users] RBD - possible to query "used space" of images/clones ?

2014-11-04 Thread Sébastien Han
$ rbd diff rbd/myimage-1 | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }' -- Regards, Sébastien Han. > On 04 Nov 2014, at 16:57, Daniel Schwager wrote: > > Hi, > > is there a way to query the used space of a RBD image created with format 2 >
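The one-liner above sums the length column (bytes) of `rbd diff` output to report how much space an image or clone actually uses. A minimal sketch of the same aggregation, using sample diff output in place of a live cluster (the extents below are illustrative, not from the thread):

```shell
# 'rbd diff' prints one "offset length type" line per allocated extent;
# summing column 2 yields the bytes actually written to the image/clone.
sample_diff='0 4194304 data
8388608 4194304 data
16777216 2097152 data'

echo "$sample_diff" | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
# With the sample extents above this prints: 10 MB
```

Against a real cluster, replace the sample with `rbd diff rbd/myimage-1 | awk …` as in the original message.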

[Yahoo-eng-team] [Bug 1370387] [NEW] Fail to fetch non-raw image on image_backend=RBD

2014-09-17 Thread Sébastien Han
Public bug reported: Using the image_backend RBD and booting a non-raw image results in a fallback to the fetch_to_raw function, where the goal is to download the qcow2 image on the compute, convert it into raw and import it into ceph. Fetching the image fails with the following errors: On the nova-

Re: [ansible-project] Re: Looping over a set of host in a playbook?

2014-04-25 Thread Sébastien Han
No one? On Wednesday, April 23, 2014 3:34:47 PM UTC+2, Sébastien Han wrote: > > I don't know :/ > > -- > Regards, > Sébastien Han. > > > On Wed, Apr 23, 2014 at 11:40 AM, Strahinja Kustudić < > strahin...@nordeus.eu> wrote: > >> I don't un

Re: [ansible-project] Re: Looping over a set of host in a playbook?

2014-04-23 Thread Sébastien Han
I don't know :/ -- Regards, Sébastien Han. On Wed, Apr 23, 2014 at 11:40 AM, Strahinja Kustudić wrote: > I don't understand, it's like it is not recognizing the *hostvars*keyword? > > > > > On Wednesday, April 23, 2014 11:21:20 AM UTC+2, Sébastien Han wrote

Re: [ansible-project] Re: Looping over a set of host in a playbook?

2014-04-23 Thread Sébastien Han
Arf sorry, I read too fast. http://pastebin.com/H1Vzmb7w -- Regards, Sébastien Han. On Wed, Apr 23, 2014 at 11:00 AM, Strahinja Kustudić wrote: > You didn't understand me, I would like you to replace the whole "command: > ping -c 1 {{ hostvars[item]["ansible_bond

Re: [ansible-project] Re: Looping over a set of host in a playbook?

2014-04-23 Thread Sébastien Han
Thanks for your help, this is what I got: http://pastebin.com/qfLyS9yi Cheers. -- Regards, Sébastien Han. On Wed, Apr 23, 2014 at 10:29 AM, Strahinja Kustudić wrote: > Could you try instead of command module something like: > > debug: msg="{{ hostvars[item]['ansible

Re: [ansible-project] Re: Looping over a set of host in a playbook?

2014-04-23 Thread Sébastien Han
Same error :( On Wednesday, April 23, 2014 12:13:26 AM UTC+2, Strahinja Kustudić wrote: > > Try: > > {{ hostvars[item]['ansible_bond1.2108']['ipv4']['address'] }} > > > On Tuesday, April 22, 2014 11:02:53 PM UTC+2, Sébastien Han wrote: >> &

Re: [ansible-project] Re: Looping over a set of host in a playbook?

2014-04-22 Thread Sébastien Han
No more ideas? On Friday, April 18, 2014 10:12:31 AM UTC+2, Sébastien Han wrote: > > I'm confused, quotes are already there. > > command: ping -c 1 {{ hostvars[item]["ansible_bond1.2108"].ipv4.address }} > > > -- > Regards, > Sébastien Han. > > &g

Re: [ansible-project] Re: Looping over a set of host in a playbook?

2014-04-18 Thread Sébastien Han
I'm confused, quotes are already there. command: ping -c 1 {{ hostvars[item]["ansible_bond1.2108"].ipv4.address }} -- Regards, Sébastien Han. On Fri, Apr 18, 2014 at 12:13 AM, Michael DeHaan wrote: > "ansible_bond1.2108" > > This part needs quotes around it.

Re: [ansible-project] Re: Looping over a set of host in a playbook?

2014-04-17 Thread Sébastien Han
d: [ceph0010] => (item=ceph0060) => {"changed": true, "cmd": ["ping", "-c", "1", "{{hostvars[item][ansible_bond1.2108].ipv4.address}}"], "delta": "0:00:00.005203", "end": "2014-04-17 17:59:51.5488

Re: [ansible-project] Re: Looping over a set of host in a playbook?

2014-04-17 Thread Sébastien Han
"{{hostvars.{{item}}.ansible_hostname}}"} Any idea? Furthermore, the final goal is to collect the ip address of the following interface: ansible_bond1.2108. Not sure if it's reachable given this: https://github.com/ansible/ansible/issues/6879 Thanks for your help. -- Regards, Sébast

[ansible-project] Re: Looping over a set of host in a playbook?

2014-04-16 Thread Sébastien Han
Up? On Monday, April 14, 2014 5:59:15 PM UTC+2, Sébastien Han wrote: > > Hello, > > I'm trying to loop over a set of hosts, get their IP and then append the > result to a file. > > Currently the action looks like this: > > - name: build rings > command:

[ansible-project] Looping over a set of host in a playbook?

2014-04-14 Thread Sébastien Han
Hello, I'm trying to loop over a set of hosts, get their IP and then append the result to a file. Currently the action looks like this: - name: build rings command: swift-ring-builder {{ item.service }}.builder add z1-{{ hostvars[inventory_hostname]["ansible_bond1.2108"].ipv4.address }}:{{ i
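The thread above turns on accessing a fact whose name contains a dot (`ansible_bond1.2108`) from `hostvars` inside a loop. A minimal hedged sketch of the pattern that the later replies converge on — quoted bracket lookup rather than attribute access — assuming an inventory group named `storage` (the group name and task are illustrations, not the thread's final pastebin, which is unavailable):

```yaml
- name: reach each peer over the VLANed bond
  hosts: storage
  tasks:
    # A key containing a dot cannot use attribute access:
    # hostvars[item].ansible_bond1.2108 parses as nested attributes.
    # Quoted bracket lookup keeps the dotted key intact.
    - name: ping each host's bond1.2108 address
      command: ping -c 1 {{ hostvars[item]['ansible_bond1.2108']['ipv4']['address'] }}
      with_items: "{{ groups['storage'] }}"
```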

Re: [ansible-project] Issue while applying a VLANed nic variable into a playbook

2014-04-14 Thread Sébastien Han
to be accessed directly, though this would be a > backwards incompatible change for those that had dots in interface names -- > it's easier than needing to know the above. > > > > > > On Mon, Apr 7, 2014 at 7:59 AM, Sébastien Han > > > wrote: > >&

[ansible-project] Issue while applying a VLANed nic variable into a playbook

2014-04-07 Thread Sébastien Han
Hi all, This seems to only happen with a vlan on top of a bond device. The fact gathers the nic properly: pouet1 | success >> { "ansible_facts": { "ansible_bond1.2108": { "active": true, "device": "bond1.2108", "ipv4": { "address":

Re: Ceph puppet module

2013-10-16 Thread Sébastien Han
Hi Dan, During the cephdays you mentioned that you were about to redistribute all the changes you’ve made on puppet-ceph to the enovance repo. It would be great to merge both before starting anything. What’s the progress? Thanks :) Sébastien Han Cloud Engineer "Always give 100%. U

Re: [ceph-users] kernel: [ 8773.432358] libceph: osd1 192.168.0.131:6803 socket error on read

2013-10-11 Thread Sébastien Han
Hi, I was wondering, why did you use CephFS instead of RBD? RBD is much more reliable and well integrated with QEMU/KVM. Or perhaps you want to try CephFS? Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail:

Re: [ceph-users] use ceph-rest-api without rados-gw

2013-10-10 Thread Sébastien Han
Hi, Did you create a key for your user ceph? Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la Victoire - 75009 Paris Web : www.enovance.com - Twitter : @enovance On

Re: OpenStack and ceph integration with puppet

2013-10-08 Thread Sébastien Han
/master/manifests/osd.pp#L73 For the rest, this might already be done by your puppet manifests. Please also note that http://ceph.com/docs/next/rbd/rbd-openstack/ will need some updates for OpenStack Havana. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving bloo

Re: Ceph users meetup

2013-09-25 Thread Sébastien Han
I’m in too :) Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la Victoire - 75009 Paris Web : www.enovance.com - Twitter : @enovance On September 25, 2013 at 12:58:23

Re: "ceph osd map" pointer to the code

2013-09-24 Thread Sébastien Han
Thanks Joao. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la Victoire - 75009 Paris Web : www.enovance.com - Twitter : @enovance On September 24, 2013 at 3:07:5

Re: Object Write Latency

2013-09-24 Thread Sébastien Han
Hi Dan, Yes I noticed :), no we haven’t done anything on that side yet. But I’ll be happy to see this happening, that’s way better than a file on the fs. Would be nice if you could push something then :) Cheers. Sébastien Han Cloud Engineer "Always give 100%. Unless you're gi

"ceph osd map" pointer to the code

2013-09-24 Thread Sébastien Han
Hi guys, Can anyone point me to the piece of code called during the “ceph osd map ‘pool’ ‘object’” command, please? Thanks! Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 

Re: Object Write Latency

2013-09-24 Thread Sébastien Han
Ideally a partition using the first sectors of the disk. I usually do a tiny partition at the beginning of the device and leave the rest for osd_data. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@en

Re: OSD memory leaks?

2013-03-13 Thread Sébastien Han
Dave, Just to be sure, did the log max recent=1 _completely_ stop the memory leak or did it slow it down? Thanks! -- Regards, Sébastien Han. On Wed, Mar 13, 2013 at 2:12 PM, Dave Spano wrote: > Lol. I'm totally fine with that. My glance images pool isn't used too often. &

Re: mon memory leak

2013-03-13 Thread Sébastien Han
7 302528 179352 01:25:02 0.7 302528 179176 01:30:01 0.7 302528 179060 Sorry for the long post. -- Regards, Sébastien Han. On Wed, Mar 13, 2013 at 11:44 AM, Stefan Priebe - Profihost AG wrote: > Hi, > > are there any known ceph-mon memory leaks in bobtail? Today i've seen a > c

Re: [ceph-users] Ceph Read Benchmark

2013-03-12 Thread Sébastien Han
It's pretty straightforward, but you can 'simply' delete the pool :) (since it should be a test pool ;)). -- Regards, Sébastien Han. On Tue, Mar 12, 2013 at 10:11 PM, Scott Kinder wrote: > A follow-up question. How do I cleanup the written data, after I finish up > with my

Re: OSD memory leaks?

2013-03-12 Thread Sébastien Han
Well, to avoid unnecessary data movement, there is also an _experimental_ feature to change the number of PGs in a pool on the fly: ceph osd pool set <pool> pg_num <num> --allow-experimental-feature Cheers! -- Regards, Sébastien Han. On Tue, Mar 12, 2013 at 7:09 PM, Dave Spano wrote: > Disregard my previ

Re: OSD memory leaks?

2013-03-12 Thread Sébastien Han
ready said that pg_num was 450... -- Regards, Sébastien Han. On Tue, Mar 12, 2013 at 2:00 PM, Vladislav Gorbunov wrote: > Sorry, i mean pg_num and pgp_num on all pools. Shown by the "ceph osd > dump | grep 'rep size'" > The default pg_num value 8 is NOT suitable for

Re: OSD memory leaks?

2013-03-12 Thread Sébastien Han
Replica count has been set to 2. Why? -- Regards, Sébastien Han. On Tue, Mar 12, 2013 at 12:45 PM, Vladislav Gorbunov wrote: >> FYI I'm using 450 pgs for my pools. > Please, can you show the number of object replicas? > > ceph osd dump | grep 'rep size' > >

Re: OSD memory leaks?

2013-03-11 Thread Sébastien Han
Dave, It's still a production platform so no, I didn't try it. I've also found that now ceph-mon are constantly leaking... I truly hope your log max recent = 1 will help. Cheers. -- Regards, Sébastien Han. On Mon, Mar 11, 2013 at 7:43 PM, Dave Spano wrote: > Sebastien, > >

Re: OSD memory leaks?

2013-03-04 Thread Sébastien Han
FYI I'm using 450 pgs for my pools. -- Regards, Sébastien Han. On Fri, Mar 1, 2013 at 8:10 PM, Sage Weil wrote: > > On Fri, 1 Mar 2013, Wido den Hollander wrote: > > On 02/23/2013 01:44 AM, Sage Weil wrote: > > > On Fri, 22 Feb 2013, S?bastien Han wrote: > &g

Re: OSD memory leaks?

2013-02-25 Thread Sébastien Han
Ok thanks guys. Hope we will find something :-). -- Regards, Sébastien Han. On Mon, Feb 25, 2013 at 8:51 AM, Wido den Hollander wrote: > On 02/25/2013 01:21 AM, Sage Weil wrote: >> >> On Mon, 25 Feb 2013, S?bastien Han wrote: >>> >>> Hi Sage, >>> >

Re: OSD memory leaks?

2013-02-24 Thread Sébastien Han
Hi Sage, Sorry it's a production system, so I can't test it. So at the end, you can't get anything out of the core dump? -- Regards, Sébastien Han. On Sat, Feb 23, 2013 at 1:44 AM, Sage Weil wrote: > On Fri, 22 Feb 2013, S?bastien Han wrote: >> Hi all, >> >>

Re: OpenStack summit : Ceph design session

2013-02-24 Thread Sébastien Han
Hi, I would love to have such discussion. I think it's a good initiative. Cheers. -- Regards, Sébastien Han. On Sat, Feb 23, 2013 at 11:33 AM, Loic Dachary wrote: > Hi, > > In anticipation of the next OpenStack summit > http://www.openstack.org/summit/portland-2013/, I prop

Re: OSD memory leaks?

2013-02-22 Thread Sébastien Han
Hi all, I finally got a core dump. I did it with a kill -SEGV on the OSD process. https://www.dropbox.com/s/ahv6hm0ipnak5rf/core-ceph-osd-11-0-0-20100-1361539008 Hope we will get something out of it :-). -- Regards, Sébastien Han. On Fri, Jan 11, 2013 at 7:13 PM, Gregory Farnum wrote: >

Re: Geo-replication with RBD

2013-02-19 Thread Sébastien Han
, Sébastien Han. On Mon, Feb 18, 2013 at 3:20 PM, Sławomir Skowron wrote: > Hi, Sorry for very late response, but i was sick. > > Our case is to make a failover rbd instance in another cluster. We are > storing block device images, for some services like Database. We need > to have

Re: [Openstack] Problems with drbd + pacemaker in HA

2013-02-19 Thread Sébastien Han
w.drbd.org/users-guide-8.3/s-resolve-split-brain.html Cheers -- Regards, Sébastien Han. On Tue, Feb 19, 2013 at 2:38 AM, Samuel Winchenbach wrote: > Hi All, > > I recently switched from CentOS 6.3 to Ubuntu LTS server and have started > encountering some really odd problems w

Re: [0.48.3] OSD memory leak when scrubbing

2013-02-17 Thread Sébastien Han
+1 -- Regards, Sébastien Han. On Sat, Feb 16, 2013 at 10:09 AM, Wido den Hollander wrote: > On 02/16/2013 08:09 AM, Andrey Korolyov wrote: >> >> Can anyone who hit this bug please confirm that your system contains libc >> 2.15+? >> > > I've seen this with

Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Sébastien Han
Well if you follow my article, you will get LVS-NAT running. It's fairly easy, no funky stuff. Yes you will probably need the postrouting rule, as usual :). Let me know how it goes ;) -- Regards, Sébastien Han. On Fri, Feb 15, 2013 at 8:51 PM, Samuel Winchenbach wrote: > I > didn&#

Re: [Openstack] Suggestions for shared-storage cluster file system

2013-02-15 Thread Sébastien Han
But if you are in a hurry and looking for a DFS then GlusterFS seems to be a good candidate. NFS works pretty well too. Cheers. -- Regards, Sébastien Han. On Fri, Feb 15, 2013 at 4:49 PM, JuanFra Rodriguez Cardoso < juanfra.rodriguez.card...@gmail.com> wrote: > Another one: &

Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Sébastien Han
Ok but why direct routing instead of NAT? If the public IPs are _only_ on LVS there is no point to use LVS-DR. LVS has the public IPs and redirects to the private IPs, this _must_ work. Did you try NAT? Or at least can you give it a shot? -- Regards, Sébastien Han. On Fri, Feb 15, 2013 at 3:55

Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Sébastien Han
Hum I don't see the problem, it's possible to load-balance VIPs with LVS, they are just IPs... Can I see your conf? -- Regards, Sébastien Han. On Thu, Feb 14, 2013 at 8:34 PM, Samuel Winchenbach wrote: > W > ell, I think I will have to go with one ip per service and for

Re: [Openstack] HA Openstack with Pacemaker

2013-02-14 Thread Sébastien Han
y create a resource group with all the openstack service inside it (it's ugly but if it's what you want :)). Give me more info about your setup and we can go further in the discussion :). -- Regards, Sébastien Han. On Thu, Feb 14, 2013 at 3:15 PM, Samuel Winchenbach wrote: > T > he on

Re: [ceph-users] urgent journal conf on ceph.conf

2013-02-14 Thread Sébastien Han
id/journal but change it for something like osd journal = /srv/ceph/journals/osd$id/journal. Cheers. -- Regards, Sébastien Han. On Thu, Feb 14, 2013 at 2:52 PM, Joao Eduardo Luis wrote: > Including ceph-users, as it feels like this belongs there :-) > > > > On 02/14/2013 01:47 PM,
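The advice above relocates each OSD's journal via the `osd journal` option. A minimal `ceph.conf` sketch under the assumption of one journal directory per OSD (the size value is an illustration, not from the thread):

```ini
[osd]
    ; Journal on a dedicated path/partition instead of the default
    ; location under the OSD data directory; $id expands per daemon.
    osd journal = /srv/ceph/journals/osd$id/journal
    osd journal size = 1024    ; MB, illustrative value
```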

Re: [ceph-users] OSD Weights

2013-02-14 Thread Sébastien Han
f you need more input, have a look at the documentation ;-) http://ceph.com/docs/master/rados/operations/crush-map/?highlight=crush#adjust-an-osd-s-crush-weight Cheers, -- Regards, Sébastien Han. On Wed, Feb 13, 2013 at 4:23 PM, sheng qiu wrote: > Hi Gregory, > > once running ceph onl

Re: [Openstack] HA Openstack with Pacemaker

2013-02-14 Thread Sébastien Han
What's the problem with having one IP per service pool? -- Regards, Sébastien Han. On Wed, Feb 13, 2013 at 8:45 PM, Samuel Winchenbach wrote: > What if the VIP is created on a different host than keystone is started > on? It seems like you either need to set net.ipv4.ip_nonloc

Re: [0.48.3] OSD memory leak when scrubbing

2013-02-04 Thread Sébastien Han
oh nice, the pattern also matches path :D, didn't know that thanks Greg -- Regards, Sébastien Han. On Mon, Feb 4, 2013 at 10:22 PM, Gregory Farnum wrote: > Set your /proc/sys/kernel/core_pattern file. :) > http://linux.die.net/man/5/core > -Greg > > On Mon, Feb 4, 2013 a
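The reply above points at `/proc/sys/kernel/core_pattern`, which controls where core dumps land (see core(5)). A hedged sketch of inspecting it and redirecting dumps to a roomier filesystem — the target directory is an assumption for illustration:

```shell
# Show the current pattern; %e expands to the executable name and %p to
# the PID of the dumping process (see core(5) for the full template list).
cat /proc/sys/kernel/core_pattern

# Redirect future core dumps to a larger filesystem (requires root;
# the path below is illustrative, pick one with enough free space):
# echo '/srv/crash/core.%e.%p' > /proc/sys/kernel/core_pattern
```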

Re: [0.48.3] OSD memory leak when scrubbing

2013-02-04 Thread Sébastien Han
ok I finally managed to get something on my test cluster; unfortunately, the dump goes to /. Any idea how to change the destination path? My production / won't be big enough... -- Regards, Sébastien Han. On Mon, Feb 4, 2013 at 10:03 PM, Dan Mick wrote: > ...and/or do you have the core

Re: [Openstack] can't run nova boot from command line (works from horizon)

2013-02-04 Thread Sébastien Han
nova network-list then look for the id and add the following to your boot command: nova boot bla bla bla --nic net-id= Let me know if it's better. Cheers. -- Regards, Sébastien Han. On Mon, Feb 4, 2013 at 6:24 PM, JR wrote: > Greetings, > > I'm running a devstack test

Re: [Openstack] can't run nova boot from command line (works from horizon)

2013-02-04 Thread Sébastien Han
hum ok now I wonder if you created a network or not? # nova-manage network list ? -- Regards, Sébastien Han. On Mon, Feb 4, 2013 at 7:09 PM, JR wrote: > Hi Sébastien > > Problem is, I can't run nova network-list either! > > stack@gpfs6-int:~$ nova network-list > ERR

Re: [0.48.3] OSD memory leak when scrubbing

2013-02-04 Thread Sébastien Han
Hum just tried several times on my test cluster and I can't get any core dump. Does Ceph commit suicide or something? Is it expected behavior? -- Regards, Sébastien Han. On Sun, Feb 3, 2013 at 10:03 PM, Sébastien Han wrote: > Hi Loïc, > > Thanks for bringing our discussion on the

Re: [0.48.3] OSD memory leak when scrubbing

2013-02-03 Thread Sébastien Han
Hi Loïc, Thanks for bringing our discussion on the ML. I'll check that tomorrow :-). Cheer -- Regards, Sébastien Han. On Sun, Feb 3, 2013 at 10:01 PM, Sébastien Han wrote: > Hi Loïc, > > Thanks for bringing our discussion on the ML. I'll check that tomorrow :-). > >

Re: [Openstack] nova ceph integration

2013-01-31 Thread Sébastien Han
compute. With the boot from volume it's one RBD per instance, which brings way more IOPS to your instance. Still, with boot from volume you can also enjoy the rbd cache on the client side, a cache that will also help with buffered IO. Cheers! -- Regards, Sébastien Han. On Thu, Jan 31, 2013 at 7:

Re: [Openstack] nova ceph integration

2013-01-31 Thread Sébastien Han
gards, Sébastien Han. On Thu, Jan 31, 2013 at 7:40 AM, Wolfgang Hennerbichler wrote: > Hi, > > I'm sorry if this has been asked before. My question is: can I integrate ceph > into openstack's nova & cinder in a way, that I don't need > /var/lib/nova/instances anymo

Re: [Openstack] List of Cinder compatible devices

2013-01-31 Thread Sébastien Han
Just added some stuff about RBD where E refers to Essex. -- Regards, Sébastien Han. On Thu, Jan 31, 2013 at 11:20 AM, Avishay Traeger wrote: > openstack-bounces+avishay=il.ibm@lists.launchpad.net wrote on > 01/31/2013 12:37:07 AM: >> From: Tom Fifield >> To: openstack@l

Re: [Openstack] List of Cinder compatible devices

2013-01-30 Thread Sébastien Han
+ RBD (Ceph) +1 for the matrix, this will be really nice :-) -- Regards, Sébastien Han. On Wed, Jan 30, 2013 at 5:04 PM, Tim Bell wrote: > > > Is there a list of devices which are currently compatible with cinder and > their relative functionality ? > > > > Looking

Re: [0.48.3] OSD memory leak when scrubbing

2013-01-25 Thread Sébastien Han
Hi, Could you provide those heaps? Is it possible? -- Regards, Sébastien Han. On Tue, Jan 22, 2013 at 10:38 PM, Sébastien Han wrote: > Well ideally you want to run the profiler during the scrubbing process > when the memory leaks appear :-). > -- > Regards, > Sébastien Han. >

Re: [0.48.3] OSD memory leak when scrubbing

2013-01-22 Thread Sébastien Han
Well ideally you want to run the profiler during the scrubbing process when the memory leaks appear :-). -- Regards, Sébastien Han. On Tue, Jan 22, 2013 at 10:32 PM, Sylvain Munaut wrote: > Hi, > >> I don't really want to try the mem profiler, I had quite a bad >> exper

Re: [Openstack] Bypassing the keypair in Folsom

2013-01-22 Thread Sébastien Han
so you prefer to be asked for a password instead of logging in passwordless? as suggested, edit the base image and create a password for the user :) -- Regards, Sébastien Han. On Tue, Jan 22, 2013 at 6:08 PM, Balamurugan V G wrote: > My ssh debug logs are below: > > $ ssh -vvv roo

Re: [0.48.3] OSD memory leak when scrubbing

2013-01-22 Thread Sébastien Han
er I can't reproduce the problem on my test environment... :( -- Regards, Sébastien Han. On Tue, Jan 22, 2013 at 9:01 PM, Sylvain Munaut wrote: > Hi, > > Since I have ceph in prod, I experienced a memory leak in the OSD > forcing to restart them every 5 or 6 days. Without that t

Re: Inktank team @ FOSDEM 2013 ?

2013-01-21 Thread Sébastien Han
Hi guys, See you at FOSDEM ;-) Cheers, -- Regards, Sébastien Han. On Sun, Jan 20, 2013 at 6:13 PM, Constantinos Venetsanopoulos wrote: > Hello Loic, Sebastien, Patrick, > > that's great news! I'm sure we'll have some very interesting stuff to talk > about. >

Re: ceph from poc to production

2013-01-17 Thread Sébastien Han
Cool, I look forward to reading it! -- Regards, Sébastien Han. On Thu, Jan 17, 2013 at 5:29 PM, Mark Nelson wrote: > On 01/17/2013 10:24 AM, Sébastien Han wrote: >> >> Hi Stephan, >> >>> - Increase the osdmax value >> >> >> Well actually this doe

Re: ceph from poc to production

2013-01-17 Thread Sébastien Han
f the eventual sync. In practice, disabling ‘filestore flusher’ seems to improve performance in some cases." Cheers, -- Regards, Sébastien Han. On Wed, Jan 16, 2013 at 7:41 PM, Stefan Priebe wrote: > > Hello Sebastien, > hello list, > > first nice article sebastien ;-)

Re: REMINDER: all argonaut users should upgrade to v0.48.3argonaut

2013-01-16 Thread Sébastien Han
Thanks Sage! -- Regards, Sébastien Han. On Wed, Jan 16, 2013 at 5:39 PM, Sage Weil wrote: > On Wed, 16 Jan 2013, S?bastien Han wrote: >> Can we use this doc as a reference for the upgrade? >> >> https://github.com/ceph/ceph/blob/eb02eaede53c03579d015ca00a888a48dbab739a/

Re: REMINDER: all argonaut users should upgrade to v0.48.3argonaut

2013-01-16 Thread Sébastien Han
Can we use this doc as a reference for the upgrade? https://github.com/ceph/ceph/blob/eb02eaede53c03579d015ca00a888a48dbab739a/doc/install/upgrading-ceph.rst Thanks. -- Regards, Sébastien Han. On Tue, Jan 15, 2013 at 10:49 PM, Sage Weil wrote: > That there are some critical bugs that

Re: OSD memory leaks?

2013-01-11 Thread Sébastien Han
ption was already high before the profiler was started. So yes, with the memory profiler enabled an OSD might consume more memory, but this doesn't cause the memory leaks. Any ideas? Nothing to say about my scrubbing theory? Thanks! -- Regards, Sébastien Han. On Thu, Jan 10, 2013 at 10:44 PM,

Re: [Openstack] Migrate Instance to another Tenant ID in the same environment

2013-01-11 Thread Sébastien Han
If an admin user put it public, this is also possible. -- Regards, Sébastien Han. On Fri, Jan 11, 2013 at 3:40 AM, Lei Zhang wrote: > why not try boot from snapshot. That's will save some time. > > > On Thu, Jan 10, 2013 at 5:18 AM, Sébastien Han > wrote: >> >>

Re: [Openstack] Migrate Instance to another Tenant ID in the same environment

2013-01-10 Thread Sébastien Han
Cool! -- Regards, Sébastien Han. On Thu, Jan 10, 2013 at 11:15 AM, Alex Vitola wrote: > Changed directly by the database. > > Not the best way but I did because it was an environment. > > So far I have not found any problems > > > mysql> use nova; > mysql&

Re: OSD memory leaks?

2013-01-09 Thread Sébastien Han
ing the next OSD..." sleep 60 done logger -t ceph-memory-usage "Ceph state after memory check operation is: $(ceph health)" Crons run at a 10 min interval every day for each storage node ;-). Waiting for some Inktank guys now :-). -- Regards, Sébastien Han. On Wed, Jan 9, 2013 a
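The snippet above is the tail of a cron-driven watchdog that restarts OSDs whose resident memory has grown too large. A minimal hedged reconstruction — the threshold, daemon matching, and restart command are assumptions for illustration, not the author's exact script:

```shell
#!/bin/sh
# Restart any ceph-osd whose RSS exceeds a threshold, sleeping between
# restarts so the cluster settles before the next OSD is bounced.
THRESHOLD_KB=$((8 * 1024 * 1024))   # 8 GB, illustrative limit

for pid in $(pgrep ceph-osd); do
    rss_kb=$(ps -o rss= -p "$pid")   # resident set size in kB
    if [ "$rss_kb" -gt "$THRESHOLD_KB" ]; then
        logger -t ceph-memory-usage "ceph-osd pid $pid uses ${rss_kb} kB, restarting"
        # service ceph restart osd.N   # exact restart command depends on the deployment
        echo "Restarting done, moving to the next OSD..."
        sleep 60
    fi
done
```

Run from cron on each storage node, as the message describes.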

Re: [Openstack] Migrate Instance to another Tenant ID in the same environment

2013-01-09 Thread Sébastien Han
! Cheers! -- Regards, Sébastien Han. On Wed, Jan 9, 2013 at 8:14 PM, Alex Vitola wrote: > I have 2 projects in my environment: > > ProjectQA1: ID -> 0001 > ProjectQA2: ID -> 0002 > > root@Controller:# keystone tenant-list > +-++-+ > |

Re: OSD memory leaks?

2013-01-09 Thread Sébastien Han
Hi, Thanks for the input. I also have tons of "socket closed"; I recall that this message is harmless. Anyway, Cephx has been disabled on my platform from the beginning... Anyone to approve or disapprove my "scrub theory"? -- Regards, Sébastien Han. On Wed, Jan 9, 2013 at 7:0

Re: OSD memory leaks?

2013-01-09 Thread Sébastien Han
If you wait too long, the system will trigger OOM killer :D, I already experienced that unfortunately... Sam? On Wed, Jan 9, 2013 at 5:10 PM, Dave Spano wrote: > OOM killer -- Regards, Sébastien Han. -- To unsubscribe from this list: send the line "unsubscribe ceph-devel" in

Re: OSD memory leaks?

2013-01-09 Thread Sébastien Han
I guess he runs Argonaut as well. More suggestions about this problem? Thanks! -- Regards, Sébastien Han. On Mon, Jan 7, 2013 at 8:09 PM, Samuel Just wrote: > > Awesome! What version are you running (ceph-osd -v, include the hash)? > -Sam > > On Mon, Jan 7, 2013 at 11:03

Re: [Openstack] Nova (compute) and syslog

2013-01-07 Thread Sébastien Han
Hi, Stupid question, did you restart compute and api service? I don't have any problems with those flags. -- Regards, Sébastien Han. On Mon, Jan 7, 2013 at 9:58 AM, Robert van Leeuwen < robert.vanleeu...@spilgames.com> wrote: > Hi, > > I'm trying to get all logg

Re: OSD memory leaks?

2013-01-04 Thread Sébastien Han
lmost the same for all the OSD process. Thank you in advance. -- Regards, Sébastien Han. On Wed, Dec 19, 2012 at 10:43 PM, Samuel Just wrote: > > Sorry, it's been very busy. The next step would to try to get a heap > dump. You can start a heap profile on osd N by: > > c

Re: v0.56 released

2013-01-02 Thread Sébastien Han
Oh ok I see, thanks for the clarification :) -- Regards, Sébastien Han. On Wed, Jan 2, 2013 at 7:11 PM, Sage Weil wrote: > On Wed, 2 Jan 2013, S?bastien Han wrote: >> Debian-testing shows the version 0.56-1, maybe I misunderstood but I >> thought that 0.56-1 bobtail was the new

Re: v0.56 released

2013-01-02 Thread Sébastien Han
Debian-testing shows the version 0.56-1, maybe I misunderstood but I thought that 0.56-1 bobtail was the new version of the stable branch. So I was expecting to see it here http://ceph.com/debian/dists/precise/main/binary-amd64/Packages Correct me if I'm wrong :) -- Regards, Sébastien Han.

Re: Ceph logging level

2013-01-02 Thread Sébastien Han
rvices that log into LOG_DAEMON. Using LOCAL0, LOCAL1 and so on for example. > We can just add a min log level configurable. Please :-) New config options like: * log_facility * log_level will be a good starting point I guess. Thanks! -- Regards, Sébastien Han. On Wed, Jan 2, 2013 at 5:28

Ceph logging level

2013-01-02 Thread Sébastien Han
ant to see ERROR logs. Thank you in advance :-) -- Regards, Sébastien Han. -- To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: v0.56 released

2013-01-02 Thread Sébastien Han
It looks the same for Debian packages, 0.56.1 is on testing branch. http://ceph.com/debian-testing/dists/precise/main/binary-amd64/Packages If someone can fix this, thanks in advance ;-) -- Regards, Sébastien Han. On Wed, Jan 2, 2013 at 3:28 AM, Mark Nelson wrote: > Doh! Sorry about t

Re: OSD memory leaks?

2012-12-19 Thread Sébastien Han
No more suggestions? :( -- Regards, Sébastien Han. On Tue, Dec 18, 2012 at 6:21 PM, Sébastien Han wrote: > Nothing terrific... > > Kernel logs from my clients are full of "libceph: osd4 > 172.20.11.32:6801 socket closed" > > I saw this somewhere on the tracker. &g

Re: OSD memory leaks?

2012-12-18 Thread Sébastien Han
Nothing terrific... Kernel logs from my clients are full of "libceph: osd4 172.20.11.32:6801 socket closed" I saw this somewhere on the tracker. Does this do any harm? Thanks. -- Regards, Sébastien Han. On Mon, Dec 17, 2012 at 11:55 PM, Samuel Just wrote: > > What is the workl

Re: [Openstack] Openstack High Availability

2012-12-18 Thread Sébastien Han
Thanks Razique, I still need to edit the official HA doc to give details about this setup, I don't really have the time this week. I hope I can save some time before the end of the year. Cheers! -- Regards, Sébastien Han. On Tue, Dec 18, 2012 at 12:13 AM, Razique Mahroua wrote: > Gre

Re: OSD memory leaks?

2012-12-17 Thread Sébastien Han
Hi, No, I don't see anything abnormal in the network stats. I don't see anything in the logs... :( The weird thing is that one node out of 4 seems to take way more memory than the others... -- Regards, Sébastien Han. On Mon, Dec 17, 2012 at 11:31 PM, Sébastien Han wrote: > > Hi,

Re: [Openstack] Openstack High Availability

2012-12-17 Thread Sébastien Han
-components-ha/ For the latest article *please use* this repo, this our new location with several branches (Essex/Folsom). https://github.com/madkiss/openstack-resource-agents -- Regards, Sébastien Han. On Mon, Dec 17, 2012 at 9:56 PM, Eugene Kirpichov wrote: > Right, you only need HA for sw

Fwd: OSD memory leaks?

2012-12-17 Thread Sébastien Han
that I have to provide. So let me know. The only thing I can say is that the load hasn't increased that much this week. It seems to be consuming and not giving back the memory. Thank you in advance. -- Regards, Sébastien Han.

Re: [Openstack] [ERROR] refresh_instance_security_rules

2012-12-15 Thread Sébastien Han
Hi Vish, The logs don't show more, even after enabling DEBUG logs... See debug mode below right away before and after the message: http://pastebin.com/1LCXuaVi I forgot to mention but it _only_ appears while rolling out a new instance. Thanks. -- Regards, Sébastien Han. On Sat, D

[Openstack] [ERROR] refresh_instance_security_rules

2012-12-14 Thread Sébastien Han
rity_rules#0122012-12-12 23:46:29 TRACE nova.openstack.common.rpc.amqp self.firewall_driver.refresh_instance_security_rules(instance)# This error seems harmless, as far as I can tell everything works perfectly. Even so I'd like to have some input about it (ideally a fix bec

Re: [Openstack] Is there any way to migrate the Instance between the projects/tenants?

2012-11-30 Thread Sébastien Han
dummy in project B - delete the volume from project A If you use Ceph RBD it's really easy for example. For the rest I don't know. -- Bien cordialement. Sébastien HAN. On Thu, Nov 29, 2012 at 9:55 AM, Lei Zhang wrote: > Hi Sébastien, > > Good ideas. There is a very tri

Re: [Openstack] Is there any way to migrate the Instance between the projects/tenants?

2012-11-28 Thread Sébastien Han
Hi, What I will do to achieve what you want: _ take a snapshot of your instance _ export the snapshot from wherever it's stored (filesystem for instance) _ import it to Glance, make the image public or assign it to the tenant (not 100% sure the latter is possible though...) _ run a new vm w

Re: [Openstack] how to let the instance name (instance-xxx) equal to the hostname of the instance (chosen by the user)??

2012-11-28 Thread Sébastien Han
t 2:16 PM, Vishvananda Ishaya > wrote: > > > > > On Nov 28, 2012, at 2:08 PM, Sébastien Han > wrote: > > > >> Hi, > >> > >> Just tried this, it works but I'd also like to rename > /var/lib/nova/instances/ according to the hostname. At th

Re: [Openstack] how to let the instance name (instance-xxx) equal to the hostname of the instance (chosen by the user)??

2012-11-28 Thread Sébastien Han
Hi, Just tried this, it works but I'd also like to rename /var/lib/nova/instances/ according to the hostname. At the moment this only renames (output from nova show): | OS-EXT-SRV-ATTR:instance_name | mon-nom Is it possible? Cheers! On Wed, Nov 28, 2012 at 7:31 PM, John Garbutt wrote: >

Re: RBD fio Performance concerns

2012-11-22 Thread Sébastien Han
22.11.2012 11:49, schrieb Sébastien Han: > >> @Alexandre: cool! >> >> @ Stefan: Full SSD cluster and 10G switches? > > Yes > > >> Couple of weeks ago I saw >> that you use journal aio, did you notice performance improvement with it? > > journal

Re: RBD fio Performance concerns

2012-11-22 Thread Sébastien Han
>>But who cares? it's also on the 2nd node. or even on the 3rd if you have >>replicas 3. Yes but you could also suffer a crash while writing the first replica. If the journal is in tmpfs, there is nothing to replay. On Thu, Nov 22, 2012 at 4:35 PM, Alexandre DERUMIER wrote: > > >>But who cares

Re: RBD fio Performance concerns

2012-11-22 Thread Sébastien Han
kaged in bobtail) and also see if this behavior happens >> with cephfs. It's still too early in the morning for me right now to >> come up with a reasonable explanation for what's going on. It might be >> worth running blktrace and seekwatcher to see what the

Re: RBD fio Performance concerns

2012-11-22 Thread Sébastien Han
Hum sorry, you're right. Forget about what I said :) On Thu, Nov 22, 2012 at 4:54 PM, Stefan Priebe - Profihost AG wrote: > I thought the Client would then write to the 2nd is this wrong? > > Stefan > > Am 22.11.2012 um 16:49 schrieb Sébastien Han : > >>>>

Re: [Openstack] Ceph + Nova

2012-11-21 Thread Sébastien Han
ity than my small database > does. Instead, I prefer to perform block migrations rather than live ones > until cephfs becomes more stable. > > Dave Spano > Optogenics > Systems Administrator > > > -- > *From: *"Sébastien Han" >
