a joint Ubernetes/Tricircle session at OSCON, you are
> also welcome to check it out.
>
> On Mon, May 9, 2016 at 2:14 AM, Sébastien Han wrote:
>
>> Thanks for raising this. However, we have "good" reasons to not talk
>> about Smaug and Tricircle.
>> Tho
We really aim for a basic approach, so we start small with Kingbird in
order to address our first multi-site use case.
Perhaps in the future we will need Smaug and Tricircle, but we are not there yet.
We will start contributing to Kingbird pretty soon and see how that goes.
Thanks!
--
Regards,
Séb
$ rbd diff rbd/myimage-1 | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
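The awk above sums the second column of the rbd diff output (the byte length
of each changed extent). A hedged equivalent, assuming an rbd release recent
enough to support JSON output:
$ rbd diff rbd/myimage-1 --format json | \
    python -c 'import json,sys; print(sum(e["length"] for e in json.load(sys.stdin))/1048576.0)'   # size in MB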
--
Regards,
Sébastien Han.
> On 04 Nov 2014, at 16:57, Daniel Schwager wrote:
>
> Hi,
>
> is there a way to query the used space of an RBD image created with format 2
>
Public bug reported:
Using the image_backend RBD and booting a non-raw image results in a fallback
to the fetch_to_raw function, whose goal is to download the qcow2 image on the
compute node, convert it to raw, and import it into Ceph.
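For context, a minimal hedged sketch of what that fallback path boils down to
(image names hypothetical, flags from the standard qemu-img/rbd CLIs):
$ qemu-img info myimage.qcow2                          # detect the source format
$ qemu-img convert -O raw myimage.qcow2 myimage.raw    # convert to raw
$ rbd import --image-format 2 myimage.raw rbd/myimage  # import into ceph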
Fetching the image fails with the following errors:
On the nova-
No one?
On Wednesday, April 23, 2014 3:34:47 PM UTC+2, Sébastien Han wrote:
>
> I don't know :/
>
> --
> Regards,
> Sébastien Han.
>
>
> On Wed, Apr 23, 2014 at 11:40 AM, Strahinja Kustudić <
> strahin...@nordeus.eu> wrote:
>
>> I don't un
I don't know :/
--
Regards,
Sébastien Han.
On Wed, Apr 23, 2014 at 11:40 AM, Strahinja Kustudić
wrote:
> I don't understand, it's like it is not recognizing the *hostvars* keyword?
>
>
>
>
> On Wednesday, April 23, 2014 11:21:20 AM UTC+2, Sébastien Han wrote
Arf sorry, I read too fast.
http://pastebin.com/H1Vzmb7w
--
Regards,
Sébastien Han.
On Wed, Apr 23, 2014 at 11:00 AM, Strahinja Kustudić
wrote:
> You didn't understand me, I would like you to replace the whole "command:
> ping -c 1 {{ hostvars[item]["ansible_bond
Thanks for your help, this is what I got:
http://pastebin.com/qfLyS9yi
Cheers.
--
Regards,
Sébastien Han.
On Wed, Apr 23, 2014 at 10:29 AM, Strahinja Kustudić
wrote:
> Could you try, instead of the command module, something like:
>
> debug: msg="{{ hostvars[item]['ansible
Same error :(
On Wednesday, April 23, 2014 12:13:26 AM UTC+2, Strahinja Kustudić wrote:
>
> Try:
>
> {{ hostvars[item]['ansible_bond1.2108']['ipv4']['address'] }}
>
>
> On Tuesday, April 22, 2014 11:02:53 PM UTC+2, Sébastien Han wrote:
>>
No more ideas?
On Friday, April 18, 2014 10:12:31 AM UTC+2, Sébastien Han wrote:
>
> I'm confused, quotes are already there.
>
> command: ping -c 1 {{ hostvars[item]["ansible_bond1.2108"].ipv4.address }}
>
>
> --
> Regards,
> Sébastien Han.
>
>
I'm confused, quotes are already there.
command: ping -c 1 {{ hostvars[item]["ansible_bond1.2108"].ipv4.address }}
--
Regards,
Sébastien Han.
On Fri, Apr 18, 2014 at 12:13 AM, Michael DeHaan wrote:
> "ansible_bond1.2108"
>
> This part needs quotes around it.
d: [ceph0010] => (item=ceph0060) => {"changed": true, "cmd": ["ping",
"-c", "1", "{{hostvars[item][ansible_bond1.2108].ipv4.address}}"], "delta":
"0:00:00.005203", "end": "2014-04-17 17:59:51.5488
"{{hostvars.{{item}}.ansible_hostname}}"}
Any idea? Furthermore, the final goal is to collect the IP address of the
following interface: ansible_bond1.2108. Not sure if it's reachable given
this: https://github.com/ansible/ansible/issues/6879
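As a quick sanity check (hedged; the host name is taken from the output above),
the fact can be inspected directly with the setup module and a glob filter:
$ ansible ceph0060 -m setup -a 'filter=ansible_bond1*'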
Thanks for your help.
--
Regards,
Sébast
Up?
On Monday, April 14, 2014 5:59:15 PM UTC+2, Sébastien Han wrote:
>
> Hello,
>
> I'm trying to loop over a set of hosts, get their IP and then append the
> result to a file.
>
> Currently the action looks like this:
>
> - name: build rings
> command:
Hello,
I'm trying to loop over a set of hosts, get their IP and then append the
result to a file.
Currently the action looks like this:
- name: build rings
command: swift-ring-builder {{ item.service }}.builder add z1-{{
hostvars[inventory_hostname]["ansible_bond1.2108"].ipv4.address }}:{{
i
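The command is cut off above; for reference, a hedged sketch of the full
swift-ring-builder add syntax the task is assembling (zone, IP, port, device
and weight values hypothetical):
$ swift-ring-builder object.builder add z1-10.0.0.1:6000/sdb1 100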
to be accessed directly, though this would be a
> backwards incompatible change for those that had dots in interface names --
> it's easier than needing to know the above.
>
>
>
>
>
> On Mon, Apr 7, 2014 at 7:59 AM, Sébastien Han
>
> > wrote:
>
Hi all,
This seems to only happen with a vlan on top of a bond device.
Fact gathering picks up the NIC properly:
pouet1 | success >> {
"ansible_facts": {
"ansible_bond1.2108": {
"active": true,
"device": "bond1.2108",
"ipv4": {
"address":
Hi Dan,
During the Ceph Days you mentioned that you were about to redistribute all the
changes you’ve made on puppet-ceph to the enovance repo.
It would be great to merge both before starting anything.
What’s the progress?
Thanks :)
Sébastien Han
Cloud Engineer
"Always give 100%. U
Hi,
I was wondering, why did you use CephFS instead of RBD?
RBD is much more reliable and better integrated with QEMU/KVM.
Or perhaps you just wanted to try CephFS?
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail:
Hi,
Did you create a key for your user ceph?
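If not, a minimal hedged example of creating one with cephx (user name,
capabilities and pool hypothetical):
$ ceph auth get-or-create client.ceph mon 'allow r' osd 'allow rwx pool=rbd' \
    -o /etc/ceph/ceph.client.ceph.keyring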
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web : www.enovance.com - Twitter : @enovance
On
/master/manifests/osd.pp#L73
For the rest, this might already be done by your puppet manifests.
Please also note that http://ceph.com/docs/next/rbd/rbd-openstack/ will need
some updates for OpenStack Havana.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving bloo
I’m in too :)
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web : www.enovance.com - Twitter : @enovance
On September 25, 2013 at 12:58:23
Thanks Joao.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web : www.enovance.com - Twitter : @enovance
On September 24, 2013 at 3:07:5
Hi Dan,
Yes I noticed :), no we haven’t done anything on that side yet.
But I’ll be happy to see this happening, that way better than a file of the fs.
Would be nice if you could push something then :)
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're gi
Hi guys,
Can anyone point me to the piece of code called during the "ceph osd map
'pool' 'object'" command, please?
Thanks!
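For reference, the command in question, with a hedged pointer: the mapping is
computed by CRUSH via the OSDMap code, roughly src/osd/OSDMap.cc
(object_locator_to_pg and friends) and src/crush/mapper.c:
$ ceph osd map rbd my-object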
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address :
Ideally a partition using the first sectors of the disk.
I usually make a tiny partition at the beginning of the device and leave the rest
for osd_data.
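A minimal hedged sketch with parted (device and sizes hypothetical):
$ parted -s /dev/sdb mklabel gpt
$ parted -s /dev/sdb mkpart journal 1MiB 10GiB     # small journal partition first
$ parted -s /dev/sdb mkpart osd_data 10GiB 100%    # the rest for osd_data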
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@en
Dave,
Just to be sure, did the log max recent=1 _completely_ stop the
memory leak or did it slow it down?
Thanks!
--
Regards,
Sébastien Han.
On Wed, Mar 13, 2013 at 2:12 PM, Dave Spano wrote:
> Lol. I'm totally fine with that. My glance images pool isn't used too often.
&
[truncated memory-usage samples from 01:25:02 to 01:30:01]
Sorry for the long post.
--
Regards,
Sébastien Han.
On Wed, Mar 13, 2013 at 11:44 AM, Stefan Priebe - Profihost AG
wrote:
> Hi,
>
> are there any known ceph-mon memory leaks in bobtail? Today i've seen a
> c
It's pretty straightforward, but you can 'simply' delete the pool :)
(since it should be a test pool ;)).
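For reference, a hedged example (pool name hypothetical; recent releases also
require the name twice plus a confirmation flag):
$ ceph osd pool delete testpool testpool --yes-i-really-really-mean-it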
--
Regards,
Sébastien Han.
On Tue, Mar 12, 2013 at 10:11 PM, Scott Kinder wrote:
> A follow-up question. How do I cleanup the written data, after I finish up
> with my
Well, to avoid unnecessary data movement, there is also an
_experimental_ feature to change the number of PGs in a pool on the fly:
ceph osd pool set <pool> pg_num <num> --allow-experimental-feature
Cheers!
--
Regards,
Sébastien Han.
On Tue, Mar 12, 2013 at 7:09 PM, Dave Spano wrote:
> Disregard my previ
ready said that pg_num was 450...
--
Regards,
Sébastien Han.
On Tue, Mar 12, 2013 at 2:00 PM, Vladislav Gorbunov wrote:
> Sorry, I meant pg_num and pgp_num on all pools. Shown by the "ceph osd
> dump | grep 'rep size'"
> The default pg_num value 8 is NOT suitable for
Replica count has been set to 2.
Why?
--
Regards,
Sébastien Han.
On Tue, Mar 12, 2013 at 12:45 PM, Vladislav Gorbunov wrote:
>> FYI I'm using 450 pgs for my pools.
> Please, can you show the number of object replicas?
>
> ceph osd dump | grep 'rep size'
>
>
Dave,
It's still a production platform so no, I didn't try it. I've also found
that the ceph-mons are now constantly leaking... I truly hope your log max
recent = 1 will help.
Cheers.
--
Regards,
Sébastien Han.
On Mon, Mar 11, 2013 at 7:43 PM, Dave Spano wrote:
> Sebastien,
>
>
FYI I'm using 450 pgs for my pools.
--
Regards,
Sébastien Han.
On Fri, Mar 1, 2013 at 8:10 PM, Sage Weil wrote:
>
> On Fri, 1 Mar 2013, Wido den Hollander wrote:
> > On 02/23/2013 01:44 AM, Sage Weil wrote:
> > > On Fri, 22 Feb 2013, Sébastien Han wrote:
Ok thanks guys. Hope we will find something :-).
--
Regards,
Sébastien Han.
On Mon, Feb 25, 2013 at 8:51 AM, Wido den Hollander wrote:
> On 02/25/2013 01:21 AM, Sage Weil wrote:
>>
>> On Mon, 25 Feb 2013, Sébastien Han wrote:
>>>
>>> Hi Sage,
>>>
>
Hi Sage,
Sorry, it's a production system, so I can't test it.
So in the end, you can't get anything out of the core dump?
--
Regards,
Sébastien Han.
On Sat, Feb 23, 2013 at 1:44 AM, Sage Weil wrote:
> On Fri, 22 Feb 2013, Sébastien Han wrote:
>> Hi all,
>>
>>
Hi,
I would love to have such discussion. I think it's a good initiative.
Cheers.
--
Regards,
Sébastien Han.
On Sat, Feb 23, 2013 at 11:33 AM, Loic Dachary wrote:
> Hi,
>
> In anticipation of the next OpenStack summit
> http://www.openstack.org/summit/portland-2013/, I prop
Hi all,
I finally got a core dump.
I did it with a kill -SEGV on the OSD process.
https://www.dropbox.com/s/ahv6hm0ipnak5rf/core-ceph-osd-11-0-0-20100-1361539008
Hope we will get something out of it :-).
--
Regards,
Sébastien Han.
On Fri, Jan 11, 2013 at 7:13 PM, Gregory Farnum wrote:
>
Regards,
Sébastien Han.
On Mon, Feb 18, 2013 at 3:20 PM, Sławomir Skowron wrote:
> Hi, sorry for the very late response, but I was sick.
>
> Our case is to make a failover rbd instance in another cluster. We are
> storing block device images, for some services like Database. We need
> to have
w.drbd.org/users-guide-8.3/s-resolve-split-brain.html
Cheers
--
Regards,
Sébastien Han.
On Tue, Feb 19, 2013 at 2:38 AM, Samuel Winchenbach wrote:
> Hi All,
>
> I recently switched from CentOS 6.3 to Ubuntu LTS server and have started
> encountering some really odd problems w
+1
--
Regards,
Sébastien Han.
On Sat, Feb 16, 2013 at 10:09 AM, Wido den Hollander wrote:
> On 02/16/2013 08:09 AM, Andrey Korolyov wrote:
>>
>> Can anyone who hit this bug please confirm that your system contains libc
>> 2.15+?
>>
>
> I've seen this with
Well, if you follow my article you will get LVS-NAT running. It's fairly
easy, no funky stuff. Yes, you will probably need the POSTROUTING rule, as
usual :). Let me know how it goes ;)
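Something like this, hedged (subnet and outgoing interface hypothetical):
$ iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE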
--
Regards,
Sébastien Han.
On Fri, Feb 15, 2013 at 8:51 PM, Samuel Winchenbach wrote:
> I
> didn
But if you are in a hurry and looking for a DFS, then
GlusterFS seems to be a good candidate. NFS works pretty well too.
Cheers.
--
Regards,
Sébastien Han.
On Fri, Feb 15, 2013 at 4:49 PM, JuanFra Rodriguez Cardoso <
juanfra.rodriguez.card...@gmail.com> wrote:
> Another one:
Ok, but why direct routing instead of NAT? If the public IPs are _only_
on LVS there is no point in using LVS-DR.
LVS has the public IPs and redirects to the private IPs; this _must_ work.
Did you try NAT? Or at least can you give it a shot?
--
Regards,
Sébastien Han.
On Fri, Feb 15, 2013 at 3:55
Hum, I don't see the problem; it's possible to load-balance VIPs with LVS,
they are just IPs... Can I see your conf?
--
Regards,
Sébastien Han.
On Thu, Feb 14, 2013 at 8:34 PM, Samuel Winchenbach wrote:
> W
> ell, I think I will have to go with one ip per service and for
y create a resource group with all the openstack services inside it
(it's ugly, but if that's what you want :)). Give me more info about your
setup and we can go further in the discussion :).
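A hedged crm shell sketch of such a group (resource names hypothetical):
$ crm configure group g_openstack p_vip p_keystone p_glance-api p_nova-api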
--
Regards,
Sébastien Han.
On Thu, Feb 14, 2013 at 3:15 PM, Samuel Winchenbach wrote:
> T
> he on
id/journal but change it for something like osd journal =
/srv/ceph/journals/osd$id/journal.
Cheers.
--
Regards,
Sébastien Han.
On Thu, Feb 14, 2013 at 2:52 PM, Joao Eduardo Luis
wrote:
> Including ceph-users, as it feels like this belongs there :-)
>
>
>
> On 02/14/2013 01:47 PM,
If you need more input, have a look at the documentation ;-)
http://ceph.com/docs/master/rados/operations/crush-map/?highlight=crush#adjust-an-osd-s-crush-weight
Cheers,
--
Regards,
Sébastien Han.
On Wed, Feb 13, 2013 at 4:23 PM, sheng qiu wrote:
> Hi Gregory,
>
> once running ceph onl
What's the problem with having one IP per service pool?
--
Regards,
Sébastien Han.
On Wed, Feb 13, 2013 at 8:45 PM, Samuel Winchenbach wrote:
> What if the VIP is created on a different host than keystone is started
> on? It seems like you either need to set net.ipv4.ip_nonloc
oh nice, the pattern also matches path :D, didn't know that
thanks Greg
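For the archives, a hedged example of redirecting cores somewhere roomier
(destination path hypothetical; %e/%p/%t are the standard core_pattern
specifiers for executable name, pid and timestamp):
# echo '/srv/cores/core.%e.%p.%t' > /proc/sys/kernel/core_pattern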
--
Regards,
Sébastien Han.
On Mon, Feb 4, 2013 at 10:22 PM, Gregory Farnum wrote:
> Set your /proc/sys/kernel/core_pattern file. :)
> http://linux.die.net/man/5/core
> -Greg
>
> On Mon, Feb 4, 2013 a
ok I finally managed to get something on my test cluster;
unfortunately, the dump goes to /.
Any idea how to change the destination path?
My production / won't be big enough...
--
Regards,
Sébastien Han.
On Mon, Feb 4, 2013 at 10:03 PM, Dan Mick wrote:
> ...and/or do you have the core
Run nova network-list, then look for the ID and add the following to your
boot command:
nova boot bla bla bla --nic net-id=
Let me know if it's better.
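A fuller hedged example (image, flavor and instance names hypothetical; the
net id comes from the network-list output):
$ nova boot --image cirros --flavor m1.tiny --nic net-id=<id-from-list> myvm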
Cheers.
--
Regards,
Sébastien Han.
On Mon, Feb 4, 2013 at 6:24 PM, JR wrote:
> Greetings,
>
> I'm running a devstack test
Hum, ok, now I wonder whether you created a network or not?
# nova-manage network list
?
--
Regards,
Sébastien Han.
On Mon, Feb 4, 2013 at 7:09 PM, JR wrote:
> Hi Sébastien
>
> Problem is, I can't run nova network-list either!
>
> stack@gpfs6-int:~$ nova network-list
> ERR
Hum, I just tried several times on my test cluster and I can't get any
core dump. Does Ceph commit suicide or something? Is this expected
behavior?
--
Regards,
Sébastien Han.
On Sun, Feb 3, 2013 at 10:03 PM, Sébastien Han wrote:
> Hi Loïc,
>
> Thanks for bringing our discussion on the
Hi Loïc,
Thanks for bringing our discussion to the ML. I'll check that tomorrow :-).
Cheers.
--
Regards,
Sébastien Han.
On Sun, Feb 3, 2013 at 10:01 PM, Sébastien Han wrote:
> Hi Loïc,
>
> Thanks for bringing our discussion to the ML. I'll check that tomorrow :-).
>
>
compute. With boot from volume it's one RBD
per instance, which brings way more IOPS to your instance. With
boot from volume you can also enjoy the RBD cache on the client side,
a cache that also helps with buffered IO.
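A hedged sketch of the boot-from-volume flow (IDs, sizes and names
hypothetical):
$ cinder create --image-id <glance-image-id> --display-name vol0 20
$ nova boot --flavor m1.small --block-device-mapping vda=<volume-id>:::0 myvm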
Cheers!
--
Regards,
Sébastien Han.
On Thu, Jan 31, 2013 at 7:
gards,
Sébastien Han.
On Thu, Jan 31, 2013 at 7:40 AM, Wolfgang Hennerbichler
wrote:
> Hi,
>
> I'm sorry if this has been asked before. My question is: can I integrate ceph
> into openstack's nova & cinder in a way, that I don't need
> /var/lib/nova/instances anymo
Just added some stuff about RBD where E refers to Essex.
--
Regards,
Sébastien Han.
On Thu, Jan 31, 2013 at 11:20 AM, Avishay Traeger wrote:
> openstack-bounces+avishay=il.ibm@lists.launchpad.net wrote on
> 01/31/2013 12:37:07 AM:
>> From: Tom Fifield
>> To: openstack@l
+ RBD (Ceph)
+1 for the matrix, this will be really nice :-)
--
Regards,
Sébastien Han.
On Wed, Jan 30, 2013 at 5:04 PM, Tim Bell wrote:
>
>
> Is there a list of devices which are currently compatible with cinder and
> their relative functionality ?
>
>
>
> Looking
Hi,
Could you provide those heaps? Is that possible?
--
Regards,
Sébastien Han.
On Tue, Jan 22, 2013 at 10:38 PM, Sébastien Han wrote:
> Well ideally you want to run the profiler during the scrubbing process
> when the memory leaks appear :-).
> --
> Regards,
> Sébastien Han.
>
Well ideally you want to run the profiler during the scrubbing process
when the memory leaks appear :-).
--
Regards,
Sébastien Han.
On Tue, Jan 22, 2013 at 10:32 PM, Sylvain Munaut
wrote:
> Hi,
>
>> I don't really want to try the mem profiler, I had quite a bad
>> exper
So you prefer to be asked for a password instead of logging in passwordless?
As suggested, edit the base image and create a password for the user :)
--
Regards,
Sébastien Han.
On Tue, Jan 22, 2013 at 6:08 PM, Balamurugan V G
wrote:
> My ssh debug logs are below:
>
> $ ssh -vvv roo
er I can't reproduce the problem on my test environment... :(
--
Regards,
Sébastien Han.
On Tue, Jan 22, 2013 at 9:01 PM, Sylvain Munaut
wrote:
> Hi,
>
> Since I have ceph in prod, I experienced a memory leak in the OSD
> forcing to restart them every 5 or 6 days. Without that t
Hi guys,
See you at FOSDEM ;-)
Cheers,
--
Regards,
Sébastien Han.
On Sun, Jan 20, 2013 at 6:13 PM, Constantinos Venetsanopoulos
wrote:
> Hello Loic, Sebastien, Patrick,
>
> that's great news! I'm sure we'll have some very interesting stuff to talk
> about.
>
Cool, I look forward to reading it!
--
Regards,
Sébastien Han.
On Thu, Jan 17, 2013 at 5:29 PM, Mark Nelson wrote:
> On 01/17/2013 10:24 AM, Sébastien Han wrote:
>>
>> Hi Stephan,
>>
>>> - Increase the osdmax value
>>
>>
>> Well actually this doe
f the eventual sync. In practice, disabling ‘filestore
flusher’ seems to improve performance in some cases."
Cheers,
--
Regards,
Sébastien Han.
On Wed, Jan 16, 2013 at 7:41 PM, Stefan Priebe wrote:
>
> Hello Sebastien,
> hello list,
>
> first nice article sebastien ;-)
Thanks Sage!
--
Regards,
Sébastien Han.
On Wed, Jan 16, 2013 at 5:39 PM, Sage Weil wrote:
> On Wed, 16 Jan 2013, Sébastien Han wrote:
>> Can we use this doc as a reference for the upgrade?
>>
>> https://github.com/ceph/ceph/blob/eb02eaede53c03579d015ca00a888a48dbab739a/
Can we use this doc as a reference for the upgrade?
https://github.com/ceph/ceph/blob/eb02eaede53c03579d015ca00a888a48dbab739a/doc/install/upgrading-ceph.rst
Thanks.
--
Regards,
Sébastien Han.
On Tue, Jan 15, 2013 at 10:49 PM, Sage Weil wrote:
> That there are some critical bugs that
ption was already high before the profiler was
started. So yes, with the memory profiler enabled an OSD might consume
more memory, but this doesn't cause the memory leaks.
Any ideas? Nothing to say about my scrubbing theory?
Thanks!
--
Regards,
Sébastien Han.
On Thu, Jan 10, 2013 at 10:44 PM,
If an admin user makes it public, this is also possible.
--
Regards,
Sébastien Han.
On Fri, Jan 11, 2013 at 3:40 AM, Lei Zhang wrote:
> why not try boot from snapshot. That's will save some time.
>
>
> On Thu, Jan 10, 2013 at 5:18 AM, Sébastien Han
> wrote:
>>
>>
Cool!
--
Regards,
Sébastien Han.
On Thu, Jan 10, 2013 at 11:15 AM, Alex Vitola wrote:
> Changed directly by the database.
>
> Not the best way but I did because it was an environment.
>
> So far I have not found any problems
>
>
> mysql> use nova;
> mysql>
ing the next OSD..."
sleep 60
done
logger -t ceph-memory-usage "Ceph state after memory check operation
is: $(ceph health)"
The crons run at a 10-minute interval every day on each storage node ;-).
Waiting for some Inktank guys now :-).
--
Regards,
Sébastien Han.
On Wed, Jan 9, 2013 a
!
Cheers!
--
Regards,
Sébastien Han.
On Wed, Jan 9, 2013 at 8:14 PM, Alex Vitola wrote:
> I have 2 projects in my environment:
>
> ProjectQA1: ID -> 0001
> ProjectQA2: ID -> 0002
>
> root@Controller:# keystone tenant-list
> [truncated keystone tenant-list output]
Hi,
Thanks for the input.
I also have tons of "socket closed" messages; I recall that this message is
harmless. Anyway, cephx has been disabled on my platform from the beginning...
Anyone to approve or disapprove my "scrub theory"?
--
Regards,
Sébastien Han.
On Wed, Jan 9, 2013 at 7:0
If you wait too long, the system will trigger the OOM killer :D; I already
experienced that, unfortunately...
Sam?
On Wed, Jan 9, 2013 at 5:10 PM, Dave Spano wrote:
> OOM killer
--
Regards,
Sébastien Han.
I guess he runs Argonaut as well.
More suggestions about this problem?
Thanks!
--
Regards,
Sébastien Han.
On Mon, Jan 7, 2013 at 8:09 PM, Samuel Just wrote:
>
> Awesome! What version are you running (ceph-osd -v, include the hash)?
> -Sam
>
> On Mon, Jan 7, 2013 at 11:03
Hi,
Stupid question: did you restart the compute and API services?
I don't have any problems with those flags.
--
Regards,
Sébastien Han.
On Mon, Jan 7, 2013 at 9:58 AM, Robert van Leeuwen <
robert.vanleeu...@spilgames.com> wrote:
> Hi,
>
> I'm trying to get all logg
lmost
the same for all the OSD processes.
Thank you in advance.
--
Regards,
Sébastien Han.
On Wed, Dec 19, 2012 at 10:43 PM, Samuel Just wrote:
>
> Sorry, it's been very busy. The next step would be to try to get a heap
> dump. You can start a heap profile on osd N by:
>
> c
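The quoted instructions are cut off above; for reference, the usual tcmalloc
heap profiler commands, hedged against version differences (older releases
spell it ceph osd tell N heap ...):
$ ceph tell osd.N heap start_profiler
$ ceph tell osd.N heap dump            # writes a heap dump next to the osd logs
$ ceph tell osd.N heap stop_profiler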
Oh ok I see, thanks for the clarification :)
--
Regards,
Sébastien Han.
On Wed, Jan 2, 2013 at 7:11 PM, Sage Weil wrote:
> On Wed, 2 Jan 2013, Sébastien Han wrote:
>> Debian-testing shows the version 0.56-1, maybe I misunderstood but I
>> thought that 0.56-1 bobtail was the new
Debian-testing shows version 0.56-1; maybe I misunderstood, but I
thought that 0.56-1 bobtail was the new version of the stable branch,
so I was expecting to see it here:
http://ceph.com/debian/dists/precise/main/binary-amd64/Packages
Correct me if I'm wrong :)
--
Regards,
Sébastien Han.
rvices that log into
LOG_DAEMON. Using LOCAL0, LOCAL1 and so on for example.
> We can just add a min log level configurable.
Please :-)
New config options like:
* log_facility
* log_level
will be a good starting point I guess.
Thanks!
--
Regards,
Sébastien Han.
On Wed, Jan 2, 2013 at 5:28
ant to see ERROR logs.
Thank you in advance :-)
--
Regards,
Sébastien Han.
It looks the same for the Debian packages; 0.56.1 is on the testing branch.
http://ceph.com/debian-testing/dists/precise/main/binary-amd64/Packages
If someone can fix this, thanks in advance ;-)
--
Regards,
Sébastien Han.
On Wed, Jan 2, 2013 at 3:28 AM, Mark Nelson wrote:
> Doh! Sorry about t
No more suggestions? :(
--
Regards,
Sébastien Han.
On Tue, Dec 18, 2012 at 6:21 PM, Sébastien Han wrote:
> Nothing terrific...
>
> Kernel logs from my clients are full of "libceph: osd4
> 172.20.11.32:6801 socket closed"
>
> I saw this somewhere on the tracker.
Nothing terrific...
Kernel logs from my clients are full of "libceph: osd4
172.20.11.32:6801 socket closed"
I saw this somewhere on the tracker.
Is this harmful?
Thanks.
--
Regards,
Sébastien Han.
On Mon, Dec 17, 2012 at 11:55 PM, Samuel Just wrote:
>
> What is the workl
Thanks Razique,
I still need to edit the official HA doc to give details about this
setup; I don't really have the time this week.
I hope I can find some time before the end of the year.
Cheers!
--
Regards,
Sébastien Han.
On Tue, Dec 18, 2012 at 12:13 AM, Razique Mahroua
wrote:
> Gre
Hi,
No, I don't see anything abnormal in the network stats. I don't see
anything in the logs... :(
The weird thing is that one node out of 4 seems to take way more memory
than the others...
--
Regards,
Sébastien Han.
On Mon, Dec 17, 2012 at 11:31 PM, Sébastien Han wrote:
>
> Hi,
-components-ha/
For the latest article *please use* this repo; this is our new location with
several branches (Essex/Folsom).
https://github.com/madkiss/openstack-resource-agents
--
Regards,
Sébastien Han.
On Mon, Dec 17, 2012 at 9:56 PM, Eugene Kirpichov wrote:
> Right, you only need HA for sw
that I have to provide. So
let me know. The only thing I can say is that the load hasn't
increased that much this week. It seems to be consuming memory and not
giving it back.
Thank you in advance.
--
Regards,
Sébastien Han.
Hi Vish,
The logs don't show more, even after enabling DEBUG logs...
See the debug output below, right before and after the message:
http://pastebin.com/1LCXuaVi
I forgot to mention but it _only_ appears while rolling out a new instance.
Thanks.
--
Regards,
Sébastien Han.
On Sat, D
rity_rules
2012-12-12 23:46:29 TRACE nova.openstack.common.rpc.amqp
self.firewall_driver.refresh_instance_security_rules(instance)
This error seems harmless; as far as I can tell everything works perfectly.
Even so, I'd like to have some input about it (ideally a fix bec
dummy in project B
- delete the volume from project A
If you use Ceph RBD, for example, it's really easy.
For the rest I don't know.
--
Bien cordialement.
Sébastien HAN.
On Thu, Nov 29, 2012 at 9:55 AM, Lei Zhang wrote:
> Hi Sébastien,
>
> Good ideas. There is a very tri
Hi,
What I will do to achieve what you want:
_ take a snapshot of your instance
_ export the snapshot from wherever it's stored (the filesystem, for instance)
_ import it to Glance, make the image public or assign it to the tenant
(not 100% sure the latter is possible though...)
_ run a new vm w
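The list is cut off above; a hedged sketch of the flow (names and formats
hypothetical, assuming a glanceclient with image-download; copying the
snapshot straight out of the image store works too):
$ nova image-create myvm my-snap                      # snapshot the instance
$ glance image-download my-snap --file my-snap.img    # export it
$ glance image-create --name my-snap-shared --is-public True \
    --disk-format qcow2 --container-format bare --file my-snap.img
$ nova boot --image my-snap-shared --flavor m1.small newvm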
t 2:16 PM, Vishvananda Ishaya
> wrote:
>
> >
> > On Nov 28, 2012, at 2:08 PM, Sébastien Han
> wrote:
> >
> >> Hi,
> >>
> >> Just tried this, it works but I'd also like to rename
> /var/lib/nova/instances/ according to the hostname. At th
Hi,
Just tried this. It works, but I'd also like to rename
/var/lib/nova/instances/ according to the hostname. At the moment this only
renames (output from nova show):
| OS-EXT-SRV-ATTR:instance_name | mon-nom
Is it possible?
Cheers!
On Wed, Nov 28, 2012 at 7:31 PM, John Garbutt wrote:
>
On 22.11.2012 11:49, Sébastien Han wrote:
>
>> @Alexandre: cool!
>>
>> @ Stefan: Full SSD cluster and 10G switches?
>
> Yes
>
>
>> Couple of weeks ago I saw
>> that you use journal aio; did you notice a performance improvement with it?
>
> journal
>>But who cares? it's also on the 2nd node. or even on the 3rd if you have
>>replicas 3.
Yes, but you could also suffer a crash while writing the first replica.
If the journal is in tmpfs, there is nothing to replay.
On Thu, Nov 22, 2012 at 4:35 PM, Alexandre DERUMIER wrote:
>
> >>But who cares
kaged in bobtail) and also see if this behavior happens
>> with cephfs. It's still too early in the morning for me right now to
>> come up with a reasonable explanation for what's going on. It might be
>> worth running blktrace and seekwatcher to see what the
Hum sorry, you're right. Forget about what I said :)
On Thu, Nov 22, 2012 at 4:54 PM, Stefan Priebe - Profihost AG
wrote:
> I thought the client would then write to the 2nd; is this wrong?
>
> Stefan
>
> On 22.11.2012 16:49, Sébastien Han wrote:
>
>>>>
ity than my small database
> does. Instead, I prefer to perform block migrations rather than live ones
> until cephfs becomes more stable.
>
> Dave Spano
> Optogenics
> Systems Administrator
>
>
> --
> *From: *"Sébastien Han"
>