[ceph-users] inkscope version 1.4

2016-05-31 Thread eric mourgaya
Hi guys,

Inkscope 1.4 is released.
You can find the RPM and Debian packages at
https://github.com/inkscope/inkscope-packaging.
This release adds a monitoring panel based on collectd, as well as some new
features around user login.

Enjoy it!


-- 
Eric Mourgaya,


Respectons la planete!
Luttons contre la mediocrite!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph breizh meetup

2016-04-12 Thread eric mourgaya
Hi,

The next Ceph Breizh meetup will be held in Nantes on April 19th in the
Suravenir building, at 2 Impasse Vasco de Gama, 44800 Saint-Herblain.

Here is the doodle:

http://doodle.com/poll/3mxqqgfkn4ttpfib

See you soon in Nantes.

-- 
Eric Mourgaya,


Respectons la planete!
Luttons contre la mediocrite!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] french meetup website

2015-12-07 Thread eric mourgaya
 Hi,
 I am glad to announce that a new website (in French) is now available. This
website is managed by the Ceph Breizh community. You can find a report of the
last meetup on this page:
ceph breizh (http://ceph.bzh)

 Enjoy it and join us.



-- 
Eric Mourgaya,


Respectons la planete!
Luttons contre la mediocrite!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] next ceph breizh camp

2015-11-17 Thread eric mourgaya
Hi,

The next Ceph Breizh camp will take place on November 26th at the University
of Nantes, starting at 10:00 AM, at:

IGARUN (Institute of Geography), meeting room 991/992, 1st floor,
Chemin de la Censive du Tertre, on the Tertre campus in Nantes.

You can sign up at:
http://doodle.com/poll/vtqum8wyk2dciqtf


Have a good day,
-- 
Eric Mourgaya,


Respectons la planete!
Luttons contre la mediocrite!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph breizh camp

2015-06-08 Thread eric mourgaya
Hey,

The next Ceph Breizh camp will take place in Rennes (Brittany) on June 16th.
The meetup will begin at 10:00 at:

IRISA - Institut de Recherche en Informatique et Systèmes Aléatoires
263 Avenue Général Leclerc
35000 Rennes

building IRISA/Inria 12 F, allée Jean Perrin

https://goo.gl/maps/hokcj

The meetup room will be Markov, and of course chains are welcome.

Your contact will be  Pascal Morillon.

You can sign up at this address:
http://doodle.com/zfx75vn3y2ws9igu

During this day we will have 3 presentations:

- Around the S3 gateway (Ghislain Chevalier)
- Deployment of Ceph in a container (SIB)
- The Crushsim tool (Xavier Villaneau)

 Enjoy your meetup.


-- 
Eric Mourgaya,


Respectons la planete!
Luttons contre la mediocrite!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph breizh meetup

2015-03-02 Thread eric mourgaya
Hi cephers,

The next Ceph Breizh camp is scheduled for March 12th, 2015, in Nantes, more
precisely at Suravenir Assurance, 2 rue Vasco de Gama, Saint-Herblain, France.
It will begin at 10:00 AM.

Join us and fill in the doodle: http://doodle.com/hvb99f2am7qucd5q
-- 
Eric Mourgaya,


Respectons la planete!
Luttons contre la mediocrite!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] inkscope RPMS and DEBS packages

2015-01-21 Thread eric mourgaya
Hi,

 Inkscope, the Ceph administration and monitoring interface, is now packaged.
RPM and DEB packages are available at:
  https://github.com/inkscope/inkscope-packaging

Enjoy it!

-- 
Eric Mourgaya,


Respectons la planete!
Luttons contre la mediocrite!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] look into erasure coding

2014-11-07 Thread eric mourgaya
Hi,

  In an erasure-coded pool, how do we know which OSDs keep the data chunks
and which ones keep the coding chunks?

There was a question about this yesterday on the Ceph IRC channel. According
to http://ceph.com/docs/giant/dev/erasure-coded-pool/, there really is a
difference between the k and m chunks (supposing that m is the number of data
splits and k the number of coding chunks): the data is split into m chunks,
and k coding chunks are generated from those m chunks. So these chunks are
different, right?
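
For what it is worth, one way to check this on a running cluster is sketched
below (the pool name "ecpool" and the object name "obj1" are only examples,
and the shard ordering assumes the default jerasure profile):

# map an object to its placement group and show the acting set of OSDs
ceph osd map ecpool obj1
# the acting set is listed in shard order: the first entries hold the data
# chunks and the remaining ones the coding chunks
ceph pg <pgid> query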

1) First, all the documentation about erasure coding says that the number of
coding chunks is greater than the number of data splits, but looking at
https://wiki.ceph.com/Planning/Blueprints/Dumpling/Erasure_encoding_as_a_storage_backend,
I see that the number of coding chunks is not greater than the number of
splits. What is the goal, reducing the space used?

2) We allow the loss of k chunks among the (n+k) chunks. What happens when
you lose a chunk: does Ceph rebuild it somewhere on another OSD (not on one of
the n+k-1 previous ones)?
3) So is it important to know where the coding chunks are in order to build
failure rules?

4) According to the following rules in the CRUSH map, erasure coding does not
take the failure domain rule into account, right? I.e. the ruleset of the pool
does not matter, right?
Let's take an example: my failure domain is composed of 3 rooms, so usually,
in a pool with size equal to 3, we have a replica in each room. But with the
erasure coding rule we don't have this; is the rule applied only to the chunks
that contain the coding data?

rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type room
    step emit
}

rule erasure-code {
    ruleset 1
    type erasure
    min_size 3
    max_size 20
    step set_chooseleaf_tries 5
    step take default
    step chooseleaf indep 0 type host
    step emit
}


5) What do you think about something like:

rule room_erasure-code {
    ruleset 1
    type erasure
    min_size 3
    max_size 20
    step set_chooseleaf_tries 5
    step take default
    step chooseleaf indep 0 type room
    step emit
}


And an erasure code with m=3 and k=2.
Are these settings available with:

ceph osd erasure-code-profile set failprofile k=2 m=3
ceph osd erasure-code-profile set failprofile ruleset-root=room_erasure-code
 Then I could lose 2 rooms without problem, right?
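
For reference, a more complete sketch of such a profile (the names
"roomprofile" and "ecpool" are only examples; note that in the ceph CLI k is
the number of data chunks and m the number of coding chunks):

# create a profile that places each chunk in a different room
ceph osd erasure-code-profile set roomprofile k=2 m=3 ruleset-failure-domain=room ruleset-root=default
# check the resulting profile
ceph osd erasure-code-profile get roomprofile
# create an erasure-coded pool using that profile
ceph osd pool create ecpool 128 128 erasure roomprofile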

I would like to add a summary of your answers to the documentation; would you
help me with this?

-- 
Eric Mourgaya,


Respectons la planete!
Luttons contre la mediocrite!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] osd flapping ; heartbeat failed

2014-06-23 Thread eric mourgaya
Hi,

My version of Ceph is 0.72.2, on Scientific Linux with the
2.6.32-431.1.2.el6.x86_64 kernel.

After a network problem on all my nodes, the OSDs flap between up and down
periodically. I had to set the nodown flag to stabilize things. I have a
public_network and a cluster_network.


 I have this message on most of the OSDs:

2014-06-23 08:08:59.750879 7f6bd3661700 -1 osd.y 53377 heartbeat_check: no
reply from osd.xxx ever on either front or back, first ping sent
2014-06-22 20:06:10.055264 (cutoff 2014-06-23 08:08:24.750744)


 cluster b71fecc6-0323-4f08-8b49-e8ed1ff2d4ce
  health HEALTH_WARN 1 pgs backfill; 73 pgs down; 196 pgs peering; 196 pgs
         stuck inactive; 197 pgs stuck unclean; recovery 592/2459924 objects
         degraded (0.024%); nodown flag(s) set
  monmap e5: 3 mons at
         {bb-e19-x4=10.257.53.236:6789/0,cephfrontux1-r=10.257.53.241:6789/0,cephfrontux2-r=10.257.53.242:6789/0},
         election epoch 202, quorum 0,1,2 bb-e19-x4,cephtux1-r,cephtux2-r
  osdmap e53377: 34 osds: 33 up, 33 in
         flags nodown
   pgmap v5928500: 5596 pgs, 5 pools, 4755 GB data, 1212 kobjects
         9466 GB used, 17248 GB / 26715 GB avail
         592/2459924 objects degraded (0.024%)
             5398 active+clean
                1 active+remapped+wait_backfill
              123 peering
               73 down+peering
                1 active+clean+scrubbing


 grep check ceph-osd.*.log | awk '{print $5, $7, "problem", $11}' | sort -u


 osd.10 heartbeat_check: problem osd.0

osd.10 heartbeat_check: problem osd.11

osd.10 heartbeat_check: problem osd.19

...

It is the same in most of the OSD logs.

I set the following options, but nothing changed:

[osd]
osd_heartbeat_grace = 35
osd_min_down_reports = 4
osd_heartbeat_addr = 10.157.53.224
mon_osd_down_out_interval = 3000
osd_heartbeat_interval = 12
osd_mkfs_options_xfs = -f
mon_osd_min_down_reporters = 3
osd_mkfs_type = xfs
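
For reference, something like the following can be used to check the values a
running OSD actually loaded and to toggle the nodown flag (osd.10 and the
admin-socket path are only examples):

# ask a running OSD for its current configuration through the admin socket
ceph --admin-daemon /var/run/ceph/ceph-osd.10.asok config show | grep heartbeat
# change a value on running OSDs without restarting them
ceph tell osd.* injectargs '--osd_heartbeat_grace 35'
# set / clear the nodown flag on the cluster
ceph osd set nodown
ceph osd unset nodown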

  Do you have any idea how to fix it?



-- 
Eric Mourgaya,


Respectons la planete!
Luttons contre la mediocrite!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph breizh meetup

2014-06-19 Thread eric mourgaya
Hi,

 The meeting will be in this Arkea building:
https://maps.google.fr/maps?q=32+Rue+Louis+Lichou,+Le+Relecq-Kerhuonhl=frll=48.40603,-4.410431spn=0.012279,0.027874sll=46.884588,1.628424sspn=10.994036,11.865234oq=32,+rue+louit=hhnear=32+Rue+Louis+Lichou,+29480+Le+Relecq-Kerhuonz=16layer=ccbll=48.406849,-4.418578panoid=FDavXLJvZQca6ONTW2jCWgcbp=12,48.46,,0,-8.88
https://www.google.fr/maps/@48.406849,-4.418578,3a,75y,48.46h,98.88t/data=!3m4!1e1!3m2!1sFDavXLJvZQca6ONTW2jCWg!2e0?hl=fr

I put my telephone number in the comment section of the doodle.

 See you on Monday.

-- 
Eric Mourgaya,


Respectons la planete!
Luttons contre la mediocrite!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph meetup brest

2014-06-16 Thread eric mourgaya
Hi,


The next Breizh meetup will take place in Brest (Credit Mutuel Arkea, 1
rue Louis Lichou) on the 23rd of June 2014.
We will try to install Calamari and talk about exporting RBD images to a
third datacenter.
http://doodle.com/5wtnb2aekx294nzz
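
On the RBD export topic, one possible approach we could discuss is sketched
below (the pool/image names, snapshot name and remote host are only examples):

# one-shot copy of an image to the remote cluster over ssh
rbd export rbd/myimage - | ssh remote-site rbd import - rbd/myimage
# incremental replication based on snapshots
rbd snap create rbd/myimage@snap1
rbd export-diff rbd/myimage@snap1 - | ssh remote-site rbd import-diff - rbd/myimage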



-- 
Eric Mourgaya,


Respectons la planete!
Luttons contre la mediocrite!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph online meeting

2014-06-02 Thread eric mourgaya
Hi All,

First, thanks for your welcome messages and your votes.
Note that the next online meeting is on June 6th:
https://wiki.ceph.com/Community/Meetings

Note also the next OpenStack meetup, which will cover storage and Ceph:
http://www.meetup.com/OpenStack-France/events/172756002/

Thanks and Best Regards,

-- 
Eric Mourgaya,


Respectons la planete!
Luttons contre la mediocrite!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [Ceph] Failure in osd creation

2014-02-21 Thread eric mourgaya
 journal
 read_header error decoding journal header
 [r-cephosd302][WARNIN] 2014-01-24 14:59:13.051076 7f4f47f49780 -1
 filestore(/var/lib/ceph/osd/ceph-4) could not find
 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
 [r-cephosd302][WARNIN] 2014-01-24 14:59:13.220053 7f4f47f49780 -1 created
 object store /var/lib/ceph/osd/ceph-4 journal
 /var/lib/ceph/osd/ceph-4/journal for osd.4 fsid
 632d789a-8560-469b-bf6a-8478e12d2cb6
 [r-cephosd302][WARNIN] 2014-01-24 14:59:13.220135 7f4f47f49780 -1 auth:
 error reading file: /var/lib/ceph/osd/ceph-4/keyring: can't open
 /var/lib/ceph/osd/ceph-4/keyring: (2) No such file or directory
 [r-cephosd302][WARNIN] 2014-01-24 14:59:13.220572 7f4f47f49780 -1 created
 new key in keyring /var/lib/ceph/osd/ceph-4/keyring
 [r-cephosd302][WARNIN] added key for osd.4
 root@r-cephrgw01:/etc/ceph# ceph -s
 cluster 632d789a-8560-469b-bf6a-8478e12d2cb6
  health HEALTH_OK
  monmap e3: 3 mons at {r-cephosd101=10.194.182.41:6789/0,r-cephosd102=10.194.182.42:6789/0,r-cephosd103=10.194.182.43:6789/0},
         election epoch 6, quorum 0,1,2 r-cephosd101,r-cephosd102,r-cephosd103
  osdmap e37: 5 osds: 5 up, 5 in
   pgmap v240: 192 pgs, 3 pools, 0 bytes data, 0 objects
         139 MB used, 4146 GB / 4146 GB avail
         192 active+clean

 root@r-cephrgw01:/etc/ceph# ceph osd tree
 # id   weight  type name               up/down reweight
 -1     6.77    root default
 -2     0.45        host r-cephosd101
 0      0.45            osd.0           up      1
 -3     0.45        host r-cephosd102
 1      0.45            osd.1           up      1
 -4     0.45        host r-cephosd103
 2      0.45            osd.2           up      1
 -5     2.71        host r-cephosd301
 3      2.71            osd.3           up      1
 -6     2.71        host r-cephosd302
 4      2.71            osd.4           up      1

 Now the new OSD is up.

 I don't understand where the problem is...

 Why isn't the osd journal size in the osd.# section taken into account?
 Why does ceph try to recreate osd.0?
 Why does ceph-deploy indicate that the osd is ready for use?
 Why doesn't ceph-deploy create all the files?
 Why is the bootstrap-osd not correct?
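
 For context, a minimal ceph.conf sketch of the setting the journal-size
 question refers to (the values and the osd id are examples only):

 [osd]
 osd journal size = 10240        # global default, in MB

 [osd.4]
 osd journal size = 20480        # per-daemon value that seems to be ignored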

 Thanks


 - - - - - - - - - - - - - - - - -
 Ghislain Chevalier
 ORANGE LABS FRANCE
 Storage Service Architect
 +33299124432
 ghislain.cheval...@orange.com







-- 
Eric Mourgaya,


Respectons la planete!
Luttons contre la mediocrite!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] french ceph meetup

2013-12-02 Thread eric mourgaya
Hi,

The first unofficial French Ceph user meetup took place in Nantes on Friday
the 29th, 2013. There were 8 participants.

Yann from the University of Nantes presented his Ceph use case. During his
presentation we talked about hardware choices, about using SSDs, and about the
problems he has experienced in his infrastructure. We also talked about his
choice of CephFS and RBD to access his Ceph cluster. With Q&A, this occupied
our morning.

In the afternoon, we talked about the monitoring and administration of Ceph.
Alain from Orange Labs showed us a first version of a web interface based on
ceph-rest-api to create Ceph objects, with d3js views such as 'osd tree' and
'osd status'; it was great. We discussed what this interface should look like,
and proposed creating a GitHub project to boost the development of this
interface. That should happen soon.

We think it is a pity that we cannot specify the ruleset when creating a pool
with ceph_rest_api:
osd/pool/create?pool=pool(poolname)&pg_num=pg_num(int[0-])&pgp_num={pgp_num(int[0-])}
We will list all the differences between ceph_rest_api and the ceph CLI and
keep the Inktank team informed.
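
For the record, the equivalent with the plain ceph CLI does let us pick the
ruleset right after creating the pool (the pool name and ruleset id are
examples):

# create the pool, then point it at the CRUSH ruleset we want
ceph osd pool create mypool 128 128
ceph osd pool set mypool crush_ruleset 1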

We also had a discussion about how to aggregate the information needed for
monitoring, administration and d3js visualization. Pierre from Affilae
proposed using MongoDB instead of Redis to store all the information, not only
Ceph information but also hardware and system information. We are going to
test this idea. We will propose a solution to collect and analyse information
for Nagios, d3js and the administration interfaces. In conclusion, the
community needs an administration interface to help Ceph get adopted by more
companies. So let's do it!
I enjoyed this meeting and I'm looking forward to the next one.
People who are interested in the next French meetup can send me an e-mail.


Best Regards,
-- 
Eric Mourgaya,


Respectons la planete!
Luttons contre la mediocrite!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com