[ceph-users] Network testing tool.

2016-09-13 Thread Owen Synge
Dear all,

Issues often arise with badly configured network switches, VLANs, and
the like. A node that cannot route to its peers is a major deployment
failure and can be difficult to diagnose.

The brief looks like this:

Description:

  * Diagnose network issues quickly for ceph.
  * Identify network issues before deploying ceph.

A typical deployment will have 2 networks and potentially 3.

  * External network for client access.
  * Internal network for data replication. (Strongly recommended)
  * Administration network. (Optional)

Typically we will have Salt available on all nodes, but it would be
the same for Ansible or any other config management solution.

  * This will make injection of IP addresses and hosts trivial.
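
For example (a rough sketch only; the target address below is made
up), Salt could fan a quick reachability check out from every node:

  salt '*' cmd.run 'ping -c 1 -W 2 192.168.100.10'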

Before I go any further with developing a solution, does anyone know
of a pre-made tool that will save me writing much code, or ideally any
code at all?

So far I have only found tools for testing connectivity between one
point and another, not for testing 1:N or N:N.

If I must write such a tool myself, I imagine the roadmap would start
with just ping and then expand from there with commands such as iperf,
port range tests, etc.
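
As an illustration of that first stage (a sketch only; the peers file
path is hypothetical), each node could run something like the
following, with Salt or Ansible fanning it out to every node to build
up the N:N picture:

  #!/bin/sh
  # Ping every peer listed in a hosts file and report failures.
  for host in $(cat /etc/ceph-net-test/peers); do
      if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
          echo "OK   $host"
      else
          echo "FAIL $host"
      fi
  done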

All suggestions are welcome, particularly tools that save time. A
dependency on Puppet has already ruled out one solution.

Best regards

Owen Synge
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Owen Synge
Hi,

To add to Udo's point,

Do remember that by default journals take ~6 GB.

For this reason I suggest making virtual disks larger than 20 GB for
testing, although that is slightly bigger than absolutely necessary.
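
If you would rather keep the test disks small, a possible workaround
(a sketch only; the values and OSD id are just examples) is to shrink
the journal in ceph.conf before creating the OSDs, and to give the
tiny OSDs an explicit CRUSH weight along the lines of Udo's point
below:

  # in the [osd] section of ceph.conf, journal size is in MB
  osd journal size = 1024

  # give a small test OSD a non-zero CRUSH weight
  ceph osd crush reweight osd.0 0.01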

Best regards

Owen



On 02/10/2015 01:26 PM, Udo Lembke wrote:
> Hi,
> you will get further trouble, because your weight is not correct.
> 
> You need a weight >= 0.01 for each OSD. This means your OSD must be
> 10 GB or greater!
> 
> 
> Udo
> 
> Am 10.02.2015 12:22, schrieb B L:
>> Hi Vickie,
>>
>> My OSD tree looks like this:
>>
>> ceph@ceph-node3:/home/ubuntu$ ceph osd tree
>> # id    weight  type name       up/down reweight
>> -1      0       root default
>> -2      0               host ceph-node1
>> 0       0                       osd.0   up      1
>> 1       0                       osd.1   up      1
>> -3      0               host ceph-node3
>> 2       0                       osd.2   up      1
>> 3       0                       osd.3   up      1
>> -4      0               host ceph-node2
>> 4       0                       osd.4   up      1
>> 5       0                       osd.5   up      1
>>
>>
>>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer,
HRB 21284 (AG Nürnberg)
Maxfeldstraße 5, 90409 Nürnberg, Germany
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph osd replacement with shared journal device

2014-09-29 Thread Owen Synge
Hi Dan,

Looking at upstream, getting journals and partitions named
persistently requires GPT partitions and the ability to set a GPT
partition UUID; with that in place it works with minimal modification.

I am not sure of the status of this on RHEL 6. The latest Fedora and
openSUSE support it, as do SLE 12 (to be released) and, I think,
RHEL 7.

I'm sure you can bypass this, as every data partition contains a
symlink to the journal partition, but persistent naming may be more
work if you don't use GPT partitions.
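
As a rough illustration only (the device, partition and OSD id below
are hypothetical, and this assumes a blkid new enough to report
PARTUUID), re-pointing a replaced OSD at a shared journal might look
like:

  # point the OSD's journal symlink at the shared journal partition
  ln -sf /dev/disk/by-partuuid/$(blkid -o value -s PARTUUID /dev/sdb1) \
      /var/lib/ceph/osd/ceph-12/journal
  # recreate the journal contents for the replaced OSD
  ceph-osd -i 12 --mkjournal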

Best of luck.

Owen





On 09/29/2014 10:24 AM, Dan Van Der Ster wrote:
> Hi,
> 
>> On 29 Sep 2014, at 10:01, Daniel Swarbrick 
>>  wrote:
>>
>> On 26/09/14 17:16, Dan Van Der Ster wrote:
>>> Hi,
>>> Apologies for this trivial question, but what is the correct procedure to 
>>> replace a failed OSD that uses a shared journal device?
>>>
>>> I’m just curious, for such a routine operation, what are most admins doing 
>>> in this case?
>>>
>>
>> I think ceph-osd is what you need.
>>
>> ceph-osd -i  --mkjournal
> 
> 
> At the moment I am indeed using this command in our puppet manifests for 
> creating and replacing OSDs. But now I’m trying to use the ceph-disk udev 
> magic, since it seems to be the best (perhaps only?) way to get persistently 
> named OSD and journal devs (on RHEL6).
> 
> Cheers, Dan
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Issues compiling Ceph (master branch) on Debian Wheezy (armhf)

2014-07-25 Thread Owen Synge
Dear Deven,

Another solution is to compile leveldb and ceph without tcmalloc support :)

Ceph and leveldb work just fine without gperftools, and I have yet to
benchmark how much performance benefit google-perftools' tcmalloc
gives over glibc malloc.
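
A rough sketch of what that looks like with the autotools build (check
./configure --help on your checkout, as this is from memory):

  ./autogen.sh
  ./configure --without-tcmalloc
  make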

Best regards

Owen



On 07/25/2014 04:52 PM, Deven Phillips wrote:
> root@cubie01:~# aptitude search perftools
> p   google-perftools   - command
> line utilities to analyze the performance of C++ programs
> root@cubie01:~# aptitude install google-perftools
> The following NEW packages will be installed:
>   google-perftools{b}
> The following packages are RECOMMENDED but will NOT be installed:
>   graphviz gv
> 0 packages upgraded, 1 newly installed, 0 to remove and 40 not upgraded.
> Need to get 78.3 kB of archives. After unpacking 238 kB will be used.
> The following packages have unmet dependencies:
>  google-perftools : Depends: libgoogle-perftools4 which is a virtual
> package.
> Depends: curl but it is not going to be installed.
> The following actions will resolve these dependencies:
> 
>  Keep the following packages at their current version:
> 1) google-perftools [Not Installed]
> 
> 
> 
> Accept this solution? [Y/n/q/?] n
> 
> *** No more solutions available ***
> 
> 
> 
> On Fri, Jul 25, 2014 at 10:51 AM, zhu qiang 
> wrote:
> 
>> Hi,
>>
>> Maybe you are missing libgoogle-perftools-dev.
>>
>> Try: apt-get install -y libgoogle-perftools-dev
>>
>>
>>
>> best.
>>
>>
>>
>> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
>> Of *Deven Phillips
>> *Sent:* Friday, July 25, 2014 11:55 AM
>> *To:* ceph-users@lists.ceph.com
>> *Subject:* [ceph-users] Issues compiling Ceph (master branch) on Debian
>> Wheezy (armhf)
>>
>>
>>
>> Hi all,
>>
>>
>>
>> I am in the process of installing and setting up Ceph on a group of
>> Allwinner A20 SoC mini computers. They are armhf devices and I have
>> installed Cubian (http://cubian.org/), which is a port of Debian Wheezy.
>> I tried to follow the instructions at:
>>
>>
>>
>> http://ceph.com/docs/master/install/build-ceph/
>>
>>
>>
>> But I found that some needed dependencies were not installed. Below is a
>> list of the items I had to install in order to compile Ceph for these
>> devices:
>>
>> uuid-dev
>> libblkid-dev
>> libudev-dev
>> libatomic-ops-dev
>> libsnappy-dev
>> libleveldb-dev
>> xfslibs-dev
>> libboost-all-dev
>>
>> I also had to specify --without-tcmalloc because I could not find a
>> package which implements that for the armhf platform.
>>
>>
>>
>> I hope this helps others!!
>>
>>
>>
>> Deven Phillips
>>
> 
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com