Hello,
I created a test cluster of 3 OSD hosts (xen1,2,3) based on Ubuntu Xenial,
ceph 10.2.1, using the quick start steps in the docs master branch.
After working through a few problems, mostly caused by inconsistent details
in the docs, I got a stable cluster running with RGW.
Running s3boto tes
r case
> I’d say rgw_dns_name = xen1
>
> And you should be good to go.
>
> JC
>
> On May 31, 2016, at 04:51, hp cre wrote:
>
> Hello,
>
> I created a test cluster of 3 OSD hosts (xen1,2,3) based on Ubuntu Xenial,
> ceph 10.2.1 using the quick start steps in the doc
://host.tld/bucket/. Or change the operation of your
> client to conform to the latter.
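The choice above is between the two S3 addressing styles: path-style URLs (bucket in the path) versus DNS-style, virtual-hosted URLs (bucket as a subdomain, which needs rgw_dns_name plus wildcard DNS). A minimal stdlib-only sketch of the difference; "host.tld" and "mybucket" are placeholders, not names from this thread:

```python
# Illustration of the two S3 addressing styles; names are placeholders.

def path_style_url(endpoint: str, bucket: str, key: str) -> str:
    """Path-style: the bucket is part of the URL path; no DNS setup needed."""
    return f"http://{endpoint}/{bucket}/{key}"

def virtual_hosted_url(endpoint: str, bucket: str, key: str) -> str:
    """Virtual-hosted style: the bucket is a subdomain; requires rgw_dns_name
    plus wildcard DNS so <bucket>.<endpoint> resolves to the gateway."""
    return f"http://{bucket}.{endpoint}/{key}"

print(path_style_url("host.tld", "mybucket", "obj"))      # http://host.tld/mybucket/obj
print(virtual_hosted_url("host.tld", "mybucket", "obj"))  # http://mybucket.host.tld/obj
```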
>
> There are also a few config parameters that will need to be added to
> ceph.conf to allow dns bucket resolution.
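For reference, a hedged sketch of what that ceph.conf section might look like, using the rgw_dns_name value JC suggested; the `[client.rgw.xen1]` instance name is an assumption, not something stated in the thread:

```
[client.rgw.xen1]
# Hostname that DNS-style bucket requests resolve against
# (requires matching wildcard DNS for *.xen1).
rgw_dns_name = xen1
```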
>
> Austin
>
> Sent from my iPhone
>
> On May 31, 2016, at 5:51 AM
address of the node where the RGW runs
>
> JC
>
> On May 31, 2016, at 13:46, hp cre wrote:
>
> Thanks Jean-Charles. I tried adding this to ceph.conf but it did not make
> a difference.
>
> On 31 May 2016 at 17:11, LOPEZ Jean-Charles wrote:
>
>>
Hello,
I have been trying to deploy bluestore OSDs in a test cluster of 2x OSDs
and 3x mon (xen1,2,3) on Ubuntu Xenial and Jewel 10.2.1.
Activating the OSDs gives an error in systemd, as follows. The culprit is
the command "ceph-osd --get-device-fsid", which fails to retrieve the fsid.
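The fsid that command should print is a plain UUID. A hypothetical helper (not ceph code) for distinguishing a valid fsid from the error output the unit saw:

```python
import uuid

def parse_fsid(output: str):
    """Return the fsid if `output` (e.g. captured from
    `ceph-osd --get-device-fsid`) is a single valid UUID, else None.
    Illustrative helper only, not part of ceph."""
    text = output.strip()
    try:
        return str(uuid.UUID(text))
    except ValueError:
        return None

print(parse_fsid("4f8c3b1e-0d2a-4c5b-9e7f-1a2b3c4d5e6f"))  # echoes the UUID
print(parse_fsid("failed to read fsid"))                    # None
```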
===
Hello all, I seem to have a problem with the ceph version available at
ports.ubuntu.com in the armhf branch.
The latest available version is now infernalis 9.2; however, whenever I
try to update my system, I still get the hammer version (0.94.5).
I've been checking every day, and it seems the a
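One way to see which repository apt is actually selecting is `apt-cache policy ceph`. If the hammer packages win on priority, a pin can force the newer origin; the fragment below (e.g. in /etc/apt/preferences.d/ceph) is an illustration, and the origin match is an assumption:

```
# Check first:  apt-cache policy ceph
# Hypothetical pin preferring packages from ports.ubuntu.com:
Package: ceph*
Pin: origin ports.ubuntu.com
Pin-Priority: 1001
```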
Hello all,
I was just installing jewel 10.1.0 on Ubuntu Xenial beta 2.
When trying to create a mon, I got an error about failure to find the
command 'initctl', which belongs to upstart.
I tried to install upstart, then got an error 'com.ubuntu...' not found.
Anyway, I thought that with the jewel release it would
reate-initial
Step 3 gave an error in the Python scripts, meaning it could not find the
initctl command. I searched for this command and found out it belongs to
upstart.
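The underlying issue is init-system detection. A minimal, hedged sketch of that kind of check (ceph-deploy's real logic differs); on systemd hosts the directory /run/systemd/system exists, while upstart ships the initctl CLI:

```python
import os
import shutil

def detect_init() -> str:
    """Best-effort init-system detection. Illustrative only,
    not ceph-deploy's actual implementation."""
    if os.path.isdir("/run/systemd/system"):
        return "systemd"      # systemd creates this directory at boot
    if shutil.which("initctl"):
        return "upstart"      # upstart provides the initctl CLI
    return "unknown"

print(detect_init())
```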
On 11 Apr 2016 09:32, "James Page" wrote:
> Hi
>
> On Sun, 10 Apr 2016 at 16:39 hp cre wrote:
>
>> Hello all,
node install running xenial - it's correctly detecting systemd
and using systemctl instead of initctl.
On Mon, 11 Apr 2016 at 10:18 James Page wrote:
> On Mon, 11 Apr 2016 at 10:02 hp cre wrote:
>
>> Hello James,
>>
>> It's a default install of xenial server beta 2
-- Forwarded message --
From: "hp cre"
Date: 11 Apr 2016 15:50
Subject: Re: [ceph-users] Ubuntu xenial and ceph jewel systemd
To: "James Page"
Cc:
Here is exactly what has been done (just started from scratch today):
1- install default xenial beta 2
2
I wanted to try the latest ceph-deploy. That's why I downloaded this version
(31). The latest Ubuntu version is (20).
Today, at the end of the failed attempt, I tried to uninstall this version
and install the one that came with xenial, but whatever I did, it always
defaulted to version 31. Maybe someone
Hey James,
Did you check my steps? What did you do differently that worked for you?
Thanks for sharing..
On 11 Apr 2016 22:39, "James Page" wrote:
> On Mon, 11 Apr 2016 at 21:35 hp cre wrote:
>
>> I wanted to try the latest ceph-deploy. That's why I downloaded this
>>
As far as I remember, the documentation did say that either filesystem
(ext4 or xfs) is OK, except that xattrs were better supported on xfs.
I would think the best move would be to make xfs the default for OSD
creation and add a warning that ext4 will be deprecated in future
releases
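For anyone staying on ext4 in the meantime, the Jewel release notes describe shortened object-name limits needed because of ext4's xattr and name-length restrictions. A hedged ceph.conf fragment with the values from those notes (verify against your release before relying on it):

```
[osd]
# Jewel-era guidance for filestore OSDs on ext4; check your
# release notes before applying.
osd max object name len = 256
osd max object namespace len = 64
```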
:19, James Page wrote:
> Hello
>
> On Mon, 11 Apr 2016 at 22:08 hp cre wrote:
>
>> Hey James,
>> Did you check my steps? What did you do differently that worked for you?
>>
>
> Your steps look OK to me; I did pretty much the same, but with three nodes
> instead
To answer Oliver's question, a standard Ubuntu server install is just the
server ISO installed in automated mode; no software-specific options are
chosen except in tasksel, where I choose OpenSSH server only.
On 13 April 2016 at 10:49, hp cre wrote:
> I'll give it one mo
version.
>
> I'll request that a new release is made - in the meantime I'll pick both
> of those commits into the Xenial ceph-deploy package.
>
> On Wed, 13 Apr 2016 at 09:53 hp cre wrote:
>
>> To answer Oliver's question, a standard ubuntu server install is ju
Great! Please let me know when the patched version is up.
Thanks for your help, much appreciated.
On 13 April 2016 at 11:19, James Page wrote:
> On Wed, 13 Apr 2016 at 10:09 hp cre wrote:
>
>> I am using version .31 from either the ceph repo or the one you updated
>> in the
fyi, i tested ceph-deploy master branch and it worked fine.
On 13 April 2016 at 11:28, hp cre wrote:
> Great! Please let me know when the patched version is up.
>
> Thanks for your help, much appreciated.
>
> On 13 April 2016 at 11:19, James Page wrote:
>
>> On Wed, 13
Hello all, I'm currently studying the possibility of creating a small ceph
cluster on ARM nodes.
The reasonably priced boards I found (like the Banana Pi/Pro, Orange
Pi/Pro/H3, etc.) mostly have either dual-core or quad-core Allwinner chips
and 1 GB RAM. They also use a micro SD card for the OS and a s
er sata
or USB. I'm buying those Chinese 16 GB SSD disks; they are good for I/O
but not for write speed.
If you could provide any more info, then it would be great.
Thanks!
On 15 Feb 2015 17:48, "hp cre" wrote:
> Hello all, I'm currently studying the possibility of creating a
Yann, thanks for the info. It's been a great help.
On 23 Mar 2015 14:44, "Yann Dupont - Veille Techno" <
veilletechno-i...@univ-nantes.fr> wrote:
> On 22/03/2015 at 22:44, hp cre wrote:
>
>>
>> Hello Yann,
>>
>> Thanks for your reply. Unfortunately,
Hello everyone,
I'm new to ceph but have been working with proprietary clustered
filesystems for quite some time.
I almost understand how ceph works, but I have a couple of questions that
have been asked here before, though I didn't understand the answers.
In the closed source world, we use clustered fi
On 18 Nov 2014 23:33, "Gregory Farnum" wrote:
> On Tue, Nov 18, 2014 at 1:26 PM, hp cre wrote:
> > Hello everyone,
> >
> > I'm new to ceph but been working with proprietary clustered filesystem
> for
> > quite some time.
> >
> > I almost unders
RBD is in and of itself a separate block device, not a file
> system). I would imagine OpenStack works in a similar fashion.
>
> ----------
> *From: *"hp cre"
> *To: *"Gregory Farnum"
> *Cc: *ceph-users@lists.ceph.com
> *Sent: *Tuesday, November