Just my own experience on this:
I have two MDS servers running (since I run CephFS), and both MDS
servers are defined in the ceph.conf file.
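For context, the relevant ceph.conf sections for two MDS daemons might look roughly like the sketch below (the daemon and host names here are made up, not taken from this thread):

```ini
[mds.alpha]
host = alpha

[mds.beta]
host = beta
```

With both defined, one MDS becomes active and the other shows up as up:standby.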
When I issue a "ceph -s" I see the following:
1/1/1 up {0=alpha=up:active}, 1 up:standby
I have shut one MDS server down (current active
example:
Each server's mount looks like this:
/bin/mount -t ceph -o name=admin,secret=
10.10.10.138,10.10.10.252,10.10.10.103:/ /media/network-storage
All of those addresses point to the monitor servers.
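A variant of the same mount that keeps the key off the command line is the secretfile= option; this is only a sketch (the secret-file path below is an assumption) and obviously needs a live cluster:

```shell
/bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/admin.secret \
    10.10.10.138,10.10.10.252,10.10.10.103:/ /media/network-storage
```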
On Tue, Jan 17, 2017 at 12:27 PM, Alex Evonosky
wrote:
> Yes, they are. I created one volume
>
> On Tue, 2017-01-17 at 12:19 -0500, Alex Evonosky wrote:
> > For what it's worth, I have been using CephFS shared between six
> > servers (all kernel mounted) with no issues. Running three monitors
> > and two metadata servers (one as backup). This has been running great.
> >
For what it's worth, I have been using CephFS shared between six servers
(all kernel mounted) with no issues. Running three monitors and two
metadata servers (one as backup). This has been running great.
On Tue, Jan 17, 2017 at 12:14 PM, Kingsley Tart wrote:
> On Tue, 2017-01-17 at 13:49 +0100, Loris
Since this was a test lab, I totally purged the whole cluster and
re-deployed. Working well now, thank you.
Alex F. Evonosky
<https://twitter.com/alexevon> <https://www.linkedin.com/in/alexevonosky>
On Sat, Jan 7, 2017 at 9:14 PM, Alex Evonosky
wrote:
> Thank you..
>
>
> Look at: Adding Monitors
>
> If you are using CentOS or similar, the latest package is available here:
>
> # http://download.ceph.com/rpm-jewel/el7/noarch/ceph-deploy-1.5.37-0.noarch.rpm
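The "Adding Monitors" procedure referenced above reduces, with ceph-deploy, to something like the sketch below (the hostname "gamma" is a placeholder, and this needs a live cluster):

```shell
ceph-deploy --overwrite-conf mon add gamma
# then confirm the new monitor joined the quorum
ceph quorum_status --format json-pretty
```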
>
> Regards,
>
>
> On Sun, Jan 8, 2017 at 9:53 AM, Alex Evonosky
> wrote:
7 at 6:36 PM, Shinobu Kinjo wrote:
> How did you add a third MON?
>
> Regards,
>
> On Sun, Jan 8, 2017 at 7:01 AM, Alex Evonosky
> wrote:
> > Anyone see this before?
> >
> >
> > 2017-01-07 16:55:11.406047 7f095b379700 0 cephx: verify_reply couldn't
Anyone see this before?
2017-01-07 16:55:11.406047 7f095b379700 0 cephx: verify_reply couldn't
decrypt with error: error decoding block for decryption
2017-01-07 16:55:11.406053 7f095b379700 0 -- 10.10.10.138:6789/0 >>
10.10.10.252:6789/0 pipe(0x55cf8d028000 sd=11 :47548 s=1 pgs=0 cs=0 l=0
c=0x
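In my experience that cephx "couldn't decrypt" message usually means either clock skew between the monitors or monitors holding mismatched mon. keys; a quick look would be something like this sketch (standard keyring path assumed, needs a live monitor host):

```shell
# check time sync on each monitor host
ntpq -p
# compare the mon. keyring across monitor hosts; they must all match
sudo cat /var/lib/ceph/mon/ceph-$(hostname -s)/keyring
```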
Hello group--
I have been running ceph 10.2.3 for a while now without any issues. This
evening my admin node (which is also an OSD and monitor) crashed. I
checked my other OSD servers and the data still seems to be there.
Is there an easy way to bring the admin node back into the cluster? I am
Thank you sir. Ubuntu here as well.
On Fri, Dec 9, 2016 at 12:54 PM, Francois Lafont <
francois.lafont.1...@gmail.com> wrote:
> On 12/09/2016 06:39 PM, Alex Evonosky wrote:
>
> > Sounds great. May I ask what procedure you did to upgrade?
>
> Of course.
Francois-
Sounds great. May I ask what procedure you did to upgrade?
Thank you!
On Fri, Dec 9, 2016 at 12:20 PM, Francois Lafont <
francois.lafont.1...@gmail.com> wrote:
> Hi,
>
> Just for information, after the upgrade to the version
> 10.2.4-1-g5d3c76c (5d3c76c1c6e991649f0beedb80e682360
Disregard --
I found the issue: the remote hostname was not matching the local
hostname.
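For anyone hitting the same thing, a quick sanity check is to compare the local short hostname with what ceph.conf lists (the conf path and option name below are the usual defaults, assumed here):

```shell
#!/bin/sh
# Warn if the local short hostname is missing from ceph.conf's
# mon_initial_members line (the conf path is an assumption).
conf="${1:-/etc/ceph/ceph.conf}"
short="$(hostname -s)"
if grep -qs "mon_initial_members.*${short}" "$conf"; then
    echo "ok: ${short} is listed in ${conf}"
else
    echo "warning: ${short} not found in ${conf}"
fi
```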
Thank you.
On Thu, Sep 8, 2016 at 10:26 PM, Alex Evonosky
wrote:
> Hey group-
>
> I am a new CEPH user on Ubuntu and notice this when creating a brand new
> monitor following the d
Hey group-
I am a new CEPH user on Ubuntu and notice this when creating a brand new
monitor following the documentation:
storage@alex-desktop:~/ceph$ ceph-deploy --overwrite-conf mon create
alex-desktop
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/storage/.cephdeploy.conf
[ceph_d