Marc Roos wrote:
> > In the past I've seen some good results (benchmarks &
> > latencies) for MySQL and PostgreSQL. However, I've always used
> > a 4 MB object size. Maybe I can get much better
> > performance with a smaller object size. Haven't actually tried.
>
> Did you tune mysql / postgres for [...]
Yes, I had to tune some settings on PostgreSQL, especially:
synchronous_commit = off
I have default RBD settings.
Do you have any recommendations?
Thanks,
Gencer.
On 19.10.2020 12:49:51, Marc Roos wrote:
> In the past I've seen some good results (benchmarks & latencies) for MySQL and [...]
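As a reference point, a minimal sketch of the PostgreSQL knobs commonly tuned for Ceph-backed storage; the values are illustrative assumptions, not recommendations from this thread, and synchronous_commit = off trades the durability of the most recent commits for lower latency:
$ psql -c "ALTER SYSTEM SET synchronous_commit = off;"        # lower commit latency; may lose the last commits on a crash
$ psql -c "ALTER SYSTEM SET random_page_cost = 1.1;"          # assumption: RBD random reads behave closer to SSD than HDD
$ psql -c "ALTER SYSTEM SET effective_io_concurrency = 16;"   # assumption: RBD services parallel reads well
$ psql -c "SELECT pg_reload_conf();"                          # apply the reloadable settings without a restart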
On 17.10.2020 18:39:44, Gencer W. Genç wrote:
Hi Irek,
In the past I've seen some good results (benchmarks & latencies) for MySQL and
PostgreSQL. However, I've always used a 4 MB object size. Maybe I can get much
better performance with a smaller object size. Haven't actually tried.
Why are you not recommen[...]
Gencer.
On 17.10.2020 18:01:19, Irek Fasikhov wrote:
Hi,
Ceph is a bad solution for a transactional database under medium or higher load.
On Sat, 17 Oct 2020 at 16:29, Gencer W. Genç <gen...@gencgiyen.com> wrote:
Hi,
I have a few existing RBDs. I would like to create a new RBD image for
PostgreSQL. Do you have any suggestions for such a use case? For example,
the current defaults are:
Object size (4 MB) and Stripe unit (none)
Features: deep-flatten + layering + exclusive-lock + object-map + fast-diff
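For trying a smaller object size, creating the image would look roughly like this (the pool/image names and the 16K value are placeholder assumptions, not tested advice from this thread):
$ rbd create mypool/pgdata --size 100G --object-size 16K
$ rbd info mypool/pgdata    # verify object size, striping and features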
Thank you so much once again for your great help!
Gencer.
On 6.06.2020 00:15:15, Sebastian Wagner wrote:
On 05.06.20 at 22:47, Gencer W. Genç wrote:
> Hi Sebastian,
>
> I went ahead and dug into the github.com/ceph source code. I see that
> mons are grouped under the name 'mon'. This ma[...]
[...]top mon.vx-rg23-rk65-u43-130
in the logs.
Please make sure `ceph mon ok-to-stop vx-rg23-rk65-u43-130`
succeeds.
On 22.05.20 at 19:28, Gencer W. Genç wrote:
> Hi Sebastian,
>
> I cannot see my replies in here, so I put the attachment as the body here:
>
> 2020-05-21T18:52:36.813+[...]
Hi,
I also tried:
$ ceph mon ok-to-stop all
No luck again. Ceph seems to ignore this.
The other Ceph cluster, which has 9 nodes (and 3 mons), upgraded successfully.
Gencer.
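For what it's worth: with only two mons, a quorum majority means both of them, so stopping either one necessarily breaks quorum, which would explain why ok-to-stop keeps refusing. The current quorum can be checked with:
$ ceph mon stat
$ ceph quorum_status -f json-pretty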
Hi All,
I am trying to upgrade Ceph 15.2.1 to 15.2.3. I have a two-node setup in a
small, test-only environment. I ran the following commands:
$ ceph mon ok-to-stop mon.vx-rg23-rk65-u43-130
>> quorum should be preserved (vx-rg23-rk65-u43-130,vx-rg23-rk65-u43-130-1)
>>after stopping
[...] upgrade can continue?
If so, should I upgrade, or wait for 15.2.3, since @Ashley said 15.2.2 has
problems?
Thanks,
Gencer.
On 25.05.2020 19:25:49, Sebastian Wagner wrote:
On 22.05.20 at 19:28, Gencer W. Genç wrote:
> Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130
Please make s[...]
[...]pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB
avail
Sebastian Wagner wrote:
> Hi Gencer,
>
> I'm going to need the full mgr log file.
>
> Best,
> Sebastian
>
> On 20.05.20 at 15:07, Gencer W. Genç wrote:
> > Ah yes,
> >
> > {
> > [...]
[...]idea how this can be done with 2 nodes?
Thanks,
Gencer.
On 20.05.2020 16:33:53, Gencer W. Genç <gen...@gencgiyen.com> wrote:
This is a 2-node setup. I have no third node :(
I am planning to add more in the future, but currently it is 2 nodes only.
At the moment, is there a --force comm[...]
Hi Sebastian,
I did not get your reply via e-mail. I am very sorry for this. I hope you can
see this message...
I've re-run the upgrade and attached the log.
Thanks,
Gencer.
Hi Ashley,
Have you seen my previous reply? If so, and there is no solution, does anyone
have any idea how this can be done with 2 nodes?
Thanks,
Gencer.
On 20.05.2020 16:33:53, Gencer W. Genç wrote:
This is a 2-node setup. I have no third node :(
I am planning to add more in the future but currently [...]
On Wed, 20 May 2020 21:28:19 +0800 Gencer W. Genç wrote:
I have 2 mons and 2 mgrs.
  cluster:
    id:     7d308992-8899-11ea-8537-7d489fa7c193
    health: HEALTH_OK
  services:
    mon: 2 daemons, quorum vx-rg23-rk65-u43-130,vx-rg23-rk65-u43-130-1 (age 91s)
    mgr: vx-rg23-rk65-u43-130.arnvag(active[...]
[...] a second to
allow the upgrade to continue, can you bring up an extra MON?
Thanks
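With cephadm, bringing up a temporary extra mon would look roughly like this (host names and the IP are placeholder assumptions):
$ ceph orch daemon add mon newhost:10.1.2.123
or by pinning placement explicitly:
$ ceph orch apply mon --placement="host1 host2 newhost"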
On Wed, 20 May 2020 21:18:09 +0800 Gencer W. Genç wrote:
Hi Ashley,
I see this:
[INF] Upgrade: Target is docker.io/ceph/ceph:v15.2.2 with id [...]
[...] completed only the two MGR instances?
If not:
ceph orch upgrade stop
ceph orch upgrade start --ceph-version 15.2.2
and monitor the watch-debug log
Make sure at the end you run:
ceph config set mgr mgr/cephadm/log_to_cluster_level info
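Presumably the watch-debug log referred to here is followed with the matching debug level, i.e. something like:
$ ceph config set mgr mgr/cephadm/log_to_cluster_level debug
$ ceph -W cephadm --watch-debug
which is why the level is set back to info at the end.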
On Wed, 20 May 2020 21:07:43 +0800 Gencer W. Genç wrote:
"ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus
(stable)": 28,
"ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus
(stable)": 2
}
}
How can I fix this?
Gencer.
On 20.05.2020 16:04:33, Ashley Merrick wrote:
What does:
ceph versions
and:
ceph orch upgrade status
show?
On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç wrote:
Hi,
I have 15.2.1 installed on all machines. On the primary machine I executed the
upgrade command:
$ ceph orch upgrade start --ceph-version 15.2.2
When I check ceph -s I see this:
  progress:
    Upgrade to docker.io/ceph/ceph:v15.2.2 (30m)
      [=...] (remaining: 8h)
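For checking where a slow upgrade actually stands, the stock commands are:
$ ceph orch upgrade status    # target image and current state
$ ceph orch ps                # per-daemon version/image, shows which daemons have been redeployed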
[...] dashboard issue and this?
Thanks,
Gencer.
On 19.05.2020 23:44:25, Gencer W. Genç wrote:
Hi,
I was browsing the dashboard today. Then it suddenly stopped working and I got
502 errors. I checked via root login and saw that ceph health is down to WARN.
I can access all RBD devices and CephFS. They work. All OSDs on server-1 are up.
health: HEALTH_WARN
1 hosts fail[...]
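The WARN line is truncated above; the usual next step to see what is behind it would be:
$ ceph health detail
$ ceph -s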
Hi,
===
NOTE: I do not see my thread in ceph-list for some reason. I don't know if the
list received my question or not. So, sorry if this is a duplicate.
===
I just deployed a new cluster with cephadm instead of ceph-deploy. In the past,
if I changed ceph.conf for tweaking, I was able to copy it and apply it to all
servers. But I cannot find this in the new cephadm tool.
I made a few changes to ceph.conf but Ceph is unaware of those changes. How can I [...]
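With cephadm, the usual replacement for hand-editing ceph.conf is the centralized config database; a sketch (the option and value shown are just examples):
$ ceph config assimilate-conf -i /etc/ceph/ceph.conf    # import an existing ceph.conf into the monitors
$ ceph config set osd osd_memory_target 4294967296      # example option; the value is an assumption
$ ceph config dump                                      # verify what the cluster actually sees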
Hi Volker,
Thank you so much for your quick fix. It worked: I got my dashboard back
and Ceph is in the HEALTH_OK state.
Thank you so much again and stay safe!
Regards,
Gencer.
What about Quasar? (https://www.google.com/search?q=quasar)
It belongs to the universe.
True, there are not that many options for Q.
Hi Gencer,
On 2020-03-22 09:37, Gencer W. Genç wrote:
...
Hmm, I thought we had fixed that bug by merging the following fix:
https://github.com/ceph/ceph/pull/33513
@Volker, would you mind taking a look at this? Thanks in advance!
Lenz
--
SUSE
After upgrading Octopus from 15.1.0 to 15.1.1, I'm seeing this error for the
dashboard:
  cluster:
    id:     c5233cbc-e9c2-4db3-85e1-423737a95a8c
    health: HEALTH_ERR
            Module 'dashboard' has failed: ('pwdUpdateRequired',)
Also, executing any command results in:
Error EIO: [...]
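Until the fix referenced above lands, a generic way to retry a failed mgr module (no guarantee it clears this particular error) is:
$ ceph mgr module disable dashboard
$ ceph mgr module enable dashboard
$ ceph mgr fail <active-mgr>    # placeholder: name of the active mgr daemon, as a last resort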