On Thu, 1 Aug 2019 at 07:31, Muhammad Junaid wrote:
> Your email has cleared up many things for me. Let me repeat my understanding.
> Every critical data write (from Oracle or any other DB) will be done with the
> sync/fsync flags, meaning it will only be confirmed to the DB/app after it
> is actually writ
Hi all,
I'd like to update the tunables on our older ceph cluster, created with firefly
and now on luminous. I need to update two tunables, chooseleaf_vary_r from 2 to
1, and chooseleaf_stable from 0 to 1. I'm going to do 1 tunable update at a
time.
With the first one, I've dumped the current
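For reference, a minimal sketch of changing one tunable at a time with crushtool, assuming your crushtool build has the --set-chooseleaf-* options (the file names below are arbitrary placeholders):

  ceph osd crush show-tunables                          # check current values
  ceph osd getcrushmap -o crushmap.bin                  # keep a copy of the compiled CRUSH map
  crushtool -i crushmap.bin --set-chooseleaf-vary-r 1 -o crushmap.new
  ceph osd setcrushmap -i crushmap.new                  # expect data movement
  # wait for the cluster to settle, then repeat with --set-chooseleaf-stable 1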
Thank you Greg,
Another question: we need to give a new destination object so that we can
read it separately, in parallel with the src object. This function resides
in objecter.h; it seems to be internal. Can it be used at the interface
level, and can we use it in our client? Currently we us
Thanks Paul and Janne
Your email has cleared up many things for me. Let me repeat my understanding.
Every critical data write (from Oracle or any other DB) will be done with the
sync/fsync flags, meaning it will only be confirmed to the DB/app after it
is actually written to the hard drives/OSDs. Any other appl
https://hub.docker.com/r/ceph/ceph/tags
> On 01.08.2019 at 04:16, "zhanrzh...@teamsun.com.cn" wrote:
>
> Hello Paul,
> Thank you a lot for the reply. Are there other Ceph versions ready to run on Docker?
>
> --
> zhanrzh...@teamsun.com.cn
>> Please don't start new deployments with Lumi
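For what it's worth, a hedged example of pulling a release image from that repository; the tag below is only an assumed example, the actual tag names should be checked against the tags page above:

  docker pull ceph/ceph:v14.2.2    # assumed Nautilus tag; pick one from the tags page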
Hi, noted and thanks a lot.
Best Rgds
/stwong
-----Original Message-----
From: Ricardo Dias
Sent: Thursday, July 25, 2019 8:47 PM
To: ST Wong (ITSC) ; Manuel Lausch
; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Please help: change IP address of a cluster
Hi,
The monmaptool has a dif
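For context, a rough sketch of the monmap-editing procedure from the documentation; the mon name and address below are placeholders, not values from this thread:

  ceph mon getmap -o monmap                       # export the current monmap
  monmaptool --print monmap                       # inspect it
  monmaptool --rm a monmap                        # remove the mon entry with the old address
  monmaptool --add a 192.168.1.10:6789 monmap     # re-add it with the new address
  # stop the monitor, inject the edited map, then start it again:
  ceph-mon -i a --inject-monmap monmap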
Hello Paul,
Thank you a lot for the reply. Are there other Ceph versions ready to run on Docker?
--
zhanrzh...@teamsun.com.cn
>Please don't start new deployments with Luminous; it has been EOL since last
>month: https://docs.ceph.com/docs/master/releases/schedule/
>(but it'll probably still receive
Hi Igor,
Thank you so much for the intel, I will now repair all OSD
Kind regards,
Sylvain.
On 30/07/2019 at 11:22, Igor Fedotov wrote:
Pool stats issue with upgrades to nautilus
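For readers landing here later, a sketch of the kind of per-OSD repair being referred to, assuming BlueStore OSDs; the OSD id is a placeholder and the OSD must be stopped first:

  systemctl stop ceph-osd@12
  ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-12
  systemctl start ceph-osd@12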
I was able to answer my own question. For future interested parties, I
initiated a deep scrub on the placement group, which cleared the error.
On Tue, Jul 30, 2019 at 1:48 PM Brett Chancellor
wrote:
> I was able to remove the meta objects, but the cluster is still in WARN
> state
> HEALTH_WARN 1
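For anyone hitting the same thing, a minimal sketch of the sequence; the pg id below is just a placeholder:

  ceph health detail             # identify the affected placement group
  ceph pg deep-scrub 1.2f        # schedule a deep scrub on it
  ceph -w                        # watch for the scrub to finish and the warning to clear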
Hi,
We are seeing a trend towards rather large RGW S3 buckets lately. We've worked on
several clusters with 100 - 500 million objects in a single bucket, and we've
been asked about the possibilities of buckets with several billion objects more
than once.
From our experience: buckets with tens of
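As a hedged illustration of the index-shard handling this usually comes down to; the bucket name and shard count below are placeholders, not recommendations:

  radosgw-admin bucket stats --bucket=big-bucket                  # num_objects and current shard count
  radosgw-admin reshard add --bucket=big-bucket --num-shards=1024
  radosgw-admin reshard process                                   # run the queued reshard jobs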
On Wed, Jul 31, 2019 at 1:32 AM nokia ceph wrote:
> Hi Greg,
>
> We were trying to implement this however having issues in assigning the
> destination object name with this api.
> There is a rados command "rados -p cp "; is
> there any librados API equivalent to this?
>
The copyfrom operatio
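For clarity, the CLI form being discussed looks roughly like this; the pool and object names are placeholders, since the originals were stripped from the archive:

  rados -p mypool cp src-object dst-object    # copy one object to a new name within the pool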
Hi Nathan,
Indeed that was the reason. With your hint I was able to find the
relevant documentation:
https://docs.ceph.com/docs/master/cephfs/client-auth/
that is completely absent from:
https://docs.ceph.com/docs/master/cephfs/quota/#configuration
I will send a pull request to include the lin
Please don't start new deployments with Luminous; it has been EOL since last
month: https://docs.ceph.com/docs/master/releases/schedule/
(but it'll probably still receive critical updates if anything happens,
as there are lots of deployments out there).
But there's no good reason not to start with Nautilus.
Yes, this is power-failure safe. It behaves exactly the same as a real
disk's write cache.
It's really a question about semantics: what does it mean to write
data? Should the data still be guaranteed to be there after a power
failure?
The answer for most writes is: no, such a guarantee is neither
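A small illustration of that distinction from inside a guest (the path is arbitrary): a plain write may be acknowledged from cache, while a dsync write is only acknowledged once it reaches stable storage.

  dd if=/dev/zero of=/mnt/test.img bs=4k count=1000                # may be acked from the write cache
  dd if=/dev/zero of=/mnt/test.img bs=4k count=1000 oflag=dsync    # each write waits for stable storage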
> On Jul 30, 2019, at 7:49 AM, Mainor Daly
> wrote:
>
> Hello,
>
> (everything in context of S3)
>
>
> I'm currently trying to better understand bucket sharding in combination with
> a multisite RGW setup and its possible limitations.
>
> At the moment I understand that a bucket has a bucke
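One hedged pointer that may help here: checking how full each index shard is for the buckets in question.

  radosgw-admin bucket limit check    # per-bucket object counts vs. shard limits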
The client key which is used to mount the FS needs the 'p' permission
to set these xattrs, e.g.:
ceph fs authorize cephfs client.foo / rwsp
That might be your problem.
On Wed, Jul 31, 2019 at 5:43 AM Mattia Belluco wrote:
>
> Dear ceph users,
>
> We have been recently trying to use the two quota attrib
On Wed, Jul 31, 2019 at 6:20 AM Marc Schöchlin wrote:
>
> Hello Jason,
>
> it seems that there is something wrong in the rbd-nbd implementation.
> (added this information also at https://tracker.ceph.com/issues/40822)
>
> The problem does not seem to be related to kernel releases, filesystem types or
Hi Thomas,
We did some investigation a while ago and came up with several rules for how to
configure RGW and the OSDs for big files stored on an erasure-coded pool.
I hope it will be useful.
If I have made any mistakes, please let me know.
S3 object saving pipeline:
- S3 object is divided into multipart shards
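As a small, assumed example of inspecting the knobs usually involved in that pipeline; the RGW daemon name below is a placeholder and requires the admin socket on that host:

  ceph daemon client.rgw.gw1 config show | egrep 'rgw_max_chunk_size|rgw_obj_stripe_size'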
On Wed, 31 Jul 2019 at 06:55, Muhammad Junaid wrote:
> The question is about RBD cache in write-back mode using KVM/libvirt. If
> we enable this, it uses the local KVM host's RAM as a cache for the VM's write
> requests, and the KVM host immediately responds to the VM's OS that the data has been
> written to disk (Act
Hello Jason,
it seems that there is something wrong in the rbd-nbd implementation.
(added this information also at https://tracker.ceph.com/issues/40822)
The problem does not seem to be related to kernel releases, filesystem types, or the
Ceph and network setup.
Release 12.2.5 seems to work properly
Dear ceph users,
We have recently been trying to use the two quota attributes:
- ceph.quota.max_files
- ceph.quota.max_bytes
to prepare for quota enforcement.
While the idea is quite straightforward, we found out we cannot set any
additional file attributes (we tried with directory pinning, too
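For completeness, the usual way these attributes are set once the client has the 'p' capability mentioned elsewhere in the thread; the mount point and limits below are placeholders:

  setfattr -n ceph.quota.max_bytes -v 100000000000 /mnt/cephfs/projects   # ~100 GB
  setfattr -n ceph.quota.max_files -v 1000000 /mnt/cephfs/projects
  getfattr -n ceph.quota.max_bytes /mnt/cephfs/projects                   # verify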
Hi Greg,
We were trying to implement this however having issues in assigning the
destination object name with this api.
There is a rados command "rados -p cp "; is
there any librados API equivalent to this?
Thanks,
Muthu
On Fri, Jul 5, 2019 at 4:00 PM nokia ceph wrote:
> Thank you Greg, we