On Thu, May 25, 2017 at 8:39 AM Stuart Harland <s.harl...@livelinktechnology.net> wrote:
> Has no-one any idea about this? If needed I can produce more information
> or diagnostics on request. I find it hard to believe that we are the only
> people experiencing this, and thus far we have lost about 40 OSDs to
> corruption due to this.
I am trying to gather and understand how multitenancy can be, or has been,
solved for network interfaces or isolation. I can get Ceph under a
virtualized environment and achieve the isolation, but my question or
thought is more about a physical Ceph deployment.
Is there a way we can have multiple networks (publi
You absolutely cannot do this with your monitors -- as David says, every
node would have to participate in every monitor decision; the long tails
would be horrifying and I expect it would collapse in ignominious defeat
very quickly.
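To put rough numbers on that, here is a back-of-the-envelope sketch (toy
Python, not Ceph code) of the message fan-out if every node also ran a mon.
It assumes roughly three Paxos messages per peon per committed map update
(begin/accept/commit), which is approximately how the monitor's Paxos
behaves in steady state, and the update rate is made up; leases, elections
and client traffic are ignored entirely.

    # Toy illustration, not Ceph code: message fan-out through the mon
    # leader if every node in the cluster also ran a monitor.  Assumes ~3
    # Paxos messages per peon per committed map update (begin/accept/
    # commit) and an assumed update rate of 10/sec.
    def paxos_messages_per_second(n_mons, updates_per_sec):
        peons = n_mons - 1  # everyone except the current leader
        return 3 * peons * updates_per_sec

    for n_mons in (3, 5, 100, 1000):
        msgs = paxos_messages_per_second(n_mons, updates_per_sec=10)
        print(f"{n_mons:>4} mons -> ~{msgs:,} messages/sec funnelled "
              f"through the leader")
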
Your MDSes should be fine since they are indeed just a bunch of st
Has no-one any idea about this? If needed I can produce more information or
diagnostics on request. I find it hard to believe that we are the only people
experiencing this, and thus far we have lost about 40 OSDs to corruption due to
this.
Regards
Stuart Harland
Hi John,
Sorry, I'm not sure what the largest file is on our systems.
We have lots of data sets that are ~8TB uncompressed; these typically
compress 3:1, so if a user wants a single file we hit roughly 3TB.
I'm rsyncing 360TB of data from an Isilon to cephfs; it'll be
interesting to see how cephfs
For the MDS, the primary doesn't hold state data that needs to be replayed
to a standby. The information exists in the cluster. Your setup would be
1 Active, 100 Standby. If the active went down, one of the standbys would
be promoted and read the information from the cluster.
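If it helps to make "the information exists in the cluster" concrete, here
is a minimal sketch (not an official tool) that parses the FSMap from
"ceph fs dump --format=json" to list the standbys and the active ranks.
The JSON field names used below are assumptions based on current FSMap
output, so check them against your own release before relying on this.

    # Minimal sketch, not an official tool: show where MDS state lives by
    # parsing the FSMap from "ceph fs dump".  The field names below
    # ("standbys", "filesystems", "mdsmap", "info", "name", "rank",
    # "state") are assumptions -- verify against your release.
    import json
    import subprocess

    dump = json.loads(subprocess.check_output(
        ["ceph", "fs", "dump", "--format=json"]))

    standbys = dump.get("standbys", [])
    print(f"standby MDS daemons: {len(standbys)}")
    for mds in standbys:
        print(f"  {mds.get('name')} ({mds.get('state')})")

    for fs in dump.get("filesystems", []):
        mdsmap = fs.get("mdsmap", {})
        for daemon in mdsmap.get("info", {}).values():
            print(f"{mdsmap.get('fs_name')}: {daemon.get('name')} "
                  f"rank {daemon.get('rank')} ({daemon.get('state')})")
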
With Mons, it's int
How much testing has there been / what are the implications of having a
large number of Monitor and Metadata daemons running in a cluster?
Thus far I have deployed all of our Ceph clusters as a single service type
per physical machine, but I am interested in a use case where we deploy
dozens/hundr
Hello all,
I've created a Prometheus exporter that scrapes the RADOSGW Admin Ops API
and exports the usage information for all users and buckets. This is my
first Prometheus exporter, so if anyone has feedback I'd greatly appreciate
it. I've tested it against Hammer, and will shortly test against Jewel.
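Not the exporter itself, but for anyone curious about the general shape,
here is a rough sketch of the approach: sign requests to the Admin Ops API
with S3-style auth and republish a few numbers via prometheus_client. The
endpoint, credentials, port, and the JSON fields read below are all
assumptions to adapt to your own gateway, and the admin user needs suitable
caps (something like buckets=read).

    # Rough sketch only (not the posted exporter): scrape bucket stats
    # from the RADOSGW Admin Ops API and expose them to Prometheus.
    # Endpoint, credentials, port and JSON fields are assumptions; the
    # request is signed with the S3-style auth the Admin Ops API expects,
    # here via the requests-aws package's S3Auth.
    import time

    import requests
    from awsauth import S3Auth
    from prometheus_client import Gauge, start_http_server

    RGW = "http://rgw.example.com:7480"   # assumed gateway endpoint
    AUTH = S3Auth("ACCESS_KEY", "SECRET_KEY",
                  service_url="rgw.example.com:7480")

    bucket_size = Gauge("radosgw_bucket_size_bytes",
                        "Bucket size as reported by the Admin Ops API",
                        ["owner", "bucket"])

    def scrape():
        # GET /admin/bucket with stats=True is assumed to return stats
        # for every bucket when no bucket name is given.
        resp = requests.get(RGW + "/admin/bucket",
                            params={"stats": "True"}, auth=AUTH)
        resp.raise_for_status()
        for b in resp.json():
            usage = b.get("usage", {}).get("rgw.main", {})
            bucket_size.labels(owner=b["owner"], bucket=b["bucket"]).set(
                usage.get("size_kb_actual", 0) * 1024)

    if __name__ == "__main__":
        start_http_server(9242)   # arbitrary port choice
        while True:
            scrape()
            time.sleep(60)

Per-user usage could be pulled from /admin/usage in the same way.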
On Thu, May 25, 2017 at 2:14 PM, Ken Dreyer wrote:
> On Wed, May 24, 2017 at 12:36 PM, John Spray wrote:
>>
>> CephFS has a configurable maximum file size, it's 1TB by default.
>>
>> Change it with:
>> ceph fs set <fs name> max_file_size <size in bytes>
>
> How does this command relate to "ceph mds set max_file_size"
On Wed, May 24, 2017 at 12:36 PM, John Spray wrote:
>
> CephFS has a configurable maximum file size, it's 1TB by default.
>
> Change it with:
> ceph fs set <fs name> max_file_size <size in bytes>
How does this command relate to "ceph mds set max_file_size" ? Is it different?
I've put some of the information in this t
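For reference, the setting takes a byte count; the 1TB default is 2^40
bytes, so for example a 20 TiB ceiling works out as below (quick Python
arithmetic):

    TiB = 1024 ** 4
    print(TiB)        # 1099511627776 -- the default max_file_size
    print(20 * TiB)   # 21990232555520 -- value to pass for a 20 TiB limit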