Hello!
I'm validating IO performance of CephFS vs. NFS.
Therefore I have mounted the relevant filesystems on the same client.
Then I start fio with the following parameters:
action = randwrite randrw
blocksize = 4k 128k 8m
rwmixread = 70 50 30
32 jobs run in parallel
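For reference, one combination of those parameters corresponds to a fio invocation roughly like the following (job name, target directory, file size and I/O engine are assumptions, not taken from the original run):
# randrw with 70% reads, 4k blocks, 32 parallel jobs
fio --name=cephfs-vs-nfs --directory=/mnt/cephfs \
    --rw=randrw --rwmixread=70 --bs=4k \
    --numjobs=32 --size=1G \
    --ioengine=libaio --direct=1 --group_reporting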
The NFS share is
3 August 2017 16:37, "Burkhard Linke" wrote:
> Hi,
>
> On 03.08.2017 16:31, c.mo...@web.de wrote:
>
>> Hello!
>>
>> I have purged my ceph and reinstalled it.
>> ceph-deploy purge node1 node2 node3
>> ceph-deploy purgedata node1 node2 node3
Hello!
I have purged my ceph and reinstalled it.
ceph-deploy purge node1 node2 node3
ceph-deploy purgedata node1 node2 node3
ceph-deploy forgetkeys
All disks configured as OSDs are physically in two servers.
Due to some restrictions I needed to modify the total number of disks usable as
OSD,
26 July 2017 11:29, "Wido den Hollander" wrote:
>> On 26 July 2017 at 11:26, c.mo...@web.de wrote:
>>
>> Hello!
>>
>> Based on the documentation for defining quotas in CephFS for any directory
>> (http://docs.ceph.com/docs/master/cephfs/quota), I defined a quota for
>>
Hello!
Based on the documentation for defining quotas in CephFS for any directory
(http://docs.ceph.com/docs/master/cephfs/quota/), I defined a quota using the attribute max_bytes:
ld4257:~ # getfattr -n ceph.quota.max_bytes /mnt/ceph-fuse/MTY/
getfattr: Removing leading '/' from absolute path names
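For context, the quota described in that documentation is set with setfattr and read back with getfattr; a minimal sketch (the 100 GiB value is an assumption, the path reuses the one above):
setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/ceph-fuse/MTY/   # 100 GiB, value assumed
getfattr -n ceph.quota.max_bytes /mnt/ceph-fuse/MTY/                   # should print the value just set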
Understood.
Would you recommend having a dedicated pool for the data that is written directly using librados and another pool for the filesystem (CephFS)?
24 July 2017 19:46, "David Turner" wrote:
You might be able to read these objects using s3fs if you're using a
RadosGW. But
Hello!
I created CephFS according to the documentation:
$ ceph osd pool create hdb-backup
$ ceph osd pool create hdb-backup_metadata
$ ceph fs new
I can mount this pool with user admin:
ld4257:/etc/ceph # mount -t ceph 10.96.5.37,10.96.5.38,10.96.5.38:/ /mnt/cephfs
-o
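For comparison, the complete sequence under the documented syntax would look roughly like this (PG counts, the filesystem name and the secret file path are assumptions; note that ceph fs new takes the metadata pool first):
ceph osd pool create hdb-backup 128                      # data pool, PG count assumed
ceph osd pool create hdb-backup_metadata 128             # metadata pool, PG count assumed
ceph fs new hdbfs hdb-backup_metadata hdb-backup         # filesystem name assumed
mount -t ceph 10.96.5.37,10.96.5.38:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret    # secret file path assumed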
THX.
Mount is working now.
The auth list for user mtyadm is now:
client.mtyadm
key: AQAlyXVZEfsYNRAAM4jHuV1Br7lpRx1qaINO+A==
caps: [mds] allow r,allow rw path=/MTY
caps: [mon] allow r
caps: [osd] allow rw pool=hdb-backup,allow rw pool=hdb-backup_metadata
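With those caps, a client mount restricted to /MTY should look roughly like this (the mount point and secret file path are assumptions):
mount -t ceph 10.96.5.37,10.96.5.38:/MTY /mnt/cephfs-mty \
      -o name=mtyadm,secretfile=/etc/ceph/ceph.client.mtyadm.secret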
24 July 2017 13:25, "Дмитрий
Hello!
I want to mount CephFS with a dedicated user in order to avoid putting the
admin key on every client host.
Therefore I created a user account
ceph auth get-or-create client.mtyadm mon 'allow r' mds 'allow rw path=/MTY'
osd 'allow rw pool=hdb-backup,allow rw pool=hdb-backup_metadata' -o
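The truncated -o above presumably writes the keyring to a file; a hedged sketch of the full flow (the output path and client hostname are assumptions):
ceph auth get-or-create client.mtyadm \
    mon 'allow r' mds 'allow rw path=/MTY' \
    osd 'allow rw pool=hdb-backup,allow rw pool=hdb-backup_metadata' \
    -o /etc/ceph/ceph.client.mtyadm.keyring                       # output path assumed
scp /etc/ceph/ceph.client.mtyadm.keyring client1:/etc/ceph/       # client hostname assumed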
Hello!
My understanding is that I create one (big) pool for all DB backups written to storage.
The clients have restricted access to a specific directory only, meaning they can mount only this directory.
Can I define a quota for a specific directory, or only for the pool?
And do I need to define
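For reference, the two mechanisms look like this (paths, pool name and sizes are assumptions): a per-directory quota is an extended attribute set on a mounted CephFS directory, while a pool quota is set cluster-side.
setfattr -n ceph.quota.max_bytes -v 536870912000 /mnt/cephfs/db-backups/client1   # 500 GiB directory quota, values assumed
ceph osd pool set-quota hdb-backup max_bytes 10995116277760                       # 10 TiB pool quota, value assumed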
19 July 2017 17:34, "LOPEZ Jean-Charles" wrote:
> Hi,
>
> you must add the extra pools to your current file system configuration: ceph fs add_data_pool {fs_name} {pool_name}
>
> Once this is done, you just have to create some specific directory layout
> within
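A hedged sketch of that sequence (the filesystem name, mount point and directory path are assumptions; the pool name is taken from the other mails in this list):
ceph fs add_data_pool cephfs hdb-backup                                # attach the extra pool to the filesystem
setfattr -n ceph.dir.layout.pool -v hdb-backup /mnt/cephfs/hdb-backup  # new files under this directory go to that pool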
Hello!
I want to organize data in pools and therefore created additional pools:
ceph osd lspools
0 rbd,1 templates,2 hdb-backup,3 cephfs_data,4 cephfs_metadata,
As you can see, pools "cephfs_data" and "cephfs_metadata" belong to a Ceph
filesystem.
Question:
How can I write data to other pools,
Hi!
I have installed Ceph using ceph-deploy.
The Ceph Storage Cluster setup includes these nodes:
ld4257 Monitor0 + Admin
ld4258 Monitor1
ld4259 Monitor2
ld4464 OSD0
ld4465 OSD1
Ceph Health status is OK.
However, I cannot mount Ceph FS.
When I enter this command on ld4257
mount -t ceph
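Before the mount itself, it may be worth confirming that a filesystem and an active MDS exist; a quick check plus a mount along the documented syntax (monitor address, mount point and secret file path are assumptions):
ceph fs ls          # should list a filesystem with its metadata and data pools
ceph mds stat       # should show at least one MDS as up:active
mount -t ceph 10.96.5.37:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret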
Hello!
I have installed
ceph-deploy-1.5.36git.1479985814.c561890-6.6.noarch.rpm
on SLES11 SP4.
When I start ceph-deploy, I get an error:
ceph@ldcephadm:~/dlm-lve-cluster> ceph-deploy new ldcephmon1
Traceback (most recent call last):
File "/usr/bin/ceph-deploy", line 18, in
from