Hello
I would like to know whether using the bucket notification system with an HTTP endpoint to back up an S3 bucket is a good move or not.
Has someone already done this?
I have around 100 TB of small documents to back up and archive each day, for legal retention and DR purposes.
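For what it's worth, here is a rough sketch of how such a notification to an HTTP endpoint can be wired up through the S3-compatible API (the endpoint URLs, topic and bucket names are only placeholders):

# create a topic in RGW that pushes events to the HTTP receiver
aws --endpoint-url http://rgw.example.com sns create-topic --name backup-events \
    --attributes='{"push-endpoint": "http://backup-receiver.example.com/events"}'
# attach the topic to the bucket so every object creation sends a POST
aws --endpoint-url http://rgw.example.com s3api put-bucket-notification-configuration \
    --bucket mybucket \
    --notification-configuration '{"TopicConfigurations": [{"Id": "backup", "TopicArn": "arn:aws:sns:default::backup-events", "Events": ["s3:ObjectCreated:*"]}]}'

The receiver only gets the event metadata, so it still has to fetch each object itself to make the actual copy.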
Many thanks for your help.
oau
__
hello
when I run my borgbackup over a CephFS volume (10 subvolumes, 1.5 TB in total) I can see a big increase in OSD space usage: 2 or 3 OSDs go near-full or full, then out, and finally the cluster ends up in an error state.
Any tips to prevent this ?
My cluster is Ceph v15 with 9 nodes; each node runs 2x 6 TB OSDs.
(usage roughly 9.4 TiB used out of 26 TiB raw; the rest of the figures were garbled)
Is that normal behaviour?
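For reference, a few commands that help watch and rebalance OSD usage while such a job runs (just a sketch; the threshold and ratio values are example numbers):

ceph osd df tree                         # per-OSD utilisation, spot OSDs drifting towards full
ceph df detail                           # per-pool usage, including the cephfs metadata pool
ceph osd reweight-by-utilization 110     # move PGs away from OSDs above 110% of the mean
ceph osd set-nearfull-ratio 0.90         # temporarily raise the nearfull warning threshold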
oau
On Monday, 5 April 2021 at 15:17 +0200, Olivier AUDRY wrote:
> hello
>
> when I run my borgbackup over cephfs volume (10 subvolumes for 1.5To)
> I
> can see a big increase of osd space usage and
As I'm writing this email the metadata pool has gone from 70 GB to 39 GB while the backup is still running.
I don't really get what is going on here ...
oau
On Tuesday, 6 April 2021 at 15:08 +0200, Burkhard Linke wrote:
> Hi,
>
> On 4/6/21 2:20 PM, Olivier AUDRY wrote:
> > hello
> >
hello
perhaps you should have more than one MDS active.
mds: cephfs:3 {0=cephfs-d=up:active,1=cephfs-e=up:active,2=cephfs-
a=up:active} 1 up:standby-replay
I already have 3 active MDS daemons and one standby-replay.
I'm using rook in kubernetes for this setup.
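For reference, the active MDS count is checked and changed roughly like this (filesystem name as in this thread):

ceph fs status cephfs                    # shows active ranks and standbys
ceph fs get cephfs | grep max_mds
ceph fs set cephfs max_mds 3             # allow three active ranks

With rook, the same setting is normally driven from the CephFilesystem CR (metadataServer.activeCount) rather than set by hand.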
oau
On Monday, 3 May 2021 at 19:06 +0530, Lokendra Rathour
hello
as far as I know there is no performance advantage in doing this. Personally I do it in order to monitor the bandwidth usage of the two networks separately.
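A minimal sketch of how the two networks are declared so that replication traffic stays on its own interface (the subnets are placeholders):

ceph config set global public_network  192.168.10.0/24   # client and MON traffic
ceph config set global cluster_network 192.168.20.0/24   # OSD replication and recovery traffic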
oau
On Tuesday, 16 June 2020 at 16:42 +0200, Marcel Kuiper wrote:
> Hi
>
> I wonder if there is any (theoretical) advantage running a separate
> backend
hello
is there a way to push this config directly into ceph without using the
ceph.conf file ?
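For context, the kind of thing I mean, as a rough sketch based on the centralized config store available since Mimic (the rgw instance name is just a placeholder):

ceph config set client.rgw.gateway1 debug_rgw 20/1   # store the option in the mon config db
ceph config get client.rgw.gateway1 debug_rgw        # check what is stored for that daemon
ceph config dump                                     # list everything held centrally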
thanks for your tips
oau
On Friday, 12 June 2020 at 15:24, Stefan Wild wrote:
> On 6/12/20, 5:40 AM, "James, GleSYS" wrote:
>
> > When I set the debug_rgw logs to "20/1", the issue disappea
hello
ceph is the definitive solution for storage. That's all.
I've been a happy user since 2014 and I have never lost any data. When I remember how painful the firmware upgrades of EMC, NetApp and HP storage were, and the time spent recovering lost data... Ceph is just amazing!
So many thanks to you guys. Th
Just sharing my experience:
- storing photos for Photoways (now Photobox) in the early 2000s. A bug in the HP storage enclosure erased the whole RAID group; it took 3 weeks to recalculate all the thumbnails with a dedicated server specialized in resizing images.
- a small EMC array with something like 10 disks, 3 for the
hello
I see a huge difference in read/write performance on my Ceph cluster when I use RBD.
rbd bench reaches the limit of my cluster (1 Gbps network), while the performance on a mapped RBD device is very low: 30 MB/s.
ceph version: ceph version 14.2.2 (4f8fa0a0024755aae7d95567c63f11d6862d55be) nautilus (stable)
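A sketch of how the two paths can be compared with the same IO pattern (image name, thread count and device path are just examples):

rbd bench --io-type write --io-size 4M --io-threads 16 kube/bench     # librbd, no kernel client involved
rbd map kube/bench
fio --name=mapped --filename=/dev/rbd0 --ioengine=libaio --direct=1 --rw=write --bs=4M --iodepth=16 --runtime=60 --time_based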
... August 2019 at 15:56 +0200, Ilya Dryomov wrote:
> On Wed, Aug 14, 2019 at 2:49 PM Paul Emmerich wrote:
> > On Wed, Aug 14, 2019 at 2:38 PM Olivier AUDRY
> > wrote:
> > > let's test random write
> > > rbd -p kube bench kube/bench --io-type write --io-size
hello
here is the result:
fio --ioengine=rbd --name=test --bs=4k --iodepth=1 --rw=randwrite --runtime=60 -pool=kube -rbdname=bench
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
4096B-4096B, ioengine=rbd, iodepth=1
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=2
hello
just for the record, the NVMe disks are pretty fast.
dd if=/dev/zero of=test bs=8192k count=100 oflag=direct
100+0 records in
100+0 records out
838860800 bytes (839 MB, 800 MiB) copied, 0.49474 s, 1.7 GB/s
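dd with oflag=direct only shows sequential throughput; for a journal/WAL what matters is synchronous small writes, which a test like this would show (the file name is just a placeholder, written on the NVMe mount rather than the raw device):

fio --name=sync-test --filename=./sync-test --size=1G --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based

Drives without power-loss protection often drop from GB/s to a few MB/s on this test, which is exactly what hurts Ceph journals.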
oau
On Friday, 16 August 2019 at 13:31 +0200, Olivier AUDRY wrote:
> he
hello
here are the results directly on the NVMe partition:
- libaio randwrite /dev/nvme1n1p4 => WRITE: bw=12.1MiB/s (12.7MB/s),
12.1MiB/s-12.1MiB/s (12.7MB/s-12.7MB/s), io=728MiB (763MB), run=60001-
60001msec
- libaio randread /dev/nvme1n1p4 => READ: bw=35.6MiB/s (37.3MB/s),
35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/
OK, I read your link. My SSDs are bad. They got capacitors... I didn't choose them; they come with the hardware I rent. Perhaps it would be better to switch to HDDs. I cannot even put the journal on them... bad news :(
On Friday, 16 August 2019 at 17:37 +0200, Olivier AUDRY wrote:
> hello
fio -ioengine=rbd -name=test -bs=4M -iodepth=32 -rw=randwrite
-runtime=60 -pool=kube -rbdname=bench
WRITE: bw=89.6MiB/s (93.9MB/s), 89.6MiB/s-89.6MiB/s (93.9MB/s-
93.9MB/s), io=5548MiB (5817MB), run=61935-61935msec
fio -ioengine=rbd -name=test -bs=4M -iodepth=32 -rw=randread
-runtime=60 -pool=k
RBD
fio -ioengine=rbd -name=test -bs=4k -iodepth=32 -rw=randread
-runtime=60 -pool=kube -rbdname=bench
READ: bw=21.8MiB/s (22.8MB/s), 21.8MiB/s-21.8MiB/s (22.8MB/s-22.8MB/s),
io=1308MiB (1371MB), run=60011-60011msec
fio -ioengine=rbd -name=test -bs=4k -iodepth=32 -rw=randwrite
-runtime=60 -pool
WRITE: bw=10.5MiB/s (11.0MB/s), 10.5MiB/s-10.5MiB/s (11.0MB/s-11.0MB/s), io=631MiB (662MB), run=60021-60021msec
=> New hardware: 32.6 MB/s READ / 10.5 MiB/s WRITE
=> Old hardware: 184 MiB/s READ / 46.9 MiB/s WRITE
No discussion ? I suppose I will keep the old hardware. What do you
think ? :D
On Friday, 16 August 2019 at 21:17 +0200, O
Write and read with 2 hosts, 4 OSDs:
mkfs.ext4 /dev/rbd/kube/bench
mount /dev/rbd/kube/bench /mnt/
dd if=/dev/zero of=test bs=8192k count=1000 oflag=direct
8388608000 bytes (8.4 GB, 7.8 GiB) copied, 117.541 s, 71.4 MB/s
fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -rw=randwrite
-direct=1 -ru
hello
as far as I know, and according to the documentation, the MON just shares the cluster map with the client, not the data.
"Storage cluster clients retrieve a copy of the cluster map from the
Ceph Monitor."
https://docs.ceph.com/docs/master/architecture/
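That is easy to see: once a client has the map it computes object placement itself with CRUSH, without asking the MON where the data is (pool and object names below are just examples):

ceph osd map kube some-object    # prints the PG and the acting OSD set for that object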
On Wednesday, 25 September 2019 at 22:25 +0800,
hello
if it's only for a PoC you can try to rent servers for a few months.
For less than €100 per month you can get this kind of machine at ovh.com:
- 32 GB RAM
- Intel Xeon-E 2274G - 4 c/8 t - 4 GHz/4.9 GHz
- 3x 4 TB SATA HDD soft RAID or 2x 960 GB NVMe SSD soft RAID
- 2x 2 Gbps network
Personally
hello
I haven't run Windows VMs on KVM for years, but back then, to get good IO performance for a Windows VM on KVM, the virtio drivers had to be installed.
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso
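For example, when defining the guest, something along these lines selects the virtio disk and network models and attaches that driver ISO (names, paths and sizes are placeholders):

virt-install --name win2019 --memory 8192 --vcpus 4 \
  --disk path=/var/lib/libvirt/images/win2019.qcow2,size=80,bus=virtio \
  --network network=default,model=virtio \
  --cdrom /isos/win2019.iso \
  --disk path=/isos/virtio-win.iso,device=cdrom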
oau
On Thursday, 2 April 2020 at 15:28, Frank Schilde