Hi,
rclone can be your friend: https://rclone.org/
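A minimal sketch, assuming both remotes were already set up with "rclone config" using the s3 backend (the remote and bucket names below are placeholders):

# sync an RGW bucket into an S3 bucket, storing the objects in the Glacier class
rclone sync ceph:my-bucket aws:my-backup-bucket --s3-storage-class GLACIER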
Regards,
--
Jarek
Thu, 26 Sep 2019 at 14:55, CUZA Frédéric wrote:
> Hi everyone,
>
> Has anyone ever made a backup of a ceph bucket into Amazon Glacier?
>
> If so, did you use a script that uses the API to “migrate” the objects?
>
>
>
> If
> # the start of this script was truncated by the archive; the loop shape
> # below is a reconstruction and the host/osd variables are assumptions
> for host in "${hosts[@]}"; do
>     for osd in ${osds_on_host}; do
>         echo "starting osd.${osd}"
>         sudo systemctl start ceph-osd@$osd.service
>     done &
> done
> wait
> sudo systemctl start ceph-osd.target
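> # a quick sanity check after the restarts (not part of the quoted script)
> ceph osd stat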
>
> On Thu, Nov 16, 2017 at 9:19 AM Piotr Dałek wrote:
>
>> On 17-11-16 02:44 PM, Jaroslaw Owsiewski wrote:
>> > Hi,
Hi,
what exactly does this message mean:
filestore_split_multiple = '24' (not observed, change may require restart)
It appeared after this command:
# ceph tell osd.0 injectargs '--filestore-split-multiple 24'
Do I really need to restart the OSD for the change to take effect?
ceph version 12.2.1 () luminous
http://tracker.ceph.com/issues/22015 - has anyone else seen this issue?
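A hedged sketch of how to verify and persist the setting (the paths and section names are the usual defaults, not taken from this thread):

# ask the daemon itself, through its admin socket, on the OSD host
ceph daemon osd.0 config get filestore_split_multiple
# to persist it across restarts, set it in ceph.conf under [osd]:
#   filestore split multiple = 24
# and then restart the OSD:
sudo systemctl restart ceph-osd@0.service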
Regards
--
Jarek
Hi,
$ radosgw-admin metadata list user
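and then, for the details of a single user (the uid below is a made-up example):

$ radosgw-admin user info --uid=johndoe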
--
Jarek
--
Jarosław Owsiewski
2017-08-01 9:52 GMT+02:00 Diedrich Ehlerding <diedrich.ehlerd...@ts.fujitsu.com>:
> Hello,
>
> according to the manpages of radosgw-admin, it is possible to
> suspend, resume, create, remove a single radosgw user, but I
Hi,
We observed the same behavior with kernel 4.7 on Ubuntu 14.04 under heavy
load. Kernel 4.2 is stable. We use only the S3 gateway.
--
Jarek
--
Jarosław Owsiewski
2016-11-15 11:31 GMT+01:00 Nick Fisk :
> Hi All,
>
>
>
> Just a slight note of caution. I had been running the 4.7 kernel (With
>
https://www.suse.com/documentation/ses-3/book_storage_admin/data/ceph_rgw_manual.html
- this is an example of how documentation should look :-).
Regards
--
Jarek
--
Jarosław Owsiewski
2016-11-09 15:48 GMT+01:00 Matthew Vernon :
> Hi,
>
> I have a jewel/Ubuntu 16.04 ceph cluster. I attempted to
2016-09-22 16:20 GMT+02:00 Wido den Hollander :
>
> > On 22 September 2016 at 16:13, Matteo Dacrema wrote:
> >
> >
> > To be more precise, only the OSD nodes run a different OS.
> >
>
> I haven't seen real issues, but there are a few I can think of which
> *potentially* might be a problem
I think the first symptoms of our problems occurred when we posted this
issue:
http://tracker.ceph.com/issues/15727
Regards
--
Jarek
--
Jarosław Owsiewski
2016-07-14 15:43 GMT+02:00 Jaroslaw Owsiewski <jaroslaw.owsiew...@allegrogroup.com>:
2016-07-14 15:26 GMT+02:00 Luis Periquito :
> Hi Jaroslaw,
>
> several things spring to mind. I'm assuming the cluster is
> healthy (other than the slow requests), right?
>
>
Yes.
> From the (little) information you sent, it seems the pools are
> replicated with size 3, is that correct?
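A hedged way to confirm the replication factor (the pool name below is a typical RGW data pool, used only as an example):

ceph osd pool get default.rgw.buckets.data size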
Hi,
we have a problem with drastic performance degradation on a cluster. We use
radosgw with the S3 protocol. Our configuration:
153 SAS 1.2 TB OSDs with journals on SSDs (ratio 4:1)
- no problems with networking, no hardware issues, etc.
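For reference, typical first-line checks for slow requests (a generic sketch, not commands quoted from this thread):

ceph health detail | grep -i slow    # lists the OSDs with blocked requests
ceph osd perf                        # per-OSD commit/apply latencies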
Output from "ceph df":
GLOBAL:
SIZE AVAIL RAW
Hi,
attached.
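For completeness, the shell side that any COSBench S3 workload needs (a sketch; the uid and display name below are placeholders):

# create an RGW user and put the generated access/secret keys
# into the cosbench workload file
radosgw-admin user create --uid=cosbench --display-name="COSBench tester"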
Regards,
--
Jarek
--
Jarosław Owsiewski
2016-06-20 11:01 GMT+02:00 Kanchana. P :
> Hi,
>
> Does anyone have a working configuration of Ceph S3 to run with the
> COSBench tool?
>
> Thanks,
> Kanchana.
Probably this is the reason (ports below 1024 can only be bound by root):
https://www.w3.org/Daemon/User/Installation/PrivilegedPorts.html
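A hedged workaround sketch: bind the gateway to an unprivileged port in ceph.conf (civetweb was the default frontend in that release; the section name below is an example):

[client.rgw.gateway]
rgw frontends = civetweb port=8080

or keep the low port and put a proxy that runs as root in front of RGW.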
Regards,
--
Jarosław Owsiewski
2016-02-17 15:28 GMT+01:00 Alexandr Porunov :
> Hello,
>
> I have a problem with the port configuration of a rados gateway node.
> I don't know why, but I cannot change the listening port
FYI
--
Jarek
-- Forwarded message --
From: Jaroslaw Owsiewski
Date: 2016-02-09 12:00 GMT+01:00
Subject: Re: [ceph-users] Increasing time to save RGW objects
To: Wade Holler
Hi,
For example:
# ceph --admin-daemon=ceph-osd.98.asok perf dump
generally:
ceph --admin-daemon <path-to-admin-socket> <command>
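There is also a shorthand that resolves the socket path for you (run it on the host where the daemon lives):

ceph daemon osd.98 perf dump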