Daniel
--
Daniel Schneller
Principal Cloud Engineer
CenterDevice GmbH | Hochstraße 11 | 42697 Solingen | Deutschland
tel: +49 1754155711
daniel.schnel...@centerdevice.de | www.centerdevice.de
Geschäftsführung: Dr. Patrick Pe
nt based maintenance tasks that are ensured
to only run on n (=3)
OSDs at a time?
Can I do anything to smooth this out or reduce it somehow?
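If the culprit turns out to be scrubbing - which is only an assumption on my
part - the kind of throttling I have in mind would be something like this
(osd_max_scrubs and osd_scrub_sleep being the relevant knobs, as far as I can tell):

  # limit each OSD to a single concurrent scrub and let it pause between
  # chunks so client I/O still gets through
  ceph tell osd.* injectargs '--osd_max_scrubs 1 --osd_scrub_sleep 0.1'
  # the same values could go into ceph.conf under [osd] to survive restarts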
Thanks,
Daniel
--
Daniel Schneller
Principal Cloud Engineer
CenterDevice GmbH
https://www.centerdevice.de
Bump
On 2016-07-25 14:05:38 +, Daniel Schneller said:
Hi!
I created a bunch of test containers with some objects in them via
RGW/Swift (Ubuntu, RGW via Apache, Ceph Hammer 0.94.1)
Now I try to get rid of the test data.
I manually started with one container:
~/rgwtest ➜ swift -v -V
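For reference, the full command looks roughly like this - auth endpoint,
credentials and container name are placeholders here:

  # "swift delete <container>" removes the container together with all
  # objects still in it
  swift -v -V 1.0 -A http://gateway.example.com/auth -U test:tester -K secret \
      delete test-container-0001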
s_gateway_overview/attachments/audio/1077/export/events/attachments/virt_iaas_ceph_rados_gateway_overview/audio/1077/Fosdem_RGW.pdf
but without the "audio track" it doesn't really help me.
Thanks!
Daniel
--
Daniel Schneller
Principal Cloud
On 2016-06-27 16:01:07 +, Lionel Bouton said:
On 27/06/2016 17:42, Daniel Schneller wrote:
Hi!
* Network Link saturation.
All links / bonds are well below any relevant load (around 35MB/s or
less)
...
Are you sure? On each server you have 12 OSDs with a theoretical
bandwidth of at
client, as I see it)
which metrics would I have to collect/look at to verify/reject the
assumption that we are limited by our pure HDD setup?
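What I would start with - assuming the suspicion about the spinners is right -
is per-disk utilization and latency on the OSD nodes, roughly:

  # extended per-device statistics every 5 seconds; sustained %util near 100
  # and high await on the OSD disks would point at the HDDs as the bottleneck
  iostat -x 5
  # per-OSD commit/apply latency as seen by the cluster itself
  ceph osd perf

Would that be the right thing to look at, or are there better-suited counters?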
Thanks a lot!
Daniel
--
Daniel Schneller
Principal Cloud Engineer
CenterDevice GmbH
https://www.centerdevice.de
proxy configuration on all these machines.
--
Daniel Schneller
Principal Cloud Engineer
CenterDevice GmbH
https://www.centerdevice.de
differently,
too, in case I am overlooking something obvious.
Main requirements are
a) client admin can create new rbd volumes in a dedicated pool,
b) client admin can limit access to a volume to a specific user/secret.
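For requirement a) I was thinking along these lines (a sketch only, pool and
client names made up); b) is the part I am unsure about:

  # key for the client admin, limited to managing RBD images in one pool
  ceph auth get-or-create client.acme-admin \
      mon 'allow r' \
      osd 'allow rwx pool=acme-volumes'
  # per-volume restriction (requirement b) would presumably need additional,
  # more fine-grained caps - hence my question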
Thanks!
Daniel
--
Daniel Schneller
Principal Cloud Engineer
what is in them,
just the count.
Is that something the RGW maintains for cheap queries?
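In other words, something like this (bucket name made up), without having to
list the objects themselves:

  # per-bucket usage counters kept by RGW; "num_objects" in the output
  # would be the count I am after
  radosgw-admin bucket stats --bucket=test-container-0001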
Thanks,
Daniel
--
Daniel Schneller
Principal Cloud Engineer
CenterDevice GmbH
https://www.centerdevice.de
e created the user using the admin key. So it
is not exactly clear what was going on then. Nevertheless, the user exists
now, so it might remain a mystery...
In any case, making radosgw-admin at least _inform_ about unknown
arguments might be a better idea than just silently ignoring them.
T
Bump... :)
On 2015-11-02 15:52:44 +, Daniel Schneller said:
Hi!
I am trying to set up a Rados Gateway, prepared for multiple regions
and zones, according to the documentation on
http://docs.ceph.com/docs/hammer/radosgw/federated-config/.
Ceph version is 0.94.3 (Hammer).
I am stuck at the
please notify us immediately by replying to this message.
--
Daniel Schneller
Principal Cloud Engine
ols": [
{ "key": "default-placement",
"val": { "index_pool": ".eu-zone1.rgw.buckets.index",
"data_pool": ".eu-zone1.rgw.buckets"}
}
]
}
These pools are defined:
rbd
images
volumes
.eu-zone1.rgw.root
.eu-zone1.rgw.control
.eu-zone1.rgw.gc
.eu-zone1.rgw.buckets
.eu-zone1.rgw.buckets.index
.eu-zone1.rgw.buckets.extra
.eu-zone1.log
.eu-zone1.intent-log
.eu-zone1.usage
.eu-zone1.users
.eu-zone1.users.email
.eu-zone1.users.swift
.eu-zone1.users.uid
.eu.rgw.root
.eu-zone1.domain.rgw
.rgw
.rgw.root
.rgw.gc
.users.uid
.users
.rgw.control
.log
.intent-log
.usage
.users.email
.users.swift
--
Daniel Schneller
Principal Cloud Engineer
CenterDevice GmbH
https://www.centerdevice.de
On 2015-07-08 10:34:14 +, Wido den Hollander said:
On 08-07-15 12:20, Daniel Schneller wrote:
Hi!
Just a quick question regarding mixed versions. So far a cluster is
running on 0.94.1-1trusty without Rados Gateway. Since the packages have
been updated in the meantime, installing radosgw
Hi!
Just a quick question regarding mixed versions. So far a cluster is
running on 0.94.1-1trusty without Rados Gateway. Since the packages have
been updated in the meantime, installing radosgw now would entail
bringing a few updated dependencies along. OSDs and MONs on the nodes
that are to b
On 2015-07-03 01:31:35 +, Johannes Formann said:
Hi,
When rebooting one of the nodes (e. g. for a kernel upgrade) the OSDs
do not seem to shut down correctly. Clients hang and ceph osd tree shows
the OSDs of that node still up. Repeated runs of ceph osd tree show
them going down after a whi
Hi!
We are seeing a strange - and problematic - behavior in our 0.94.1
cluster on Ubuntu 14.04.1. We have 5 nodes, 4 OSDs each.
When rebooting one of the nodes (e. g. for a kernel upgrade) the OSDs
do not seem to shut down correctly. Clients hang and ceph osd tree shows
the OSDs of that node stil
> On 23.06.2015, at 14:13, Gregory Farnum wrote:
>
> ...
> On the other hand, there are lots of administrative tasks that can run
> and do something like this. The CERN guys had a lot of trouble with
> some daemon which wanted to scan the OSD's entire store for tracking
> changes, and was instal
Hi!
Recently over a few hours our 4 Ceph disk nodes showed unusually high
and somewhat constant iowait times. Cluster runs 0.94.1 on Ubuntu
14.04.1.
It started on one node, then - with maybe 15 minutes delay each - on the
next and the next one. Overall duration of the phenomenon was about 90
min
On 2015-06-18 09:53:54 +, Joao Eduardo Luis said:
Setting 'mon debug = 0/5' should be okay. Unless you see that setting
'/5' impacts your performance and/or memory consumption, you should
leave that be. '0/5' means 'output only debug 0 or lower to the logs;
keep the last 1000 debug level 5
On 2015-06-17 18:52:51 +, Somnath Roy said:
This is presently written from log level 1 onwards :-)
So, only log level 0 will not log this..
Try, 'debug_mon = 0/0' in the conf file..
Yeah, once I had sent the mail I realized that "1" in the log line was
the level. Had overlooked that befor
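For completeness, I assume the same can also be applied at runtime, roughly
like this (untested on my side; monitor name as in the log lines from my
original mail):

  # lower the mon debug level on the fly, one monitor at a time
  ceph tell mon.node02 injectargs '--debug_mon 0/0'
  # plus "debug mon = 0/0" under [mon] in ceph.conf to make it permanent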
Hi!
I am seeing our monitor logs filled over and over with lines like these:
2015-06-17 20:29:53.621353 7f41e48b1700 1 mon.node02@1(peon).log
v26344529 check_sub sending message to client.? 10.102.4.11:0/1006716
with 1 entries (version 26344529)
2015-06-17 20:29:53.621448 7f41e48b1700 1 mon.
erely a timing related assumption. Could it be that the RGW
code causes OSDs to fail either from the big upload alone or in conjunction
with other parallel requests? We are seeing more crashes, though, than large
uploads, so this remains a guess at best.
We created this http://tracker.cep
On 2015-05-16 04:13:57 +, Tuomas Juntunen said:
Hey Pavel
Could you share your C program and the process by which you were able to fix
the images?
Thanks
Tuomas
Pavel,
That would indeed be invaluable!
Thank you very much in advance!
Daniel
Hello!
In our cluster we had a nasty problem recently due to a very large
number of buckets for a single RadosGW user.
The bucket limit was disabled earlier, and the number of buckets grew
to the point where OSDs started to go down due to excessive access
times, missed heartbeats etc.
We hav
Hello!
I am wondering if there is a limit to the number of (Swift) users that
should be observed when using RadosGW.
For example, if I were to offer storage via S3 or Swift APIs with Ceph
and RGW as the backing implementation and people could just sign up
through some kind of public website, n
Hi!
I am trying to understand the values in ceph -w, especially those
regarding throughput(?) at the end:
2015-05-15 00:54:33.333500 mon.0 [INF] pgmap v26048646: 17344 pgs:
17344 active+clean; 6296 GB data, 19597 GB used, 155 TB / 174 TB avail;
6023 kB/s rd, 549 kB/s wr, 7564 op/s
2015-05-15
On 2015-05-14 21:04:06 +, Daniel Schneller said:
On 2015-04-23 19:39:33 +, Sage Weil said:
On Thu, 23 Apr 2015, Pavel V. Kaygorodov wrote:
Hi!
I have copied two of my pools recently, because the old ones had too many PGs.
Both of them contain RBD images, with 1 GB and ~30 GB of data
You should be able to do just that. We recently upgraded from Firefly
to Hammer like that. Follow the order described on the website.
Monitors, OSDs, MDSs.
Notice that the Debian packages do not restart running daemons, but
they _do_ start daemons that are not already running. So say for some reason before you
On 2015-04-23 19:39:33 +, Sage Weil said:
On Thu, 23 Apr 2015, Pavel V. Kaygorodov wrote:
Hi!
I have copied two of my pools recently, because the old ones had too many PGs.
Both of them contain RBD images, with 1 GB and ~30 GB of data.
Both pools were copied without errors, RBD images are mounta
Bump...
On 2015-03-03 10:54:13 +, Daniel Schneller said:
Hi!
After realizing the problem with log rotation (see
http://thread.gmane.org/gmane.comp.file-systems.ceph.user/17708)
and fixing it, I now for the first time have some
meaningful (and recent) logs to look at.
While from an
Hi!
After realizing the problem with log rotation (see
http://thread.gmane.org/gmane.comp.file-systems.ceph.user/17708)
and fixing it, I now for the first time have some
meaningful (and recent) logs to look at.
While from an application perspective there seem
to be no issues, I would like to und
On 2015-03-02 18:17:00 +, Gregory Farnum said:
I'm not very (well, at all, for rgw) familiar with these scripts, but
how are you starting up your RGW daemon? There's some way to have
Apache handle the process instead of Upstart, but Yehuda says "you
don't want to do it".
-Greg
Well, we in
On our Ubuntu 14.04/Firefly 0.80.8 cluster we are seeing
problem with log file rotation for the rados gateway.
The /etc/logrotate.d/radosgw script gets called, but
it does not work correctly. It spits out this message,
coming from the postrotate portion:
/etc/cron.daily/logrotate:
reload:
On 2015-02-28 20:46:15 +, Gregory Farnum said:
Sounds good!
-Greg
On Sat, Feb 28, 2015 at 10:55 AM David
wrote:
Hi!
We did that a few weeks ago and it mostly worked fine.
However, on startup of one of the 4 machines, it got stuck
while starting OSDs (at least that's what the console
ou
On 2015-02-03 18:48:45 +, Alexandre DERUMIER said:
Debian deb package updates do not restart services.
(So I think it should be the same for Ubuntu.)
You need to restart the daemons in this order:
-monitor
-osd
-mds
-rados gateway
http://ceph.com/docs/master/install/upgrading-ceph/
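On our Ubuntu nodes that order would translate to roughly the following per
host - treat this as a sketch, the upstart job names are the ones shipped with
the Ubuntu packages:

  # one node at a time, waiting for HEALTH_OK in between
  sudo restart ceph-mon-all       # only where a monitor runs
  sudo restart ceph-osd-all
  sudo restart ceph-mds-all       # only if MDS daemons are in use
  sudo service radosgw restart    # only on gateway nodes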
Ju
Understood. Thanks for the details.
Daniel
On Tue, Feb 3, 2015 at 1:23 PM -0800, "Gregory Farnum" wrote:
On Tue, Feb 3, 2015 at 1:17 PM, John Spray wrote:
> On Tue, Feb 3, 2015 at 2:21 PM, Daniel Schneller
> wrote:
>> Now, say I wanted to put /baremetal
On 2015-02-03 18:19:24 +, Gregory Farnum said:
Okay, I've looked at the code a bit, and I think that it's not showing
you one because there isn't an explicit layout set. You should still
be able to set one if you like, though; have you tried that?
Actually, no, not yet. We were setting up
We have a CephFS directory /baremetal mounted as /cephfs via FUSE on our
clients.
There are no specific settings configured for /baremetal.
As a result, trying to get the directory layout via getfattr does not work
getfattr -n 'ceph.dir.layout' /cephfs
/cephfs: ceph.dir.layout: No such attribute
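If setting an explicit layout first is the way to go, I assume it would look
roughly like this (pool name made up, attribute names per the CephFS docs):

  # define a layout on the directory; files created underneath inherit it
  setfattr -n ceph.dir.layout.pool -v cephfs_data /cephfs
  # afterwards the layout should be readable again
  getfattr -n ceph.dir.layout /cephfs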
In the absence of other clues, you might want to try checking that the
network is coming up before ceph tries to mount.
Now that I think of it, that might just be it - I seem to recall a similar problem
with cifs mounts, despite having the _netdev option. I had to issue a mount in
/etc/network/if-up
Hi!
We have a CephFS directory /baremetal mounted as /cephfs via FUSE on
our clients.
There are no specific settings configured for /baremetal.
As a result, trying to get the directory layout via getfattr does not work
getfattr -n 'ceph.dir.layout' /cephfs
/cephfs: ceph.dir.layout: No such att
On 2015-02-02 16:09:27 +, Gregory Farnum said:
That said, for a point release it shouldn't matter what order stuff
gets restarted in. I wouldn't worry about it. :)
That is good to know. One follow-up then: If the packages trigger
restarts, they will most probably do so for *all* daemons vi
Hello!
We are planning to upgrade our Ubuntu 14.04.1 based cluster from Ceph
Firefly 0.80.7 to 0.80.8. We have 4 nodes, 12x4TB spinners each (plus
OS disks). Apart from the 12 OSDs per node, nodes 1-3 have MONs running.
The instructions on ceph.com say it is best to first restart the MONs,
t
Thanks for your input. We will see what we can find out
with the logs and how to proceed from there.
On 2014-12-01 10:03:35 +, Dan Van Der Ster said:
Which version of Ceph are you using? This could be related:
http://tracker.ceph.com/issues/9487
Firefly. I had seen this ticket earlier (when deleting a whole pool) and hoped
the backport of the fix would be available some time soon. I must
Hi!
We take regular (nightly) snapshots of our Rados Gateway Pools for
backup purposes. This allows us - with some manual pokery - to restore
clients' documents should they delete them accidentally.
The cluster is a 4 server setup with 12x4TB spinning disks each,
totaling about 175TB. We are run
On 2014-11-11 13:12:32 +, ವಿನೋದ್ Vinod H I said:
Hi,
I am having problems accessing rados gateway using swift interface.
I am using ceph firefly version and have configured a "us" region as
explained in the docs.
There are two zones "us-east" and "us-west".
us-east gateway is running on ho
To remove the max_bucket limit I used
radosgw-admin user modify --uid= --max-buckets=0
Off the top of my head, I think
radosgw-admin user info --uid=
will show you the current values without changing anything.
See also this thread I started about this topic a few weeks ago.
https://www.m
Apart from the current "there is a bug" part, is the idea to copy a
snapshot into a new pool a viable one for a full-backup-restore?
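For single objects, the manual pokery mentioned above boils down to something
like this today (purely illustrative; snapshot and object names made up):

  # list the nightly snapshots of the RGW data pool
  rados -p .rgw.buckets lssnap
  # read an object as of a given snapshot and put it back in place
  rados -p .rgw.buckets -s nightly-2014-11-30 get default.4711_doc.pdf /tmp/doc.pdf
  rados -p .rgw.buckets put default.4711_doc.pdf /tmp/doc.pdf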
On 2014-10-30 10:14:44 +, Dan van der Ster said:
Hi Daniel,
I can't remember if deleting a pool invokes the snap trimmer to do the
actual work deleting objects. But if it does, then it is most
definitely broken in everything except latest releases (actual dumpling
doesn't have the fix yet
Ticket created: http://tracker.ceph.com/issues/9941
Bump :-)
Any ideas on this? They would be much appreciated.
Also: Sorry for a possible double post, client had forgotten its email config.
On 2014-10-22 21:21:54 +, Daniel Schneller said:
We have been running several rounds of benchmarks through the Rados
Gateway. Each run creates
eph 0.80.7-1trusty
daniel.schneller@node01 [~] $
➜ uname -a
Linux node01 3.13.0-27-generic #50-Ubuntu SMP Thu May 15 18:06:16
UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Copying without the snapshot works. Should this work at least in
theory?
Thanks!
Daniel
--
Dan
We have been running several rounds of benchmarks through the Rados
Gateway. Each run creates several hundred thousand objects and similarly
many containers.
The cluster consists of 4 machines, 12 OSD disks (spinning, 4TB) — 48
OSDs total.
After running a set of benchmarks we renamed the pools us
samuel writes:
> Hi all,This issue is also affecting us (centos6.5 based icehouse) and,
> as far as I could read,
> comes from the fact that the path /var/lib/nova/instances (or whatever
> configuration path you have in nova.conf) is not shared. Nova does not
> see this shared path and therefore
k = 0) v6 338+0+336
> (3685145659 0 4232894755) 0x7f00e430f540 con 0x7f040c055ca0
>
> By looking at these logs it seems that there are only 8 pgs on the
> .rgw pool, if this is correct then you may want to change that
> considering your workload.
>
> Yehuda
>
&g
36.802077 10 setting object write_tag=default.78418684.983095
36.818727 2 req 983095:4.932579:swift:PUT
/swift/v1//version:put_obj:http status=201
==
--
Daniel Schneller
Mobile Development Lead
CenterDevice GmbH | Merscheider Straße 1
,
Daniel
--
Daniel Schneller
Mobile Development Lead
CenterDevice GmbH | Merscheider Straße 1 | 42699 Solingen | Deutschland
tel: +49 1754155711
daniel.schnel...@centerdevice.com
de opinion :)
Thanks!
Daniel
--
Daniel Schneller
Mobile Development Lead
CenterDevice GmbH | Merscheider Straße 1 | 42699 Solingen | Deutschland
tel: +49 1754155711
daniel.schnel...@centerdevice.com |
On 09 Sep 2014, at 21:43, Gregory Farnum wrote:
> Yehuda can talk about this with more expertise than I can, but I think
> it should be basically fine. By creating so many buckets you're
> decreasing the effectiveness of RGW's metadata caching, which means
> the initial lookup in a particular bu
we tune any parameters to alleviate this?
Any feedback would be very much appreciated.
Regards,
Daniel
--
Daniel Schneller
Mobile Development Lead
CenterDevice GmbH | Merscheider Straße 1 | 42699 Solingen
tel: +49 1754155711