On 03/06/2013 12:10 AM, Sage Weil wrote:
There have been a few important bug fixes that people are hitting or
want:
- the journal replay bug (5d54ab154ca790688a6a1a2ad5f869c17a23980a)
- the '-' vs '_' pool-name cap-parsing issue that is biting OpenStack users
- ceph-disk-* changes to support latest
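For reference, the cap-parsing issue bites caps of roughly this shape (a
hypothetical example; the client and pool names are illustrative, though
OpenStack deployments commonly use hyphenated pool names like this):

$ ceph auth get-or-create client.cinder mon 'allow r' \
      osd 'allow rwx pool=cinder-volumes'

On affected builds the '-' in the pool name is mishandled by the cap
parser, so the client ends up denied access to its own pool.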
Hi,
We did the opposite here: we added some SSDs in free slots after having a
normal cluster running on SATA.
We just created a new pool for them and separated the two types. I
used this as a template:
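The template itself is truncated above; the usual recipe at the time looked
roughly like the sketch below, assuming a CRUSH hierarchy with a separate
'ssd' root and a matching rule (the ruleset id is illustrative):

$ ceph osd getcrushmap -o crushmap.bin        # dump the compiled CRUSH map
$ crushtool -d crushmap.bin -o crushmap.txt   # decompile it for editing
  ... edit crushmap.txt: add an 'ssd' root holding the SSD OSDs and a
      rule that takes from that root ...
$ crushtool -c crushmap.txt -o crushmap.new   # recompile
$ ceph osd setcrushmap -i crushmap.new        # inject the new map
$ ceph osd pool create ssd 128                # new pool, 128 PGs
$ ceph osd pool set ssd crush_ruleset 3       # point it at the SSD rule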
Hi, I did some tests to reproduce this problem.
As you can see, only one drive (each drive in the same PG) is much more
utilized than the others, and there are some ops queued on this slow
OSD. This test fetches the heads of S3 objects, alphabetically
sorted. This is strange: why are these files going in
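One way to confirm that the hot requests all land on the same OSD is to map
the objects by hand (pool and object names below are illustrative):

$ ceph osd map .rgw.buckets somebucket_someobject
  ... prints the PG and the OSDs that serve that object ...

If the requests keep resolving to the same PG (for instance because they
all touch a single bucket index object), that one OSD becomes the
bottleneck.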
Hi,
since I compile the Debian packages myself, ceph -v doesn't work.
I followed these steps:
- git clone XXX
- git checkout origin/bobtail
- dch -i
- dpkg-source -b ceph
- cowbuilder --build ceph*dsc
and I obtain :
root@okko:~# ceph -v
ceph version ()
root@okko:~#
or with
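A likely cause: the version string is embedded at build time from the git
metadata, and dpkg-source -b builds from an exported tree without .git. A
sketch of a workaround, assuming the build scripts fall back to a
src/.git_version file when .git is absent (the file name and format here
are an assumption, not verified against the bobtail tree):

$ git rev-parse HEAD > src/.git_version   # record the commit sha
$ git describe >> src/.git_version        # and the human-readable version
  ... then run dpkg-source -b / cowbuilder as before ...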
On Wed, Mar 6, 2013 at 5:06 AM, Sławomir Skowron szi...@gmail.com wrote:
Hi, I did some tests to reproduce this problem.
As you can see, only one drive (each drive in the same PG) is much more
utilized than the others, and there are some ops queued on this slow
OSD. This test fetches the heads of
Proxy-ing this in for a user I had a discussion with on irc this morning:
The question is: is there a way to display usable space based on
replication level?
Ultimately what would be nice is to see something like the following:
---
$: sudo ceph --usable-space
Total Space: X / Y
Total Space: X / Y || Usable Space: A / B
Would it be possible to add this in at some point? Seems like a great
addition to go with some of the other 'usability enhancements' that
are planned. Or would this get computationally sticky based on having
many pools with different replication
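Until something like that exists, a rough version can be assembled by hand
(the field wording is from the bobtail-era output and may differ by
version):

$ ceph df                        # raw cluster totals: total/used/available
$ ceph osd dump | grep '^pool'   # shows 'rep size N' for each pool
  ... usable space for a pool is then roughly raw available / rep size ...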
On 03/05/2013 08:33 PM, Sage Weil wrote:
On Tue, 5 Mar 2013, Wido den Hollander wrote:
Wido, by 'user quota' do you mean something that is uid-based, or would
enforcement on subtree/directory quotas be sufficient for your use cases?
I've been holding out hope that uid-based usage accounting is
You're aware of the just-added ceph df? I don't know it well enough to know if
it's a solution, but it's in that space...
On Mar 6, 2013, at 6:48 AM, Patrick McGarry patr...@inktank.com wrote:
Proxy-ing this in for a user I had a discussion with on irc this morning:
The question is: is
On 03/05/2013 12:33 PM, Sage Weil wrote:
Running 'du' on each directory would be much faster with Ceph since it
tracks the subdirectories and shows their total size with an 'ls -al'.
Environments with 100k users also tend to be very dynamic with adding and
removing users all
On Wednesday, March 6, 2013 at 11:07 AM, Jim Schutt wrote:
On 03/05/2013 12:33 PM, Sage Weil wrote:
Running 'du' on each directory would be much faster with Ceph since it
tracks the subdirectories and shows their total size with an 'ls -al'.
Environments with
I think the multi-site RGW stuff is somewhat orthogonal to OpenStack,
whereas the RBD backups need to factor in Horizon, the Cinder APIs, and
where the logic for managing the backups sits.
Ross is looking to get a wiki set up for Ceph blueprints so we can
document the incremental snapshot stuff and
On 03/06/2013 12:13 PM, Greg Farnum wrote:
On Wednesday, March 6, 2013 at 11:07 AM, Jim Schutt wrote:
On 03/05/2013 12:33 PM, Sage Weil wrote:
Running 'du' on each directory would be much faster with Ceph since it
tracks the subdirectories and shows their total size with an 'ls -al'.
On Wednesday, March 6, 2013 at 11:58 AM, Jim Schutt wrote:
On 03/06/2013 12:13 PM, Greg Farnum wrote:
Check out the directory sizes with ls -l or whatever — those numbers are
semantically meaningful! :)
That is just exceptionally cool!
Unfortunately we can't (currently) use
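Concretely, the recursive statistics can be read straight off a CephFS
mount; the mount point and directory below are illustrative:

$ ls -ld /mnt/ceph/mydir                        # size column = recursive bytes
$ getfattr -n ceph.dir.rbytes /mnt/ceph/mydir   # same number as a virtual xattr
$ getfattr -n ceph.dir.rfiles /mnt/ceph/mydir   # recursive file count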
On 03/06/2013 01:21 PM, Greg Farnum wrote:
Also, this issue of stat on files created on other clients seems
like it's going to be problematic for many interactions our users
will have with the files created by their parallel compute jobs -
any suggestion on how to avoid or fix it?
Great, thanks. Now I understand everything.
Best Regards
SS
On 6 Mar 2013, at 15:04, Yehuda Sadeh yeh...@inktank.com wrote:
On Wed, Mar 6, 2013 at 5:06 AM, Sławomir Skowron szi...@gmail.com wrote:
Hi, I did some tests to reproduce this problem.
As you can see, only one drive (each
On Wednesday, March 6, 2013 at 1:28 PM, Jim Schutt wrote:
On 03/06/2013 01:21 PM, Greg Farnum wrote:
Also, this issue of stat on files created on other clients seems
like it's going to be problematic for many interactions our users
will have with the files created by their parallel
On Wed, 6 Mar 2013, Greg Farnum wrote:
'ls -lh dir' seems to be just the thing if you already know dir.
And it's perfectly suitable for our use case of not scheduling
new jobs for users consuming too much space.
I was thinking I might need to find a subtree where all the
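For the scheduling case, a check along these lines would do (the path and
the limit are illustrative):

$ rbytes=$(getfattr --only-values -n ceph.dir.rbytes /mnt/ceph/users/alice)
$ [ "$rbytes" -gt 10737418240 ] && echo "over quota: hold new jobs"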
In 4f6a7e5ee1393ec4b243b39dac9f36992d161540 we effectively dropped support
for the legacy encoding for the OSDMap and incremental. However, we didn't
fix the decoding for the pgid.
Signed-off-by: Sage Weil s...@inktank.com
---
net/ceph/osdmap.c | 40 +++-
1
Hi Neil,
On 03/06/2013 08:27 PM, Neil Levine wrote:
I think the multi-site RGW stuff is somewhat orthogonal to OpenStack,
Even when Keystone is involved?
whereas the RBD backups need to factor in Horizon, the Cinder APIs, and
where the logic for managing the backups sits.
Ross is looking to
On Wed, Mar 6, 2013 at 2:15 PM, Sage Weil s...@inktank.com wrote:
In 4f6a7e5ee1393ec4b243b39dac9f36992d161540 we effectively dropped support
for the legacy encoding for the OSDMap and incremental. However, we didn't
fix the decoding for the pgid.
Signed-off-by: Sage Weil s...@inktank.com
On Wed, 6 Mar 2013, Yehuda Sadeh wrote:
On Wed, Mar 6, 2013 at 2:15 PM, Sage Weil s...@inktank.com wrote:
In 4f6a7e5ee1393ec4b243b39dac9f36992d161540 we effectively dropped support
for the legacy encoding for the OSDMap and incremental. However, we didn't
fix the decoding for the pgid.
On Wed, Mar 6, 2013 at 2:45 PM, Loic Dachary l...@dachary.org wrote:
Hi Neil,
On 03/06/2013 08:27 PM, Neil Levine wrote:
I think the multi-site RGW stuff is somewhat orthogonal to OpenStack
Even when Keystone is involved?
Good question.
Yehuda: how would the asynchronously replicated user
So I've been playing with the ObjectOperationCompletion code a bit. It seems to
be really important to be able to handle decoding errors in the
handle_completion() callback. In particular, I'd like to be able to reach out
and set the return value the user will see in the AioCompletion.
Any
On Wednesday, March 6, 2013 at 3:14 PM, Jim Schutt wrote:
When I'm doing these stat operations the file system is otherwise
idle.
What's the cluster look like? This is just one active MDS and a couple hundred
clients?
What is happening is that once one of these slow stat operations
on a
On Mar 6, 2013, at 5:57 PM, Noah Watkins jayh...@cs.ucsc.edu wrote:
The MDS process in my cluster is running at 100% CPU. In fact I thought the
cluster had come down, but it was rather that an ls was taking a minute. There
aren't any clients active. I've left the process running in case there is any
Which looks to be in a tight loop in the memory model _sample…
(gdb) bt
#0 0x7f0270d84d2d in read () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x7f027046dd88 in std::__basic_file<char>::xsgetn(char*, long) () from
/usr/lib/x86_64-linux-gnu/libstdc++.so.6
#2 0x7f027046f4c5 in
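The MDS memory model samples the process's own /proc data (an inference
from the ifstream read() in the backtrace, not verified here); the same
numbers can be pulled by hand while the MDS spins:

$ grep -E 'Vm(Size|RSS)' /proc/$(pidof ceph-mds)/status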