On Fri, Feb 27, 2015 at 1:19 PM, Sage Weil sw...@redhat.com wrote:
On Fri, 27 Feb 2015, Haomai Wang wrote:
Anyway, this leads to a few questions:
- Who is interested in using Manila to attach CephFS to guest VMs?
Yeah, actually we are doing this
Thanks Mark for the results,
The default values seem to be quite reasonable indeed.
I also wonder whether CPU frequency can have an impact on latency or not.
I'm going to benchmark on dual Xeon 10-core 3.1 GHz nodes in the coming weeks;
I'll try to replay your benchmark to compare.
- Original Message -
Hi Nigel,
Yes, but I hadn't actually noticed that it gives OSD numbers as well. I
probably always used it before installation to list the disks :-)
Thanks, this is helpful.
Regards
Somnath
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
Commentary inline.
Note that when I talk about dependencies, I'm speaking as someone who does
distro packaging - and thus about what would require manual changes on the
packager's part versus the ability to specify general constraints.
Sage Weil wrote:
Hammer will most likely be v0.94[.x]. We're getting
On 27/02/2015 00:59, Yehuda Sadeh-Weinraub wrote:
- Original Message -
From: Loic Dachary l...@dachary.org
To: Sage Weil sw...@redhat.com, ceph-devel@vger.kernel.org
Sent: Thursday, February 26, 2015 3:38:31 PM
Subject: Re: ceph versions
Hi Sage,
I prefer Option D because
Hi Ceph,
ceph-workbench is a command-line script designed to bind together
various scripts I'm using when working on Ceph. A while back I thought it would
be a good idea to have such a Swiss Army knife to match the needs of all Ceph
developers, instead of scripts scattered in various places
Hi,
during the performance weekly meeting, I mentioned
my experiences concerning the transaction structure
for write requests at the level of the FileStore.
Such a transaction not only contains the OP_WRITE
operation to the object in the file system, but also
a series of OP_OMAP_SETKEYS and
On Thu, 26 Feb 2015, Michael Kuriger wrote:
I'd also like to set this up. I'm not sure where to begin. When you say
enabled by default, where is it enabled?
The civetweb frontend is built into the radosgw process, so for the most
part you just have to get radosgw started and configured. It
Thanks Sage for the quick reply!
-=Mike
On 2/26/15, 8:05 AM, Sage Weil sw...@redhat.com wrote:
On Thu, 26 Feb 2015, Michael Kuriger wrote:
I'd also like to set this up. I'm not sure where to begin. When you say
enabled by default, where is it enabled?
The civetweb frontend is built into
Sage Weil sweil at redhat.com writes:
Have you seen any problems? Any other feedback? The hope is to (vastly)
simplify deployment.
What about the memory footprint? In some cases the civetweb frontend
performs additional buffering to properly calculate the size of the content
and to attach
On 26/02/2015 15:26, Wyllys Ingersoll wrote:
Trying to run ceph-mds on a freshly installed firefly cluster with no
ceph FS created yet.
It consistently crashes upon startup. Below is debug output showing
the point of the crash. Something is obviously misconfigured or
broken but I'm at a loss
Trying to run ceph-mds on a freshly installed firefly cluster with no
ceph FS created yet.
It consistently crashes upon startup. Below is debug output showing
the point of the crash. Something is obviously misconfigured or
broken but I'm at a loss as to where the issue would be. Any ideas?
$
I'd also like to set this up. I'm not sure where to begin. When you say
enabled by default, where is it enabled?
Many thanks,
Mike
On 2/25/15, 1:49 PM, Sage Weil sw...@redhat.com wrote:
On Wed, 25 Feb 2015, Robert LeBlanc wrote:
We tried to get radosgw working with Apache + mod_fastcgi, but
Hi everyone,
I've posted an updated version of the richacl kernel patches and
user-space bits earlier today [1]. This should give anyone interested
in the topic a chance to get richacls up and running locally, on ext4,
to get a feeling of how they work. See the richacl homepage [2] for
Travis,
I backtracked and saw that we indeed added librados2 to the ceph-deploy purge
path. Sorry for the false alarm.
So, we will add a dependency check in the purge path like this (sketched below):
1. Check whether any component installed on the system depends on
librados2/librbd1 or not.
2. If not, remove
Hmm, we already observe these duplicate omap key sets from pglog operations.
And I think we need to resolve it in the upper layer; of course,
coalescing omap operations in FileStore is also useful.
@Somnath Do you do this dedup work in KeyValueStore already?
On Thu, Feb 26, 2015 at 10:28 PM, Andreas
On 02/26/2015 09:02 AM, Haomai Wang wrote:
Hmm, we already observe these duplicate omap key sets from pglog operations.
And I think we need to resolve it in the upper layer, of course,
Can we resolve this higher up easily? I am struck that it might be
easier to simply do the coalescing here.
This is the first (and possibly final) point release for Giant. Our focus
on stability fixes will be directed towards Hammer and Firefly.
We recommend that all v0.87 Giant users upgrade to this release.
Upgrading
---------
* Due to a change in the Linux kernel version 3.18 and the limits of
On 26/02/2015 17:19, Wyllys Ingersoll wrote:
OK, attached is the initial log, or at least the earliest log I can find.
Ah, now that I look more closely at the backtrace, I realise that
creation succeeded, but it is now failing on subsequent runs because it
can't find the metadata pool. I
On 26/02/2015 17:58, Wyllys Ingersoll wrote:
Yeah, I noticed that too, so I recreated both of those pools and it
still won't start. It crashes in a different place now, but still won't
start, even after running 'newfs'. Attached is the debug log output
when I start ceph-mds
...
common/Thread.cc:
On 26/02/2015 18:07, Wyllys Ingersoll wrote:
Here is 'ceph df', followed by mds dump and osd dump
Your MDS map is trying to use pool 0 for both data and metadata.
Firstly, these should be different pools. Secondly, you have no pool
0. You do have metadata and data pools with ids 6 and 7,
On Thu, 26 Feb 2015, Wido den Hollander wrote:
On 25-02-15 20:31, Sage Weil wrote:
Hey,
We are considering switching to civetweb (the embedded/standalone rgw web
server) as the primary supported RGW frontend instead of the current
apache + mod-fastcgi or mod-proxy-fcgi approach.
- Original Message -
From: Abhishek Dixit dixita...@gmail.com
To: ceph-devel ceph-devel@vger.kernel.org
Sent: Wednesday, February 25, 2015 8:35:40 PM
Subject: RGW : Transaction Id in response?
Hi,
I was doing a comparison of OpenStack Swift response headers and Ceph
RGW
Yeah, I noticed that too, so I recreated both of those pools and it
still won't start. It crashes in a different place now, but still won't
start, even after running 'newfs'. Attached is the debug log output
when I start ceph-mds
...
common/Thread.cc: In function 'int Thread::join(void**)' thread
OK, attached is the initial log, or at least the earliest log I can find.
On Thu, Feb 26, 2015 at 12:06 PM, John Spray john.sp...@redhat.com wrote:
On 26/02/2015 15:26, Wyllys Ingersoll wrote:
Trying to run ceph-mds on a freshly installed firefly cluster with no
ceph FS created yet.
It
On Thu, Feb 26, 2015 at 04:54:42PM +0100, Andreas Gruenbacher wrote:
Hi everyone,
I've posted an updated version of the richacl kernel patches and
user-space bits earlier today [1]. This should give anyone interested
in the topic a chance to get richacls up and running locally, on ext4,
to
I have created a couple of tests with various accounts and all seems
to work. Can you send me your wiki username so that I can check
permissions? If nothing else send me the content in an email and I'll
post the blueprint. Thanks.
On Wed, Feb 25, 2015 at 9:22 PM, Zhang, Jian jian.zh...@intel.com
Here is 'ceph df', followed by mds dump and osd dump
$ ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    1307T     1302T     5039G        0.38
POOLS:
    NAME          ID     USED     %USED     MAX AVAIL     OBJECTS
    ks3backup     3      160M     0         0
Excellent, thanks.
It starts now without crashing, but I see lots of errors like this:
2015-02-26 08:24:19.134693 7f78091aa700 0 cephx: verify_reply
couldn't decrypt with error: error decoding block for decryption
2015-02-26 08:24:19.134695 7f78091aa700 0 -- 10.2.3.33:6800/5236
Thanks, we were able to get it up and running very quickly. If it
performs well, I don't see any reason to use Apache + fastcgi. I don't
have any problems just focusing on civetweb.
On Wed, Feb 25, 2015 at 2:49 PM, Sage Weil sw...@redhat.com wrote:
On Wed, 25 Feb 2015, Robert LeBlanc wrote:
We
On Feb 26, 2015, at 18:22, Sage Weil sw...@redhat.com wrote:
On Thu, 26 Feb 2015, Wido den Hollander wrote:
On 25-02-15 20:31, Sage Weil wrote:
Hey,
We are considering switching to civetweb (the embedded/standalone rgw web
server) as the primary supported RGW
You are correct, my keys were not configured correctly. All are
healthy and working now, thanks for your help.
-Wyllys
On Thu, Feb 26, 2015 at 1:32 PM, John Spray john.sp...@redhat.com wrote:
On 26/02/2015 18:27, Wyllys Ingersoll wrote:
Excellent, thanks.
It starts now without crashing,
Robert --
We are still having trouble with this.
Can you share your [client.radosgw.gateway] section of ceph.conf and
were there any other special things to be aware of?
-- Tom
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On
[client.radosgw.gateway]
host = radosgw1
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw print continue = false
rgw enable ops log = false
rgw ops log rados = false
rgw ops
+1 for proxy. Keep civetweb lean and mean and if people need
extras let the proxy handle them. Proxies are easy to set up, and a
simple example could be included in the documentation (see the sketch below).
On Thu, Feb 26, 2015 at 11:43 AM, Wido den Hollander w...@42on.com wrote:
On Feb 26, 2015, at 18:22,
Hi Axel,
On Thu, 26 Feb 2015, Axel Dunkel wrote:
Sage,
we use apache as a filter for security and additional functionality
reasons. I do like the idea, but we'd need some kind of interface to
filter/modify/process requests.
Civetweb has some basic functionality here:
On 26/02/2015 18:27, Wyllys Ingersoll wrote:
Excellent, thanks.
It starts now without crashing, but I see lots of errors like this:
2015-02-26 08:24:19.134693 7f78091aa700 0 cephx: verify_reply
couldn't decrypt with error: error decoding block for decryption
2015-02-26 08:24:19.134695
Sage,
we use apache as a filter for security and additional functionality
reasons. I do like the idea, but we'd need some kind of interface to
filter/modify/process requests.
Best regards
Axel Dunkel
-Original Message-
Von: ceph-devel-ow...@vger.kernel.org
Hi,
Is there any way to know which OSD maps to which drive of a host from an
admin node?
We know from a command like 'ceph osd tree' how to identify the host an OSD
belongs to, but not the disk. The workaround I have is to log in to the
corresponding node and check the mount points to
On Thu, 26 Feb 2015, Somnath Roy wrote:
Hi,
Is there any way to know which OSD maps to which drive of a host from
an admin node? We know from a command like 'ceph osd tree' how to
identify the host an OSD belongs to, but not the disk. The
workaround I have is to log in to the
Thanks !
inline
-Original Message-
From: Sage Weil [mailto:s...@newdream.net]
Sent: Thursday, February 26, 2015 1:20 PM
To: Somnath Roy
Cc: Ceph Development
Subject: Re: Some usability question
On Thu, 26 Feb 2015, Somnath Roy wrote:
Hi,
Is there any way to know which OSD to map to
On Feb 20, 2015, at 06:23, John Spray john.sp...@redhat.com wrote:
Background: a while ago, we found (#10277) that the existing cache expiration
mechanism wasn't working with latest kernels. We used to invalidate the top
level dentries, which caused fuse to invalidate everything, but an
Hi Sage,
Is there any timeline around the switch, so that we can plan ahead for the
testing?
We are running apache + mod-fastcgi in production at scale (540 OSDs, 9 RGW
hosts) and it looks good so far. Although at the beginning we came across a
problem with a large volume of 500 errors, which
On 25-02-15 20:31, Sage Weil wrote:
Hey,
We are considering switching to civetweb (the embedded/standalone rgw web
server) as the primary supported RGW frontend instead of the current
apache + mod-fastcgi or mod-proxy-fcgi approach. Supported here means
both the primary platform the upstream
On 27/02/2015 8:05 AM, Somnath Roy wrote:
Is there any way to know which OSD maps to which drive of a host from an
admin node?
still requires an ssh login, but I guess you know about ceph-disk list?
- Original Message -
From: Loic Dachary l...@dachary.org
To: Sage Weil sw...@redhat.com, ceph-devel@vger.kernel.org
Sent: Thursday, February 26, 2015 3:38:31 PM
Subject: Re: ceph versions
Hi Sage,
I prefer Option D because it's self explanatory. We could also drop the
names. I
Hi everyone,
The online Ceph Developer Summit is next week[1] and among other things
we'll be talking about how to support CephFS in Manila. At a high level,
there are basically two paths:
1) Ganesha + the CephFS FSAL driver
- This will just use the existing ganesha driver without
Yehuda Sadeh-Weinraub wrote:
- Original Message -
From: Loic Dachary l...@dachary.org
To: Sage Weil sw...@redhat.com, ceph-devel@vger.kernel.org
Sent: Thursday, February 26, 2015 3:38:31 PM
Subject: Re: ceph versions
Hi Sage,
I prefer Option D because it's self explanatory.
Hi Sage,
I prefer Option D because it's self explanatory. We could also drop the names.
I became attached to them, but they are confusing to new users, who are
required to remember that firefly is 0.80, giant is 0.87, etc.
Cheers
On 27/02/2015 00:12, Sage Weil wrote:
-- Option D -- labeled
[sorry for ceph-devel double-post, forgot to include openstack-dev]
Hi everyone,
The online Ceph Developer Summit is next week[1] and among other things
we'll be talking about how to support CephFS in Manila. At a high level,
there are basically two paths:
1) Ganesha + the CephFS FSAL driver
Hammer will most likely be v0.94[.x]. We're getting awfully close to
0.99, though, which makes many people think 1.0 or 1.00 (instead of
0.100), and the current versioning is getting a bit silly. So let's talk
about alternatives!
Here are a few options:
-- Option A -- doubles and triples
Can I also throw another option out there?
OpenStack uses a version scheme tied to the year of release [1].
Looking back at past releases [2], we can see that for example Icehouse
was the first release of 2014: 2014.1.
Icehouse eventually had stable releases which were versioned 2014.1.1,
On Fri, Feb 27, 2015 at 8:01 AM, Sage Weil sw...@redhat.com wrote:
Hi everyone,
The online Ceph Developer Summit is next week[1] and among other things
we'll be talking about how to support CephFS in Manila. At a high level,
there are basically two paths:
1) Ganesha + the CephFS FSAL
Hi,
One more issue I faced in the case of cache tiering.
I couldn't find any list/show command that will show the existing cache tiers
in the cluster along with their base pools.
Am I missing anything?
-Original Message-
From: Somnath Roy
Sent: Thursday, February 26, 2015 4:07 PM
To: 'Nigel
Hello cephalopods, I use s3-tests to test S3Proxy[1], Apache jclouds,
and an internal project. While s3-tests has good functionality as it
exists, the project has not progressed much over the last six months. I
have submitted over 20 pull requests to fix incorrect tests and for
additional test