Hi all,
We are working on the jewel branch on a test cluster to validate some of the
fixes, but ran into the following error when mapping an image using krbd on
Ubuntu 14.04.2 with kernel version 3.16.0-55.
$ sudo rbd map -p pool1 rbd1
rbd: sysfs write failed
rbd: map failed: (5) Input/output error
From: Ilya Dryomov [mailto:idryo...@gmail.com]
> Sent: Saturday, December 12, 2015 4:42 PM
> To: Varada Kari <varada.k...@sandisk.com>
> Cc: ceph-devel@vger.kernel.org
> Subject: Re: Rbd map failure in 3.16.0-55
>
> On Sat, Dec 12, 2015 at 7:56 AM, Varada Kari <varada.k...
bably still an XFS
> partition) vs the block storage. Maybe we do this anyway (put metadata on
> SSD!) so it won't matter. But what happens when we are storing gobs of rgw
> index data or cephfs metadata? Suddenly we are pulling storage out of a
> different pool and those aren't curren
Hi James,
Are you referring to SCSI OSD (http://www.t10.org/drafts.htm#OSD_Family)? If
SCSI OSD is what you mean, the drive has to support all of the OSD
functionality specified by T10.
If not, we have to implement the same functionality in the kernel, or have a
wrapper in user space to convert them
.*'
+#libceph_filestore_la_LDFLAGS += -export-symbols-regex '.*__ceph_plugin_.*'
endif
lib_LTLIBRARIES += libceph_filestore.la
Thanks,
Varada
> -Original Message-
> From: James (Fei) Liu-SSI [mailto:james@ssi.samsung.com]
> Sent: Tuesday, September 29, 2015 10:08 PM
> To: Varada Ka
if there is any problem.
And yes, I have not made any changes to cmake yet; once I have all the changes
for automake, I will make the cmake changes.
Varada
> -Original Message-
> From: James (Fei) Liu-SSI [mailto:james@ssi.samsung.com]
> Sent: Tuesday, September 29, 2015 11:02 PM
> To:
ada,
> Have you rebased the pull request to master already?
>
> Thanks,
> James
>
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Varada Kari
> Sent: Friday, September 11, 2015 3:28 AM
> To: Sage
3, 2015 2:05 AM
> To: Varada Kari <varada.k...@sandisk.com>; James (Fei) Liu-SSI
> <james@ssi.samsung.com>; Sage Weil <s...@newdream.net>; Matt W.
> Benjamin <m...@cohortfs.com>; Loic Dachary <l...@dachary.org>
> Cc: ceph-devel <ceph-deve
and let me know your comments.
Will submit a rebased PR soon with new store integration.
Thanks,
Varada
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Varada Kari
Sent: Friday, July 03, 2015 7:31 PM
To: Sage Weil &l
Would something like the erasure code plugins suffice here? Each plugin has
default parameters, which can be overridden with ceph.conf options.
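The default-plus-override pattern being suggested can be sketched roughly as below. This is an illustrative sketch only: the real erasure-code plugin machinery is C++, the profile keys ("plugin", "k", "m") mirror erasure-code profile names, and build_profile() is a made-up helper, not a Ceph API.

```python
# Plugin ships its own defaults; ceph.conf-style options win on conflict.
DEFAULT_PROFILE = {"plugin": "jerasure", "k": 2, "m": 1}

def build_profile(conf_overrides):
    """Start from the plugin's defaults and apply user overrides on top."""
    profile = dict(DEFAULT_PROFILE)
    profile.update(conf_overrides)
    return profile

# A user override of just "m" keeps the other defaults intact:
print(build_profile({"m": 2}))
```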
Varada
-Original Message-
From: Sage Weil [mailto:s...@newdream.net]
Sent: Friday, August 21, 2015 8:37 PM
To: Varada Kari varada.k
Hi all,
This is regarding generalizing ceph-disk to work with different OSD backends
like FileStore, KeyValueStore, NewStore etc.
All these object store implementations have different needs on the disk being
used for holding data and metadata.
Sage suggested in one of the pull
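One hypothetical shape for such a generalization is a per-backend descriptor of the on-disk pieces each object store needs, which ceph-disk could consult when preparing a device. The backend names come from this mail; the field names and partition lists below are guesses for illustration, not actual ceph-disk data structures.

```python
# Hypothetical per-backend disk layout table for a generalized ceph-disk.
BACKEND_LAYOUTS = {
    "FileStore":     {"needs_fs": True, "partitions": ["data", "journal"]},
    "KeyValueStore": {"needs_fs": True, "partitions": ["data"]},
    "NewStore":      {"needs_fs": True, "partitions": ["data", "db"]},
}

def partitions_for(backend):
    """Return the partitions this backend would expect ceph-disk to prepare."""
    return BACKEND_LAYOUTS[backend]["partitions"]

print(partitions_for("FileStore"))
```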
Hi all,
When I tried to build and install the packages on the hammer branch, I ran
into the issue mentioned below.
$dpkg -i ceph-common_0.94.2-1_amd64.deb
Selecting previously unselected package ceph-common.
(Reading database ... 57855 files and directories currently installed.)
Preparing to unpack
Thanks. There are no changes from my end; I just tried building the packages
to test internally with the latest code base.
Is 0.94-2.2 intentional?
Varada
-Original Message-
From: Ken Dreyer [mailto:kdre...@redhat.com]
Sent: Tuesday, August 18, 2015 3:42 AM
To: Varada Kari varada.k
(Adding devel list to the CC)
Hi Eric,
To add more context to the problem:
min_size was set to 1 and the replication size is 2.
There was a flaky power connection to one of the enclosures. With min_size 1,
we were able to continue the I/Os, and recovery became active once the power
came back. But
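To make the min_size behaviour above concrete, here is a small model of when a replicated PG keeps serving I/O. The naming is mine and this is a simplification, not Ceph's actual peering logic.

```python
def pg_io_state(active_replicas, size=2, min_size=1):
    """Replicated-pool availability in a nutshell: client I/O continues as
    long as at least min_size replicas are up; below that, I/O blocks until
    replicas return and recovery can proceed. Illustrative model only."""
    if active_replicas >= size:
        return "active+clean"
    if active_replicas >= min_size:
        return "active+degraded"   # I/O continues; recovery runs when peers return
    return "down"                  # I/O blocks

# With size=2, min_size=1 (the setup in this thread), losing one enclosure
# still leaves the PG serving I/O:
print(pg_io_state(1))
```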
Sure. Will break them into multiple patches for easy review and merge.
Varada
-Original Message-
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Friday, July 10, 2015 1:57 PM
To: Varada Kari
Cc: Sage Weil; Samuel Just; ceph-devel
Subject: Re: Patches for review on keyvaluestore
I
Hi Sage/Sam/Haomai,
Sent pull requests for two enhancements to the key value store. Can you please
review the changes?
https://github.com/ceph/ceph/pull/5136
Contains changes for short-circuiting op_wq and osr in the key value store. We
have observed good performance gains with the above pull
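The "short-circuiting op_wq" idea can be sketched as a toy model: when nothing is queued or in flight, run the op inline and skip the enqueue/dequeue/worker-wakeup handoff. This is illustrative only; the names do not match the actual KeyValueStore C++ code.

```python
class ShortCircuitQueue:
    """Toy model of bypassing a work queue when it is idle."""
    def __init__(self):
        self.in_flight = 0
        self.pending = []

    def submit(self, op):
        if self.in_flight == 0 and not self.pending:
            self.in_flight += 1          # short-circuit path: run inline
            try:
                return op()
            finally:
                self.in_flight -= 1
        self.pending.append(op)          # fall back to the queue
        return None
```

The win in such a scheme comes from skipping the thread handoff entirely on the common idle-queue path, while preserving ordering when ops are already pending.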
(0x7f5e49c12000)
Edited the above output to show just the dependencies.
Did anyone face this issue before?
Any help would be much appreciated.
Thanks,
Varada
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Varada Kari
Sent
-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Varada Kari
Sent: Monday, June 22, 2015 8:57 PM
To: Matt W. Benjamin
Cc: Loic Dachary; ceph-devel; Sage Weil
Subject: RE: loadable objectstore
Hi Matt,
Majority of the changes are segregating the files
, and I think we are on the
way to making them on-demand (if possible) loadable modules.
Varada
-Original Message-
From: Matt W. Benjamin [mailto:m...@cohortfs.com]
Sent: Monday, June 22, 2015 8:37 PM
To: Varada Kari
Cc: Loic Dachary; ceph-devel; Sage Weil
Subject: Re: loadable objectstore
Hi
implementation?
We can extend this implementation to the messenger and different backends of
the OSD. Will make those changes incremental once the base framework is approved.
Thanks,
Varada
-Original Message-
From: Sage Weil [mailto:s...@newdream.net]
Sent: Tuesday, May 12, 2015 5:11 AM
To: Varada Kari
Hi,
We are trying to build Ceph on RHEL 7.1, but are facing some issues with the
build on the Giant branch.
We enabled the RedHat server RPMs and RedHat Ceph storage RPM channels along
with optional, extras and supplementary, but we are not able to find the
gperftools, leveldb and yasm RPMs in the channels.
Thanks Sage and Loic. Will add the missing pieces and integrate the object
factory and plugin changes.
Varada
-Original Message-
From: Sage Weil [mailto:s...@newdream.net]
Sent: Saturday, April 11, 2015 9:19 PM
To: Loic Dachary
Cc: Varada Kari; Matt W. Benjamin; ceph-devel
Subject: Re
: Varada Kari; ceph-devel
Subject: Re: loadable objectstore
On Fri, 10 Apr 2015, Matt W. Benjamin wrote:
Hi Varada,
I pushed branch hammer-osfactory to
https://github.com/linuxbox2/ceph.git
It can at least provide a starting point for discussion, if not a
jumping off point; I do think it's
Kari
Cc: Ceph Development
Subject: Re: Adding a proprietary key value store to CEPH
Hi Varada,
On Tue, 24 Feb 2015, Varada Kari wrote:
Hi Sage,
We are trying to integrate a new proprietary key value store into Ceph. To
integrate this KV store, which is a closed-source shared library, we propose
Hi Matt,
Can you please upstream the implementation you are mentioning?
Varada
-Original Message-
From: Matt W. Benjamin [mailto:m...@cohortfs.com]
Sent: Thursday, February 26, 2015 5:26 AM
To: Somnath Roy
Cc: Varada Kari; Ceph Development; Sage Weil
Subject: Re: Adding a proprietary
Sure Loic. Will attend the CDS next week.
Thanks,
Varada
-Original Message-
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Wednesday, February 25, 2015 2:49 AM
To: Varada Kari; Miyamae, Takeshi
Cc: Ceph Development
Subject: generic plugin support
Hi,
I proposed a blueprint to add
Hi Sage,
Thanks. Will wait for your reply.
Meanwhile, Can I submit a blue print for this for next release?
Thanks,
Varada
-Original Message-
From: Sage Weil [mailto:s...@newdream.net]
Sent: Tuesday, February 24, 2015 10:21 PM
To: Varada Kari
Cc: Ceph Development
Subject: Re: Adding
Hi Sage,
We are trying to integrate a new proprietary key value store into Ceph. To
integrate this KV store, which is a closed-source shared library, we propose a
new class in Ceph called PropDBStore, which does a dlopen and imports the
required symbols. This framework will help in integrating
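The dlopen-and-import pattern proposed here can be sketched in a few lines. Since the vendor library is not public, this uses libm as a stand-in for the closed-source KV library; ctypes performs the dlopen()/dlsym() underneath.

```python
import ctypes
import ctypes.util

# Load a shared library at runtime and bind one of its symbols, as the
# proposed PropDBStore would do with the proprietary KV library's .so.
libname = ctypes.util.find_library("m") or "libm.so.6"
lib = ctypes.CDLL(libname)

# Declare the signature of an imported symbol before calling it.
lib.cos.restype = ctypes.c_double
lib.cos.argtypes = [ctypes.c_double]

print(lib.cos(0.0))  # 1.0
```

In the actual proposal this would be done in C++ with dlopen()/dlsym() directly; the point is the same: the required symbols are resolved at load time, so the closed-source library never needs to be linked into the Ceph build.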
Hi Loic,
Yes, the db is designed to optimize workloads on flash backends, and uses only
standard interfaces and system calls to achieve that.
Varada
-Original Message-
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Tuesday, February 24, 2015 9:57 PM
To: Somnath Roy; Varada Kari
Hi all,
We have a test cluster with ~500 OSDs. This cluster has multiple rbd images
mapped to multiple client machines. But for the past two days, only one rbd
image was serving I/Os; the rest of the images were not used.
We are running the cluster with debug 1/1 for most of the submodules. But
I am not sure if RocksDB/LevelDB can work on a raw device. When I looked at
the code, they were writing to a mount point/directory.
Varada
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Haomai Wang
Sent: Friday, October
Yes, these are recent changes from John. Because of these changes:
commit 90e6daec9f3fe2a3ba051301ee50940278ade18b
Author: John Spray john.sp...@inktank.com
Date: Tue Apr 29 15:39:45 2014 +0100
osdmap: Don't create FS pools by default
Because many Ceph users don't use the