Hello.
After some furious "ceph-deploy osd prepare/osd zap" cycles to figure out the
correct ceph-deploy command for creating a bluestore OSD on an HDD with wal/db on an SSD,
I now have orphan OSDs, which are nowhere to be found in the CRUSH map!
$ ceph health detail
HEALTH_WARN 4 osds exist in the crush map but
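For what it's worth, if these turn out to be CRUSH-only leftovers from the
prepare/zap cycles, the usual cleanup would be something like (the id is just an
example, use whatever the health detail lists):
$ ceph osd crush remove osd.4
plus "ceph auth del osd.4" if a stale auth key was left behind as well.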
Hi,
I'm currently trying to set up a brand new home cluster:
- 5 nodes, each with:
- 1 HCA Mellanox ConnectX-2
- 1 GB Ethernet (Proxmox 5.1 Network Admin)
- 1 CX4 to CX4 cable
All connected together to an SDR Flextronics IB switch.
This setup should back a Ceph Luminous (V12.2.2 included in
I can see that the IO/read ops come from the pool where we
store VM volumes, but I can't trace this issue to a particular volume.
You can use this script
https://github.com/cernceph/ceph-scripts/blob/master/tools/rbd-io-stats.pl
This is for filestore only. I adapted it to use bluestore for
Hi,
my Ceph cluster is running Jewel on CentOS 7.3, kernel 3.10,
while our business application runs on CentOS 6.8, kernel 2.6.32, and wants to use RBD.
Is it OK to use Hammer on the client?
Or which version of Ceph should be installed on the client?
Thanks
13605702...@163.com
Er, yeah, I didn't read before I replied. That's fair, though it is
only some of the integration test binaries that tax that limit in a
single compile step.
On Mon, Dec 18, 2017 at 4:52 PM, Peter Woodman wrote:
> not the larger "intensive" instance types! they go up to 128gb
Not the larger "intensive" instance types! They go up to 128 GB of RAM.
On Mon, Dec 18, 2017 at 4:46 PM, Ean Price wrote:
> The problem with the native build on armhf is the compilation exceeds the 2
> GB of memory that ARMv7 (armhf) supports. Scaleway is pretty awesome but
>
Hi all,
Although undocumented, I just tried:
"rbd -p rbd copy disk1 disk1ec --data-pool ecpool"
And it worked! :)
The copy is now on the erasure coded pool.
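To double check where the data ended up, something like
$ rbd info disk1ec
should report the data pool for the image (on Luminous it prints a data_pool
line for images backed by an erasure coded pool, if I remember correctly).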
Kind regards,
Caspar
2017-12-18 22:32 GMT+01:00 Caspar Smit :
> Hi all,
>
>
The problem with the native build on armhf is that the compilation exceeds the 2 GB
of memory that ARMv7 (armhf) supports. Scaleway is pretty awesome, but their 32
bit ARM systems have the same 2 GB limit. I haven't tried the cross-compile on
the 64 bit ARMv8 they offer and that might be easier than
https://www.scaleway.com/
They rent access to ARM servers with gobs of RAM.
I've been building my own, but with some patches (removal of some
asserts that were unnecessarily causing crashes while I try to track
down the bug) that make it unsuitable for public consumption
On Mon, Dec 18, 2017
I have no idea what this response means.
I have tried building the armhf and arm64 packages on my Raspberry Pi 3 to
no avail. Would love to see someone post Debian packages for Stretch on
arm64 or armhf.
On Dec 18, 2017 4:12 PM, "Peter Woodman" wrote:
> YMMV, but I've been
Hello,
I tried to install Ceph 12.2.2 (Luminous) on Ubuntu 16.04.3 LTS (kernel
4.4.0-104-generic), but I am having trouble starting the radosgw service:
# systemctl status ceph-radosgw@rgw.ceph-rgw1
● ceph-radosgw@rgw.ceph-rgw1.service - Ceph rados gateway
Loaded: loaded
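If it helps, "journalctl -u ceph-radosgw@rgw.ceph-rgw1" (unit name as in the
status output above) is usually the quickest way to see the actual error the
service dies with.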
Hi all,
http://ceph.com/community/new-luminous-erasure-coding-rbd-cephfs/
Since it is possible in Luminous to use RBD directly on erasure coded pools,
the question arises how I can migrate an RBD image from a replicated pool
to an erasure coded pool.
I've got two pools configured, one replicated
YMMV, but I've been using Scaleway instances to build packages for
arm64; AFAIK you should be able to run any armhf distro on those
machines as well.
On Mon, Dec 18, 2017 at 4:02 PM, Andrew Knapp wrote:
> I would also love to see these packages!!!
>
> On Dec 18, 2017 3:46
I would also love to see these packages!!!
On Dec 18, 2017 3:46 PM, "Ean Price" wrote:
Hi everyone,
I have a test cluster of armhf arch SoC systems running Xenial and Jewel
(10.2). I’m looking to do a clean rebuild with Luminous (12.2) but there
are no 32 bit armhf
Hi everyone,
I have a test cluster of armhf arch SoC systems running Xenial and Jewel
(10.2). I’m looking to do a clean rebuild with Luminous (12.2) but there are no
32 bit armhf binaries available. This is just a toy cluster and not in
production.
I have tried, unsuccessfully, to compile
James,
If your replication factor is 3, then for every 1 GB added, your GB avail
will decrease by 3 GB.
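For example, with the numbers you quoted below: 13680 GB raw / 3 = 4560 GB of
usable space, and the 682 GB shown as used corresponds to roughly 682 / 3 = ~227
GB of actual data (assuming all pools use size=3).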
Cary
-Dynamic
On Mon, Dec 18, 2017 at 6:18 PM, James Okken wrote:
> Thanks David.
> Thanks again Cary.
>
> If I have
> 682 GB used, 12998 GB / 13680 GB avail,
> then I
I think what happened is this:
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/
Note
Sometimes, typically in a “small” cluster with few hosts (for instance with
a small testing cluster), the fact to take out the OSD can spawn a CRUSH
corner case where some PGs remain stuck
Another strange thing I'm seeing is that two of the nodes in the cluster
have some OSDs with almost no activity. If I watch top long enough I'll
eventually see CPU utilization on these OSDs, but for the most part they sit
at 0% CPU utilization. I'm not sure if this is expected behavior or not
Hi,
If the problem is not severe and you can wait, then according to this:
http://ceph.com/community/new-luminous-pg-overdose-protection/
there is a pg merge feature coming.
Regards,
Denes.
On 12/18/2017 02:18 PM, Jens-U. Mozdzen wrote:
Hi *,
facing the problem to reduce the number of
Thanks David.
Thanks again Cary.
If I have
682 GB used, 12998 GB / 13680 GB avail,
then I still need to divide 13680/3 (my replication setting) to get what my
total storage really is, right?
Thanks!
James Okken
Lab Manager
Dialogic Research Inc.
4 Gatehall Drive
Parsippany
NJ 07054
USA
Tel:
A possible option. They do not recommend using cppool.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011460.html
**COMPLETELY UNTESTED AND DANGEROUS**
stop all MDS daemons
delete your filesystem (but leave the pools)
use "rados export" and "rados import" to do a full copy of the
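A rough sketch of that copy step could look like this (pool names and the dump
file path are just examples, and again: completely untested, try it on
throwaway data first):
$ rados -p cephfs_data export /mnt/scratch/cephfs_data.dump
$ rados -p cephfs_data_new import /mnt/scratch/cephfs_data.dump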
As that is a small cluster, I hope you still don't have a lot of
instances running...
You can add "admin socket" to the client configuration section and then
read performance information via that. IIRC that prints total bytes
and IOPS, but it should be simple to read/calculate the difference. This
will
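A minimal sketch of what that could look like (the socket path template and
names are examples, not from your setup):
[client]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
and then, on the client, against whichever .asok file shows up there:
$ ceph --admin-daemon /var/run/ceph/<that socket>.asok perf dump
The dump contains cumulative byte and op counters, so sampling it twice and
taking the difference gives per-client throughput and IOPS.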
Quoting Josef Zelenka (josef.zele...@cloudevelops.com):
> Hi everyone,
>
> we have recently deployed a Luminous(12.2.1) cluster on Ubuntu - three osd
> nodes and three monitors, every osd has 3x 2TB SSD + an NVMe drive for a
> blockdb. We use it as a backend for our Openstack cluster, so we store
Hi David,
Thanks for the info. The controller in the server (PERC H730) was just
replaced and the battery is at full health. Prior to replacing the
controller I was seeing very high iowait when running iostat, but I no
longer see that behavior - just apply latency when running ceph osd perf.
Since
Hi everyone,
we have recently deployed a Luminous(12.2.1) cluster on Ubuntu - three
osd nodes and three monitors, every osd has 3x 2TB SSD + an NVMe drive
for a blockdb. We use it as a backend for our OpenStack cluster, so we
store volumes there. In the last few days, the read op/s rose to
Hi,
On 12/18/2017 05:28 PM, Andre Goree wrote:
I'm working on setting up a cluster for testing purposes and I can't
seem to install luminos. All nodes are running Ubuntu 16.04.
[cephadmin][DEBUG ] Err:7 https://download.ceph.com/debian-luminos
xenial/main amd64 Packages
[cephadmin][DEBUG ]
You have a typo in your apt source.
It must be
https://download.ceph.com/debian-luminous/
not
https://download.ceph.com/debian-luminos/
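For reference, the resulting sources.list entry on Xenial should look roughly
like:
deb https://download.ceph.com/debian-luminous/ xenial main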
On Mon, Dec 18, 2017 at 7:58 PM, Andre Goree wrote:
> I'm working on setting up a cluster for testing purposes and I can't see
> to install
On 2017/12/18 11:28 am, Andre Goree wrote:
I'm working on setting up a cluster for testing purposes and I can't
seem to install luminos. All nodes are running Ubuntu 16.04.
[cephadmin][DEBUG ] Err:7 https://download.ceph.com/debian-luminos
xenial/main amd64 Packages
[cephadmin][DEBUG ] 404
I'm working on setting up a cluster for testing purposes and I can't seem
to install luminos. All nodes are running Ubuntu 16.04.
[cephadmin][DEBUG ] Err:7 https://download.ceph.com/debian-luminos
xenial/main amd64 Packages
[cephadmin][DEBUG ] 404 Not Found
[cephadmin][DEBUG ] Ign:8
Hi ceph-users!
I'm trying to integrate Swift in OpenStack with Ceph RGW (12.2.2) as a
backend.
I'm facing a problem with creating a bucket. I see return code -34. Has
anybody seen a similar issue? My config and log are below.
ceph.conf
rgw keystone verify ssl = false
rgw keystone accepted roles =
Hello!
According to the documentation at
http://docs.ceph.com/docs/master/radosgw/admin/#quota-management
there's a way to set the default quota for all RGW users; if I
understand it correctly, it'll apply the quota to all users created
after the default quota is set. For instance, I want to all
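If it helps, the settings that part of the docs refers to look something like
this in ceph.conf (the values here are made-up examples; sizes are in bytes):
rgw user default quota max size = 10737418240
rgw user default quota max objects = 100000
and as you say they should only affect users created after the setting is in
place.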
Thanks for the information, but I think that is not my case because I am using
only HDDs in my cluster.
From the command you provided I found that db_used_bytes is quite large, but I am
not sure how the DB used bytes relate to the amount of stored data and to
performance.
ceph daemon osd.0
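For context, those counters come from the OSD's admin socket, i.e. something
like:
$ ceph daemon osd.0 perf dump bluefs
where, as far as I understand it, db_used_bytes / db_total_bytes describe the
space BlueFS (RocksDB) occupies on the device, i.e. metadata rather than the
object data itself.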
Hi *,
facing the problem of reducing the number of PGs for a pool, I've found
various pieces of information and suggestions, but no "definitive guide" to handling
pool migration with Ceph 12.2.x. This seems to be a fairly common
problem when having to deal with "teen-age clusters", so consolidated
Hi,
We have a Ceph cluster running Luminous 12.2.2. It has a public network and a
cluster network configured.
The cluster provides services for two big groups of clients and some individual
clients.
One group uses RGW and the other uses RBD.
Ceph's public network and the two mentioned groups are located in
Hi,
On 17.12.2017 at 10:40, Martin Preuss wrote:
[...]
> is there a way to find out which files on CephFS are using a given
> pg? I'd like to check whether those files are corrupted...
[...]
Nobody? Any hint, maybe?
Failing checksums for no apparent reason seem to me like quite a serious
On 17-12-15 03:58 PM, Sage Weil wrote:
On Fri, 15 Dec 2017, Piotr Dałek wrote:
On 17-12-14 05:31 PM, David Turner wrote:
I've tracked this in a much more manual way. I would grab a random subset
[..]
This was all on a Hammer cluster. The changes to the snap trimming queues
going into the