I've done something similar. I used a process like this:
ceph osd set noout
ceph osd set nodown
ceph osd set nobackfill
ceph osd set norebalance
ceph osd set norecover
Then I did my work to manually remove/destroy the OSDs I was replacing,
brought the replacements online, and unset all of those o
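To make that last step concrete, the flag cleanup would look something like this (unsetting in reverse order is a habit, not a requirement):

```shell
# Once the replacement OSDs are up and in, clear the maintenance
# flags so peering, recovery and backfill can proceed normally.
ceph osd unset norecover
ceph osd unset norebalance
ceph osd unset nobackfill
ceph osd unset nodown
ceph osd unset noout
```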
Hello,
I have a cluster of 3 nodes, 3 OSDs per node (so 9 OSDs in total),
replication set to 3 (so each node has a copy).
For some reason, I would like to recreate node 1. What I have done:
1. out the 3 OSDs of node 1, stop them, then destroy them (at almost the
same time)
2. recreate the new
No, it doesn't. In fact, I'm not aware of any client that sets this
flag; I think it's more for custom applications.
Paul
2018-09-18 21:41 GMT+02:00 Kevin Olbrich :
> Hi!
>
> is the compressible hint / incompressible hint supported on qemu+kvm?
>
> http://docs.ceph.com/docs/mimic/rados/configura
Hi!
is the compressible hint / incompressible hint supported on qemu+kvm?
http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/
If not, only aggressive would work in this case for rbd, right?
Kind regards
Kevin
Hello there,
I'm trying to reduce the recovery impact on client operations, using mclock
for this purpose. I've tested different weights for the queues but didn't see
any impact on real performance.
ceph version 12.2.8 luminous (stable)
Last tested config:
"osd_op_queue": "mclock_opclass",
"
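For anyone tuning the same thing: the per-class mclock weights in Luminous are separate OSD options. The option names below are assumptions from memory, so verify them against your own `config show` output before relying on them:

```shell
# Hedged sketch: raise client-op weight relative to recovery weight.
# Option names are assumptions; confirm them on your version first.
ceph tell osd.* injectargs \
    '--osd_op_queue_mclock_client_op_wgt=500.0 --osd_op_queue_mclock_recov_wgt=1.0'

# Inspect what the OSD is actually running with:
ceph daemon osd.0 config show | grep mclock
```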
Title: Ceph OSD fails to startup with bluefs error
Content:
The crash has happened three times with the same reason:
direct_read_unaligned .. error(5) Input/Output err.
while I use ceph-bluestore-tool repair/fsck, it reports:
# ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-1/ --l
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi Nathan,
this indeed appears to be a Gentoo-specific issue.
They install the file at:
/usr/libexec/ceph/ceph-osd-prestart.sh
instead of
/usr/lib/ceph/ceph-osd-prestart.sh
It depends on how strongly you follow the FHS (
http://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch04s07.html )
which is th
Trying to create an OSD:
gentooserver ~ # ceph-volume lvm create --data /dev/sdb
Running command: ceph-authtool --gen-print-key
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
e70500fe-0d51-48c3-a607-414957886726
Runn
I have a 3-node production cluster. All works fine, but I have one failing
node. I replaced one disk on Sunday; everything went fine. Last night there
was another disk broken. Ceph nicely marks it as down, but when I wanted to
reboot this node now, all remaining OSDs are being kept in and not marked
Your client needs to tell the cluster that the objects have been deleted.
'-o discard' is my goto because I'm lazy and it works well enough for me.
If you're in need of more performance, then fstrim is the other option.
Nothing on the Ceph side can be configured to know when a client no longer
need
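To make the two options above concrete (device and mountpoint names here are placeholders):

```shell
# Option 1: one-off or scheduled trim of a filesystem backed by RBD
fstrim -v /mnt/rbd

# Option 2: mount with online discard; simpler, but can add write latency
mount -o discard /dev/rbd0 /mnt/rbd

# Many distros ship a weekly timer as a middle ground:
systemctl enable --now fstrim.timer
```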
Hey all,
I am new to Ceph and made a test Ceph cluster that supports
S3 and RBDs (the RBDs are linked using iSCSI).
I've been looking around and noticed that the space is not decreasing when I
delete a file, which in turn filled up my cluster's OSDs.
I have been doing some reading and see people recommend
[Sat Sep 30 15:51:11 2017] libceph: osd5 192.168.10.113:6809 socket
closed (con state OPEN)
[Sat Sep 30 15:51:11 2017] libceph: osd5 192.168.10.113:6809 socket
closed (con state CONNECTING)
[Sat Sep 30 15:51:11 2017] libceph: osd5 down
[Sat Sep 30 15:51:11 2017] libceph: osd5 down
[Sat Sep 3
On Thu, Aug 31, 2017 at 11:51 AM, Marc Roos wrote:
>
> Should these messages not be gone in 12.2.0?
>
> 2017-08-31 20:49:33.500773 7f5aa1756d40 -1 WARNING: the following
> dangerous and experimental features are enabled: bluestore
> 2017-08-31 20:49:33.501026 7f5aa1756d40 -1 WARNING: the following
Should these messages not be gone in 12.2.0?
2017-08-31 20:49:33.500773 7f5aa1756d40 -1 WARNING: the following
dangerous and experimental features are enabled: bluestore
2017-08-31 20:49:33.501026 7f5aa1756d40 -1 WARNING: the following
dangerous and experimental features are enabled: bluestore
I have a cluster and I want a RadosGW user to have access to a bucket's
objects only (like /*), but the user should not be able to create new
buckets or remove this bucket.
-
Parveen Kumar Sharma
Use `qemu-img convert` to convert from one format to another.
Regards,
Anand
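For example (image and pool names here are placeholders, not values from this thread):

```shell
# Convert qcow2 to raw, writing directly into an RBD image so you
# never need 600 GB of local scratch space:
qemu-img convert -f qcow2 -O raw server.qcow2 rbd:volumes/server

# Or convert to a local raw file first, if you have the space:
qemu-img convert -f qcow2 -O raw server.qcow2 server.raw
```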
On Mon, Jul 11, 2016 at 9:37 PM, Gaurav Goyal
wrote:
> Thanks!
>
> I need to create a VM having qcow2 image file as 6.7 GB but raw image as
> 600GB which is too big.
> Is there a way that i need not to convert qcow2 file
Thanks!
I need to create a VM from a qcow2 image file that is 6.7 GB, but as a raw
image it is 600 GB, which is too big.
Is there a way I can avoid converting the qcow2 file to raw and still have
it work well with rbd?
Regards
Gaurav Goyal
On Mon, Jul 11, 2016 at 11:46 AM, Kees Meijs wrote:
> Glad to hear it works now
Glad to hear it works now! Good luck with your setup.
Regards,
Kees
On 11-07-16 17:29, Gaurav Goyal wrote:
> Hello it worked for me after removing the following parameter from
> /etc/nova/nova.conf file
Hello it worked for me after removing the following parameter from
/etc/nova/nova.conf file
[root@OSKVM1 ~]# cat /etc/nova/nova.conf|grep hw_disk_discard
#hw_disk_discard=unmap
Though as per the Ceph documentation, for the Kilo version we must set this
parameter. I am using Liberty but I am not sure if
Hi,
I think there's still something misconfigured:
> Invalid: 400 Bad Request: Unknown scheme 'file' found in URI (HTTP 400)
It seems the RBD backend is not used as expected.
Have you configured both Cinder _and_ Glance to use Ceph?
Regards,
Kees
On 08-07-16 17:33, Gaurav Goyal wrote:
>
> I re
I even tried with a bare .raw file but the error is still the same
2016-07-08 16:29:40.931 86007 INFO nova.compute.claims
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Total memory: 19316
[root@OSKVM1 ~]# grep -v "^#" /etc/nova/nova.conf|grep -v ^$
[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2
rbd_user=cinder
rbd_secret_uuid=1989f7a6-4ecb-4738-abbf-2962c29b2bbb
rpc_backend
Hi Kees,
I regenerated the UUID as per your suggestion.
Now I have the same UUID on host1 and host2.
I could create volumes and attach them to existing VMs.
I could create new Glance images.
But I am still seeing the same error when launching an instance via the GUI.
2016-07-08 11:23:25.067 86007 INFO nova.com
Hi Kees,
Thanks for your help!
Node 1 controller + compute
-rw-r--r-- 1 root   root   63 Jul 5 12:59 ceph.client.admin.keyring
-rw-r--r-- 1 glance glance 64 Jul 5 14:51 ceph.client.glance.keyring
-rw-r--r-- 1 cinder cinder 64 Jul 5 14:53 ceph.client.cinder.keyring
-rw-r--r-- 1 cinder ci
Hi,
I'd recommend generating a UUID and using it for all your compute nodes.
This way, you can keep your configuration in libvirt constant.
Regards,
Kees
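The usual pattern (from the Ceph/OpenStack integration docs) looks roughly like this; the key file name and `client.cinder` user are assumptions:

```shell
# Generate one UUID and reuse it on every compute node.
UUID=$(uuidgen)

# Define a libvirt secret for the Ceph client key under that UUID.
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$UUID</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret "$UUID" --base64 "$(cat client.cinder.key)"
```

The same `$UUID` then goes into `rbd_secret_uuid` in cinder.conf/nova.conf on every node.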
On 08-07-16 16:15, Gaurav Goyal wrote:
>
> For below section, should i generate separate UUID for both compte hosts?
>
Hi Gaurav,
Have you distributed your Ceph authentication keys to your compute
nodes? And, do they have the correct permissions in terms of Ceph?
K.
Hello,
Thanks, I could restore my cinder service.
But while trying to launch an instance, I am getting the same error.
Can you please help me understand what I am doing wrong?
2016-07-08 09:28:31.368 31909 INFO nova.compute.manager
[req-c56770a7-5bab-426b-b763-7473254c6410 289598890db341f4af45ce5c57c41b
Hello,
You only need to create a pool and authentication in Ceph for Cinder.
Your configuration should look like this (this is an example configuration
with Ceph Jewel and OpenStack Mitaka):
[DEFAULT]
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = vol
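For later readers, a fuller sketch of such a [ceph] section (pool name, user, and UUID below are placeholders, not values from this thread):

```ini
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <your-libvirt-secret-uuid>
rbd_flatten_volume_from_snapshot = false
```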
Hi Gaurav,
The following snippets should suffice (for Cinder, at least):
> [DEFAULT]
> enabled_backends=rbd
>
> [rbd]
> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> rbd_pool = cinder-volumes
> rbd_ceph_conf = /etc/ceph/ceph.conf
> rbd_flatten_volume_from_snapshot = false
> rbd_max_clone_d
Thanks for the verification!
Yeah, I didn't find an additional [ceph] section in my cinder.conf file.
Should I create it manually?
As I didn't find a [ceph] section, I modified the same parameters in the
[DEFAULT] section.
I will change that as per your suggestion.
Moreover, checking some other links I go
These lines from your log output indicate you are configured to use LVM as
a cinder backend.
> 2016-07-07 16:20:31.966 32549 INFO cinder.volume.manager
[req-f9371a24-bb2b-42fb-ad4e-e2cfc271fe10 - - - - -] Starting volume driver
LVMVolumeDriver (3.0.0)
> 2016-07-07 16:20:32.067 32549 ERROR cinder.
Hi Kees/Fran,
Do you find any issue in my cinder.conf file?
It says Volume group "cinder-volumes" not found. Where is this volume
group supposed to be configured?
I have done the Ceph configuration for nova creation,
but I am still facing the same error.
*/var/log/cinder/volume.log*
2016-07-07 16:20:13.765 136
Hi Fran,
Here is my cinder.conf file. Please help to analyze it.
Do i need to create volume group as mentioned in this link
http://docs.openstack.org/liberty/install-guide-rdo/cinder-storage-install.html
[root@OSKVM1 ~]# grep -v "^#" /etc/cinder/cinder.conf|grep -v ^$
[DEFAULT]
rpc_backend =
Hello,
Have you configured these two parameters in cinder.conf?
rbd_user
rbd_secret_uuid
Regards.
2016-07-07 15:39 GMT+02:00 Gaurav Goyal :
> Hello Mr. Kees,
>
> Thanks for your response!
>
> My setup is
>
> Openstack Node 1 -> controller + network + compute1 (Liberty Version)
> Openstack node
Hello Mr. Kees,
Thanks for your response!
My setup is
Openstack Node 1 -> controller + network + compute1 (Liberty Version)
Openstack node 2 --> Compute2
Ceph version Hammer
I am using Dell storage with the following status.
The Dell SAN storage is attached to both hosts as
[root@OSKVM1 ~]# iscsiadm
Hi Gaurav,
Unfortunately I'm not completely sure about your setup, but I guess it
makes sense to configure Cinder and Glance to use RBD for a backend. It
seems to me, you're trying to store VM images directly on an OSD filesystem.
Please refer to http://docs.ceph.com/docs/master/rbd/rbd-openstack
Hi,
I am installing ceph hammer and integrating it with openstack Liberty for
the first time.
My local disk has only 500 GB but I need to create a 600 GB VM, so I have
created a soft link to the Ceph filesystem as
lrwxrwxrwx 1 root root 34 Jul 6 13:02 instances ->
/var/lib/ceph/osd/ceph-0/instances [r
Hi there,
I'm currently following the Ceph QSGs and have currently finished the
Storage Cluster Quick Start and have the current topology of
admin-node - node1 (mon, mds)
           - node2 (osd0)
           - node3 (osd1)
I am now looking to continue creating a block device and th
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
On the monitor node, does `netstat | grep 6789` show the monitor process
running? On the OSD node, do `telnet 192.168.43.11 6789` and `telnet
192.168.107.11 6789` work? It is not enough to just ping; that does
not test whether you have properly opened up the fir
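Concretely, the checks could look like this (using the IPs from the thread; the firewalld line is just one example of opening the port):

```shell
# On the monitor host: is anything listening on the mon port?
ss -tlnp | grep 6789

# From the OSD host: can we actually reach it over the network?
telnet 192.168.43.11 6789
telnet 192.168.107.11 6789

# If connections time out, check the firewall (firewalld example):
firewall-cmd --add-port=6789/tcp --permanent && firewall-cmd --reload
```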
Hi, I'm having issues activating my OSDs. I have provided the output of the
failure. I can see that the error message says that the connection is
timing out; however, I am struggling to understand why, as I have followed
each stage within the quick start guide. For example, I can ping node1
(which
Thanks for the information.
-Sreenath
-
Date: Wed, 25 Mar 2015 04:11:11 +0100
From: Francois Lafont
To: ceph-users
Subject: Re: [ceph-users] PG calculator queries
Message-ID: <5512274f.1000...@free.fr>
Content-Type: text/plain; charset=utf-8
Hi,
Sreenath BH wrote :
>
Hello,
I see following message on calamari GUI.
--
New Calamari Installation
This appears to be the first time you have started Calamari and there are no
clusters currently configured.
3 Ceph servers are connected to Calamari, but no Ceph cluster has been created
I am trying to integrate Openstack keystone with radosgw. I have followed the
instructions as per the link - http://ceph.com/docs/master/radosgw/keystone/.
But for some reason, keystone flags under [client.radosgw.gateway] section are
not being honored. That means, presence of these flags never
We discourage users from using `root` to call ceph-deploy or to call
it with `sudo` for this reason.
There is a warning about this in the docs, in the Ceph Node Setup section
of the getting-started guide:
http://ceph.com/docs/v0.80.5/start/quick-start-preflight/#ceph-deploy-setup
The reason for this is
Hi,
I'm getting the below error while installing ceph on node using
ceph-deploy. I'm executing the command in admin node as
[root@ceph-admin ~]$ ceph-deploy install ceph-mds
[ceph-mds][DEBUG ] Loaded plugins: fastestmirror, security
[ceph-mds][WARNIN] You need to be root to perform this command.
>
> From: Gregory Farnum [g...@inktank.com]
> Sent: Thursday, October 17, 2013 3:13 PM
> To: Whittington, Paul
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] (no subject)
>
> On Thu, Oct 17, 2013 at 12:40 PM, wrote:
On Thu, Oct 17, 2013 at 12:40 PM, wrote:
> I'd like to experiment with the ceph class methods technology. I've looked
> at the cls_hello sample but I'm having trouble figuring out how to compile,
> link and install. Are there any step-by-step documents on how to compile,
> link and deploy the m
I'd like to experiment with the Ceph class methods technology. I've looked at
the cls_hello sample but I'm having trouble figuring out how to compile, link
and install. Are there any step-by-step documents on how to compile, link and
deploy the method .so files?
Paul Whittington
Chief Archite
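In case it helps later readers: I'm not aware of a packaged step-by-step guide, but building an object class outside the Ceph tree can be sketched roughly as below. The include path and compile flags are assumptions; building inside the Ceph source tree is the better-supported route.

```shell
# Rough sketch only: compile a RADOS object class as a shared object
# against headers from a Ceph source checkout, then install it where
# the OSDs look for classes (commonly <libdir>/rados-classes/).
g++ -fPIC -shared -I/path/to/ceph/src -o libcls_hello.so cls_hello.cc
cp libcls_hello.so /usr/lib/rados-classes/
# Restart the OSDs so they pick up the new class.
```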
Sent from my Windows Phone
Hi there!
Are Chris Holcombe and Robert Blair here? I'd love to hear more about your
awesome work: http://ceph.com/community/ceph-over-fibre-for-vmware/ .
Thanks!
--
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
Thank you.
--- On Mon, 3/11/13, Wolfgang Hennerbichler
wrote:
From: Wolfgang Hennerbichler
Subject: Re: [ceph-users] (no subject)
To: ceph-users@lists.ceph.com
Date: Monday, March 11, 2013, 11:17 AM
no.
depending on what you want you don't even need a MDS.
OSD + MON are the most
no.
depending on what you want you don't even need a MDS.
OSD + MON are the most basic thing that is necessary.
On 03/11/2013 12:02 PM, waed Albataineh wrote:
> Hello there,
> For the quick start of Ceph, do i need to continue the RESTful Gateway
> quick start ??
> even when i will just be testi
Hello there,
For the quick start of Ceph, do I need to continue with the RESTful Gateway
quick start, even when I will just be testing the basic functions of Ceph?