Hi,
News: I tried to activate the disk without --dmcrypt and there is no problem. After
activation there are two partitions on sdb (sdb2 for the journal and sdb1 for data).
In my opinion there is a bug with the --dmcrypt switch when activating the
journal on the disk (partitions are created, but the mounting done by ceph-disk
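For reference, a minimal sketch of the two paths I am comparing (device names are just the ones from this report, everything else left at defaults):
  # plain, works as expected
  ceph-disk prepare /dev/sdb
  ceph-disk activate /dev/sdb1
  # encrypted, where the journal mounting problem shows up
  ceph-disk prepare --dmcrypt /dev/sdb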
Hello Joseph
This sounds like a solution. BTW, how do I set the replication level to 1? Is there
a direct command, or do I need to edit the configuration file?
Many Thanks
Karan Singh
- Original Message -
From: Joseph R Gruher joseph.r.gru...@intel.com
To: ceph-users@lists.ceph.com
Sent: Thursday,
Hi Karan,
There's info on http://ceph.com/docs/master/rados/operations/pools/
But primarily you need to check your replication levels: ceph osd dump
-o -|grep 'rep size'
Then alter the pools that are stuck unclean: ceph osd pool set <pool>
size/min_size <#>
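For example, to get Karan's single-copy setup (pool name 'data' is just a placeholder; repeat for each affected pool):
  ceph osd pool set data size 1
  ceph osd pool set data min_size 1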
If you're new to ceph it's probably a good
Apologies, that should have been: ceph osd dump | grep 'rep size'
That's what I get for blindly copying from a wiki!
-Michael
On 08/11/2013 11:38, Michael wrote:
Hi Karan,
There's info on http://ceph.com/docs/master/rados/operations/pools/
But primarily you need to check your replication levels:
Hi All,
I am able to Add a Ceph Monitor (step 3) as per the link
http://ceph.com/docs/master/start/quick-ceph-deploy/ (Setting Up Ceph
Storage Cluster)
But when I am executing the gatherkeys command, I am getting the
warnings (highlighted in yellow). Please find the details –
Command –
Hello Vikrant
You can try creating directories manually on the monitor node
mkdir -p /var/lib/ceph/{tmp,mon,mds,bootstrap-osd}
* Important: Do not call ceph-deploy with sudo or run it as root if you are
logged in as a different user, because it will not issue sudo commands needed
on
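Roughly (run as your normal ceph-deploy user; 'mon1' is just a placeholder for your monitor hostname):
  ssh mon1 sudo mkdir -p /var/lib/ceph/{tmp,mon,mds,bootstrap-osd}
  ceph-deploy gatherkeys mon1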
I tried to dump the perf counters via the admin socket, but I don't know what
these numbers actually mean, or whether they have anything to do with the
different memory usage between ARM and AMD processors, so I attach the
dump log as an attachment (mon.a runs on an AMD processor, mon.c runs on an ARM
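For reference, this is roughly how the counters were pulled (the socket path is the default one; adjust to your setup):
  ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok perf dump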
All,
I have configured a rados gateway as per the Dumpling quick instructions on a
Red Hat 6 server. The idea is to use the Swift API to access my cluster via this
interface.
I have configured FastCGI and httpd as per the guides, and did all the user
creation/authtool commands for the Swift
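Roughly the user/subuser setup involved (uid and display name are just examples of mine):
  radosgw-admin user create --uid=testuser --display-name="Test User"
  radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
  radosgw-admin key create --subuser=testuser:swift --key-type=swift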
Hi,
I'm trying to set a public ACL on an object so that I can access the object via a
web browser, unfortunately without success:
s3cmd setacl --acl-public s3://test/hosts
ERROR: S3 error: 403 (AccessDenied):
The radosgw log says:
x-amz-date:Fri, 08 Nov 2013 12:56:55 +
/test/hosts?acl
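A couple of gateway-side checks that might be relevant here (the uid is just a placeholder):
  radosgw-admin user info --uid=testuser
  s3cmd info s3://test/hosts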
On 11/08/2013 08:58 AM, Josh Durgin wrote:
On 11/08/2013 03:13 PM, ja...@peacon.co.uk wrote:
On 2013-11-08 03:20, Haomai Wang wrote:
On Fri, Nov 8, 2013 at 9:31 AM, Josh Durgin josh.dur...@inktank.com
wrote:
I'll just list the commands below to help users understand:
cinder qos-create
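As a rough illustration of the kind of commands meant here (the name and limits are made up):
  cinder qos-create high-iops consumer=front-end read_iops_sec=2000 write_iops_sec=1000
  cinder qos-associate <qos-id> <volume-type-id>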
Hi guys
This is probably a configuration error, but I just can't find it.
The following reproducibly happens on my cluster [1].
15:52:15 On Host1 one disk is removed on the RAID controller (to
Ceph it looks as if the disk died)
15:52:52 OSD reported missing (osd.47)
15:52:53 osdmap
On Fri, Nov 8, 2013 at 7:41 AM, Vikrant Verma vikrantverm...@gmail.com wrote:
Hi All,
I am able to Add a Ceph Monitor (step 3) as per the link
http://ceph.com/docs/master/start/quick-ceph-deploy/ (Setting Up Ceph
Storage Cluster)
But when I am executing the gatherkeys command, I am
On 11/08/2013 04:56 AM, Gregory Farnum wrote:
I don't remember how this has come up or been dealt with in the past,
but I believe it has been. Have you tried just doing it via the ceph
or rados CLI tools with an empty pool name?
Yes, that worked!
root@rgw1:~# rados rmpool
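For anyone else trying this: if I recall the rados usage correctly, the pool name has to be given twice plus a confirmation flag, i.e. something like:
  rados rmpool "" "" --yes-i-really-really-mean-it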
Hi Josh
Using libvirt_image_type=rbd to replace ephemeral disks is new with
Havana, and unfortunately some bug fixes did not make it into the
release. I've backported the current fixes on top of the stable/havana
branch here:
https://github.com/jdurgin/nova/tree/havana-ephemeral-rbd
that
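For context, the nova.conf bits being discussed look roughly like this with Havana (the pool name is just an example):
  libvirt_image_type = rbd
  libvirt_images_rbd_pool = vms
  libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf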
Hi Alfredo,
See the steps I executed below and the weird error I am getting when trying to
activate OSDs; the last series of error messages is in an infinite loop, still
printing after 2 days. FYI, /etc/ceph existed on all nodes after the ceph-deploy
install. I checked after doing ceph-deploy
Thanks Gregory,
One point that was a bit unclear in the documentation is whether this
equation for PGs applies to a single pool or to the entirety of pools.
Meaning, if I calculate 3000 PGs, should each pool have 3000 PGs or should
all the pools ADD UP to 3000 PGs? Thanks!
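For what it's worth, the rule of thumb I was working from, written out (the numbers are just my own example):
  target PGs ~= (number of OSDs * 100) / replica count
  e.g. 90 OSDs * 100 / 3 = 3000, rounded up to the next power of two -> 4096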
--
Kevin Weiler
IT
Using libvirt_image_type=rbd to replace ephemeral disks is new with
Havana, and unfortunately some bug fixes did not make it into the
release. I've backported the current fixes on top of the stable/havana
branch here:
https://github.com/jdurgin/nova/tree/havana-ephemeral-rbd
that looks
On Fri, Nov 8, 2013 at 11:04 AM, Trivedi, Narendra
narendra.triv...@savvis.com wrote:
Hi Alfredo,
See the steps I executed below and the weird error I am getting when trying
to activate OSDs; the last series of error messages is in an infinite loop,
still printing after 2 days. FYI, /etc/ceph
and one more:
boot from image (create a new volume) doesn't work either: it leads to a VM
that complains about a non-bootable disk (just like the ISO case). This is
actually an improvement: earlier, nova was waiting for ages for an image to be
created (I guess that this is the result of the
Hi !
I have clusters (IMAP service) with 2 members configured with Ubuntu +
DRBD + Ext4. I intend to migrate to Ceph and begin to allow
distributed access to the data.
Does Ceph provide both a distributed filesystem and a block device?
Does Ceph work fine in clusters of two members?
It's not a hard value; you should adjust based on the size of your pools
(many of them are quite small when used with RGW, for instance). But in
general it is better to have more than fewer, and if you want to check you
can look at the sizes of each PG (ceph pg dump) and increase the counts for
After you increase the number of PGs, *and* increase the pgp_num to do
the rebalancing (this is all described in the docs; do a search), data will
move around and the overloaded OSD will have less stuff on it. If it's
actually marked as full, though, this becomes a bit trickier. Search the
list
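Concretely, that's something like (pool name and target count are placeholders):
  ceph osd pool set <pool> pg_num 3000
  ceph osd pool set <pool> pgp_num 3000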
On Fri, Nov 8, 2013 at 5:09 AM, Micha Krause mi...@krausam.de wrote:
Hi,
I'm trying to set a public ACL on an object so that I can access the object
via a web browser, unfortunately without success:
s3cmd setacl --acl-public s3://test/hosts
ERROR: S3 error: 403 (AccessDenied):
The radosgw
On 11/08/2013 12:59 PM, Gruher, Joseph R wrote:
-Original Message-
From: Dinu Vlad [mailto:dinuvla...@gmail.com]
Sent: Thursday, November 07, 2013 10:37 AM
To: ja...@peacon.co.uk; Gruher, Joseph R; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph cluster performance
I was under
On Fri, Nov 8, 2013 at 8:49 AM, Listas lis...@adminlinux.com.br wrote:
Hi !
I have clusters (IMAP service) with 2 members configured with Ubuntu + Drbd
+ Ext4. I intend to migrate to Ceph and begin to allow distributed
access to the data.
Does Ceph provide the distributed
Hrm, there's nothing too odd in those dumps. I asked around and it
sounds like the last time we saw this sort of strange memory use it
was a result of leveldb not being able to compact quickly enough. Joao
can probably help diagnose that faster than I can.
-Greg
Software Engineer #42 @
One thing to try is to run the mon and then attach to it with perf and see
what it's doing. If CPU usage is high and leveldb is doing tons of
compaction work that could indicate that this is the same or a similar
problem to what we were seeing back around cuttlefish.
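Something along these lines, assuming a single mon process on the host:
  perf top -p $(pidof ceph-mon)
  # or record for a minute and inspect afterwards
  perf record -g -p $(pidof ceph-mon) -- sleep 60
  perf report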
Mark
On 11/08/2013 04:53
On Sat, Nov 9, 2013 at 7:53 AM, Mark Nelson mark.nel...@inktank.com wrote:
One thing to try is to run the mon and then attach to it with perf and see
what it's doing. If CPU usage is high and leveldb is doing tons of
compaction work that could indicate that this is the same or a similar
problem
This is the fifth major release of Ceph, the fourth since adopting a
3-month development cycle. This release brings several new features,
including multi-datacenter replication for the radosgw, improved
usability, and lands a lot of incremental performance and internal
refactoring work to support