Hi list
Does dumpling v0.67.4 or v0.71 now support the multi-region / disaster recovery
function? If v0.67.4/0.71 supports it, which doc can I refer to for configuring
regions/zones/agents? Could anyone give a link?
Thanks
Hi there
I am interested in the following questions:
1. Does the amount of HDDs affect the performance of the cluster?
Not quite sure I understand your question, but: adding more disks and more
servers in general helps performance, because the requests will be spread out
among more spindles. We see 1
Hi Derek,
I have the same problem with the usage. What does your pools list look like
now ('radosgw-admin pools list')?
Cheers,
Valery
On 23/10/13 22:32, Derek Yarnell wrote:
Hi,
So the problem was that '.usage' pool was not created. I haven't
traversed the code well enough yet to know
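For reference, a minimal sketch of creating the missing pool by hand (the pg counts are just an example):
# create the .usage pool with 8 placement groups
ceph osd pool create .usage 8 8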
On 29/10/13 20:53, lixuehui wrote:
Hi list,
According to the documentation, a radosgw-agent's correct log output should look like this:
INFO:radosgw_agent.sync:Starting incremental sync
INFO:radosgw_agent.worker:17910 is processing shard number 0
INFO:radosgw_agent.worker:shard 0 has 0 entries
Hello Alistair,
If I recall my problem correctly, I added my monitor manually at this stage and things started working for me. You should follow http://ceph.com/docs/master/rados/operations/add-or-rm-mons/ and you should crack this problem. If you need my help, come to #ceph (IRC); my id is ksingh.
I've been perusing the content on slideshare and have seen some really
interesting and creatively composed presentations! Was there any recording
done (and are there plans to make it generally available)?
--
Cheers,
~Blairo
Hey Nabil,
Reinstallation would not be a solution. During my Ceph installation I reinstalled
Ceph 8 times in just 3 days, and then realised it's not a solution.
Anyway, let's dig into your problem if you like :-)
Your logs say there is some problem connecting to the cluster:
s=1 pgs=0
Hello RZK
Would you like to share your experience with this problem and your way of solving
it? This sounds interesting.
Regards
Karan Singh
- Original Message -
From: Rzk konoha.sharin...@gmail.com
To: ceph-users@lists.ceph.com
Sent: Wednesday, 30 October, 2013 4:17:32 AM
Hi Derek,
I'm wondering if your radosgw client does not have read-write (rw) caps on
mon? See the section Monitor Key CAPS on
http://ceph.com/docs/next/radosgw/config/ for more info.
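A minimal sketch of granting those caps, assuming the gateway key is named client.radosgw.gateway:
# give the gateway user rw on the monitors and rwx on the OSDs
ceph auth caps client.radosgw.gateway mon 'allow rw' osd 'allow rwx'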
-Matt
On Wed, Oct 23, 2013 at 9:32 PM, Derek Yarnell de...@umiacs.umd.edu wrote:
Hi,
So the problem was
Guys, do you know where the next Ceph public event is planned and what the
dates are?
Also, what will be the focus of the session?
Maybe an Inktank genius can answer this well.
Regards
Karan Singh
System Specialist Storage | CSC IT center for Science.
Hi
I'm trying to map a Ceph RBD on Ubuntu 12.04 LTS with kernel 3.8.0.
While our Ceph cluster is healthy and running 0.67.4, I'm not able to map the RBD
device.
The command:
rbd map archiveadmin --pool rbd --name client.admin --log-to-stderr
rbd: add failed: (5) Input/output error
In syslog:
Dear List Moderator
Below are two emails that I sent to the ceph list but that were not delivered; can you
please check for a blockage here?
Regards
Karan Singh
- Forwarded Message -
From: ceph-users-boun...@lists.ceph.com
To: ksi...@csc.fi
Sent: Wednesday, 30 October, 2013 11:31:00 AM
Andi,
I don't know much about it, but checking on the keyring side will give you more
information. Socket read problems are usually caused by keyrings.
Regards
Karan
- Original Message -
From: Andreas Fuchs (SwissTXT) andreas.fu...@swisstxt.ch
To: ceph-users@lists.ceph.com
Sent: Wednesday, 30
Hi all,
Today I tried to add a new OSD to the cluster and it immediately crashed the
monitors.
Platform: RHEL6.4
Steps to add the new OSD:
1. sudo ceph-disk zap /dev/sdh
2. sudo ceph-disk activate /dev/sdh
Then the monitor crashed with the following logs:
2013-10-30
I think the keyring is fine, as I can run other commands like:
rbd ls --pool rbd --name client.admin
archiveadmin
and I was able to create the image:
rbd info archiveadmin --pool rbd --name client.admin
rbd image 'archiveadmin':
size 4096 MB in 1024 objects
order 22 (4096 KB objects)
Hi Karan,
Thank you for the reply and the help. To keep names simple, let us use the installation
guide naming:
http://ceph.com/docs/master/_images/ditaa-ab0a88be6a09668151342b36da8ceabaf0528f79.png
so I
Copy cluster_name.client.admin.keyring from ceph-node1 (Monitor node) to
/etc/ceph at ceph-node2
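A minimal sketch of that copy, assuming the default cluster name 'ceph' and SSH access between the nodes:
# run on ceph-node1 (the monitor node)
scp /etc/ceph/ceph.client.admin.keyring ceph-node2:/etc/ceph/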
Hi,
While adding the secondary storage configuration in CS, what did you specify as the
bucket and endpoint?
Is your Ceph storage listening on port 80 or 8080?
Thanks,
Sanjeev
-Original Message-
From: Andrei Mikhailovsky [mailto:and...@arhont.com]
Sent: Tuesday, October 29, 2013 4:34 PM
To:
Hi Nabil
1) I hope you bounced the Ceph services after copying the keyring files.
2) From your OSD node (ceph-node2), are you able to check your cluster status with
#ceph status? It should return output similar to ceph-node1 (the monitor node);
if not, there is a connectivity problem.
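A minimal sketch of the bounce in point 1, assuming sysvinit-style service scripts:
# restart the Ceph daemons on the node once the keyrings are in place
sudo service ceph restart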
On 10/30/13, 5:48 AM, Matt Thompson wrote:
Hi Derek,
I'm wondering if your radosgw client does not have read-write (rw) caps
on mon? See the section Monitor Key CAPS
on http://ceph.com/docs/next/radosgw/config/ for more info.
Hi Matt,
You are right. I had forgotten that I didn't allow
On 10/30/13, 4:53 AM, Valery Tschopp wrote:
radosgw-admin pools list
Hi Valery,
That command lists the data placement pool(s) within radosgw (different
than what I was having problems with). My problem was that I didn't
have the correct underlying rados pool(s) created. They are listed here:
Good to see 2 OSDs UP and 2 OSDs IN :-)
Now, with respect to your questions, I just know one thing:
The admin key is used by the admin node and your Ceph nodes so that you can use the
ceph CLI without having to specify the monitor address and
ceph.client.admin.keyring each time you execute a command.
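A minimal sketch of the difference, with a placeholder monitor address:
# with the admin keyring in /etc/ceph this is enough
ceph health
# without it you would have to spell everything out, for example
ceph -m 192.168.0.10:6789 --keyring /etc/ceph/ceph.client.admin.keyring health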
I just found the trick...
When I am using the default crush map, which uses the straw bucket type, things are good.
However, for the error I posted below, it is using the tree bucket type.
Is it related?
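A minimal sketch of checking which bucket algorithms the map actually uses (file paths are just examples):
# dump and decompile the current crush map, then look at the 'alg' line of each bucket
ceph osd getcrushmap -o /tmp/crushmap
crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
grep alg /tmp/crushmap.txt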
Thanks,
Guang
On Oct 30, 2013, at 6:52 PM, Guang wrote:
Hi all,
Today I tried to add a new OSD into the
Hello,
I've been doing some tests on a newly installed ceph cluster:
# ceph osd pool create bench1 2048 2048
# ceph osd pool create bench2 2048 2048
# rbd -p bench1 create test
# rbd -p bench1 bench-write test --io-pattern rand
elapsed: 483 ops: 396579 ops/sec: 820.23 bytes/sec: 2220781.36
#
All,
My ceph cluster is failing to respond to calls for health status etc. It
simply hangs at each command and then tells me there is an error connecting to
the cluster. I assume that it is because I tried to add a second monitor
which seemed to hang during the hunting phase. I eventually
Vernon,
You can use the rbd command bench-write documented here:
http://ceph.com/docs/next/man/8/rbd/#commands
The command might look something like:
rbd --pool test-pool bench-write --io-size 4096 --io-threads 16
--io-total 1GB test-image
Some other interesting flags are --rbd-cache,
Hi AW
Did you check the firewall (iptables) between the nodes? If this is a test
cluster, disable iptables and try again.
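A minimal sketch for a RHEL-style test node (not something to leave in place on production):
# stop iptables now and keep it off across reboots
sudo service iptables stop
sudo chkconfig iptables off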
Regards
Karan Singh
- Original Message -
From: alistair whittle alistair.whit...@barclays.com
To: ceph-users@lists.ceph.com
Sent: Wednesday, 30 October, 2013
I wanted to know, does the OSD numbering have to be sequential, and what is the
highest usable number (2^16 or 2^32)?
The reason is, I would like to use a numbering convention that reflects the
cluster number (assuming I will have more than one down the road; test, dev,
prod), the host and disk
You really should; I believe the OSD number is used in computing CRUSH placement. Bad
things will happen if you don't use sequential numbers.
On Oct 30, 2013, at 11:37 AM, Glen Aidukas gaidu...@behaviormatrix.com wrote:
I wanted to know, does the OSD numbering have to be sequential and what is
the
Thanks Karan.
I have checked and there are no iptables chains/rules configured. Any other
ideas?
From: Karan Singh [mailto:ksi...@csc.fi]
Sent: Wednesday, October 30, 2013 2:49 PM
To: Whittle, Alistair: Investment Bank (LDN)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph
Never mind, problem solved. Cluster is now healthy again. ☺
Tried the recovery process again and managed to remove the hung monitor. Not
sure why it didn’t work last time, but hey.
From: Karan Singh [mailto:ksi...@csc.fi]
Sent: Wednesday, October 30, 2013 2:49 PM
To: Whittle, Alistair:
OSD numbers go up to 2^32.
On 30/10/2013 16:37, Glen Aidukas wrote:
I wanted to know, does the OSD numbering have to be sequential and what is the
highest usable number (2^16 or 2^32)?
The reason is, I would like to use a numbering convention that reflects the
cluster number (assuming I will
On 10/30/2013 09:05 AM, Dinu Vlad wrote:
Hello,
I've been doing some tests on a newly installed ceph cluster:
# ceph osd pool create bench1 2048 2048
# ceph osd pool create bench2 2048 2048
# rbd -p bench1 create test
# rbd -p bench1 bench-write test --io-pattern rand
elapsed: 483 ops: 396579
It sounds like you tried to go from 1 monitor to 2 monitors, which is an
unsupported configuration as far as I am aware. You must have either 1, or
3 or more monitors for a quorum to be possible.
More information is available here:
http://ceph.com/docs/master/rados/operations/add-or-rm-mons/
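A minimal sketch of going from one to three monitors with ceph-deploy, assuming two extra hosts named mon2 and mon3:
# add both new monitors so the cluster ends up with an odd number
ceph-deploy mon create mon2 mon3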
On
Salutations Ceph-ers,
As many of you have noticed, Inktank has taken the wraps off the
latest and greatest magic for enterprise customers. Wanted to share a
few thoughts from a community perspective on Ceph.com and answer any
questions/concerns folks might have.
Hi Patrick,
I wish Inktank were able to base its strategy and income on Free Software, like
RedHat does, for instance. In addition, as long as Inktank employs the majority
of Ceph developers, publishing Calamari as proprietary software is a conflict
of interest. Should someone from the
On 10/30/2013 01:54 AM, Mark Kirkwood wrote:
On 29/10/13 20:53, lixuehui wrote:
Hi list,
According to the documentation, a radosgw-agent's correct log output should look like this:
INFO:radosgw_agent.sync:Starting incremental sync
INFO:radosgw_agent.worker:17910 is processing shard number 0
I actually started a django app (no code pushed yet) for this purpose. I
guessed that Inktank might come out with a commercial offering and thought a
FOSS dashboard would be a good thing for the community too.
https://github.com/dontalton/kraken
I'd much rather contribute to an Inktank-backed
That did it! I didn't have to do --no-adjust-repos FYI.
Thanks a lot everyone esp. JG and AW. You guys are just phenomenal! Now I am
onto the next step of adding MONs and OSDs.
Thanks!
Narendra
From: Gruher, Joseph R [mailto:joseph.r.gru...@intel.com]
Sent: Tuesday, October 29, 2013 2:53 PM
To:
I wanted to report back on this since I've made some progress on
fixing this issue.
After converting every OSD on a single server to use a 2K block size,
I've been able to cross 90% utilization without running into the 'No
space left on device' problem. They're currently between 51% and 75%,
but
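A hedged sketch of the reformat step for one OSD, assuming XFS and that /dev/sdb1 is the already drained OSD data partition:
# recreate the filesystem with a 2K block size instead of the default 4K
sudo mkfs.xfs -f -b size=2048 /dev/sdb1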
Now that my ceph cluster seems to be happy and stable, I have been looking at
different ways of using it. Object, block and file.
Object is relatively easy and I will use different ones to test with Ceph.
When I look at block, I'm getting the impression from a lot of Googling that
deploying
Mark,
The SSDs are
http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/ssd/enterprise-sata-ssd/?sku=ST240FN0021
and the HDDs are
http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/hdd/constellation/?sku=ST91000640SS.
The chassis is a SiliconMechanics C602 - but
On 10/30/2013 01:51 PM, Dinu Vlad wrote:
Mark,
The SSDs are
http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/ssd/enterprise-sata-ssd/?sku=ST240FN0021
and the HDDs are
http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/hdd/constellation/?sku=ST91000640SS.
Loic, Don,
From an Inktank perspective, we're keen to see a wide variety of ecosystem
tools built around Ceph, especially from developers outside of Inktank. We
welcome open source or other commercial tools that people want to build and
if a more compelling or popular community tool comes along,
The only thing that would be needed would be a good plugin/addon/agent to
graphite, collectd, snmpd or anything along these lines, giving the correct
metrics that identify the performance and status of the platform, well
documented.
Dashboards are up to the users themselves (although a good
Hi all-
Trying to set up object storage on CentOS. I've done this successfully on
Ubuntu but I'm having some trouble on CentOS. I think I have everything
configured but when I try to start the radosgw service it reports starting, but
then the status is not running, with no helpful output as
I welcome this step. For me, more important than open-sourcing the fried
calamari is to see inktank succeed, make money and become even more independent
(from investors). Once this is done, and this young company is rock solid in
business, you can think about open sourcing tools that you sell
I have CentOS 6.4 running with the 3.11.6 kernel from elrepo and it includes
the rbd module. I think you could make the same update on RHEL 6.4 and get
rbd. From there it is very simple to mount an rbd device. Here are a few
notes on what I did.
Update kernel:
sudo rpm --import
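A minimal sketch of the steps after the new kernel is booted (pool and image names are placeholders):
# load the rbd module and map an existing image
sudo modprobe rbd
sudo rbd map myimage --pool rbd --name client.admin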
Hey, that's good news. How did you manage to fix this? Curious to know.
Regards
Karan Singh
From: "alistair whittle" alistair.whit...@barclays.com
To: ksi...@csc.fi
Cc: ceph-users@lists.ceph.com
Sent: Wednesday, 30 October, 2013 5:47:49 PM
Subject: RE: [ceph-users] Ceph monitor problems
Never
On 10/30/2013 04:39 PM, Aaron Ten Clay wrote:
It sounds like you tried to go from 1 monitor to 2 monitors, which is an
unsupported configuration as far as I am aware. You must have either 1,
or 3 or more monitors for a quorum to be possible.
A quorum of 2 monitors is completely fine as long as
On Wed, Oct 30, 2013 at 1:43 PM, Joao Eduardo Luis joao.l...@inktank.com wrote:
A quorum of 2 monitors is completely fine as long as both monitors are up.
A quorum is always possible regardless of how many monitors you have, as
long as a majority is up and able to form it (1 out of 1, 2 out
Hi All,
I had a pretty good run until I issued a command to activate OSDs. Now I am
back with some more problems:(. My setup is exactly like the one in the
official ceph documentation:
http://ceph.com/docs/master/start/quick-ceph-deploy/
That means, I am just using node2:/tmp/osd0 and
On 10/30/2013 08:46 PM, Aaron Ten Clay wrote:
On Wed, Oct 30, 2013 at 1:43 PM, Joao Eduardo Luis
joao.l...@inktank.com wrote:
A quorum of 2 monitors is completely fine as long as both monitors
are up. A quorum is always possible regardless of how many
A release candidate for 0.72 emperor is ready! This candidate squashes
the last of the blocker bugs and includes all of the functionality that
will be present in the final 0.72. Looking back to dumpling, overall I
think we are also in much better shape than before: better overall
stability
There is a provision in the startup scripts that will move your OSD to the
correct position in the crush map on startup. By default, this sets the
host based on hostname and then pulls in any fields defined in 'osd crush
location' in ceph.conf.
This is useful but not sufficiently flexible for
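A minimal sketch of that 'osd crush location' setting in ceph.conf, with placeholder bucket names:
[osd.12]
osd crush location = root=default rack=rack1 host=server1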
Aaron,
Don't mistake valid for advisable.
For documentation purposes, three monitors is the advisable initial
configuration for multi-node ceph clusters. If there is a valid need for
more than three monitors, it is advisable to add them two at a time to
maintain an odd number of total
On 10/30/2013 02:35 PM, Gruher, Joseph R wrote:
I have CentOS 6.4 running with the 3.11.6 kernel from elrepo and it
includes the rbd module. I think you could make the same update on RHEL
6.4 and get rbd.
Mmm... I think RHEL means paid support, which means you can't run an elrepo
kernel. Plus I didn't
You've enabled some feature on your cluster which is not supported by
that kernel client. It's probably the crush tunables (you can find
info on them in the docs).
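A hedged sketch of reverting the tunables so an older kernel client can talk to the cluster (this trades off the newer placement behaviour):
# switch the crush tunables back to the legacy profile
ceph osd crush tunables legacy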
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wed, Oct 30, 2013 at 3:59 AM, Fuchs, Andreas (SwissTXT)
On 31/10/13 06:31, Josh Durgin wrote:
Note that the wip in the url means it's a work-in-progress branch,
so it's not totally ready yet either. If anything is confusing or
missing, let us know.
It's great people are interested in trying this early. It's very
helpful to find issues sooner (like
Along those lines, you might want to use something similar to the
attached to check for any failed/partial uploads that are taking up
space (note these cannot be gc'd away automatically). I just got caught by this.
In fact the previous code I posted should probably use a try:... except:
block to
On 29/10/13 18:08, Mark Kirkwood wrote:
On 29/10/13 17:46, Yehuda Sadeh wrote:
The multipart abort operation is supposed to remove the objects (no gc
needed for these). Were there any other issues during the run, e.g.,
restarted gateways, failed requests, etc.?
Note that the objects here are
Hi,
How to install a Ceph node?
Please let me know the install steps for preparing a node on Ubuntu 12.04 LTS.
Do we need a separate server/admin node, and what needs to be installed for
Ceph? I would like to integrate with OpenStack Grizzly.
Regards,
R
The quick start guide is linked below, it should help you hit the ground
running.
http://ceph.com/docs/master/start/quick-ceph-deploy/
Let us know if you have questions or bump into trouble!
Hello, List.
I ran into very big trouble during a Ceph upgrade from bobtail to cuttlefish.
My OSDs started to crash and go stale, so the load average went to 100+ on the node;
after I stopped an OSD I was unable to launch it again because of errors. So I started to
reformat OSDs and eventually found that I have incomplete PGs and
2013/10/31 Ivan Kudryavtsev kudryavtsev...@bw-sw.com
Hello, List.
I met very big trouble during ceph upgrade from bobtail to cuttlefish.
My OSDs started to crash to stale so LA went to 100+ on node, after I stop
OSD I unable to launch it again because of errors. So, I started to
reformat
31.10.2013 11:39, Irek Fasikhov wrote:
2013/10/31 Ivan Kudryavtsev kudryavtsev...@bw-sw.com
Hello, List.
I met very big trouble during ceph upgrade from bobtail to
cuttlefish.
My OSDs started to crash to stale so LA went to 100+ on node,
Hello, Irek.
Look at this please:
root@ceph-osd-1-2:~# rbd -p rbd ls
rbd: pool rbd doesn't contain rbd images
root@ceph-osd-1-2:~# ceph osd dump | grep pool
pool 0 'data' rep size 3 min_size 1 crush_ruleset 0 object_hash rjenkins
pg_num 448 pgp_num 448 last_change 23 owner 0