Hi, list
According to the documentation, a correctly running radosgw-agent should log info like this:
INFO:radosgw_agent.sync:Starting incremental sync
INFO:radosgw_agent.worker:17910 is processing shard number 0
INFO:radosgw_agent.worker:shard 0 has 0 entries after ''
INFO:radosgw_agent.worker:finished processing shard 0
Hi,
maybe you want to have a look at the following thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/005368.html
It could be that you're suffering from the same problems.
best regards,
Kurt
Rzk wrote:
Hi all,
I have the same problem; just curious,
could it be caused by poor
Hello Nabil
1) Please check all the logs in /var/log/ceph (on the ceph-deploy node).
2) Before running OSD activate, did the OSD prepare command complete fine?
3) After this error from OSD activate, did you check on your node whether the
device /dev/sdb1 is getting mounted?
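A minimal sketch of those checks on the OSD node (device name and log path as given in the thread; adjust to your host):

```
# 1) scan the ceph logs for errors
grep -i error /var/log/ceph/*.log

# 2) confirm the prepare step left a partition behind
sudo parted /dev/sdb print

# 3) check whether /dev/sdb1 actually got mounted after activate
mount | grep sdb1
```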
Regards
Karan Singh
CSC IT Centre for
Hello guys,
I am doing a test ACS setup to see how we can use Ceph for both Primary and
Secondary storage services. I have now successfully added both Primary (cluster
wide) and Secondary storage. However, I've noticed that my SSVM and CPVM are
not being created, so digging in the logs
Hello all,
I am getting some issues when activating OSDs on my Red Hat 6.4 Ceph cluster.
I am using the quick start mechanism, so I mounted a new xfs filesystem and ran
the osd prepare command.
The prepare seemed to be successful as per the log output below:
[ceph_deploy.cli][INFO ] Invoked
That's unfortunate; hopefully 2nd-gens will improve and open things up.
Some numbers:
- Commercial grid-style SAN is maybe £1.70 per usable GB
- Ceph cluster of about 1PB built on Dell hardware is maybe £1.25 per
usable GB
- Bare drives like WD RE4 3TB are about £0.21/GB (assuming 1/3rd
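For what it's worth, the truncated "1/3rd" presumably refers to usable capacity under 3x replication; a quick back-of-envelope check (this interpretation is an assumption, not from the original message):

```python
# Back-of-envelope: bare-drive cost per usable GB under 3x replication
# (assumed reading of the truncated "1/3rd" in the message).
drive_cost_per_raw_gb = 0.21   # GBP/GB for a WD RE4 3TB, per the message
replication_factor = 3         # 3 copies => 1/3rd of raw capacity usable

cost_per_usable_gb = drive_cost_per_raw_gb * replication_factor
print(f"~GBP {cost_per_usable_gb:.2f}/usable GB, drives only")
```

That still leaves a healthy gap between bare-drive cost and the full-cluster figure, which covers chassis, network, and replication overhead.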
I've found nothing related in the Apache logs,
so I believe it's something related to radosgw.
Has anyone else tested the same thing on their own radosgw?
Regards
On Mon, Oct 28, 2013 at 11:52 PM, Mark Nelson mark.nel...@inktank.com wrote:
I'm not really an apache expert, but you could try looking at the
On 10/28/2013 06:31 PM, Yehuda Sadeh wrote:
On Mon, Oct 28, 2013 at 9:24 AM, Wido den Hollander w...@42on.com wrote:
Hi,
I'm testing with some multipart uploads to RGW and I'm hitting a problem
when trying to upload files larger than 1159MB.
The tool I'm using is s3cmd 1.5.1
Ceph version:
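The message is cut off before the version, but for reference, s3cmd lets you pin the multipart part size explicitly, which can help narrow down size-threshold problems (bucket and file names below are placeholders):

```
# s3cmd supports choosing the multipart part size (default 15 MB)
s3cmd --multipart-chunk-size-mb=15 put big.img s3://testbucket/big.img
```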
Hi James,
Message: 2
Date: Tue, 29 Oct 2013 11:23:14 +
From: ja...@peacon.co.uk
To: Gregory Farnum g...@inktank.com
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Seagate Kinetic
Message-ID: 81dbc7ae324ac5bc6afd85aef080f...@peacon.co.uk
Content-Type: text/plain;
Hello Alistar
I also faced exactly the same issue with one of my OSDs: after OSD activate,
progress hung, but the OSD finally got added to the cluster with no problem.
My cluster is running without known issues as of now. If this is a test setup,
you can ignore this, but keep an eye on it.
Thanks. It does seem to be working OK, and I can create/remove objects
without issues.
I am however having another problem. In trying to add additional monitors to
my cluster I am getting the following errors (note I did not see this when
doing the first and currently only
I can't remember how ceph-deploy behaves in this case, but it might
be worth manually installing the epel repo on one of the nodes
and then just doing a simple ceph-deploy install #node# on a single
node to see if it behaves.
Otherwise you can try installing ceph manually using
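A hedged sketch of the manual route on CentOS 6 (the EPEL release RPM URL was the standard EL6 one at the time; verify it before use):

```
# install the EPEL repo on the node by hand
sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# then, from the admin node, target just that node
ceph-deploy install #node#
```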
Recovering from a degraded state by copying existing replicas to other OSDs
is going to cause reads on existing replicas and writes to the new
locations. If you have slow media then this is going to be felt more
acutely. Tuning the backfill options I posted is one way to lessen the
impact, another
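The exact options referred to aren't quoted in this excerpt; the usual backfill/recovery throttles look like this in ceph.conf (values illustrative, not recommendations):

```
[osd]
osd max backfills = 1          ; concurrent backfills per OSD
osd recovery max active = 1    ; concurrent recovery ops per OSD
osd recovery op priority = 1   ; deprioritize recovery relative to client I/O
```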
Hi All,
I am a newbie to ceph. I am installing ceph (dumpling release) using
ceph-deploy (issued from my admin node) on one monitor and two OSD nodes
running CentOS 6.4 (64-bit), following the instructions in the link below:
http://ceph.com/docs/master/start/quick-ceph-deploy/
My setup looks
On Tue, Oct 29, 2013 at 12:28 PM, Michael mich...@onlinefusion.co.uk wrote:
I can't remember how ceph-deploy behaves in this case but it might be
worth manually installing the epel repo on one of the nodes and then
just doing a simple ceph-deploy install #node# on a single node to
If you are behind a proxy try configuring the wget proxy through /etc/wgetrc.
I had a similar problem where I could complete wget commands manually but they
would fail in ceph-deploy until I configured the wget proxy in that manner.
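For example (proxy host and port below are placeholders), the relevant lines in /etc/wgetrc look like:

```
# /etc/wgetrc
use_proxy = on
http_proxy = http://proxy.example.com:3128/
https_proxy = http://proxy.example.com:3128/
```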
From: ceph-users-boun...@lists.ceph.com
I was able to add a public_network line to the config on the admin host and
push the config to the nodes with a ceph-deploy --overwrite-conf config push
rc-ceph-node1 rc-ceph-node2 rc-ceph-node3. I was able to follow the
quickstart after that without further incident. Rzk had to take
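As a sketch, the line in question in ceph.conf on the admin host looks like this (the subnet is an example chosen to match the addresses seen elsewhere in this thread):

```
[global]
public_network = 192.168.115.0/24
```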
To answer myself: there was a problem with my api secret key which rados
generated. It had escaped the /, which for some reason CloudStack couldn't
understand. Removing the escape (\) character solved the problem.
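In other words (a minimal sketch; the key value below is made up):

```python
# radosgw-admin emits the secret key JSON-escaped, so "/" appears as "\/";
# strip the escape before pasting the key into CloudStack.
escaped_key = "QFpl\\/xyz123\\/abc"   # hypothetical escaped key
clean_key = escaped_key.replace("\\/", "/")
print(clean_key)  # QFpl/xyz123/abc
```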
Andrei
- Original Message -
From: Andrei Mikhailovsky
You also want to make sure that if you are using a proxy your proxy settings
are maintained through sudo.
With my deployment I had to add a line to my sudoers file to specify that the
https_proxy and http_proxy settings are maintained; it didn't work otherwise:
Defaults env_keep += "http_proxy https_proxy"
Thanks a lot Joseph and Alistair... I have the following questions based on
your inputs:
1) Do I need to make changes to all the nodes or just the admin node? I
guess all the nodes since ceph-deploy issues commands via ssh on all nodes...
2) The installation guide recommends using
From: Trivedi, Narendra [mailto:narendra.triv...@savvis.com]
Sent: Tuesday, October 29, 2013 5:33 PM
To: Whittle, Alistair: Investment Bank (LDN); joseph.r.gru...@intel.com;
ceph-users@lists.ceph.com
Subject: RE: ceph-deploy problems on CentOS-6.4
Thanks a lot Joseph and Alistair... I have the
On Tue, Oct 29, 2013 at 2:00 PM, alistair.whit...@barclays.com wrote:
From: Trivedi, Narendra [mailto:narendra.triv...@savvis.com]
Sent: Tuesday, October 29, 2013 5:33 PM
To: Whittle, Alistair: Investment Bank (LDN); joseph.r.gru...@intel.com;
ceph-users@lists.ceph.com
Subject: RE:
Nothing in the ceph-server02 log.
ceph-deploy osd activate ceph-server02:/dev/sdb1
s=1 pgs=0 cs=0 l=1 c=0x7f0da8013a80).fault
[ceph-server02][ERROR ] 2013-10-29 21:54:38.712639 7f0db81e8700 0 -- :/1002801
192.168.115.91:6789/0 pipe(0x7f0da800b350 sd=10 :0 s=1 pgs=0 cs=0 l=1
Also, the prepare step completed successfully:
[ceph@ceph-deploy my-cluster]$ ceph-deploy disk list ceph-server02
[ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy disk list
ceph-server02
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
[ceph_deploy.osd][INFO ]
Thanks, guys.
After testing it on a dev server, I have implemented the new config in the prod
system.
Next I will upgrade the hard drive. :)
Thanks again, all.
On Tue, Oct 29, 2013 at 11:32 PM, Kyle Bader kyle.ba...@gmail.com wrote:
Recovering from a degraded state by copying existing replicas to
Hi, all.
I am interested in the following questions:
1. Does the number of HDDs affect cluster performance?
2. Is there any experience of running KVM virtualization and Ceph on
the same server?
Thanks!
--
Best regards, Фасихов Ирек Нургаязович
Mob.: +79229045757