Hi list
I have tested SUSE Enterprise Storage 3 using two iSCSI gateways attached to
VMware. The performance is bad. I have turned off VAAI following this KB article:
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033665
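For anyone following along, that KB amounts to turning off the three VAAI primitives on each ESXi host. Roughly (option names as I recall them from the article, so please verify against your ESXi version):
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit
esxcli system settings advanced set --int-value 0 --option /VMFS3/HardwareAcceleratedLocking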
Thanks Somnath and Christian,
Yes, it looks like the latest version of XenServer still runs on an old kernel
(3.10). I know the method Christian linked, but it doesn’t work if XenServer
is installed from ISO. It is really annoying that there has been no movement on
this for 3 years… I really like
It seems your client kernel is pretty old.
Either upgrade your kernel to 3.15 or later, or disable CRUSH_TUNABLES3.
ceph osd crush tunables bobtail or ceph osd crush tunables legacy should help.
This will start rebalancing, and you will also lose the improvements added in
Firefly. So,
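For reference, a minimal sketch of the commands involved (run from any node with client.admin access; changing the tunables profile will trigger data movement):
ceph osd crush show-tunables    # see which profile/flags are currently in effect
ceph osd crush tunables bobtail # fall back to a profile old kernel clients understand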
Hello,
On Thu, 30 Jun 2016 19:27:05 -0700 Mike Jacobacci wrote:
> Thanks Jake! I enabled the epel 7 repo and was able to get ceph-common
> installed. Here is what happens when I try to map the drive:
>
> rbd map rbd/enterprise-vm0 --name client.admin -m mon0
> -k
Thanks Jake! I enabled the epel 7 repo and was able to get ceph-common
installed. Here is what happens when I try to map the drive:
rbd map rbd/enterprise-vm0 --name client.admin -m mon0 -k
/etc/ceph/ceph.client.admin.keyring
rbd: sysfs write failed
In some cases useful info is found in
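When rbd map fails like this, the kernel usually logs the real reason, and with an older kernel it is often an unsupported image feature. A rough sketch of how to check and strip features (the feature list below is the usual set people disable for the kernel client; adjust it to whatever rbd info actually reports):
dmesg | tail
rbd info rbd/enterprise-vm0
rbd feature disable rbd/enterprise-vm0 deep-flatten fast-diff object-map exclusive-lock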
Hello,
On Thu, 30 Jun 2016 08:34:12 +0000 (GMT) m.da...@bluewin.ch wrote:
> Thank you all for your prompt answers.
>
> >firstly, wall of text, makes things incredibly hard to read.
> >Use paragraphs/returns liberally.
>
> I actually made sure to use paragraphs. For some reason, the formatting
See https://www.mail-archive.com/ceph-users@lists.ceph.com/msg17112.html
On Thursday, June 30, 2016, Mike Jacobacci wrote:
> So after adding the ceph repo and enabling the centos-7 repo… It fails
> trying to install ceph-common:
>
> Loaded plugins: fastestmirror
> Loading
Hi Greg
Opened this one
http://tracker.ceph.com/issues/16567
Let us see what they say.
Cheers
G.
On 07/01/2016 04:09 AM, Gregory Farnum wrote:
On Wed, Jun 29, 2016 at 10:50 PM, Goncalo Borges
wrote:
Hi Shinobu
Sorry probably I don't understand your
On Thu, Jun 30, 2016 at 11:34 PM, Brian Felton wrote:
> Sure. Here's a complete query dump of one of the 30 pgs:
> http://pastebin.com/NFSYTbUP
Looking at that, something immediately stands out.
There are a lot of entries in "past_intervals", like so:
"past_intervals": [
Can you check the permissions on "/var/run/ceph/" and ensure that the
user your client runs under has permissions to access the directory?
If the permissions are OK, do you have SELinux or AppArmor enabled and
enforcing?
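A quick way to check all three from a shell on the client (assuming a reasonably standard distro; the SELinux and AppArmor tools are only present if installed):
ls -ld /var/run/ceph/   # ownership and mode of the socket directory
getenforce              # SELinux state: Enforcing, Permissive or Disabled
aa-status               # loaded AppArmor profiles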
On Thu, Jun 30, 2016 at 5:37 PM, Deneau, Tom wrote:
> I
I was following the instructions in
https://www.sebastien-han.fr/blog/2015/09/02/ceph-validate-that-the-rbd-cache-is-active/
because I wanted to look at some of the rbd cache state and possibly flush and
invalidate it.
My ceph.conf has
[client]
rbd default features = 1
rbd
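If I remember that post correctly, the gist is to give the client an admin socket in ceph.conf and then query it. A rough sketch, with an example socket path:
[client]
admin socket = /var/run/ceph/rbd-client-$pid.asok
Then, with an rbd client running:
ceph --admin-daemon /var/run/ceph/rbd-client-<pid>.asok config show | grep rbd_cache
ceph --admin-daemon /var/run/ceph/rbd-client-<pid>.asok perf dump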
So after adding the ceph repo and enabling the centos-7 repo… It fails trying to
install ceph-common:
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.web-ster.com
Resolving Dependencies
--> Running transaction check
---> Package ceph-common.x86_64
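For the record, what unblocked this (see the follow-up in the thread) was enabling EPEL before installing the client package, roughly:
yum install epel-release   # or drop in the EPEL 7 repo file by hand
yum install ceph-common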
Hi Jake,
I will give that a try and see if that helps, thank you!
Yes, I have that open in a browser tab; it gave me the idea of using ceph-deploy
to install on the XenServer.
I will update with the results.
Cheers,
Mike
> On Jun 30, 2016, at 12:42 PM, Jake Young wrote:
>
On Thu, Jun 30, 2016 at 1:03 PM, Dzianis Kahanovich wrote:
> Upgraded infernalis->jewel (git, Gentoo). The upgrade was done as a global
> stop/restart of everything in one shot.
>
> Infernalis: e5165: 1/1/1 up {0=c=up:active}, 1 up:standby-replay, 1 up:standby
>
> Now after upgrade start and
Upgraded infernalis->jewel (git, Gentoo). The upgrade was done as a global
stop/restart of everything in one shot.
Infernalis: e5165: 1/1/1 up {0=c=up:active}, 1 up:standby-replay, 1 up:standby
Now after upgrade start and next mon restart, active monitor falls with
"assert(info.state ==
Can you install the ceph client tools on your server? They may give you a
more obvious error. Try to install the package and config/keys manually
instead of with ceph-deploy.
Also see this:
http://xenserver.org/blog/entry/tech-preview-of-xenserver-libvirt-ceph.html
Jake
On Thursday, June 30,
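A minimal sketch of doing that by hand (hostname and paths below are examples; the admin keyring is the simplest key to test with):
scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring xenserver:/etc/ceph/
yum install ceph-common   # on the XenServer host, if a compatible package is available
ceph -s                   # should now reach the monitors and print cluster status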
Just adding some more info in case it helps… looking at the ceph-osd.admin.log
and I see this on every disk:
2016-06-30 09:47:03.326176 7f15353aa800  1 journal _open /dev/sdb1 fd 4: 24006098944 bytes, block size 4096 bytes, directio = 0, aio = 0
2016-06-30 09:47:03.326472 7f15353aa800  1 journal
On Wed, Jun 29, 2016 at 2:02 PM, Daniel Davidson
wrote:
> I am starting to work with and benchmark our ceph cluster. While throughput
> is so far looking good, metadata performance so far looks to be suffering.
> Is there anything that can be done to speed up the
On Wed, Jun 29, 2016 at 10:50 PM, Goncalo Borges
wrote:
> Hi Shinobu
>
>> Sorry probably I don't understand your question properly.
>> Is what you're worry about that object mapped to specific pg could be
>> overwritten on different osds?
>
> Not really. I was
On Thu, Jun 30, 2016 at 9:09 AM, Mauricio Garavaglia
wrote:
> Hello,
>
> What's the expected behavior of a host that has a cephfs mounted and is then
> blacklisted? It doesn't seem to fail in a consistent way. Thanks
Well, once blacklisted it won't be allowed to
I am not sure why the mapping is failing, so I tried to install ceph on
XenServer with ceph-deploy but got the following error:
[ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported: XenServer
xenenterprise 7.0.0
fddf
I feel like I am close but I am not sure where to go from
I was talking on IRC and we're guessing it was a memory issue. I've woken
up every morning now with some sort of scrub errors, with most (but not
all) spawning from the one system with the now dead osds. This morning I
didn't wake up to find any scrub errors (but I can't tell if it has
anything
Hi Jake,
Interesting… XenServer 7 does have rbd installed, but trying to map the rbd image
with this command:
# echo {ceph_monitor_ip} name={ceph_admin},secret={ceph_key} {ceph_pool}
{ceph_image} >/sys/bus/rbd/add
It fails with just an i/o error… I am looking into it now. My cluster health is
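For illustration, with real values filled in the write looks roughly like this (monitor address, user and secret below are made up; pool and image reuse the names from earlier in the thread), and the actual failure reason usually lands in dmesg:
echo "192.168.0.10:6789 name=admin,secret=AQAxExampleKeyOnly== rbd enterprise-vm0" > /sys/bus/rbd/add
dmesg | tail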
Hello,
What's the expected behavior of a host that has a cephfs mounted and is
then blacklisted? It doesn't seem to fail in a consistent way. Thanks
Mauricio
Hi Luis,
I think you are looking for this: http://ceph.com/planet/ceph-pool-migration/
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Luis
Periquito
Sent: Thursday, June 30, 2016 11:31
To: Christian Balzer
Cc: Ceph Users
Subject: Re:
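That article covers a few approaches; the bluntest one (it needs downtime and only copies object data, so check the caveats in the post, especially for rgw pools) is roughly:
rados cppool old-pool new-pool             # pool names are examples
ceph osd pool rename old-pool old-pool.bak
ceph osd pool rename new-pool old-pool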
Sure. Here's a complete query dump of one of the 30 pgs:
http://pastebin.com/NFSYTbUP
Brian
On Wed, Jun 29, 2016 at 6:25 PM, Brad Hubbard wrote:
> On Thu, Jun 30, 2016 at 3:22 AM, Brian Felton wrote:
> > Greetings,
> >
> > I have a lab cluster running
Hi Greg,
thanks, highly appreciated. And yes, that was on an osd with btrfs. We
switched back to xfs because of btrfs instabilities.
Regards,
-Mike
On 6/27/16 10:13 PM, Gregory Farnum wrote:
On Sat, Jun 25, 2016 at 11:22 AM, Mike Miller wrote:
Hi,
what is the
Hi,
You could actually manage every OSD, mon, and MDS through docker swarm;
since it is all just software, it makes sense to deploy it through docker, where you
add the disks that are needed.
Mons do not need permanent storage either. Not that a restart of the
docker instance would remove them, but
It makes sense to me to run the MDS inside docker or k8s, as the MDS is stateless. But mons
and OSDs do have local data; what is the motivation to run them in docker?
> To: ceph-users@lists.ceph.com
> From: d...@redhat.com
> Date: Thu, 30 Jun 2016 08:36:45 -0400
> Subject: Re: [ceph-users] Running ceph in
On 06/30/2016 02:05 AM, F21 wrote:
Hey all,
I am interested in running ceph in docker containers. This is extremely
attractive given the recent integration of swarm into the docker engine,
making it really easy to set up a docker cluster.
When running ceph in docker, should monitors, radosgw
Hi,
I noticed in the jewel release notes:
"You can now access radosgw buckets via NFS (experimental)."
Are there any docs that explain the configuration of NFS to access RADOSGW
buckets?
Thanks
Hi,
I had 4 OSDs and 2 of the servers were halted (so no data access, because
I had an EC pool with 3+1 and replica 2).
After powering the servers back up, those OSDs were down and out.
I don't know how to bring them back.
Another error I get, maybe it is not relevant, but I saw: osd/ECUtil.h:
43: FAILED
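The usual first steps for OSDs that stay down and out after a power cycle, as a rough sketch (the id 2 is an example, and the service name differs between releases and init systems):
systemctl start ceph-osd@2   # start the daemon for each affected OSD
ceph osd in osd.2            # mark it back in once the daemon is up
ceph -s                      # then watch recovery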
Hello,
See this thread,
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg23852.html
And the author has replied himself:
I just resolved this issue. It was probably due to a faulty region map
configuration, where more than one region was marked as default. After
updating the
Hi,
If I try to create a bucket (using s3cmd) I'm getting this error:
WARNING: 500 (UnknownError):
The rados-gateway server says:
ERROR: endpoints not configured for upstream zone
The servers were updated to jewel, but I'm not sure the error wasn't
there before.
Micha Krause
>> I have created an Erasure Coded pool and would like to change the K
>> and M of it. Is there any way to do it without destroying the pool?
>>
> No.
>
> http://docs.ceph.com/docs/master/rados/operations/erasure-code/
>
> "Choosing the right profile is important because it cannot be modified
>
Hi Moïn,
two suggestions, based on my experience:
1. The max HDD size for GOOD QUALITY 7200 RPM spinning SATA/SAS HDDs is 4 TB.
Anything else will ruin your performance (as long as you don't do pure
archiving of files, i.e. writing them once and "never" touching them again).
If you have 8 TB HDDs, just
Thank you all for your prompt answers.
>firstly, wall of text, makes things incredibly hard to read.
>Use paragraphs/returns liberally.
I actually made sure to use paragraphs. For some reason, the formatting was
removed.
>Is that your entire experience with Ceph, ML archives and docs?
Of
On Thu, 30 Jun 2016 09:16:50 +0100 Luis Periquito wrote:
> Hi all,
>
> I have created an Erasure Coded pool and would like to change the K
> and M of it. Is there any way to do it without destroying the pool?
>
No.
http://docs.ceph.com/docs/master/rados/operations/erasure-code/
"Choosing the
Hi all,
I have created an Erasure Coded pool and would like to change the K
and M of it. Is there any way to do it without destroying the pool?
The cluster doesn't have much IO, but the pool (rgw data) has just
over 10T, and I didn't want to lose it.
thanks,
Hi Mario,
Perhaps it's covered under Proxmox support. Do you have support on your
Proxmox install from the guys at Proxmox?
Otherwise you can always buy support from Red Hat:
https://www.redhat.com/en/technologies/storage/ceph
On Thu, Jun 30, 2016 at 7:37 AM, Mario Giammarco
With pool size=3, Ceph should still be able to recover from 2 failed
OSDs. It will, however, disallow client access to the PGs that have only 1
copy until they are replicated at least min_size times. Such PGs are not
marked as "active".
As to the reason for your problems, it seems hardware related.
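To check what those thresholds are for a given pool, and which PGs are currently being held inactive (the pool name is an example):
ceph osd pool get rbd size
ceph osd pool get rbd min_size
ceph pg dump_stuck inactive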
Thank you for your clarification.
On Thu, Jun 30, 2016 at 2:50 PM, Goncalo Borges <
goncalo.bor...@sydney.edu.au> wrote:
> Hi Shinobu
>
> > Sorry probably I don't understand your question properly.
> > Is what you're worry about that object mapped to specific pg could be
> overwritten on
Last two questions:
1) I have used other systems in the past. In case of split brain or serious
problems they let me choose which copy is "good" and then work
again. Is there a way to tell Ceph that all is OK? This morning I again
have 19 incomplete PGs after recovery.
2) Where can I find
I've had two OSDs fail and I'm pretty sure they won't recover from this. I'm
looking for help trying to get them back online if possible...
terminate called after throwing an instance of
'ceph::buffer::malformed_input'
what(): buffer::malformed_input: bad checksum on pg_log_entry_t
- I'm
Hey all,
I am interested in running ceph in docker containers. This is extremely
attractive given the recent integration of swarm into the docker engine,
making it really easy to set up a docker cluster.
When running ceph in docker, should monitors, radosgw and OSDs all be on
separate