Hi to all,
We have 1 region and 2 zones (1 per site) with radosgw replicating data and
metadata.
Question: if I lose an object in the master zone, is it possible to restore a
single object from the secondary zone back to the master zone?
If the loss is only rbd (but radosgw is working) ... is it possible to co
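A rough sketch of one possible ad-hoc restore for a single object, assuming both zones expose S3 endpoints, the object is still intact in the secondary zone, and losing the original ACLs/metadata is acceptable; the endpoints, credentials, bucket and key below are all placeholders:

# Read the lost object from the secondary zone's RGW endpoint and write it
# back through the master zone's endpoint. All names here are placeholders.
import boto3

secondary = boto3.client(
    "s3",
    endpoint_url="http://rgw-secondary.example.com",   # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
master = boto3.client(
    "s3",
    endpoint_url="http://rgw-master.example.com",      # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

obj = secondary.get_object(Bucket="mybucket", Key="lost/object.bin")
master.put_object(Bucket="mybucket", Key="lost/object.bin",
                  Body=obj["Body"].read())

Writing the object back through the master zone should let the normal replication pick it up again; for anything more than a handful of objects, a sync/repair at the zone level would probably be less error-prone than copying them one by one.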
On Tue, 16 Feb 2016 11:52:17 +0800 (CST) maoqi1982 wrote:
> Hi lists
> Is there any solution or documentation for using Ceph as XenServer or Xen
> backend storage?
>
>
Not really.
There was a project to natively support Ceph (RBD) in XenServer, but that
seems to have gone nowhere.
There was also a threa
You should look at a 60-bay 4U chassis like a Cisco UCS C3260.
We run 4 systems at 56x6TB with dual E5-2660 v2 and 256GB RAM. Performance is
excellent.
I would definitely recommend a cache tier if your data is read-heavy.
Tyler Bishop
Chief Technical Officer
513-299-7108 x10
tyler.bis.
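For the cache-tier suggestion above, a minimal sketch of attaching a cache pool with the standard cache-tiering commands, wrapped in Python only for convenience; the pool names are made up and the thresholds need tuning for the real workload:

# Sketch: put a (presumably SSD-backed) cache pool in front of a data pool.
# Pool names and the size cap are placeholders.
import subprocess

def ceph(*args):
    # Run one "ceph ..." CLI command and fail loudly if it returns non-zero.
    subprocess.run(["ceph"] + list(args), check=True)

ceph("osd", "tier", "add", "data-pool", "cache-pool")
ceph("osd", "tier", "cache-mode", "cache-pool", "writeback")
ceph("osd", "tier", "set-overlay", "data-pool", "cache-pool")
ceph("osd", "pool", "set", "cache-pool", "hit_set_type", "bloom")
ceph("osd", "pool", "set", "cache-pool", "target_max_bytes", str(1024 ** 4))  # ~1 TiB cap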
Hi lists
Is there any solution or documentation for using Ceph as XenServer or Xen
backend storage?
thanks
On 02/15/2016 03:29 AM, Dominik Zalewski wrote:
> "Status:
> This code is now being ported to the upstream linux kernel reservation
> API added in this commit:
>
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/block/ioctl.c?id=bbd3e064362e5057cc4799ba2e4d68c7593e490b
>
>
Hey Cephers,
Just a reminder that this is the last week to vote for your favorite
OpenStack talks. I have compiled a list here in case you don’t want to
wade through the website to find Ceph talks that appeal to you. This
is a pretty huge list (which is awesome!), so I have pulled the talks
by C
For most of you, Ceph on ARMv7 might not sound like a good idea. This is our
setup and our fio testing report. I am not able to understand:
1) Are these results good or bad?
2) Why is write much better than read, whereas read should be better?
Hardware:
8 x ARMv7 MicroServer with 4 x 10G uplink
Each MicroSe
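A quick way to cross-check fio numbers like these is to time raw RADOS writes and reads from one client; a minimal sketch with the python-rados bindings, assuming a scratch pool exists (pool name, object count and size are placeholders):

# Crude single-client check of raw RADOS write vs read throughput, outside fio.
import time
import rados

OBJ_SIZE = 4 * 1024 * 1024        # 4 MiB per object
COUNT = 64                        # 256 MiB total
data = b"x" * OBJ_SIZE

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("scratch")   # hypothetical scratch pool

start = time.time()
for i in range(COUNT):
    ioctx.write_full("bench_obj_%d" % i, data)
write_mb_s = COUNT * OBJ_SIZE / (time.time() - start) / 1e6

start = time.time()
for i in range(COUNT):
    ioctx.read("bench_obj_%d" % i, OBJ_SIZE, 0)
read_mb_s = COUNT * OBJ_SIZE / (time.time() - start) / 1e6

print("write: %.1f MB/s  read: %.1f MB/s" % (write_mb_s, read_mb_s))

for i in range(COUNT):                  # clean up the test objects
    ioctx.remove_object("bench_obj_%d" % i)
ioctx.close()
cluster.shutdown()

One common reason writes can look faster than reads in small tests is that writes are acknowledged once they reach the OSD journals, while cold reads have to come from the backing disks, so a longer, multi-client run is worth doing before drawing conclusions.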
Hi Bryan,
You were right: we had modified our PG weights a little (from 1 to around 0.85
on some OSDs), and once I changed them back to 1, the remapped PGs and
misplaced objects were gone.
So thank you for the tip.
For the inconsistent ones and the scrub errors, I’m a little wary of using pg repair
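Being cautious with pg repair seems sensible; a small sketch of inspecting an inconsistent PG first, assuming a release recent enough to have "rados list-inconsistent-obj" (the PG id below is a placeholder):

# Look at what is actually inconsistent before deciding on "ceph pg repair".
import json
import subprocess

def run(*cmd):
    return subprocess.check_output(cmd).decode()

print(run("ceph", "health", "detail"))          # lists the inconsistent PGs

pgid = "2.3f"                                   # placeholder, take it from the output above
report = json.loads(run("rados", "list-inconsistent-obj", pgid, "--format=json-pretty"))
for item in report.get("inconsistents", []):
    print(item["object"]["name"], item.get("errors"), item.get("union_shard_errors"))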
[forwarding to the list so people know how to solve the problem]
-- Forwarded message --
From: John Hogenmiller (yt)
Date: Fri, Feb 12, 2016 at 6:48 PM
Subject: Re: [ceph-users] ceph-disk activate fails (after 33 osd drives)
To: Alexey Sheplyakov
That was it, thank you. Defini
On February 15, 2016 at 1:35:45 AM, Burkhard Linke
(burkhard.li...@computational.bio.uni-giessen.de) wrote:
> Hi,
>
> I would like to provide access to a bunch of large files (bio sequence
> databases) to our cloud users. Putting the files in an RBD instance
> requires special care if several VMs
On 02/15/2016 07:34 AM, Mario Giammarco wrote:
Karan Singh writes:
Agreed with Ferhat.
Recheck your network (bonds, interfaces, network switches, even cables).
I use gigabit Ethernet; I am checking the network.
But I am using another pool on the same cluster and it works perfectly: why?
Karan Singh writes:
> Agreed with Ferhat.
>
> Recheck your network (bonds, interfaces, network switches, even cables)
I use gigabit Ethernet; I am checking the network.
But I am using another pool on the same cluster and it works perfectly: why?
Thanks again,
Mario
Hello Wido,
Then let me solve the IPv6 problem and get back to you.
Thx
On Mon, Feb 15, 2016 at 2:16 PM, Wido den Hollander wrote:
>
> > On 15 February 2016 at 11:41, Ferhat Ozkasgarli <ozkasga...@gmail.com> wrote:
> >
> >
> > Hello Wido,
> >
> > I have just talked with our network admin.
> On 15 February 2016 at 11:41, Ferhat Ozkasgarli wrote:
>
>
> Hello Wido,
>
> I have just talked with our network admin. He said we are not ready for
> IPv6 yet.
>
> So, if it is ok with IPv4 only, I will start the process.
>
The requirement is set for all mirrors to be available over IPv
Hello Wido,
I have just talked with our network admin. He said we are not ready for
IPv6 yet.
So, if it is ok with IPv4 only, I will start the process.
On Mon, Feb 15, 2016 at 12:28 PM, Wido den Hollander wrote:
> Hi,
>
> > On 15 February 2016 at 11:00, Ferhat Ozkasgarli <ozkasga...@
Hi,
> On 15 February 2016 at 11:00, Ferhat Ozkasgarli wrote:
>
>
> Hello Wido,
>
> As Radore Datacenter, we would also like to become a mirror for the Ceph project.
>
Good news!
> Our URL will be http://ceph-mirros.radore.com
>
> We would be happy to become tr.ceph.com
>
Even better. Are you able to
Hello Wido,
As Radore Datacenter, we would also like to become a mirror for the Ceph project.
Our URL will be http://ceph-mirros.radore.com
We would be happy to become tr.ceph.com
The server will be ready tomorrow or the day after.
On Sun, Feb 7, 2016 at 6:03 PM, Tyler Bishop wrote:
> http://ceph.mirror.
Hi,
I would like to provide access to a bunch of large files (bio sequence
databases) to our cloud users. Putting the files in an RBD instance
requires special care if several VMs need to access the files; creating
an individual RBD snapshot for each instance requires more effort in the
cloud
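For the snapshot-and-clone-per-instance route mentioned above, a minimal sketch with the python-rbd bindings, assuming a format-2 master image with layering so the clones are copy-on-write; the pool and image names are made up:

# Publish one master image holding the databases, protect a snapshot of it,
# and give each VM its own cheap copy-on-write clone. Names are placeholders.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("volumes")          # hypothetical pool

# Assumes the "seq-databases" image already exists and is format 2.
image = rbd.Image(ioctx, "seq-databases")
image.create_snap("v1")
image.protect_snap("v1")                       # required before cloning
image.close()

# One clone per instance; data blocks stay shared with the parent snapshot.
rbd.RBD().clone(ioctx, "seq-databases", "v1", ioctx, "seq-databases-vm0042",
                features=rbd.RBD_FEATURE_LAYERING)

ioctx.close()
cluster.shutdown()

Each clone only stores the blocks a VM actually writes, so the large, mostly read-only databases live once in the pool; whether the per-instance bookkeeping is acceptable is of course the question raised above.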
"Status:
This code is now being ported to the upstream linux kernel reservation API
added in this commit:
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/block/ioctl.c?id=bbd3e064362e5057cc4799ba2e4d68c7593e490b
When this is completed, LIO will call into the iblock backend
Hey Mario,
Agreed with Ferhat.
Recheck your network (bonds, interfaces, network switches, even cables). I
have seen this several times before, and in most cases it's because of the
network.
BTW, are you using Mellanox?
- Karan -
> On 15 Feb 2016, at 10:12, Mario Giammarco wrote:
>
> kouko
koukou73gr writes:
>
> Have you tried restarting osd.0?
>
Yes, I have restarted all OSDs many times.
I have also launched repair and scrub.