[ceph-users] Command to revert an object from secondary zone to primary zone

2016-02-15 Thread Andrea Annoè
Hi to all,
We have one region and two zones (one per site), with radosgw replicating data
and metadata.

Question: if I lose an object in the master zone, is it possible to revert a
replica of a single object from the secondary zone back to the master zone?

If only the RBD is lost (but radosgw is working), is it possible to copy the
object from the secondary zone to the master?

Does anyone have experience with zone replication in radosgw?
If I were to use this architecture for a DR test, does anyone have a procedure
to follow?

Thanks to all for your replies.

Best regards
Andrea.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] xenserver or xen ceph

2016-02-15 Thread Christian Balzer
On Tue, 16 Feb 2016 11:52:17 +0800 (CST) maoqi1982 wrote:

> Hi lists
> Is there any solution or documents that ceph as xenserver or xen backend
> storage?
> 
> 
Not really.

There was a project to natively support Ceph (RBD) in Xenserver but that
seems to have gone nowhere.

There was also a thread here last year, "RBD hard crash on kernel
3.10" (google for it), where Shawn Edwards was working on something similar,
but that seems to have died off silently as well.

While you could of course put an NFS (some pain) or iSCSI (major pain) head
in front of Ceph, the pain and reduced performance make it an unattractive
proposition.

Christian
-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com           Global OnLine Japan/Rakuten Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Recommendations for building 1PB RadosGW with Erasure Code

2016-02-15 Thread Tyler Bishop
You should look at a 60-bay 4U chassis like the Cisco UCS C3260.

We run 4 systems at 56x6TB with dual E5-2660 v2 and 256GB RAM.  Performance is
excellent.

I would recommend a cache tier for sure if your data is busy for reads.
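
If you go the cache tier route, the basic wiring looks roughly like this (a
minimal sketch; the pool names ec-data and hot-cache are placeholders, and the
thresholds need tuning for your workload):

# put a replicated SSD pool in front of the EC pool as a writeback cache
ceph osd tier add ec-data hot-cache
ceph osd tier cache-mode hot-cache writeback
ceph osd tier set-overlay ec-data hot-cache
# the cache needs a hit set and an upper bound so it actually flushes/evicts
ceph osd pool set hot-cache hit_set_type bloom
ceph osd pool set hot-cache target_max_bytes 1099511627776   # 1 TiB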

Tyler Bishop 
Chief Technical Officer 
513-299-7108 x10 



tyler.bis...@beyondhosting.net 


If you are not the intended recipient of this transmission you are notified 
that disclosing, copying, distributing or taking any action in reliance on the 
contents of this information is strictly prohibited.

- Original Message -
From: "Василий Ангапов" 
To: "ceph-users" 
Sent: Friday, February 12, 2016 7:44:07 AM
Subject: [ceph-users] Recommendations for building 1PB RadosGW with Erasure Code

Hello,

We are planning to build a 1PB Ceph cluster for RadosGW with Erasure
Coding. It will be used for storing online videos.
We do not expect outstanding write performance; something like
200-300MB/s of sequential write will be quite enough, but data safety
is very important.
What are the most popular hardware and software recommendations?
1) What EC profile is best to use? What values of K/M do you recommend?
2) Do I need to use a cache tier for RadosGW, or is it only needed for
RBD? Is it still considered good practice to use a cache tier for
RadosGW?
3) What hardware is recommended for EC? I assume higher-clocked CPUs
are needed? What about RAM?
4) Which SSDs are best for Ceph journals?

Thanks a lot!

Regards, Vasily.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] xenserver or xen ceph

2016-02-15 Thread maoqi1982
Hi list,
Is there any solution or documentation for using Ceph as a XenServer or Xen
backend storage?


thanks
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CentOS 7 iscsi gateway using lrbd

2016-02-15 Thread Mike Christie
On 02/15/2016 03:29 AM, Dominik Zalewski wrote:
> "Status:
> This code is now being ported to the upstream linux kernel reservation
> API added in this commit:
> 
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/block/ioctl.c?id=bbd3e064362e5057cc4799ba2e4d68c7593e490b
> 
> When this is completed, LIO will call into the iblock backend which will
> then call rbd's pr_ops."
> 
> 
> Does anyone know how up to date this page?
> http://tracker.ceph.com/projects/ceph/wiki/Clustered_SCSI_target_using_RBD


I just updated it about two weeks ago, so it is current.

> 
> 
> Is currently only Suse supporting active/active multipath for RBD over
> iSCSI?  https://www.susecon.com/doc/2015/sessions/TUT16512.pdf
> 

Yes.

> 
> I'm trying to configure active/passive iSCSI gateway on OSD nodes
> serving RBD image. Clustering is done using pacemaker/corosync. Does
> anyone have a similar  working setup? Anything I should be aware of?

If you have a application that uses SCSI persistent reservations, then
the iscsi target scripts for upstream pacemaker will not work correctly,
because they do not copy over the reservation state.

If your app does clustering using its own fencing, like Oracle RAC, then
it might be ok, but it is not formally supported or tested by Red Hat.
You have to be careful to use the correct settings. For example, you
cannot have write-caching turned on.
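
A rough sketch of checking/forcing that on a plain LIO block backstore
(assuming a targetcli-managed backstore named rbd0; the exact backstore path
varies between targetcli versions):

# show the current write-cache emulation setting, then turn it off
targetcli /backstores/block/rbd0 get attribute emulate_write_cache
targetcli /backstores/block/rbd0 set attribute emulate_write_cache=0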

There are other issues you can find on various lists. Some people on
this list have got it working ok for their specific application or at
least have made other workarounds for any issues they were hitting.

> 
> 
> Thanks
> 
> 
> Dominik
> 
> 
> 
> On 21 January 2016 at 12:08, Dominik Zalewski  > wrote:
> 
> Thanks Mike.
> 
> Would you not recommend using iscsi and ceph under Redhat based
> distros untill new code is in place?
> 
> Dominik
> 
> On 21 January 2016 at 03:11, Mike Christie  > wrote:
> 
> On 01/20/2016 06:07 AM, Nick Fisk wrote:
> > Thanks for your input Mike, a couple of questions if I may
> >
> > 1. Are you saying that this rbd backing store is not in mainline 
> and is only in SUSE kernels? Ie can I use this lrbd on Debian/Ubuntu/CentOS?
> 
> The target_core_rbd backing store is not upstream and only in
> SUSE kernels.
> 
> lrbd is the management tool that basically distributes the
> configuration
> info to the nodes you want to run LIO on. In that README you see
> it uses
> the target_core_rbd module by default, but last I looked there
> is code
> to support iblock too. So you should be able to use this with other
> distros that do not have target_core_rbd.
> 
> When I was done porting my code to a iblock based approach I was
> going
> to test out the lrbd iblock support and fix it up if it needed
> anything.
> 
> > 2. Does this have any positive effect on the abort/reset death loop 
> a number of us were seeing when using LIO+krbd and ESXi?
> 
> The old code and my new approach does not really help. However, on
> Monday, Ilya and I were talking about this problem, and he gave
> me some
> hints on how to add code to cancel/cleanup commands so we will
> be able
> to handle aborts/resets properly and so we will not fall into
> that problem.
> 
> 
> > 3. Can you still use something like bcache over the krbd?
> 
> Not initially. I had been doing active/active across nodes by
> default,
> so you cannot use bcache and krbd as is like that.
> 
> 
> 
> 
> >
> >
> >
> >> -Original Message-
> >> From: Mike Christie [mailto:mchri...@redhat.com 
> ]
> >> Sent: 19 January 2016 21:34
> >> To: Василий Ангапов  >; Ilya Dryomov
> >> >
> >> Cc: Nick Fisk >; Tyler 
> Bishop
> >>  >; Dominik Zalewski
> >> >;
> ceph-users  >
> >> Subject: Re: [ceph-users] CentOS 7 iscsi gateway using lrbd
> >>
> >> Everyone is right - sort of :)
> >>
> >> It is that target_core_rbd module that I made that was
> rejected upstream,
> >> along with modifications from SUSE which added persistent
> reservations
> >> support. I also made some modifications to rbd so
> target_core_rbd and krbd
> >> could share code. target_core_rbd uses 

[ceph-users] Last week for OpenStack voting

2016-02-15 Thread Patrick McGarry
Hey Cephers,

Just a reminder, that this is the last week to vote for your favorite
OpenStack talks. I have compiled a list here in case you don’t want to
wade through the website to find Ceph talks that appeal to you.  This
is a pretty huge list (which is awesome!), so I have pulled the talks
by Component Technical Leads (CTLs) to the top. There are tons of good
talks from board members, partners, and other experienced Ceph users
though, so definitely take a minute to peruse the list in this email,
or visit the url below and search for ‘Ceph.’

https://www.openstack.org/summit/austin-2016/vote-for-speakers/SearchForm

And as with any political election, remember to vote early and vote often! :)


—=== CTL Talks ===-—

Building a next-gen multiprotocol, tiered, and globally distributed
storage platform with Ceph
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7879

Disaster recovery and Ceph block storage: Introducing multi-site mirroring
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/6908

Micro Storage Servers at multi-PetaByte scale running Ceph
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8656

CephFS as a service with OpenStack Manila
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/6935

CephFS in Jewel: Stable at last
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7545



—-=== All Talks ===-—

Cache Rules Everything Around Me
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/6827

Persistent Containers for Transactional Workloads
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/6974

IOArbiter: Dynamic Provisioning of Backend Block Storage in OpenStack Cloud
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7039

Stop taking database backups and just use a block drive as a dB
partition in your Openstack Cloud
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7054

Ceph at Scale - Bloomberg Cloud Storage Platform
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7061

How-to build out a Ceph Cluster with Chef
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7063

From Hardware to Application - NFV@OpenStack and Ceph
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7106

New Ceph Configurations - High Performance Without High Costs
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7163

Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7280

Challenges, Opportunities and lessons learned from real life clusters
in China for open source storage in OpenStack clouds
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7300

Userspace only for Ceph: Boost performance from network stack to disk
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7347

Managing resources in hyper-converged infrastructures
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7558

Building cost-efficient, millions IOPS all-flash block storage backend
for your OpenStack cloud with Ceph
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7595

Ceph performance tuning with feedback from a custom dashboard.
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7656

Cloud Hardware Lifecycle Management
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7675

Designing for High Performance Ceph at Scale
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7687

Multi-backend Cinder, All Flash Ceph, and More! Year two of block storage at TWC
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7773

One for All: Deploying OpenStack, Ceph or Cloud Foundry with a Unified
Deployment Tool
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7787

End-To-End Monitoring of OpenStack Cloud
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7793

Squeezing more performance out of your network for free
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7869

Deploying OpenStack, Astara and Ceph: from concept to public cloud
(and hell in the middle)
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8018

Performance and stability comparison of Openstack running on CEPH
cluster with journals on NVMe and HDD
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8044

Ceph in the Real World: Examples and Advice
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8076

Study the performance characteristics of an Openstack installation
running on a CEPH cluster with 

[ceph-users] Performance Testing of CEPH on ARM MicroServer

2016-02-15 Thread Swapnil Jain
For most of you, Ceph on ARMv7 might not sound good. This is our setup and our
FIO testing report.  I am not able to understand:

1) Are these results good or bad?
2) Why write is much better than read, whereas read should be better.

Hardware:

8 x ARMv7 MicroServer with 4 x 10G Uplink

Each MicroServer with:
2GB RAM
Dual Core 1.6 GHz processor
2 x 2.5 Gbps Ethernet (1 for Public / 1 for Cluster Network)
1 x 3TB SATA HDD
1 x 128GB MSata Flash

Software:
Debian 8.3 32bit
ceph version 9.2.0-25-gf480cea

Setup:

3 MON (Shared with 3 OSD)
8 OSD
Data on 3TB SATA with XFS
Journal on 128GB MSata Flash

pool with replica 1
500GB image with 4M object size

FIO command: fio --name=unit1 --filename=/dev/rbd1 --bs=4k --runtime=300 
--readwrite=write

Client:

Ubuntu on Intel 24core/16GB RAM 10G Ethernet

Result for different tests

128k-randread.txt:  read : io=2587.4MB, bw=8830.2KB/s, iops=68, runt=300020msec
128k-randwrite.txt:  write: io=48549MB, bw=165709KB/s, iops=1294, 
runt=35msec
128k-read.txt:  read : io=26484MB, bw=90397KB/s, iops=706, runt=32msec
128k-write.txt:  write: io=89538MB, bw=305618KB/s, iops=2387, runt=34msec
16k-randread.txt:  read : io=383760KB, bw=1279.2KB/s, iops=79, runt=31msec
16k-randwrite.txt:  write: io=8720.7MB, bw=29764KB/s, iops=1860, runt=32msec
16k-read.txt:  read : io=27444MB, bw=93676KB/s, iops=5854, runt=31msec
16k-write.txt:  write: io=87811MB, bw=299726KB/s, iops=18732, runt=31msec
1M-randread.txt:  read : io=10439MB, bw=35631KB/s, iops=34, runt=38msec
1M-randwrite.txt:  write: io=98943MB, bw=337721KB/s, iops=329, runt=34msec
1M-read.txt:  read : io=25717MB, bw=87779KB/s, iops=85, runt=37msec
1M-write.txt:  write: io=74264MB, bw=253487KB/s, iops=247, runt=31msec
4k-randread.txt:  read : io=116920KB, bw=399084B/s, iops=97, runt=32msec
4k-randwrite.txt:  write: io=5579.2MB, bw=19043KB/s, iops=4760, runt=34msec
4k-read.txt:  read : io=27032MB, bw=92271KB/s, iops=23067, runt=31msec
4k-write.txt:  write: io=92955MB, bw=317284KB/s, iops=79320, runt=31msec
64k-randread.txt:  read : io=1400.2MB, bw=4778.2KB/s, iops=74, runt=300020msec
64k-randwrite.txt:  write: io=27676MB, bw=94467KB/s, iops=1476, runt=35msec
64k-read.txt:  read : io=27805MB, bw=94909KB/s, iops=1482, runt=32msec
64k-write.txt:  write: io=95484MB, bw=325917KB/s, iops=5092, runt=33msec
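
One thing worth ruling out (not part of the report above): the fio command
used here does buffered I/O, so writes to /dev/rbd1 can be absorbed by the
client's page cache and look far faster than reads. A minimal direct, queued
variant of the same run, assuming the same device path:

fio --name=unit1 --filename=/dev/rbd1 --bs=4k --runtime=300 \
    --readwrite=write --direct=1 --ioengine=libaio --iodepth=32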


—
Swapnil Jain | swap...@linux.com 
Solution Architect & Red Hat Certified Instructor
RHC{A,DS,E,I,SA,SA-RHOS,VA}, CE{H,I}, CC{DA,NA}, MCSE, CNE




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] pg repair behavior? (Was: Re: getting rid of misplaced objects)

2016-02-15 Thread Zoltan Arnold Nagy
Hi Bryan,

You were right: we had modified our OSD weights a little (from 1 to around 0.85
on some OSDs), and once I changed them back to 1 the remapped PGs and
misplaced objects were gone.
So thank you for the tip.
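
In case it helps anyone else, a quick sketch of how to spot and undo that kind
of reweight (the osd id is a placeholder):

ceph osd tree              # the REWEIGHT column shows values below 1
ceph osd reweight 12 1.0   # restore a single OSD to full weight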

For the inconsistent ones and the scrub errors, I'm a little wary of using pg
repair, as that - if I understand correctly - only copies the primary's data to
the other replicas and thus can easily corrupt the whole object if the primary
copy is the corrupted one.

I haven't seen an update on this since last May, when it was brought up as a
concern by several people and there were mentions of adding checksums to the
metadata and doing a checksum comparison on repair.

Can anybody give an update on how exactly pg repair works in Hammer, or how it
will work in Jewel?
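
For reference, this is roughly how I would inspect an inconsistent PG before
deciding on a repair (the pgid 3.1a is a placeholder, and
rados list-inconsistent-obj needs a recent enough release):

ceph health detail | grep inconsistent        # find the affected pgid
rados list-inconsistent-obj 3.1a --format=json-pretty
ceph pg repair 3.1a                           # only once you know which copy is bad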

> On 11 Feb 2016, at 22:17, Stillwell, Bryan  
> wrote:
> 
> What does 'ceph osd tree' look like for this cluster?  Also have you done
> anything special to your CRUSH rules?
> 
> I've usually found this to be caused by modifying OSD weights a little too
> much.
> 
> As for the inconsistent PG, you should be able to run 'ceph pg repair' on
> it:
> 
> http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#
> pgs-inconsistent
> 
> 
> Bryan
> 
> On 2/11/16, 11:21 AM, "ceph-users on behalf of Zoltan Arnold Nagy"
> 
> wrote:
> 
>> Hi,
>> 
>> Are there any tips and tricks around getting rid of misplaced objects? I
>> did check the archive but didn't find anything.
>> 
>> Right now my cluster looks like this:
>> 
>> pgmap v43288593: 16384 pgs, 4 pools, 45439 GB data, 10383 kobjects
>>   109 TB used, 349 TB / 458 TB avail
>>   330/25160461 objects degraded (0.001%)
>>   31280/25160461 objects misplaced (0.124%)
>>  16343 active+clean
>> 40 active+remapped
>>  1 active+clean+inconsistent
>> 
>> This is how it has been for a while and I thought for sure that the
>> misplaced would converge down to 0, but nevertheless, it didn't.
>> 
>> Any pointers on how I could get it back to all active+clean?
>> 
>> Cheers,
>> Zoltan
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 
> 
> 
> This E-mail and any of its attachments may contain Time Warner Cable 
> proprietary information, which is privileged, confidential, or subject to 
> copyright belonging to Time Warner Cable. This E-mail is intended solely for 
> the use of the individual or entity to which it is addressed. If you are not 
> the intended recipient of this E-mail, you are hereby notified that any 
> dissemination, distribution, copying, or action taken in relation to the 
> contents of and attachments to this E-mail is strictly prohibited and may be 
> unlawful. If you have received this E-mail in error, please notify the sender 
> immediately and permanently delete the original and any copy of this E-mail 
> and any printout.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Fwd: [SOLVED] ceph-disk activate fails (after 33 osd drives)

2016-02-15 Thread Alexey Sheplyakov
[forwarding to the list so people know how to solve the problem]

-- Forwarded message --
From: John Hogenmiller (yt) 
Date: Fri, Feb 12, 2016 at 6:48 PM
Subject: Re: [ceph-users] ceph-disk activate fails (after 33 osd drives)
To: Alexey Sheplyakov 


That was it, thank you.  Definitely documenting that item.  I'll
proceed a bit slower, one node at a time, and wait for health_ok.

I was actually looking at the Supermicro 72-drive Ceph nodes myself.
We have what is effectively Supermicro "white label" 2U Twin hardware
attached to a 60-drive DAE. The hardware I'm testing on has 6TB
drives, though our newer models have 8TB.  In the near future, I'll
have to really dive into the placement group docs and try to work out
bottlenecks and optimizations for configurations between 1 and 4 racks
(480, 960, 1440, and 1920 OSDs), as well as monitoring nodes. We're
doing strictly object store.

Thanks again,
John
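
To make the aio-max-nr setting from the fix quoted below persistent across
reboots, something like this should do (a sketch; the file name is arbitrary):

echo "fs.aio-max-nr = 131072" > /etc/sysctl.d/30-ceph-aio.conf
sysctl -p /etc/sysctl.d/30-ceph-aio.conf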

On Fri, Feb 12, 2016 at 10:25 AM, Alexey Sheplyakov
 wrote:
>
> John,
>
> > 2016-02-12 12:53:43.340526 7f149bc71940 -1 journal FileJournal::_open: 
> > unable to setup io_context (0) Success
>
> Try increasing aio-max-nr:
>
> echo 131072 > /proc/sys/fs/aio-max-nr
>
> Best regards,
>   Alexey
>
>
> On Fri, Feb 12, 2016 at 4:51 PM, John Hogenmiller (yt)  
> wrote:
> >
> >
> > I have 7 servers, each containing 60 x 6TB drives in jbod mode. When I first
> > started, I only activated a couple drives on 3 nodes as Ceph OSDs.
> > Yesterday, I went to expand to the remaining nodes as well as prepare and
> > activate all the drives.
> >
> > ceph-disk prepare worked just fine. However, ceph-disk activate-all managed
> > to only activate 33 drives and failed on the rest.  This is consistent across
> > all 7 nodes (existing and newly installed). At the end of the day, I have 33 Ceph
> > OSDs activated per server and can't activate any more. I did have to bump up
> > the pg_num and pgp_num on the pool in order to accommodate the drives that
> > did activate. I don't know if having a low pg number during the mass influx
> > of OSDs caused an issue or not within the pool. I don't think so because I
> > can only set the pg_num to a maximum value determined by the number of known
> > OSDs. But maybe you have to expand slowly, increase pg's, expand osds,
> > increase pgs in a slow fashion.  I certainly have not seen anything to
> > suggest a magic "33/node limit", and I've seen references to servers with up
> > to 72 Ceph OSDs on them.
> >
> > I then attempted to activate individual ceph osd's and got the same set of
> > errors. I even wiped a drive, re-ran `ceph-disk prepare` and `ceph-disk
> > activate` to have it fail in the same way.
> >
> > status:
> > ```
> > root@ljb01:/home/ceph/rain-cluster# ceph status
> > cluster 4ebe7995-6a33-42be-bd4d-20f51d02ae45
> >  health HEALTH_OK
> >  monmap e5: 5 mons at
> > {hail02-r01-06=172.29.4.153:6789/0,hail02-r01-08=172.29.4.155:6789/0,rain02-r01-01=172.29.4.148:6789/0,rain02-r01-03=172.29.4.150:6789/0,rain02-r01-04=172.29.4.151:6789/0}
> > election epoch 12, quorum 0,1,2,3,4
> > rain02-r01-01,rain02-r01-03,rain02-r01-04,hail02-r01-06,hail02-r01-08
> >  osdmap e1116: 420 osds: 232 up, 232 in
> > flags sortbitwise
> >   pgmap v397198: 10872 pgs, 14 pools, 101 MB data, 8456 objects
> > 38666 MB used, 1264 TB / 1264 TB avail
> >10872 active+clean
> > ```
> >
> >
> >
> > Here is what I get when I run ceph-disk prepare on a blank drive:
> >
> > ```
> > root@rain02-r01-01:/etc/ceph# ceph-disk  prepare  /dev/sdbh1
> > The operation has completed successfully.
> > The operation has completed successfully.
> > meta-data=/dev/sdbh1 isize=2048   agcount=6, agsize=268435455
> > blks
> >  =   sectsz=512   attr=2, projid32bit=0
> > data =   bsize=4096   blocks=1463819665, imaxpct=5
> >  =   sunit=0  swidth=0 blks
> > naming   =version 2  bsize=4096   ascii-ci=0
> > log  =internal log   bsize=4096   blocks=521728, version=2
> >  =   sectsz=512   sunit=0 blks, lazy-count=1
> > realtime =none   extsz=4096   blocks=0, rtextents=0
> > The operation has completed successfully.
> >
> > root@rain02-r01-01:/etc/ceph# parted /dev/sdh print
> > Model: ATA HUS726060ALA640 (scsi)
> > Disk /dev/sdh: 6001GB
> > Sector size (logical/physical): 512B/512B
> > Partition Table: gpt
> >
> > Number  Start   End SizeFile system  Name  Flags
> >  2  1049kB  5369MB  5368MB   ceph journal
> >  1  5370MB  6001GB  5996GB  xfs  ceph data
> > ```
> >
> > And finally the errors from attempting to activate the drive.
> >
> > ```
> > root@rain02-r01-01:/etc/ceph# ceph-disk activate /dev/sdbh1
> > got monmap epoch 5
> > 2016-02-12 12:53:43.340526 7f149bc71940 

Re: [ceph-users] CephFS: read-only access

2016-02-15 Thread Gregory Farnum
On February 15, 2016 at 1:35:45 AM, Burkhard Linke 
(burkhard.li...@computational.bio.uni-giessen.de) wrote:
> Hi,
> 
> I would like to provide access to a bunch of large files (bio sequence
> databases) to our cloud users. Putting the files in a RBD instance
> requires special care if several VMs need to access the files; creating
> an individual RBD snapshot for each instance requires more effort in the
> cloud setup.
> 
> I'm currently thinking about given the users read-only access to a
> CephFS subdirectory. The Jewel release includes a path based access
> restriction for CephFS, which fits nicely into the concept. But the
> documentation does not state whether it is possible to have a read-only
> CephFS mount point.

Yep; that should be all it takes. :)
-Greg
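
A minimal sketch of what that could look like end to end (the client name,
path, pool and secret file are placeholders; double-check the cap syntax
against the Jewel docs):

ceph auth get-or-create client.biodb \
    mds 'allow r, allow r path=/foo' \
    mon 'allow r' \
    osd 'allow r pool=data'
# and mount it read-only on the guests as an extra belt and braces
mount -t ceph mon1:6789:/foo /mnt/biodb \
    -o name=biodb,secretfile=/etc/ceph/biodb.secret,ro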

> Is it sufficient to skip the write flag from the
> user's capabilities, e.g.
> 
> caps: [mds] allow rp, allow rp path=/foo
> caps: [mon] allow r
> caps: [osd] allow r pool=data
> 
> Regards,
> Burkhard
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Help: pool not responding

2016-02-15 Thread Mark Nelson

On 02/15/2016 07:34 AM, Mario Giammarco wrote:
> Karan Singh writes:
>
>> Agreed to Ferhat.
>>
>> Recheck your network (bonds, interfaces, network switches, even cables)
>
> I use gigabit ethernet, I am checking the network.
> But I am using another pool on the same cluster and it works perfectly: why?


PGs are pool specific, so the other pool may be totally healthy while 
the first is not.  If it turns out it's a hardware problem, it's also 
possible that the 2nd pool may not hit all of the same OSDs as the first 
pool, especially if it has a low PG count.
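
A quick way to see which OSDs a given pool actually touches (pool name, object
name and pool id are placeholders):

ceph osd map slowpool some-object     # shows the acting set for that object's PG
ceph pg dump pgs_brief | grep '^2\.'  # list acting sets for all PGs of pool id 2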


Mark



> Thanks again,
> Mario


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Help: pool not responding

2016-02-15 Thread Mario Giammarco
Karan Singh  writes:


> Agreed to Ferhat.
> 
> Recheck your network ( bonds , interfaces , network switches , even cables 
) 

I use gigabit ethernet, I am checking the network.
But I am using another pool on the same cluster and it works perfectly: why?

Thanks again,
Mario

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph mirrors wanted!

2016-02-15 Thread Ferhat Ozkasgarli
Hello Wido,

Then let me solve the IPv6 problem and get back to you.

Thx

On Mon, Feb 15, 2016 at 2:16 PM, Wido den Hollander  wrote:

>
> > Op 15 februari 2016 om 11:41 schreef Ferhat Ozkasgarli <
> ozkasga...@gmail.com>:
> >
> >
> > Hello Wido,
> >
> > I have just talked with our network admin. He said we are not ready for
> > IPv6 yet.
> >
> > So, if it is ok with IPv4 only, I will start the process.
> >
>
> The requirement is set for all mirrors to be available over IPv4 and IPv6.
> This
> means that the future of storage can be downloaded over the protocol of the
> future.
>
> I really do appreciate the offer, but on IPv4-only I can't make it
> "tr.ceph.com".
>
> Wido
>
> > On Mon, Feb 15, 2016 at 12:28 PM, Wido den Hollander 
> wrote:
> >
> > > Hi,
> > >
> > > > Op 15 februari 2016 om 11:00 schreef Ferhat Ozkasgarli <
> > > ozkasga...@gmail.com>:
> > > >
> > > >
> > > > Hello Wido,
> > > >
> > > > As Radore Datacenter we also want to become mirror for Ceph project.
> > > >
> > >
> > > Goed news!
> > >
> > > > Our URL will be http://ceph-mirros.radore.com
> > > >
> > > > We would be happy to become tr.ceph.com
> > > >
> > > Even better. Are you able to meet the requirements for IPv4 and IPv6?
> > >
> > > If so, I'll add you to the ceph-mirrors list.
> > >
> > > Wido
> > >
> > > > The server will be ready tomorrow or the day after.
> > > >
> > > > On Sun, Feb 7, 2016 at 6:03 PM, Tyler Bishop <
> > > tyler.bis...@beyondhosting.net
> > > > > wrote:
> > > >
> > > > > http://ceph.mirror.beyondhosting.net/
> > > > >
> > > > > I need to know what server will be keeping the master copy for
> rsync to
> > > > > pull changes from.
> > > > >
> > > > > Tyler Bishop
> > > > > Chief Technical Officer
> > > > > 513-299-7108 x10
> > > > >
> > > > >
> > > > >
> > > > > tyler.bis...@beyondhosting.net
> > > > >
> > > > >
> > > > > If you are not the intended recipient of this transmission you are
> > > > > notified that disclosing, copying, distributing or taking any
> action in
> > > > > reliance on the contents of this information is strictly
> prohibited.
> > > > >
> > > > > - Original Message -
> > > > > From: "Wido den Hollander" 
> > > > > To: "Tyler Bishop" 
> > > > > Cc: "ceph-users" 
> > > > > Sent: Sunday, February 7, 2016 4:22:13 AM
> > > > > Subject: Re: [ceph-users] Ceph mirrors wanted!
> > > > >
> > > > > > Op 6 februari 2016 om 15:48 schreef Tyler Bishop
> > > > > > :
> > > > > >
> > > > > >
> > > > > > Covered except that the dreamhost mirror is constantly down or
> > > broken.
> > > > > >
> > > > >
> > > > > Yes. Working on that.
> > > > >
> > > > > > I can add ceph.mirror.beyondhosting.net for it.
> > > > > >
> > > > >
> > > > > Great. Would us-east.ceph.com work for you? I can CNAME that to
> > > > > ceph.mirror.beyondhosting.net.
> > > > >
> > > > > I see that mirror.beyondhosting.net has IPv4 and IPv6, so that is
> > > good.
> > > > >
> > > > > If you are OK, I'll add you to the ceph-mirrors list so we can get
> > > this up
> > > > > and
> > > > > running.
> > > > >
> > > > > Wido
> > > > >
> > > > > > Tyler Bishop
> > > > > > Chief Technical Officer
> > > > > > 513-299-7108 x10
> > > > > >
> > > > > >
> > > > > >
> > > > > > tyler.bis...@beyondhosting.net
> > > > > >
> > > > > >
> > > > > > If you are not the intended recipient of this transmission you
> are
> > > > > notified
> > > > > > that disclosing, copying, distributing or taking any action in
> > > reliance
> > > > > on the
> > > > > > contents of this information is strictly prohibited.
> > > > > >
> > > > > > - Original Message -
> > > > > > From: "Wido den Hollander" 
> > > > > > To: "Tyler Bishop" 
> > > > > > Cc: "ceph-users" 
> > > > > > Sent: Saturday, February 6, 2016 2:46:50 AM
> > > > > > Subject: Re: [ceph-users] Ceph mirrors wanted!
> > > > > >
> > > > > > > Op 6 februari 2016 om 0:08 schreef Tyler Bishop
> > > > > > > :
> > > > > > >
> > > > > > >
> > > > > > > I have ceph pulling down from eu.   What *origin* should I
> setup
> > > rsync
> > > > > to
> > > > > > > automatically pull from?
> > > > > > >
> > > > > > > download.ceph.com is consistently broken.
> > > > > > >
> > > > > >
> > > > > > download.ceph.com should be your best guess, since that is
> closest.
> > > > > >
> > > > > > The US however seems covered with download.ceph.com although we
> > > might
> > > > > set up
> > > > > > us-east and us-west.
> > > > > >
> > > > > > I see that Ceph is currently in a subfolder called 'Ceph' and
> that
> > > is not
> > > > > > consistent with the other mirrors.
> > > > > >
> > > > > > Can that be fixed so that it matches the original directory
> > > structure?
> > > > > >
> > > > > > Wido
> > > > > >
> > > > > > > - Original Message -
> > > > > > > From: "Tyler Bishop" 

Re: [ceph-users] Ceph mirrors wanted!

2016-02-15 Thread Wido den Hollander

> Op 15 februari 2016 om 11:41 schreef Ferhat Ozkasgarli :
> 
> 
> Hello Wido,
> 
> I have just talked with our network admin. He said we are not ready for
> IPv6 yet.
> 
> So, if it is ok with IPv4 only, I will start the process.
> 

The requirement is set for all mirrors to be available over IPv4 and IPv6. This
means that the future of storage can be downloaded over the protocol of the
future.

I really do appreciate the offer, but on IPv4-only I can't make it
"tr.ceph.com".

Wido

> On Mon, Feb 15, 2016 at 12:28 PM, Wido den Hollander  wrote:
> 
> > Hi,
> >
> > > Op 15 februari 2016 om 11:00 schreef Ferhat Ozkasgarli <
> > ozkasga...@gmail.com>:
> > >
> > >
> > > Hello Wido,
> > >
> > > As Radore Datacenter we also want to become mirror for Ceph project.
> > >
> >
> > Goed news!
> >
> > > Our URL will be http://ceph-mirros.radore.com
> > >
> > > We would be happy to become tr.ceph.com
> > >
> > Even better. Are you able to meet the requirements for IPv4 and IPv6?
> >
> > If so, I'll add you to the ceph-mirrors list.
> >
> > Wido
> >
> > > The server will be ready tomorrow or the day after.
> > >
> > > On Sun, Feb 7, 2016 at 6:03 PM, Tyler Bishop <
> > tyler.bis...@beyondhosting.net
> > > > wrote:
> > >
> > > > http://ceph.mirror.beyondhosting.net/
> > > >
> > > > I need to know what server will be keeping the master copy for rsync to
> > > > pull changes from.
> > > >
> > > > Tyler Bishop
> > > > Chief Technical Officer
> > > > 513-299-7108 x10
> > > >
> > > >
> > > >
> > > > tyler.bis...@beyondhosting.net
> > > >
> > > >
> > > > If you are not the intended recipient of this transmission you are
> > > > notified that disclosing, copying, distributing or taking any action in
> > > > reliance on the contents of this information is strictly prohibited.
> > > >
> > > > - Original Message -
> > > > From: "Wido den Hollander" 
> > > > To: "Tyler Bishop" 
> > > > Cc: "ceph-users" 
> > > > Sent: Sunday, February 7, 2016 4:22:13 AM
> > > > Subject: Re: [ceph-users] Ceph mirrors wanted!
> > > >
> > > > > Op 6 februari 2016 om 15:48 schreef Tyler Bishop
> > > > > :
> > > > >
> > > > >
> > > > > Covered except that the dreamhost mirror is constantly down or
> > broken.
> > > > >
> > > >
> > > > Yes. Working on that.
> > > >
> > > > > I can add ceph.mirror.beyondhosting.net for it.
> > > > >
> > > >
> > > > Great. Would us-east.ceph.com work for you? I can CNAME that to
> > > > ceph.mirror.beyondhosting.net.
> > > >
> > > > I see that mirror.beyondhosting.net has IPv4 and IPv6, so that is
> > good.
> > > >
> > > > If you are OK, I'll add you to the ceph-mirrors list so we can get
> > this up
> > > > and
> > > > running.
> > > >
> > > > Wido
> > > >
> > > > > Tyler Bishop
> > > > > Chief Technical Officer
> > > > > 513-299-7108 x10
> > > > >
> > > > >
> > > > >
> > > > > tyler.bis...@beyondhosting.net
> > > > >
> > > > >
> > > > > If you are not the intended recipient of this transmission you are
> > > > notified
> > > > > that disclosing, copying, distributing or taking any action in
> > reliance
> > > > on the
> > > > > contents of this information is strictly prohibited.
> > > > >
> > > > > - Original Message -
> > > > > From: "Wido den Hollander" 
> > > > > To: "Tyler Bishop" 
> > > > > Cc: "ceph-users" 
> > > > > Sent: Saturday, February 6, 2016 2:46:50 AM
> > > > > Subject: Re: [ceph-users] Ceph mirrors wanted!
> > > > >
> > > > > > Op 6 februari 2016 om 0:08 schreef Tyler Bishop
> > > > > > :
> > > > > >
> > > > > >
> > > > > > I have ceph pulling down from eu.   What *origin* should I setup
> > rsync
> > > > to
> > > > > > automatically pull from?
> > > > > >
> > > > > > download.ceph.com is consistently broken.
> > > > > >
> > > > >
> > > > > download.ceph.com should be your best guess, since that is closest.
> > > > >
> > > > > The US however seems covered with download.ceph.com although we
> > might
> > > > set up
> > > > > us-east and us-west.
> > > > >
> > > > > I see that Ceph is currently in a subfolder called 'Ceph' and that
> > is not
> > > > > consistent with the other mirrors.
> > > > >
> > > > > Can that be fixed so that it matches the original directory
> > structure?
> > > > >
> > > > > Wido
> > > > >
> > > > > > - Original Message -
> > > > > > From: "Tyler Bishop" 
> > > > > > To: "Wido den Hollander" 
> > > > > > Cc: "ceph-users" 
> > > > > > Sent: Friday, February 5, 2016 5:59:20 PM
> > > > > > Subject: Re: [ceph-users] Ceph mirrors wanted!
> > > > > >
> > > > > > We would be happy to mirror the project.
> > > > > >
> > > > > > http://mirror.beyondhosting.net
> > > > > >
> > > > > >
> > > > > > - Original Message -
> > > > > > From: "Wido den Hollander" 

Re: [ceph-users] Ceph mirrors wanted!

2016-02-15 Thread Ferhat Ozkasgarli
Hello Wido,

I have just talked with our network admin. He said we are not ready for
IPv6 yet.

So, if it is ok with IPv4 only, I will start the process.

On Mon, Feb 15, 2016 at 12:28 PM, Wido den Hollander  wrote:

> Hi,
>
> > Op 15 februari 2016 om 11:00 schreef Ferhat Ozkasgarli <
> ozkasga...@gmail.com>:
> >
> >
> > Hello Wido,
> >
> > As Radore Datacenter we also want to become mirror for Ceph project.
> >
>
> Goed news!
>
> > Our URL will be http://ceph-mirros.radore.com
> >
> > We would be happy to become tr.ceph.com
> >
> Even better. Are you able to meet the requirements for IPv4 and IPv6?
>
> If so, I'll add you to the ceph-mirrors list.
>
> Wido
>
> > The server will be ready tomorrow or the day after.
> >
> > On Sun, Feb 7, 2016 at 6:03 PM, Tyler Bishop <
> tyler.bis...@beyondhosting.net
> > > wrote:
> >
> > > http://ceph.mirror.beyondhosting.net/
> > >
> > > I need to know what server will be keeping the master copy for rsync to
> > > pull changes from.
> > >
> > > Tyler Bishop
> > > Chief Technical Officer
> > > 513-299-7108 x10
> > >
> > >
> > >
> > > tyler.bis...@beyondhosting.net
> > >
> > >
> > > If you are not the intended recipient of this transmission you are
> > > notified that disclosing, copying, distributing or taking any action in
> > > reliance on the contents of this information is strictly prohibited.
> > >
> > > - Original Message -
> > > From: "Wido den Hollander" 
> > > To: "Tyler Bishop" 
> > > Cc: "ceph-users" 
> > > Sent: Sunday, February 7, 2016 4:22:13 AM
> > > Subject: Re: [ceph-users] Ceph mirrors wanted!
> > >
> > > > Op 6 februari 2016 om 15:48 schreef Tyler Bishop
> > > > :
> > > >
> > > >
> > > > Covered except that the dreamhost mirror is constantly down or
> broken.
> > > >
> > >
> > > Yes. Working on that.
> > >
> > > > I can add ceph.mirror.beyondhosting.net for it.
> > > >
> > >
> > > Great. Would us-east.ceph.com work for you? I can CNAME that to
> > > ceph.mirror.beyondhosting.net.
> > >
> > > I see that mirror.beyondhosting.net has IPv4 and IPv6, so that is
> good.
> > >
> > > If you are OK, I'll add you to the ceph-mirrors list so we can get
> this up
> > > and
> > > running.
> > >
> > > Wido
> > >
> > > > Tyler Bishop
> > > > Chief Technical Officer
> > > > 513-299-7108 x10
> > > >
> > > >
> > > >
> > > > tyler.bis...@beyondhosting.net
> > > >
> > > >
> > > > If you are not the intended recipient of this transmission you are
> > > notified
> > > > that disclosing, copying, distributing or taking any action in
> reliance
> > > on the
> > > > contents of this information is strictly prohibited.
> > > >
> > > > - Original Message -
> > > > From: "Wido den Hollander" 
> > > > To: "Tyler Bishop" 
> > > > Cc: "ceph-users" 
> > > > Sent: Saturday, February 6, 2016 2:46:50 AM
> > > > Subject: Re: [ceph-users] Ceph mirrors wanted!
> > > >
> > > > > Op 6 februari 2016 om 0:08 schreef Tyler Bishop
> > > > > :
> > > > >
> > > > >
> > > > > I have ceph pulling down from eu.   What *origin* should I setup
> rsync
> > > to
> > > > > automatically pull from?
> > > > >
> > > > > download.ceph.com is consistently broken.
> > > > >
> > > >
> > > > download.ceph.com should be your best guess, since that is closest.
> > > >
> > > > The US however seems covered with download.ceph.com although we
> might
> > > set up
> > > > us-east and us-west.
> > > >
> > > > I see that Ceph is currently in a subfolder called 'Ceph' and that
> is not
> > > > consistent with the other mirrors.
> > > >
> > > > Can that be fixed so that it matches the original directory
> structure?
> > > >
> > > > Wido
> > > >
> > > > > - Original Message -
> > > > > From: "Tyler Bishop" 
> > > > > To: "Wido den Hollander" 
> > > > > Cc: "ceph-users" 
> > > > > Sent: Friday, February 5, 2016 5:59:20 PM
> > > > > Subject: Re: [ceph-users] Ceph mirrors wanted!
> > > > >
> > > > > We would be happy to mirror the project.
> > > > >
> > > > > http://mirror.beyondhosting.net
> > > > >
> > > > >
> > > > > - Original Message -
> > > > > From: "Wido den Hollander" 
> > > > > To: "ceph-users" 
> > > > > Sent: Saturday, January 30, 2016 9:14:59 AM
> > > > > Subject: [ceph-users] Ceph mirrors wanted!
> > > > >
> > > > > Hi,
> > > > >
> > > > > My PR was merged with a script to mirror Ceph properly:
> > > > > https://github.com/ceph/ceph/tree/master/mirroring
> > > > >
> > > > > Currently there are 3 (official) locations where you can get Ceph:
> > > > >
> > > > > - download.ceph.com (Dreamhost, US)
> > > > > - eu.ceph.com (PCextreme, Netherlands)
> > > > > - au.ceph.com (Digital Pacific, Australia)
> > > > >
> > > > > I'm looking for more mirrors to become official mirrors so 

Re: [ceph-users] Ceph mirrors wanted!

2016-02-15 Thread Wido den Hollander
Hi,

> Op 15 februari 2016 om 11:00 schreef Ferhat Ozkasgarli :
> 
> 
> Hello Wido,
> 
> As Radore Datacenter we also want to become mirror for Ceph project.
> 

Good news!

> Our URL will be http://ceph-mirros.radore.com
> 
> We would be happy to become tr.ceph.com
> 
Even better. Are you able to meet the requirements for IPv4 and IPv6?

If so, I'll add you to the ceph-mirrors list.

Wido

> The server will be ready tomorrow or the day after.
> 
> On Sun, Feb 7, 2016 at 6:03 PM, Tyler Bishop  > wrote:
> 
> > http://ceph.mirror.beyondhosting.net/
> >
> > I need to know what server will be keeping the master copy for rsync to
> > pull changes from.
> >
> > Tyler Bishop
> > Chief Technical Officer
> > 513-299-7108 x10
> >
> >
> >
> > tyler.bis...@beyondhosting.net
> >
> >
> > If you are not the intended recipient of this transmission you are
> > notified that disclosing, copying, distributing or taking any action in
> > reliance on the contents of this information is strictly prohibited.
> >
> > - Original Message -
> > From: "Wido den Hollander" 
> > To: "Tyler Bishop" 
> > Cc: "ceph-users" 
> > Sent: Sunday, February 7, 2016 4:22:13 AM
> > Subject: Re: [ceph-users] Ceph mirrors wanted!
> >
> > > Op 6 februari 2016 om 15:48 schreef Tyler Bishop
> > > :
> > >
> > >
> > > Covered except that the dreamhost mirror is constantly down or broken.
> > >
> >
> > Yes. Working on that.
> >
> > > I can add ceph.mirror.beyondhosting.net for it.
> > >
> >
> > Great. Would us-east.ceph.com work for you? I can CNAME that to
> > ceph.mirror.beyondhosting.net.
> >
> > I see that mirror.beyondhosting.net has IPv4 and IPv6, so that is good.
> >
> > If you are OK, I'll add you to the ceph-mirrors list so we can get this up
> > and
> > running.
> >
> > Wido
> >
> > > Tyler Bishop
> > > Chief Technical Officer
> > > 513-299-7108 x10
> > >
> > >
> > >
> > > tyler.bis...@beyondhosting.net
> > >
> > >
> > > If you are not the intended recipient of this transmission you are
> > notified
> > > that disclosing, copying, distributing or taking any action in reliance
> > on the
> > > contents of this information is strictly prohibited.
> > >
> > > - Original Message -
> > > From: "Wido den Hollander" 
> > > To: "Tyler Bishop" 
> > > Cc: "ceph-users" 
> > > Sent: Saturday, February 6, 2016 2:46:50 AM
> > > Subject: Re: [ceph-users] Ceph mirrors wanted!
> > >
> > > > Op 6 februari 2016 om 0:08 schreef Tyler Bishop
> > > > :
> > > >
> > > >
> > > > I have ceph pulling down from eu.   What *origin* should I setup rsync
> > to
> > > > automatically pull from?
> > > >
> > > > download.ceph.com is consistently broken.
> > > >
> > >
> > > download.ceph.com should be your best guess, since that is closest.
> > >
> > > The US however seems covered with download.ceph.com although we might
> > set up
> > > us-east and us-west.
> > >
> > > I see that Ceph is currently in a subfolder called 'Ceph' and that is not
> > > consistent with the other mirrors.
> > >
> > > Can that be fixed so that it matches the original directory structure?
> > >
> > > Wido
> > >
> > > > - Original Message -
> > > > From: "Tyler Bishop" 
> > > > To: "Wido den Hollander" 
> > > > Cc: "ceph-users" 
> > > > Sent: Friday, February 5, 2016 5:59:20 PM
> > > > Subject: Re: [ceph-users] Ceph mirrors wanted!
> > > >
> > > > We would be happy to mirror the project.
> > > >
> > > > http://mirror.beyondhosting.net
> > > >
> > > >
> > > > - Original Message -
> > > > From: "Wido den Hollander" 
> > > > To: "ceph-users" 
> > > > Sent: Saturday, January 30, 2016 9:14:59 AM
> > > > Subject: [ceph-users] Ceph mirrors wanted!
> > > >
> > > > Hi,
> > > >
> > > > My PR was merged with a script to mirror Ceph properly:
> > > > https://github.com/ceph/ceph/tree/master/mirroring
> > > >
> > > > Currently there are 3 (official) locations where you can get Ceph:
> > > >
> > > > - download.ceph.com (Dreamhost, US)
> > > > - eu.ceph.com (PCextreme, Netherlands)
> > > > - au.ceph.com (Digital Pacific, Australia)
> > > >
> > > > I'm looking for more mirrors to become official mirrors so we can
> > easily
> > > > distribute Ceph.
> > > >
> > > > Mirrors do go down and it's always nice to have a mirror local to you.
> > > >
> > > > I'd like to have one or more mirrors in Asia, Africa and/or South
> > > > Ameirca if possible. Anyone able to host there? Other locations are
> > > > welcome as well!
> > > >
> > > > A few things which are required:
> > > >
> > > > - 1Gbit connection or more
> > > > - Native IPv4 and IPv6
> > > > - HTTP access
> > > > - rsync access
> > > > - 2TB of storage or more
> > > > - Monitoring of the 

Re: [ceph-users] Ceph mirrors wanted!

2016-02-15 Thread Ferhat Ozkasgarli
Hello Wido,

As Radore Datacenter, we also want to become a mirror for the Ceph project.

Our URL will be http://ceph-mirros.radore.com

We would be happy to become tr.ceph.com

The server will be ready tomorrow or the day after.

On Sun, Feb 7, 2016 at 6:03 PM, Tyler Bishop  wrote:

> http://ceph.mirror.beyondhosting.net/
>
> I need to know what server will be keeping the master copy for rsync to
> pull changes from.
>
> Tyler Bishop
> Chief Technical Officer
> 513-299-7108 x10
>
>
>
> tyler.bis...@beyondhosting.net
>
>
> If you are not the intended recipient of this transmission you are
> notified that disclosing, copying, distributing or taking any action in
> reliance on the contents of this information is strictly prohibited.
>
> - Original Message -
> From: "Wido den Hollander" 
> To: "Tyler Bishop" 
> Cc: "ceph-users" 
> Sent: Sunday, February 7, 2016 4:22:13 AM
> Subject: Re: [ceph-users] Ceph mirrors wanted!
>
> > Op 6 februari 2016 om 15:48 schreef Tyler Bishop
> > :
> >
> >
> > Covered except that the dreamhost mirror is constantly down or broken.
> >
>
> Yes. Working on that.
>
> > I can add ceph.mirror.beyondhosting.net for it.
> >
>
> Great. Would us-east.ceph.com work for you? I can CNAME that to
> ceph.mirror.beyondhosting.net.
>
> I see that mirror.beyondhosting.net has IPv4 and IPv6, so that is good.
>
> If you are OK, I'll add you to the ceph-mirrors list so we can get this up
> and
> running.
>
> Wido
>
> > Tyler Bishop
> > Chief Technical Officer
> > 513-299-7108 x10
> >
> >
> >
> > tyler.bis...@beyondhosting.net
> >
> >
> > If you are not the intended recipient of this transmission you are
> notified
> > that disclosing, copying, distributing or taking any action in reliance
> on the
> > contents of this information is strictly prohibited.
> >
> > - Original Message -
> > From: "Wido den Hollander" 
> > To: "Tyler Bishop" 
> > Cc: "ceph-users" 
> > Sent: Saturday, February 6, 2016 2:46:50 AM
> > Subject: Re: [ceph-users] Ceph mirrors wanted!
> >
> > > Op 6 februari 2016 om 0:08 schreef Tyler Bishop
> > > :
> > >
> > >
> > > I have ceph pulling down from eu.   What *origin* should I setup rsync
> to
> > > automatically pull from?
> > >
> > > download.ceph.com is consistently broken.
> > >
> >
> > download.ceph.com should be your best guess, since that is closest.
> >
> > The US however seems covered with download.ceph.com although we might
> set up
> > us-east and us-west.
> >
> > I see that Ceph is currently in a subfolder called 'Ceph' and that is not
> > consistent with the other mirrors.
> >
> > Can that be fixed so that it matches the original directory structure?
> >
> > Wido
> >
> > > - Original Message -
> > > From: "Tyler Bishop" 
> > > To: "Wido den Hollander" 
> > > Cc: "ceph-users" 
> > > Sent: Friday, February 5, 2016 5:59:20 PM
> > > Subject: Re: [ceph-users] Ceph mirrors wanted!
> > >
> > > We would be happy to mirror the project.
> > >
> > > http://mirror.beyondhosting.net
> > >
> > >
> > > - Original Message -
> > > From: "Wido den Hollander" 
> > > To: "ceph-users" 
> > > Sent: Saturday, January 30, 2016 9:14:59 AM
> > > Subject: [ceph-users] Ceph mirrors wanted!
> > >
> > > Hi,
> > >
> > > My PR was merged with a script to mirror Ceph properly:
> > > https://github.com/ceph/ceph/tree/master/mirroring
> > >
> > > Currently there are 3 (official) locations where you can get Ceph:
> > >
> > > - download.ceph.com (Dreamhost, US)
> > > - eu.ceph.com (PCextreme, Netherlands)
> > > - au.ceph.com (Digital Pacific, Australia)
> > >
> > > I'm looking for more mirrors to become official mirrors so we can
> easily
> > > distribute Ceph.
> > >
> > > Mirrors do go down and it's always nice to have a mirror local to you.
> > >
> > > I'd like to have one or more mirrors in Asia, Africa and/or South
> > > Ameirca if possible. Anyone able to host there? Other locations are
> > > welcome as well!
> > >
> > > A few things which are required:
> > >
> > > - 1Gbit connection or more
> > > - Native IPv4 and IPv6
> > > - HTTP access
> > > - rsync access
> > > - 2TB of storage or more
> > > - Monitoring of the mirror/source
> > >
> > > You can easily mirror Ceph yourself with this script I wrote:
> > > https://github.com/ceph/ceph/blob/master/mirroring/mirror-ceph.sh
> > >
> > > eu.ceph.com and au.ceph.com use it to sync from download.ceph.com. If
> > > you want to mirror Ceph locally, please pick a mirror local to you.
> > >
> > > Please refer to these guidelines:
> > > https://github.com/ceph/ceph/tree/master/mirroring#guidelines
> > >
> > > --
> > > Wido den Hollander
> > > 42on B.V.
> > > Ceph trainer and consultant
> > >
> > > 

[ceph-users] CephFS: read-only access

2016-02-15 Thread Burkhard Linke

Hi,

I would like to provide access to a bunch of large files (bio sequence
databases) to our cloud users. Putting the files in an RBD image
requires special care if several VMs need to access the files; creating
an individual RBD snapshot for each instance requires more effort in the
cloud setup.


I'm currently thinking about giving the users read-only access to a
CephFS subdirectory. The Jewel release includes path-based access
restriction for CephFS, which fits nicely into the concept. But the
documentation does not state whether it is possible to have a read-only
CephFS mount point. Is it sufficient to omit the write flag from the
user's capabilities, e.g.


caps: [mds] allow rp, allow rp path=/foo
caps: [mon] allow r
caps: [osd] allow r pool=data

Regards,
Burkhard
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CentOS 7 iscsi gateway using lrbd

2016-02-15 Thread Dominik Zalewski
"Status:
This code is now being ported to the upstream linux kernel reservation API
added in this commit:

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/block/ioctl.c?id=bbd3e064362e5057cc4799ba2e4d68c7593e490b

When this is completed, LIO will call into the iblock backend which will
then call rbd's pr_ops."


Does anyone know how up to date this page is?
http://tracker.ceph.com/projects/ceph/wiki/Clustered_SCSI_target_using_RBD


Is Suse currently the only vendor supporting active/active multipath for RBD
over iSCSI?  https://www.susecon.com/doc/2015/sessions/TUT16512.pdf


I'm trying to configure an active/passive iSCSI gateway on OSD nodes serving
an RBD image. Clustering is done using pacemaker/corosync. Does anyone have a
similar working setup? Anything I should be aware of?


Thanks

Dominik

On Mon, Jan 18, 2016 at 11:35 AM, Dominik Zalewski 
wrote:

> Hi,
>
> I'm looking into implementing iscsi gateway with MPIO using lrbd -
> https://github.com/swiftgist/lrb
>
>
>
> https://www.suse.com/docrep/documents/kgu61iyowz/suse_enterprise_storage_2_and_iscsi.pdf
>
> https://www.susecon.com/doc/2015/sessions/TUT16512.pdf
>
> From above examples:
>
> "For iSCSI failover and load-balancing, these servers must run a kernel
> supporting the target_core_rbd module. This also requires that the target
> servers run at least version 3.12.48-52.27.1 of the kernel-default package.
> Update packages are available from the SUSE Linux Enterprise Server
> maintenance channel."
>
>
> I understand that lrbd is basically a nice way to configure LIO and rbd
> across ceph osd nodes/iscsi gateways. Does CentOS 7 have the same
> target_core_rbd module in the kernel, or is this something Suse Enterprise
> Storage specific only?
>
>
> Basically, will LIO+rbd work the same way on CentOS 7? Has anyone used it
> with CentOS?
>
>
> Thanks
>
>
> Dominik
>
>
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Help: pool not responding

2016-02-15 Thread Karan Singh
Hey Mario

Agreed with Ferhat.

Recheck your network (bonds, interfaces, network switches, even cables). I
have seen this several times before, and in most of the cases it was because of
the network.
BTW, are you using Mellanox?
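
A few quick checks that tend to surface this kind of problem (interface names,
MTU size and host are placeholders for your environment):

ip -s link show bond0            # look for rx/tx errors and drops
ethtool bond0 | grep -i speed    # confirm the negotiated link speed
ping -M do -s 8972 osd-host-1    # verify jumbo-frame MTU end to end, if you use it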

- Karan -

> On 15 Feb 2016, at 10:12, Mario Giammarco  wrote:
> 
> koukou73gr  writes:
> 
>> 
>> Have you tried restarting  osd.0 ?
>> 
> Yes I have restarted all osds many times.
> Also launched repair and scrub.
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Help: pool not responding

2016-02-15 Thread Mario Giammarco
koukou73gr  writes:

> 
> Have you tried restarting  osd.0 ?
> 
Yes I have restarted all osds many times.
Also launched repair and scrub.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com