Re: [ceph-users] krbd map on Jewel, sysfs write failed when rbd map

2016-04-25 Thread WD_Hwang
Dear Cephers:
  I got the same issue under Ubuntu 14.04, even when I try to use image format ‘1’.
# modinfo rbd
filename:   /lib/modules/3.13.0-85-generic/kernel/drivers/block/rbd.ko
license:GPL
author: Jeff Garzik 
description:rados block device
author: Yehuda Sadeh 
author: Sage Weil 
author: Alex Elder 
srcversion: 48BFBD5C3D31D799F01D218
depends:libceph
intree: Y
vermagic:   3.13.0-85-generic SMP mod_unload modversions
signer: Magrathea: Glacier signing key
sig_key:C6:33:E9:BF:A6:CA:49:D5:3D:2E:B5:25:6A:35:87:7D:04:F1:64:F8
sig_hashalgo:   sha512

##
# rbd info block_data/data01
rbd image 'data01':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.540752ae8944a
format: 2
features:
flags:
# rbd map block_data/data01
rbd: sysfs write failed
rbd: map failed: (5) Input/output error

##
# rbd info block_data/data02
rbd image 'data02':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.2aac0238e1f29
format: 2
features: layering
   flags:
# rbd map block_data/data02
rbd: sysfs write failed
rbd: map failed: (5) Input/output error
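
For reference, when "rbd map" fails with EIO on an older kernel such as 3.13, a
common first check is whether dmesg reports a feature-set mismatch between the
kernel client and the cluster (Jewel's default CRUSH tunables are a frequent
cause). A hedged troubleshooting sketch - relaxing tunables triggers data
movement, so treat it as a last resort:

# check the kernel log for "feature set mismatch" / "missing required protocol features"
dmesg | tail -n 20
# show which tunables profile the cluster is using
ceph osd crush show-tunables
# example only: relax tunables for old kernel clients (this causes rebalancing)
ceph osd crush tunables hammer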

  Are there any new ideas on how to solve this issue?

Thanks a lot,
WD

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Can Jewel read Hammer radosgw buckets?

2016-04-25 Thread Richard Chan
Quick questions:

1. Should this script be run on a pre-Jewel setup (e.g. reverted test VMs), or
*after* Jewel has attempted to read the no-zone/no-region Hammer setup and
created the default.* pools?

2. Should the radosgw daemon be running when executing the script?

Thanks!



On Tue, Apr 26, 2016 at 8:06 AM, Yehuda Sadeh-Weinraub 
wrote:

> I managed to reproduce the issue, and there seem to be multiple
> problems. Specifically we have an issue when upgrading a default
> cluster that hasn't had a zone (and region) explicitly configured
> before. There is another bug that I found
> (http://tracker.ceph.com/issues/15597) that makes things even a bit
> more complicated.
>
> I created the following script that might be able to fix things for you:
>
> https://raw.githubusercontent.com/yehudasa/ceph/wip-fix-default-zone/src/fix-zone
>
> For future reference, this script shouldn't be used if there are any
> zones configured other than the default one. It also does some ninja
> patching of the zone config to work around a bug that exists currently,
> but that will probably not apply to future versions.
>
> Please let me know if you have any issues, or if this actually does its
> magic.
>
> Thanks,
> Yehuda
>
> On Mon, Apr 25, 2016 at 4:10 PM, Richard Chan
>  wrote:
> >
> >> > How do you actually do that?
> >>
> >> What does 'radosgw-admin zone get' return?
> >>
> >> Yehuda
> >
> >
> >
> > [root@node1 ceph]# radosgw-admin zone get
> > unable to initialize zone: (2) No such file or directory
> >
> > (I don't have any rgw configuration in /etc/ceph/ceph.conf; this is from
> a
> > clean
> >
> > ceph-deploy rgw create node1
> >
> > ## user created under Hammer
> > [root@node1 ceph]# radosgw-admin user info --uid=testuser
> > 2016-04-26 07:07:06.159497 7f410c33ca40  0 RGWZoneParams::create(): error
> > creating default zone params: (17) File exists
> > could not fetch user info: no user info saved
> >
> > "rgw_max_chunk_size": "524288",
> > "rgw_max_put_size": "5368709120",
> > "rgw_override_bucket_index_max_shards": "0",
> > "rgw_bucket_index_max_aio": "8",
> > "rgw_enable_quota_threads": "true",
> > "rgw_enable_gc_threads": "true",
> > "rgw_data": "\/var\/lib\/ceph\/radosgw\/ceph-rgw.node1",
> > "rgw_enable_apis": "s3, s3website, swift, swift_auth, admin",
> > "rgw_cache_enabled": "true",
> > "rgw_cache_lru_size": "1",
> > "rgw_socket_path": "",
> > "rgw_host": "",
> > "rgw_port": "",
> > "rgw_dns_name": "",
> > "rgw_dns_s3website_name": "",
> > "rgw_content_length_compat": "false",
> > "rgw_script_uri": "",
> > "rgw_request_uri": "",
> > "rgw_swift_url": "",
> > "rgw_swift_url_prefix": "swift",
> > "rgw_swift_auth_url": "",
> > "rgw_swift_auth_entry": "auth",
> > "rgw_swift_tenant_name": "",
> > "rgw_swift_account_in_url": "false",
> > "rgw_swift_enforce_content_length": "false",
> > "rgw_keystone_url": "",
> > "rgw_keystone_admin_token": "",
> > "rgw_keystone_admin_user": "",
> > "rgw_keystone_admin_password": "",
> > "rgw_keystone_admin_tenant": "",
> > "rgw_keystone_admin_project": "",
> > "rgw_keystone_admin_domain": "",
> > "rgw_keystone_api_version": "2",
> > "rgw_keystone_accepted_roles": "Member, admin",
> > "rgw_keystone_token_cache_size": "1",
> > "rgw_keystone_revocation_interval": "900",
> > "rgw_keystone_verify_ssl": "true",
> > "rgw_keystone_implicit_tenants": "false",
> > "rgw_s3_auth_use_rados": "true",
> > "rgw_s3_auth_use_keystone": "false",
> > "rgw_ldap_uri": "ldaps:\/\/",
> > "rgw_ldap_binddn": "uid=admin,cn=users,dc=example,dc=com",
> > "rgw_ldap_searchdn": "cn=users,cn=accounts,dc=example,dc=com",
> > "rgw_ldap_dnattr": "uid",
> > "rgw_ldap_secret": "\/etc\/openldap\/secret",
> > "rgw_s3_auth_use_ldap": "false",
> > "rgw_admin_entry": "admin",
> > "rgw_enforce_swift_acls": "true",
> > "rgw_swift_token_expiration": "86400",
> > "rgw_print_continue": "true",
> > "rgw_remote_addr_param": "REMOTE_ADDR",
> > "rgw_op_thread_timeout": "600",
> > "rgw_op_thread_suicide_timeout": "0",
> > "rgw_thread_pool_size": "100",
> > "rgw_num_control_oids": "8",
> > "rgw_num_rados_handles": "1",
> > "rgw_nfs_lru_lanes": "5",
> > "rgw_nfs_lru_lane_hiwat": "911",
> > "rgw_nfs_fhcache_partitions": "3",
> > "rgw_nfs_fhcache_size": "2017",
> > "rgw_zone": "",
> > "rgw_zone_root_pool": ".rgw.root",
> > "rgw_default_zone_info_oid": "default.zone",
> > "rgw_region": "",
> > "rgw_default_region_info_oid": "default.region",
> > "rgw_zonegroup": "",
> > "rgw_zonegroup_root_pool": ".rgw.root",
> > "rgw_default_zonegroup_info_oid": "default.zonegroup",
> > "rgw_realm": "",
> > "rgw_realm_root_pool": ".rgw.root",
> > "rgw_default_realm_info_oid": "default.realm",
> > "rgw_period_root_pool": 

[ceph-users] rgw bucket tenant in jewel

2016-04-25 Thread David Wang
Hi Cephers:
I was looking at the rgw source code in the Jewel release and found a new field,
"std::string tenant", in struct rgw_bucket. Where is the new tenant field used?
Is it used in the S3 API?
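
For what it's worth, as I understand it, that tenant field backs the new RGW
multitenancy in Jewel: a user can be created under a tenant, and the tenant then
namespaces that user's buckets, which are addressed as "tenant:bucket" in S3
requests. A hedged sketch (tenant and user names here are made up):

# create a user under a tenant (Jewel RGW multitenancy)
radosgw-admin user create --tenant=acme --uid=alice --display-name="Alice"
# that user's buckets then live in the tenant namespace and are referenced as
# "acme:bucket" from the S3 API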
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Can Jewel read Hammer radosgw buckets?

2016-04-25 Thread Yehuda Sadeh-Weinraub
I managed to reproduce the issue, and there seem to be multiple
problems. Specifically we have an issue when upgrading a default
cluster that hasn't had a zone (and region) explicitly configured
before. There is another bug that I found
(http://tracker.ceph.com/issues/15597) that makes things even a bit
more complicated.

I created the following script that might be able to fix things for you:
https://raw.githubusercontent.com/yehudasa/ceph/wip-fix-default-zone/src/fix-zone

For future reference, this script shouldn't be used if there are any
zones configured other than the default one. It also does some ninja
patching of the zone config to work around a bug that exists currently,
but that will probably not apply to future versions.
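
For reference, a minimal sketch of fetching and reviewing the script before
running it (the exact invocation is an assumption; read it first and make sure
only the default zone exists):

# download and read the script before doing anything
curl -O https://raw.githubusercontent.com/yehudasa/ceph/wip-fix-default-zone/src/fix-zone
less fix-zone
# run it only against a cluster that has just the default zone
bash ./fix-zone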

Please let me know if you have any issues, or if this actually does its magic.

Thanks,
Yehuda

On Mon, Apr 25, 2016 at 4:10 PM, Richard Chan
 wrote:
>
>> > How do you actually do that?
>>
>> What does 'radosgw-admin zone get' return?
>>
>> Yehuda
>
>
>
> [root@node1 ceph]# radosgw-admin zone get
> unable to initialize zone: (2) No such file or directory
>
> (I don't have any rgw configuration in /etc/ceph/ceph.conf; this is from a
> clean
>
> ceph-deploy rgw create node1
>
> ## user created under Hammer
> [root@node1 ceph]# radosgw-admin user info --uid=testuser
> 2016-04-26 07:07:06.159497 7f410c33ca40  0 RGWZoneParams::create(): error
> creating default zone params: (17) File exists
> could not fetch user info: no user info saved
>
> "rgw_max_chunk_size": "524288",
> "rgw_max_put_size": "5368709120",
> "rgw_override_bucket_index_max_shards": "0",
> "rgw_bucket_index_max_aio": "8",
> "rgw_enable_quota_threads": "true",
> "rgw_enable_gc_threads": "true",
> "rgw_data": "\/var\/lib\/ceph\/radosgw\/ceph-rgw.node1",
> "rgw_enable_apis": "s3, s3website, swift, swift_auth, admin",
> "rgw_cache_enabled": "true",
> "rgw_cache_lru_size": "1",
> "rgw_socket_path": "",
> "rgw_host": "",
> "rgw_port": "",
> "rgw_dns_name": "",
> "rgw_dns_s3website_name": "",
> "rgw_content_length_compat": "false",
> "rgw_script_uri": "",
> "rgw_request_uri": "",
> "rgw_swift_url": "",
> "rgw_swift_url_prefix": "swift",
> "rgw_swift_auth_url": "",
> "rgw_swift_auth_entry": "auth",
> "rgw_swift_tenant_name": "",
> "rgw_swift_account_in_url": "false",
> "rgw_swift_enforce_content_length": "false",
> "rgw_keystone_url": "",
> "rgw_keystone_admin_token": "",
> "rgw_keystone_admin_user": "",
> "rgw_keystone_admin_password": "",
> "rgw_keystone_admin_tenant": "",
> "rgw_keystone_admin_project": "",
> "rgw_keystone_admin_domain": "",
> "rgw_keystone_api_version": "2",
> "rgw_keystone_accepted_roles": "Member, admin",
> "rgw_keystone_token_cache_size": "1",
> "rgw_keystone_revocation_interval": "900",
> "rgw_keystone_verify_ssl": "true",
> "rgw_keystone_implicit_tenants": "false",
> "rgw_s3_auth_use_rados": "true",
> "rgw_s3_auth_use_keystone": "false",
> "rgw_ldap_uri": "ldaps:\/\/",
> "rgw_ldap_binddn": "uid=admin,cn=users,dc=example,dc=com",
> "rgw_ldap_searchdn": "cn=users,cn=accounts,dc=example,dc=com",
> "rgw_ldap_dnattr": "uid",
> "rgw_ldap_secret": "\/etc\/openldap\/secret",
> "rgw_s3_auth_use_ldap": "false",
> "rgw_admin_entry": "admin",
> "rgw_enforce_swift_acls": "true",
> "rgw_swift_token_expiration": "86400",
> "rgw_print_continue": "true",
> "rgw_remote_addr_param": "REMOTE_ADDR",
> "rgw_op_thread_timeout": "600",
> "rgw_op_thread_suicide_timeout": "0",
> "rgw_thread_pool_size": "100",
> "rgw_num_control_oids": "8",
> "rgw_num_rados_handles": "1",
> "rgw_nfs_lru_lanes": "5",
> "rgw_nfs_lru_lane_hiwat": "911",
> "rgw_nfs_fhcache_partitions": "3",
> "rgw_nfs_fhcache_size": "2017",
> "rgw_zone": "",
> "rgw_zone_root_pool": ".rgw.root",
> "rgw_default_zone_info_oid": "default.zone",
> "rgw_region": "",
> "rgw_default_region_info_oid": "default.region",
> "rgw_zonegroup": "",
> "rgw_zonegroup_root_pool": ".rgw.root",
> "rgw_default_zonegroup_info_oid": "default.zonegroup",
> "rgw_realm": "",
> "rgw_realm_root_pool": ".rgw.root",
> "rgw_default_realm_info_oid": "default.realm",
> "rgw_period_root_pool": ".rgw.root",
> "rgw_period_latest_epoch_info_oid": ".latest_epoch",
> "rgw_log_nonexistent_bucket": "false",
> "rgw_log_object_name": "%Y-%m-%d-%H-%i-%n",
> "rgw_log_object_name_utc": "false",
> "rgw_usage_max_shards": "32",
> "rgw_usage_max_user_shards": "1",
> "rgw_enable_ops_log": "false",
> "rgw_enable_usage_log": "false",
> "rgw_ops_log_rados": "true",
> "rgw_ops_log_socket_path": "",
> "rgw_ops_log_data_backlog": "5242880",
> "rgw_usage_log_flush_threshold": "1024",
> "rgw_usage_log_tick_interval": "30",
> 

Re: [ceph-users] Can Jewel read Hammer radosgw buckets?

2016-04-25 Thread Richard Chan
> > How do you actually do that?
>
> What does 'radosgw-admin zone get' return?
>
> Yehuda
>


[root@node1 ceph]# radosgw-admin zone get
unable to initialize zone: (2) No such file or directory

(I don't have any rgw configuration in /etc/ceph/ceph.conf; this is from a
clean

ceph-deploy rgw create node1

## user created under Hammer
[root@node1 ceph]# radosgw-admin user info --uid=testuser
2016-04-26 07:07:06.159497 7f410c33ca40  0 RGWZoneParams::create(): error
creating default zone params: (17) File exists
could not fetch user info: no user info saved

"rgw_max_chunk_size": "524288",
"rgw_max_put_size": "5368709120",
"rgw_override_bucket_index_max_shards": "0",
"rgw_bucket_index_max_aio": "8",
"rgw_enable_quota_threads": "true",
"rgw_enable_gc_threads": "true",
"rgw_data": "\/var\/lib\/ceph\/radosgw\/ceph-rgw.node1",
"rgw_enable_apis": "s3, s3website, swift, swift_auth, admin",
"rgw_cache_enabled": "true",
"rgw_cache_lru_size": "1",
"rgw_socket_path": "",
"rgw_host": "",
"rgw_port": "",
"rgw_dns_name": "",
"rgw_dns_s3website_name": "",
"rgw_content_length_compat": "false",
"rgw_script_uri": "",
"rgw_request_uri": "",
"rgw_swift_url": "",
"rgw_swift_url_prefix": "swift",
"rgw_swift_auth_url": "",
"rgw_swift_auth_entry": "auth",
"rgw_swift_tenant_name": "",
"rgw_swift_account_in_url": "false",
"rgw_swift_enforce_content_length": "false",
"rgw_keystone_url": "",
"rgw_keystone_admin_token": "",
"rgw_keystone_admin_user": "",
"rgw_keystone_admin_password": "",
"rgw_keystone_admin_tenant": "",
"rgw_keystone_admin_project": "",
"rgw_keystone_admin_domain": "",
"rgw_keystone_api_version": "2",
"rgw_keystone_accepted_roles": "Member, admin",
"rgw_keystone_token_cache_size": "1",
"rgw_keystone_revocation_interval": "900",
"rgw_keystone_verify_ssl": "true",
"rgw_keystone_implicit_tenants": "false",
"rgw_s3_auth_use_rados": "true",
"rgw_s3_auth_use_keystone": "false",
"rgw_ldap_uri": "ldaps:\/\/",
"rgw_ldap_binddn": "uid=admin,cn=users,dc=example,dc=com",
"rgw_ldap_searchdn": "cn=users,cn=accounts,dc=example,dc=com",
"rgw_ldap_dnattr": "uid",
"rgw_ldap_secret": "\/etc\/openldap\/secret",
"rgw_s3_auth_use_ldap": "false",
"rgw_admin_entry": "admin",
"rgw_enforce_swift_acls": "true",
"rgw_swift_token_expiration": "86400",
"rgw_print_continue": "true",
"rgw_remote_addr_param": "REMOTE_ADDR",
"rgw_op_thread_timeout": "600",
"rgw_op_thread_suicide_timeout": "0",
"rgw_thread_pool_size": "100",
"rgw_num_control_oids": "8",
"rgw_num_rados_handles": "1",
"rgw_nfs_lru_lanes": "5",
"rgw_nfs_lru_lane_hiwat": "911",
"rgw_nfs_fhcache_partitions": "3",
"rgw_nfs_fhcache_size": "2017",
"rgw_zone": "",
"rgw_zone_root_pool": ".rgw.root",
"rgw_default_zone_info_oid": "default.zone",
"rgw_region": "",
"rgw_default_region_info_oid": "default.region",
"rgw_zonegroup": "",
"rgw_zonegroup_root_pool": ".rgw.root",
"rgw_default_zonegroup_info_oid": "default.zonegroup",
"rgw_realm": "",
"rgw_realm_root_pool": ".rgw.root",
"rgw_default_realm_info_oid": "default.realm",
"rgw_period_root_pool": ".rgw.root",
"rgw_period_latest_epoch_info_oid": ".latest_epoch",
"rgw_log_nonexistent_bucket": "false",
"rgw_log_object_name": "%Y-%m-%d-%H-%i-%n",
"rgw_log_object_name_utc": "false",
"rgw_usage_max_shards": "32",
"rgw_usage_max_user_shards": "1",
"rgw_enable_ops_log": "false",
"rgw_enable_usage_log": "false",
"rgw_ops_log_rados": "true",
"rgw_ops_log_socket_path": "",
"rgw_ops_log_data_backlog": "5242880",
"rgw_usage_log_flush_threshold": "1024",
"rgw_usage_log_tick_interval": "30",
"rgw_intent_log_object_name": "%Y-%m-%d-%i-%n",
"rgw_intent_log_object_name_utc": "false",
"rgw_init_timeout": "300",
"rgw_mime_types_file": "\/etc\/mime.types",
"rgw_gc_max_objs": "32",
"rgw_gc_obj_min_wait": "7200",
"rgw_gc_processor_max_time": "3600",
"rgw_gc_processor_period": "3600",
"rgw_s3_success_create_obj_status": "0",
"rgw_resolve_cname": "false",
"rgw_obj_stripe_size": "4194304",
"rgw_extended_http_attrs": "",
"rgw_exit_timeout_secs": "120",
"rgw_get_obj_window_size": "16777216",
"rgw_get_obj_max_req_size": "4194304",
"rgw_relaxed_s3_bucket_names": "false",
"rgw_defer_to_bucket_acls": "",
"rgw_list_buckets_max_chunk": "1000",
"rgw_md_log_max_shards": "64",
"rgw_num_zone_opstate_shards": "128",
"rgw_opstate_ratelimit_sec": "30",
"rgw_curl_wait_timeout_ms": "1000",
"rgw_copy_obj_progress": "true",
"rgw_copy_obj_progress_every_bytes": "1048576",
"rgw_data_log_window": "30",
"rgw_data_log_changes_size": "1000",
"rgw_data_log_num_shards": "128",
"rgw_data_log_obj_prefix": "data_log",
"rgw_replica_log_obj_prefix": 

Re: [ceph-users] Using s3 (radosgw + ceph) like a cache

2016-04-25 Thread Ben Hines
This is how we use ceph/radosgw. I'd say our cluster is not that
reliable, but it's probably mostly our fault (no SSD journals, etc).

However, note that deletes are very slow in Ceph. We put millions of
objects in very quickly, and they are very slow to delete again, especially
through RGW because it has to update the bucket index too.
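
One mitigation sometimes suggested for the index-update cost is sharding the
bucket index via rgw_override_bucket_index_max_shards; on Hammer/Jewel it only
affects buckets created after the setting is in place. A hedged ceph.conf
sketch (the section name depends on your rgw instance):

[client.rgw.<instance>]
# spread each new bucket's index over 8 rados objects to reduce contention
rgw_override_bucket_index_max_shards = 8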

-Ben


On Mon, Apr 25, 2016 at 2:15 AM, Dominik Mostowiec <
dominikmostow...@gmail.com> wrote:

> Hi,
> I thought that xfs fragmentation or leveldb(gc list growing, locking,
> ...) could be a problem.
> Do you have any experience with this ?
>
> ---
> Regards
> Dominik
>
> 2016-04-24 13:40 GMT+02:00  :
> > I do not see any issue with that
> >
> > On 24/04/2016 12:39, Dominik Mostowiec wrote:
> >> Hi,
> >> I'm curious whether using S3 like a cache - frequent put/delete over the
> >> long term - may cause some problems in radosgw or the OSDs (xfs)?
> >>
> >> -
> >> Regards
> >> Dominik
> >> ___
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> --
> Pozdrawiam
> Dominik
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD image mounted by command "rbd-nbd" the status is read-only.

2016-04-25 Thread Ilya Dryomov
On Mon, Apr 25, 2016 at 7:47 PM, Stefan Lissmats  wrote:
> Hello again!
>
> I understand that it's not recommended to run an OSD and rbd-nbd on the same
> host, and I actually moved my rbd-nbd to a completely clean host (same kernel
> and OS though), but with the same result.
>
> I hope someone can resolve this; you seem to indicate it is some kind of
> known error, but I didn't really understand the GitHub commit that you linked.

Yes, it is a bug.  rbd-nbd code expects writes to have rval (return
code) equal to the size of the write.  I'm pretty sure that's wrong,
because rval for writes should be 0 or a negative error.

I think what happens is your writes complete successfully, but rbd-nbd
then throws an -EIO to the kernel because 0 != write size.  I could be
wrong, so let's wait for Mykola to chime in - he added that check to
fix discards.

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD image mounted by command "rbd-nbd" the status is read-only.

2016-04-25 Thread Stefan Lissmats
Hello again!

I understand that it's not recommended to run an OSD and rbd-nbd on the same host,
and I actually moved my rbd-nbd to a completely clean host (same kernel and OS
though), but with the same result.

I hope someone can resolve this; you seem to indicate it is some kind of known
error, but I didn't really understand the GitHub commit that you linked.

If other logs or info are needed, I'm happy to provide them.

//Stefan

  

From: Ilya Dryomov [idryo...@gmail.com]
Sent: 25 April 2016 17:31
To: Stefan Lissmats; Mykola Golub
Cc: Mika c; ceph-users
Subject: Re: [ceph-users] RBD image mounted by command "rbd-nbd" the status is
read-only.

On Mon, Apr 25, 2016 at 1:53 PM, Stefan Lissmats  wrote:
> Hello!
>
> Running a completely new testcluster with status HEALTH_OK i get the same
> error.
> I'm running Ubuntu 14.04 with kernel  3.16.0-70-generic and ceph 10.2.0 on
> all hosts.
> The rbd-nbd mapping was done on the same host having one osd and mon. (This
> is a small cluster with 4 virtual hosts and one osd per host).
>
> Steps after creating cluster.
>
> Created rbd device with standard options.
> #rbd create --size 50G nbd2
>
> Map the device (became device /dev/nbd2)
> #rbd-nbd map nbd2
>
> Create ext4 filesystem
> #mkfs.ext4 /dev/nbd2
>
> During creation of the filesystem there were a lot of errors in dmesg, but mkfs
> indicated it was done.
> The errors were: block nbd2: Other side returned error (5)
>
> I was able to mount the ext4 filesystem but that created even more errors in
> dmesg.
>
> Here is a selection of dmesg output that probably contains the interesting bits.
>
> [13864.102569] block nbd2: Other side returned error (5)
> [13951.186296] block nbd2: Other side returned error (5)
> [13951.186443] blk_update_request: 2157 callbacks suppressed
> [13951.186445] end_request: I/O error, dev nbd2, sector 0
> [13951.186598] quiet_error: 271152 callbacks suppressed
> [13951.186600] Buffer I/O error on device nbd2, logical block 0
> [13951.186780] lost page write due to I/O error on nbd2
> [13951.187816] EXT4-fs (nbd2): mounted filesystem with ordered data mode.
> Opts: (null)
> [13952.049103] block nbd2: Other side returned error (5)
> [13952.049323] end_request: I/O error, dev nbd2, sector 8464
> [13952.070722] block nbd2: Other side returned error (5)
> [13952.071009] end_request: I/O error, dev nbd2, sector 8720
> [13952.074069] block nbd2: Other side returned error (5)
> [13952.074392] end_request: I/O error, dev nbd2, sector 8976
> [13952.075283] block nbd2: Other side returned error (5)
> [13952.075635] end_request: I/O error, dev nbd2, sector 9232
> [13952.076249] block nbd2: Other side returned error (5)
> [13952.076636] end_request: I/O error, dev nbd2, sector 9488
> [13952.077108] block nbd2: Other side returned error (5)
> [13952.077606] end_request: I/O error, dev nbd2, sector 9744
> [13952.078064] block nbd2: Other side returned error (5)
> [13952.078537] end_request: I/O error, dev nbd2, sector 1
> [13952.079038] block nbd2: Other side returned error (5)
> [13952.079583] end_request: I/O error, dev nbd2, sector 10256
> [13952.080301] block nbd2: Other side returned error (5)
> [13952.080869] end_request: I/O error, dev nbd2, sector 10512
> [13952.081474] block nbd2: Other side returned error (5)
> [13952.082088] block nbd2: Other side returned error (5)
> [13952.082701] block nbd2: Other side returned error (5)
> [13952.083316] block nbd2: Other side returned error (5)
> [13952.083943] block nbd2: Other side returned error (5)
> [13952.084654] block nbd2: Other side returned error (5)
> [13952.085301] block nbd2: Other side returned error (5)

Looks like this has come up before:


https://github.com/ceph/ceph/pull/7215/commits/3ff60a61bf68516983c0b6ea6791ce712c98a073

Do we set rval to the length of the request for aio writes?  I thought
we did this only for reads and that it's always <= 0 on writes.
Mykola, could you look into this?

I certainly wouldn't advise running rbd-nbd on OSD hosts.

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Can Jewel read Hammer radosgw buckets?

2016-04-25 Thread Yehuda Sadeh-Weinraub
(sorry for resubmission, adding ceph-users)

On Mon, Apr 25, 2016 at 9:47 AM, Richard Chan
 wrote:
> Hi Yehuda
>
> I created a test 3xVM setup with Hammer and one radosgw on the (separate)
> admin node; creating one user and buckets.
>
> I upgraded the VMs to jewel and created a new radosgw on one of the nodes.
>
> The object store didn't seem to survive the upgrade
>
> # radosgw-admin user info --uid=testuser
> 2016-04-26 00:41:50.713069 7fcdcc6fca40  0 RGWZoneParams::create(): error
> creating default zone params: (17) File exists
> could not fetch user info: no user info saved
>
> rados lspools
> rbd
> .rgw.root
> .rgw.control
> .rgw
> .rgw.gc
> .users.uid
> .users
> .rgw.buckets.index
> .rgw.buckets
> default.rgw.control
> default.rgw.data.root
> default.rgw.gc
> default.rgw.log
> default.rgw.users.uid
> default.rgw.users.keys
>
> Do I have to configure radosgw to use the pools with default.*?

No. Need to get it to play along nicely with the old pools.

> How do you actually do that?

What does 'radosgw-admin zone get' return?

Yehuda
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Can Jewel read Hammer radosgw buckets?

2016-04-25 Thread Richard Chan
Hi Yehuda

I created a test 3xVM setup with Hammer and one radosgw on the (separate)
admin node; creating one user and buckets.

I upgraded the VMs to jewel and created a new radosgw on one of the nodes.

The object store didn't seem to survive the upgrade

# radosgw-admin user info --uid=testuser
2016-04-26 00:41:50.713069 7fcdcc6fca40  0 RGWZoneParams::create(): error
creating default zone params: (17) File exists
could not fetch user info: no user info saved

rados lspools
rbd
.rgw.root
.rgw.control
.rgw
.rgw.gc
.users.uid
.users
.rgw.buckets.index
.rgw.buckets
default.rgw.control
default.rgw.data.root
default.rgw.gc
default.rgw.log
default.rgw.users.uid
default.rgw.users.keys

Do I have to configure radosgw to use the pools with default.*?
How do you actually do that?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph cache tier, flushed objects does not appear to be written on disk

2016-04-25 Thread Gregory Farnum
On Thursday, April 21, 2016, Benoît LORIOT  wrote:

> Hello,
>
> we want to disable the readproxy cache tier, but before doing so we would like
> to make sure we won't lose data.
>
> Is there a way to confirm that a flush actually writes objects to disk?
>
> We're using ceph version 0.94.6.
>
>
> I tried that, with cephfs_data_ro_cache being the hot storage pool and
> cephfs_data being the cold storage pool
>
> # rados -p cephfs_data_ro_cache ls
>
> Then I chose a random object in the list: 14d6142.
>
> Find the object on cache disk :
>
> # ceph osd map cephfs_data_ro_cache 14d6142.
> osdmap e301 pool 'cephfs_data_ro_cache' (6) object '14d6142.'
> -> pg 6.d01 (6.0) -> up ([4,5,8], p4) acting ([4,5,8], p4)
>
> The object is in pg 6.0 on OSDs 4, 5, and 8, and I can find the file on disk.
>
> # ls -l
> /var/lib/ceph/osd/ceph-4/current/6.0_head/DIR_0/DIR_0/DIR_0/DIR_0/14d6142.__head_0D01__6
> -rw-r--r--. 1 root root 0 Apr  8 19:36
> /var/lib/ceph/osd/ceph-4/current/6.0_head/DIR_0/DIR_0/DIR_0/DIR_0/14d6142.__head_0D01__6
>
> Flush the object :
>
> # rados -p cephfs_data_ro_cache cache-try-flush 14d6142.
>
> Find the object on disk :
>
> # ceph osd map cephfs_data 14d6142.
> osdmap e301 pool 'cephfs_data' (1) object '14d6142.' -> pg
> 1.d01 (1.0) -> up ([1,7,2], p1) acting ([1,7,2], p1)
>
> The object is in pg 1.0 on OSDs 1, 7, and 2, but I can't find the file on disk
> on any of the 3 OSDs.
>
> # ls -l
> /var/lib/ceph/osd/ceph-1/current/1.0_head/DIR_0/DIR_0/DIR_0/DIR_0/14d6142.*
> ls: cannot access
> /var/lib/ceph/osd/ceph-1/current/1.0_head/DIR_0/DIR_0/DIR_0/DIR_0/14d6142.*:
> No such file or directory
>
>
>
> What am I doing wrong? To me it seems that nothing is actually flushed to
> disk.
>
>
Is the directory actually hashed four levels deep? Or is it shallower and
you're looking too far down?
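
One way to check without guessing the hash depth is to search the whole PG
directory (paths taken from the listing above):

# search the entire PG directory tree for the object, regardless of hash depth
find /var/lib/ceph/osd/ceph-1/current/1.0_head -name '14d6142.*'
# repeat on the other acting OSDs (osd.7 and osd.2) if nothing turns up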
-Greg


> Thank you,
> Ben.
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD image mounted by command "rbd-nbd" the status is read-only.

2016-04-25 Thread Ilya Dryomov
On Mon, Apr 25, 2016 at 1:53 PM, Stefan Lissmats  wrote:
> Hello!
>
> Running a completely new testcluster with status HEALTH_OK i get the same
> error.
> I'm running Ubuntu 14.04 with kernel  3.16.0-70-generic and ceph 10.2.0 on
> all hosts.
> The rbd-nbd mapping was done on the same host having one osd and mon. (This
> is a small cluster with 4 virtual hosts and one osd per host).
>
> Steps after creating cluster.
>
> Created rbd device with standard options.
> #rbd create --size 50G nbd2
>
> Map the device (became device /dev/nbd2)
> #rbd-nbd map nbd2
>
> Create ext4 filesystem
> #mkfs.ext4 /dev/nbd2
>
> During creation of the filesystem there were a lot of errors in dmesg, but mkfs
> indicated it was done.
> The errors were: block nbd2: Other side returned error (5)
>
> I was able to mount the ext4 filesystem but that created even more errors in
> dmesg.
>
> Here is a selection of dmesg output that probably contains the interesting bits.
>
> [13864.102569] block nbd2: Other side returned error (5)
> [13951.186296] block nbd2: Other side returned error (5)
> [13951.186443] blk_update_request: 2157 callbacks suppressed
> [13951.186445] end_request: I/O error, dev nbd2, sector 0
> [13951.186598] quiet_error: 271152 callbacks suppressed
> [13951.186600] Buffer I/O error on device nbd2, logical block 0
> [13951.186780] lost page write due to I/O error on nbd2
> [13951.187816] EXT4-fs (nbd2): mounted filesystem with ordered data mode.
> Opts: (null)
> [13952.049103] block nbd2: Other side returned error (5)
> [13952.049323] end_request: I/O error, dev nbd2, sector 8464
> [13952.070722] block nbd2: Other side returned error (5)
> [13952.071009] end_request: I/O error, dev nbd2, sector 8720
> [13952.074069] block nbd2: Other side returned error (5)
> [13952.074392] end_request: I/O error, dev nbd2, sector 8976
> [13952.075283] block nbd2: Other side returned error (5)
> [13952.075635] end_request: I/O error, dev nbd2, sector 9232
> [13952.076249] block nbd2: Other side returned error (5)
> [13952.076636] end_request: I/O error, dev nbd2, sector 9488
> [13952.077108] block nbd2: Other side returned error (5)
> [13952.077606] end_request: I/O error, dev nbd2, sector 9744
> [13952.078064] block nbd2: Other side returned error (5)
> [13952.078537] end_request: I/O error, dev nbd2, sector 1
> [13952.079038] block nbd2: Other side returned error (5)
> [13952.079583] end_request: I/O error, dev nbd2, sector 10256
> [13952.080301] block nbd2: Other side returned error (5)
> [13952.080869] end_request: I/O error, dev nbd2, sector 10512
> [13952.081474] block nbd2: Other side returned error (5)
> [13952.082088] block nbd2: Other side returned error (5)
> [13952.082701] block nbd2: Other side returned error (5)
> [13952.083316] block nbd2: Other side returned error (5)
> [13952.083943] block nbd2: Other side returned error (5)
> [13952.084654] block nbd2: Other side returned error (5)
> [13952.085301] block nbd2: Other side returned error (5)

Looks like this has come up before:


https://github.com/ceph/ceph/pull/7215/commits/3ff60a61bf68516983c0b6ea6791ce712c98a073

Do we set rval to the length of the request for aio writes?  I thought
we did this only for reads and that it's always <= 0 on writes.
Mykola, could you look into this?

I certainly wouldn't advise running rbd-nbd on OSD hosts.

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] is it possible using different ceph-fuse version on clients from server

2016-04-25 Thread Gregory Farnum
On Thursday, April 21, 2016, Serkan Çoban  wrote:

> I cannot install a kernel that is not supported by Red Hat on the clients.
> Is there any other way to increase fuse performance with the default 6.7 kernel?
> Maybe I can compile Jewel ceph-fuse packages for RHEL 6 - would this make a
> difference?


It might! We don't because Jewel requires newer build tools than are
available in RHEL6 by default, but I think you can pull them in.

As Oliver said, Hammer ceph-fuse should work okay against a Jewel cluster
-- you are just subject to all the Hammer bugs which have been fixed since
then. Getting those fixes is one of the biggest reasons to use ceph-fuse
and update quickly!
-Greg



>
> On Thu, Apr 21, 2016 at 5:24 PM, Oliver Dzombic  > wrote:
> > Hi,
> >
> > yes, it should be.
> >
> > If you want to do something good, try to use a recent kernel on the
> > CentOS 6.7 machines. Then you could also compile something so that you don't
> > need fuse.
> >
> > The speed might be really bad if you use the CentOS 6.7 standard kernel with
> fuse.
> >
> > --
> > Mit freundlichen Gruessen / Best regards
> >
> > Oliver Dzombic
> > IP-Interactive
> >
> > mailto:i...@ip-interactive.de 
> >
> > Anschrift:
> >
> > IP Interactive UG ( haftungsbeschraenkt )
> > Zum Sonnenberg 1-3
> > 63571 Gelnhausen
> >
> > HRB 93402 beim Amtsgericht Hanau
> > Geschäftsführung: Oliver Dzombic
> >
> > Steuer Nr.: 35 236 3622 1
> > UST ID: DE274086107
> >
> >
> > Am 21.04.2016 um 16:22 schrieb Serkan Çoban:
> >> Hi,
> >> I would like to install and test ceph jewel release.
> >> My servers are rhel 7.2 but clients are rhel6.7.
> >> Is it possible to install jewel release to server and use hammer
> >> ceph-fuse rpms on clients?
> >>
> >> Thanks,
> >> Serkan
> >> ___
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com 
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com 
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW not start after upgrade to Jewel

2016-04-25 Thread Karol Mroz
On Mon, Apr 25, 2016 at 02:23:28PM +0200, Ansgar Jazdzewski wrote:
> Hi,
> 
> we are testing Jewel in our QA environment (upgrading from Infernalis to
> Jewel); the upgrade went fine, but the radosgw did not start.
> 
> the error appears also with radosgw-admin
> 
> # radosgw-admin user info --uid="images" --rgw-region=eu --rgw-zone=eu-qa
> 2016-04-25 12:13:33.425481 7fc757fad900  0 error in read_id for id  :
> (2) No such file or directory
> 2016-04-25 12:13:33.425494 7fc757fad900  0 failed reading zonegroup
> info: ret -2 (2) No such file or directory
> couldn't init storage provider
> 
> Do I have to change some settings for the radosgw as part of the upgrade?

Hi,

Testing a recent master build (with only default region and zone),
I'm able to successfully run the command you specified:

% ./radosgw-admin user info --uid="testid" --rgw-region=default 
--rgw-zone=default
...
{
"user_id": "testid",
"display_name": "M. Tester",
...
}

Are you certain the region and zone you specified exist?

What do the following report:

radosgw-admin zone list
radosgw-admin region list

-- 
Regards,
Karol


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RadosGW not start after upgrade to Jewel

2016-04-25 Thread Ansgar Jazdzewski
Hi,

we are testing Jewel in our QA environment (upgrading from Infernalis to
Jewel); the upgrade went fine, but the radosgw did not start.

the error appears also with radosgw-admin

# radosgw-admin user info --uid="images" --rgw-region=eu --rgw-zone=eu-qa
2016-04-25 12:13:33.425481 7fc757fad900  0 error in read_id for id  :
(2) No such file or directory
2016-04-25 12:13:33.425494 7fc757fad900  0 failed reading zonegroup
info: ret -2 (2) No such file or directory
couldn't init storage provider

Do I have to change some settings for the radosgw as part of the upgrade?

thanks
Ansgar
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD image mounted by command "rbd-nbd" the status is read-only.

2016-04-25 Thread Stefan Lissmats
Hello!

Running a completely new testcluster with status HEALTH_OK i get the same error.
I'm running Ubuntu 14.04 with kernel  3.16.0-70-generic and ceph 10.2.0 on all 
hosts.
The rbd-nbd mapping was done on the same host having one osd and mon. (This is 
a small cluster with 4 virtual hosts and one osd per host).

Steps after creating cluster.

Created rbd device with standard options.
#rbd create --size 50G nbd2

Map the device (became device /dev/nbd2)
#rbd-nbd map nbd2

Create ext4 filesystem
#mkfs.ext4 /dev/nbd2

During creation of the filesystem there were a lot of errors in dmesg, but mkfs
indicated it was done.
The errors were: block nbd2: Other side returned error (5)

I was able to mount the ext4 filesystem but that created even more errors in 
dmesg.

Here is a selection of dmesg output that probably contains the interesting bits.

[13864.102569] block nbd2: Other side returned error (5)
[13951.186296] block nbd2: Other side returned error (5)
[13951.186443] blk_update_request: 2157 callbacks suppressed
[13951.186445] end_request: I/O error, dev nbd2, sector 0
[13951.186598] quiet_error: 271152 callbacks suppressed
[13951.186600] Buffer I/O error on device nbd2, logical block 0
[13951.186780] lost page write due to I/O error on nbd2
[13951.187816] EXT4-fs (nbd2): mounted filesystem with ordered data mode. Opts: 
(null)
[13952.049103] block nbd2: Other side returned error (5)
[13952.049323] end_request: I/O error, dev nbd2, sector 8464
[13952.070722] block nbd2: Other side returned error (5)
[13952.071009] end_request: I/O error, dev nbd2, sector 8720
[13952.074069] block nbd2: Other side returned error (5)
[13952.074392] end_request: I/O error, dev nbd2, sector 8976
[13952.075283] block nbd2: Other side returned error (5)
[13952.075635] end_request: I/O error, dev nbd2, sector 9232
[13952.076249] block nbd2: Other side returned error (5)
[13952.076636] end_request: I/O error, dev nbd2, sector 9488
[13952.077108] block nbd2: Other side returned error (5)
[13952.077606] end_request: I/O error, dev nbd2, sector 9744
[13952.078064] block nbd2: Other side returned error (5)
[13952.078537] end_request: I/O error, dev nbd2, sector 1
[13952.079038] block nbd2: Other side returned error (5)
[13952.079583] end_request: I/O error, dev nbd2, sector 10256
[13952.080301] block nbd2: Other side returned error (5)
[13952.080869] end_request: I/O error, dev nbd2, sector 10512
[13952.081474] block nbd2: Other side returned error (5)
[13952.082088] block nbd2: Other side returned error (5)
[13952.082701] block nbd2: Other side returned error (5)
[13952.083316] block nbd2: Other side returned error (5)
[13952.083943] block nbd2: Other side returned error (5)
[13952.084654] block nbd2: Other side returned error (5)
[13952.085301] block nbd2: Other side returned error (5)


Otherwise, rbd-nbd is really interesting, since krbd is often far behind in
functionality and upgrading the kernel is not always possible for other reasons,
so being able to use a user-space client is really great.

Best regards
Stefan Lissmats




From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Mika c
[mika.leaf...@gmail.com]
Sent: 21 April 2016 05:47
To: ceph-users
Subject: [ceph-users] RBD image mounted by command "rbd-nbd" the status is
read-only.

Hi cephers,
I read the post "CEPH Jewel Preview" before. Following the steps, I can map and
mount an rbd image to /dev/nbd successfully, but I cannot write any files. The
error message is "Read-only file system".
Is this feature still in the experimental stage?


Best wishes,
Mika

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Using s3 (radosgw + ceph) like a cache

2016-04-25 Thread Dominik Mostowiec
Hi,
I thought that xfs fragmentation or leveldb (gc list growing, locking,
...) could be a problem.
Do you have any experience with this?

---
Regards
Dominik

2016-04-24 13:40 GMT+02:00  :
> I do not see any issue with that
>
> On 24/04/2016 12:39, Dominik Mostowiec wrote:
>> Hi,
>> I'm curious whether using S3 like a cache - frequent put/delete over the
>> long term - may cause some problems in radosgw or the OSDs (xfs)?
>>
>> -
>> Regards
>> Dominik
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Pozdrawiam
Dominik
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Slow read on RBD mount, Hammer 0.94.5

2016-04-25 Thread Udo Lembke

Hi Mike,

On 21.04.2016 at 15:20, Mike Miller wrote:

Hi Udo,

thanks, just to make sure, further increased the readahead:

$ sudo blockdev --getra /dev/rbd0
1048576

$ cat /sys/block/rbd0/queue/read_ahead_kb
524288

No difference here. First one is sectors (512 bytes), second one KB.

Oops, sorry! My fault. Sectors vs. KB makes sense...
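
For completeness, a small sketch of setting the two knobs consistently (the
values are examples only):

# set readahead to 4 MB via the sector-based interface (8192 * 512 bytes)
blockdev --setra 8192 /dev/rbd0
# the equivalent setting via the KB-based sysfs interface
echo 4096 > /sys/block/rbd0/queue/read_ahead_kb
# verify that both views agree
blockdev --getra /dev/rbd0
cat /sys/block/rbd0/queue/read_ahead_kb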


The second read (after dropping caches) is somewhat faster (10%-20%) but
not much.
That's very strange! Looks like there are tuning possibilities. Do your OSD nodes
have enough RAM? Are they very busy?
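
A quick way to check that (generic commands; output varies by version):

# per-OSD commit/apply latency as reported by the cluster
ceph osd perf
# on an OSD node: memory headroom and disk utilization over a short window
free -m
iostat -x 5 3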


If I do single-threaded reading on a test VM, I get the following results (very
small test cluster - 2 nodes with a 10GbE NIC and one node with a 1GbE NIC):

support@upgrade-test:~/fio$ dd if=fiojo.0.0 of=/dev/null bs=1M
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 62.0267 s, 69.2 MB/s

### as root "echo 3 > /proc/sys/vm/drop_caches" and the same on the VM-host

support@upgrade-test:~/fio$ dd if=fiojo.0.0 of=/dev/null bs=1M
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 30.0987 s, 143 MB/s

# this is due to cached data on the osd-nodes
# with the cache cleared on all nodes (vm, vm-host, osd-nodes)
# I get the same value as on the first run:

support@upgrade-test:~/fio$ dd if=fiojo.0.0 of=/dev/null bs=1M
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 61.8995 s, 69.4 MB/s

I don't know why this should not be the same with krbd.


Udo

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] increase pgnum after adjust reweight osd

2016-04-25 Thread Christian Balzer

Hello,

On Mon, 25 Apr 2016 13:23:04 +0800 lin zhou wrote:

> Hi,Cephers:
> 
> Recently I faced a problem with full OSDs, and I have used reweight to adjust it.
> But now I want to increase pgnum before I can add new nodes into the
> cluster.
> 
How many more nodes, OSDs?

> The current pg_num is 2048 and the total OSD count is 69. I want to increase it to 4096.
> 
> So what are the recommended steps: increase directly to 4096 in one go, or
> increase it slowly, e.g. by 200 each time?
> 
> The ceph version is 0.67.8, and I use the rbd pool only.
>
That's quite old, there are AFAIK some changes in current Ceph versions
that improve data placement.

Also, current versions won't allow you to make large changes to pg_num
because of the massive and prolonged impact that can have.

So you're better off doing it in small steps, unless you can afford your
cluster having poor performance for a long time.
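
As a rough sketch of one such step (pool name taken from your mail; raise
pgp_num after pg_num and let the cluster settle before the next increment):

# one incremental step: add a modest number of PGs, then raise the placement target
ceph osd pool set rbd pg_num 2304
ceph osd pool set rbd pgp_num 2304
# wait for backfill/recovery to finish before the next step
ceph -s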

>  The table of OSD id, OSD pg_num, OSD reweight, and OSD usage is below:
> 
It's still quite uneven; I'd be worried that any OSD with more than 85%
utilization might become near_full or even full during the data
re-balancing.
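
A couple of commands to keep an eye on that while the data moves (output format
varies by version):

# any near_full / full OSDs show up here
ceph health detail
# per-OSD space usage
ceph pg dump osds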

Christian
> dumped all in format plain
> OSD weight  pgnum   used
> 0   0.89    106 83%
> 1   1   102 73%
> 2   0.9 104 87%
> 3   0.9192  107 80%
> 5   0.89    106 85%
> 6   0.9271  108 82%
> 7   1   112 77%
> 8   0.9477  113 82%
> 9   1   112 78%
> 10  0.9177  109 79%
> 11  1   108 76%
> 12  0.9266  109 84%
> 13  1   105 75%
> 14  0.846   103 80%
> 15  0.91    109 80%
> 16  1   99   68%
> 17  1   108 79%
> 18  1   109 77%
> 19  0.8506  109 84%
> 20  0.9504  111 79%
> 21  1   95   71%
> 22  0.9178  106 76%
> 23  1   108 76%
> 24  0.9274  118 82%
> 25  0.923   117 86%
> 26  1   107 76%
> 27  1   111 80%
> 28  0.9254  101 80%
> 29  0.9445  104 82%
> 30  1   115 81%
> 31  0.9285  105 75%
> 32  0.7823  105 81%
> 33  0.9002  111 81%
> 34  0.8024  106 79%
> 35  1   100 71%
> 36  1   117 81%
> 37  0.7949  106 79%
> 38  0.9356  108 78%
> 39  0.866   106 76%
> 40  0.8322  105 76%
> 41  0.9297  97   81%
> 42  1   97   68%
> 43  0.8393  115 81%
> 44  0.9355  108 78%
> 45  0.8429  115 84%
> 46  1   100 71%
> 47  1   105 73%
> 48  0.9476  109 80%
> 49  1   117 82%
> 50  0.8642  100 74%
> 51  1   101 76%
> 56  1   104 77%
> 57  1   102 70%
> 62  1   106 79%
> 63  0.9332  99 82%
> 68  1   103 76%
> 69  1   100 71%
> 74  1   105 77%
> 75  1   104 80%
> 80  1   101 73%
> 81  1   112 78%
> 86  0.866   104 76%
> 87  1   97   70%
> 92  1   104 79%
> 93  0.9464  102 75%
> 98  0.9082  113 80%
> 99  1   108 77%
> 104 1   107 79%
> 105 1   109 77%
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Rakuten Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com