Re: [ceph-users] Ceph Day at University of Santa Cruz - September 19

2018-09-20 Thread Kamble, Nitin A
Hi Mike,
  Are the slides of the presentations available anywhere? If not, can they be 
shared?

Thanks,
Nitin


On 9/11/18, 3:47 PM, "ceph-users on behalf of Mike Perez" 
 wrote:

[External Email]


Hey all,

Just a reminder that Ceph Day at UCSC Silicon Valley campus is coming this
September 19. This is a great opportunity to hear various use cases around
Ceph, and have discussions with various contributors in the community.

Potentially we'll be hearing a presentation from the university that helped
Sage with his research to start it all, and how Ceph enables Genomic
research at the campus today!

Registration is up and the schedule is posted:


https://ceph.com/cephdays/ceph-day-silicon-valley-university-santa-cruz-silicon-valley-campus/

--
Mike Perez (thingee)


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Force cephfs delayed deletion

2018-08-01 Thread Kamble, Nitin A


From: John Spray 
Date: Wednesday, August 1, 2018 at 4:02 AM
To: "Kamble, Nitin A" 
Cc: "arya...@intermedia.net" , 
"ceph-users@lists.ceph.com" 
Subject: Re: [ceph-users] Force cephfs delayed deletion

[External Email]

On Tue, Jul 31, 2018 at 11:43 PM Kamble, Nitin A 
mailto:nitin.kam...@teradata.com>> wrote:
Hi John,

I am running ceph Luminous 12.2.1 release on the storage nodes with v4.4.114 
kernel on the cephfs clients.

3 client nodes are running 3 instances of a test program.
The test program is doing this repeatedly in a loop:

  *   sequentially write a 256GB file on cephfs
  *   delete the file

‘ceph df’ shows that after the deletes the space is not getting freed from cephfs, 
and cephfs space utilization (number of objects, space used and % utilization) 
keeps growing continuously.

I double checked, and no process is holding an open handle to the closed files.
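
For reference, a quick way to re-check this on each client — assuming lsof is 
installed and the cephfs mount point is /mnt/cephfs — is to list files on the 
mount that are unlinked but still held open:

lsof +L1 /mnt/cephfs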

When the test program is stopped, the writing workload stops and then the 
cephfs space utilization starts going down as expected.

It looks like the cephfs write load is not leaving enough opportunity to actually 
purge the objects of the files deleted by the clients. The behavior is consistent 
and easy to reproduce.

Deletes are not prioritised ahead of writes, and we probably wouldn't want them 
to be: client workloads are in general a higher priority than purging the 
objects from deleted files.

This only becomes an issue if a filesystem is almost completely full: at that 
point it would be nice to block the clients on the purging, rather than give 
them ENOSPC.

John



I see your approach. Currently multiple file deletes can complete in a loop 
before any of the associated ceph objects are purged. When the space is almost 
full, there may not be any new purge requests, and the earlier purges will 
still be on hold because of the current write pressure. So that approach may 
not work as expected.

I understand some would like to deprioritize purges over writes, but if one 
wants to prioritize purges over writes, there should be a way to do it.
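
For reference, one way to watch the purge backlog on the MDS side — assuming the 
admin socket is accessible on the MDS host, and noting that the exact counter 
names differ between releases — is:

ceph daemon mds.<name> perf dump | grep -Ei 'stray|purge'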

Thanks,
Nitin

I tried playing with these advanced MDS config parameters:

  *   mds_max_purge_files
  *   mds_max_purge_ops
  *   mds_max_purge_ops_per_pg
  *   mds_purge_queue_busy_flush_period

But it is not helping with the workload.
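
For reference, the values currently in effect can be checked via the MDS admin 
socket, and the throttles can also be adjusted at runtime without a restart — 
the numbers below are only illustrative, not recommendations:

ceph daemon mds.<name> config show | grep -i purge
ceph tell mds.* injectargs '--mds_max_purge_ops 16384 --mds_max_purge_files 256'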

Is this a known issue? And is there a workaround to give more priority to the 
objects purging operations?

Thanks in advance,
Nitin

From: ceph-users 
mailto:ceph-users-boun...@lists.ceph.com>> 
on behalf of Alexander Ryabov 
mailto:arya...@intermedia.net>>
Date: Thursday, July 19, 2018 at 8:09 AM
To: John Spray mailto:jsp...@redhat.com>>
Cc: "ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>" 
mailto:ceph-users@lists.ceph.com>>
Subject: Re: [ceph-users] Force cephfs delayed deletion


>Also, since I see this is a log directory, check that you don't have some 
>processes that are holding their log files open even after they're unlinked.

Thank you very much - that was the case.

lsof /mnt/logs | grep deleted



After dealing with these, space was reclaimed in about 2-3min.






From: John Spray mailto:jsp...@redhat.com>>
Sent: Thursday, July 19, 2018 17:24
To: Alexander Ryabov
Cc: ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Force cephfs delayed deletion

On Thu, Jul 19, 2018 at 1:58 PM Alexander Ryabov 
mailto:arya...@intermedia.net>> wrote:

Hello,

I see that free space is not released after files are removed on CephFS.

I'm using Luminous with replica=3 without any snapshots etc and with default 
settings.



From client side:
$ du -sh /mnt/logs/
4.1G /mnt/logs/
$ df -h /mnt/logs/
Filesystem   Size  Used Avail Use% Mounted on
h1,h2:/logs  125G   87G   39G  70% /mnt/logs

These stats are after a couple of large files were removed in the /mnt/logs dir, 
but that only dropped Used space a little.

Check what version of the client you're using -- some older clients had bugs 
that would hold references to deleted files and prevent them from being purged. 
 If you find that the space starts getting freed when you unmount the client, 
this is likely to be because of a client bug.

Also, since I see this is a log directory, check that you don't have some 
processes that are holding their log files open even after they're unlinked.
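
(A typical fix — depending on the daemon — is to find the holders with lsof on the 
mount and then restart them, or send a log-reopen signal such as SIGHUP where the 
daemon supports it, so they release the unlinked files.)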

John



Doing 'sync' command also changes nothing.

From server side:
# ceph  df
GLOBAL:
SIZE AVAIL  RAW USED %RAW USED
124G 39226M   88723M 69.34
POOLS:
NAME            ID USED   %USED MAX AVAIL OBJECTS
cephfs_data     1  28804M 76.80     8703M    7256
cephfs_metadata 2    236M  2.65     8703M     101


Why is there such a large difference between 'du' and 'USED'?

Re: [ceph-users] Force cephfs delayed deletion

2018-08-01 Thread Kamble, Nitin A


From: "Yan, Zheng" 
Date: Tuesday, July 31, 2018 at 8:14 PM
To: "Kamble, Nitin A" 
Cc: "arya...@intermedia.net" , John Spray 
, ceph-users 
Subject: Re: [ceph-users] Force cephfs delayed deletion

[External Email]

On Wed, Aug 1, 2018 at 6:43 AM Kamble, Nitin A 
mailto:nitin.kam...@teradata.com>> wrote:
Hi John,

I am running ceph Luminous 12.2.1 release on the storage nodes with v4.4.114 
kernel on the cephfs clients.

3 client nodes are running 3 instances of a test program.
The test program is doing this repeatedly in a loop:

  *   sequentially write a 256GB file on cephfs
  *   delete the file
Do the clients write to the same file? I mean the same file name in a directory.

No. Each client node has a separate work directory on cephfs, and each client 
app writes to a separate file. There is no sharing of file data across clients.

Thanks,
Nitin


‘ceph df’ shows that after the deletes the space is not getting freed from cephfs, 
and cephfs space utilization (number of objects, space used and % utilization) 
keeps growing continuously.

I double checked, and no process is holding an open handle to the closed files.

When the test program is stopped, the writing workload stops and then the 
cephfs space utilization starts going down as expected.

It looks like the cephfs write load is not leaving enough opportunity to actually 
purge the objects of the files deleted by the clients. The behavior is consistent 
and easy to reproduce.

I tried playing with these advanced MDS config parameters:

  *   mds_max_purge_files
  *   mds_max_purge_ops
  *   mds_max_purge_ops_per_pg
  *   mds_purge_queue_busy_flush_period

But it is not helping with the workload.

Is this a known issue? And is there a workaround to give more priority to the 
objects purging operations?

Thanks in advance,
Nitin

From: ceph-users 
mailto:ceph-users-boun...@lists.ceph.com>> 
on behalf of Alexander Ryabov 
mailto:arya...@intermedia.net>>
Date: Thursday, July 19, 2018 at 8:09 AM
To: John Spray mailto:jsp...@redhat.com>>
Cc: "ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>" 
mailto:ceph-users@lists.ceph.com>>
Subject: Re: [ceph-users] Force cephfs delayed deletion


>Also, since I see this is a log directory, check that you don't have some 
>processes that are holding their log files open even after they're unlinked.

Thank you very much - that was the case.

lsof /mnt/logs | grep deleted



After dealing with these, space was reclaimed in about 2-3min.






From: John Spray mailto:jsp...@redhat.com>>
Sent: Thursday, July 19, 2018 17:24
To: Alexander Ryabov
Cc: ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Force cephfs delayed deletion

On Thu, Jul 19, 2018 at 1:58 PM Alexander Ryabov 
mailto:arya...@intermedia.net>> wrote:

Hello,

I see that free space is not released after files are removed on CephFS.

I'm using Luminous with replica=3 without any snapshots etc and with default 
settings.



From client side:
$ du -sh /mnt/logs/
4.1G /mnt/logs/
$ df -h /mnt/logs/
Filesystem   Size  Used Avail Use% Mounted on
h1,h2:/logs  125G   87G   39G  70% /mnt/logs

These stats are after a couple of large files were removed in the /mnt/logs dir, 
but that only dropped Used space a little.

Check what version of the client you're using -- some older clients had bugs 
that would hold references to deleted files and prevent them from being purged. 
 If you find that the space starts getting freed when you unmount the client, 
this is likely to be because of a client bug.

Also, since I see this is a log directory, check that you don't have some 
processes that are holding their log files open even after they're unlinked.

John



Doing 'sync' command also changes nothing.

From server side:
# ceph  df
GLOBAL:
SIZE AVAIL  RAW USED %RAW USED
124G 39226M   88723M 69.34
POOLS:
NAME            ID USED   %USED MAX AVAIL OBJECTS
cephfs_data     1  28804M 76.80     8703M    7256
cephfs_metadata 2    236M  2.65     8703M     101


Why is there such a large difference between 'du' and 'USED'?

I've found that it could be due to 'delayed delete' 
http://docs.ceph.com/docs/luminous/dev/delayed-delete/

And previously it seems it could be tuned by adjusting the "mds max purge files" 
and "mds max purge ops" settings:

http

Re: [ceph-users] Force cephfs delayed deletion

2018-07-31 Thread Kamble, Nitin A
Hi John,

I am running ceph Luminous 12.2.1 release on the storage nodes with v4.4.114 
kernel on the cephfs clients.

3 client nodes are running 3 instances of a test program.
The test program is doing this repeatedly in a loop:

  *   sequentially write a 256GB file on cephfs
  *   delete the file

‘ceph df’ shows that after the deletes the space is not getting freed from cephfs, 
and cephfs space utilization (number of objects, space used and % utilization) 
keeps growing continuously.

I double checked, and no process is holding an open handle to the closed files.

When the test program is stopped, the writing workload stops and then the 
cephfs space utilization starts going down as expected.

It looks like the cephfs write load is not leaving enough opportunity to actually 
purge the objects of the files deleted by the clients. The behavior is consistent 
and easy to reproduce.

I tried playing with these advanced MDS config parameters:

  *   mds_max_purge_files
  *   mds_max_purge_ops
  *   mds_max_purge_ops_per_pg
  *   mds_purge_queue_busy_flush_period

But it is not helping with the workload.

Is this a known issue? And is there a workaround to give more priority to the 
objects purging operations?

Thanks in advance,
Nitin

From: ceph-users  on behalf of Alexander 
Ryabov 
Date: Thursday, July 19, 2018 at 8:09 AM
To: John Spray 
Cc: "ceph-users@lists.ceph.com" 
Subject: Re: [ceph-users] Force cephfs delayed deletion


>Also, since I see this is a log directory, check that you don't have some 
>processes that are holding their log files open even after they're unlinked.

Thank you very much - that was the case.

lsof /mnt/logs | grep deleted



After dealing with these, space was reclaimed in about 2-3min.






From: John Spray 
Sent: Thursday, July 19, 2018 17:24
To: Alexander Ryabov
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Force cephfs delayed deletion

On Thu, Jul 19, 2018 at 1:58 PM Alexander Ryabov 
mailto:arya...@intermedia.net>> wrote:

Hello,

I see that free space is not released after files are removed on CephFS.

I'm using Luminous with replica=3 without any snapshots etc and with default 
settings.



From client side:
$ du -sh /mnt/logs/
4.1G /mnt/logs/
$ df -h /mnt/logs/
Filesystem   Size  Used Avail Use% Mounted on
h1,h2:/logs  125G   87G   39G  70% /mnt/logs

These stats are after a couple of large files were removed in the /mnt/logs dir, 
but that only dropped Used space a little.

Check what version of the client you're using -- some older clients had bugs 
that would hold references to deleted files and prevent them from being purged. 
 If you find that the space starts getting freed when you unmount the client, 
this is likely to be because of a client bug.

Also, since I see this is a log directory, check that you don't have some 
processes that are holding their log files open even after they're unlinked.

John



Doing 'sync' command also changes nothing.

From server side:
# ceph  df
GLOBAL:
SIZE AVAIL  RAW USED %RAW USED
124G 39226M   88723M 69.34
POOLS:
NAME            ID USED   %USED MAX AVAIL OBJECTS
cephfs_data     1  28804M 76.80     8703M    7256
cephfs_metadata 2    236M  2.65     8703M     101


Why is there such a large difference between 'du' and 'USED'?

I've found that it could be due to 'delayed delete' 
http://docs.ceph.com/docs/luminous/dev/delayed-delete/
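
As a rough sanity check, assuming nothing else is stored in the cluster: with 
replica=3 the data pool's 28804M of logical data accounts for about 3 x 28.8G = 
~84.4G of raw space, plus under 1G raw for the metadata pool, which is close to 
the 88723M RAW USED reported by 'ceph df'. So the ~24G gap between du (4.1G) and 
the data pool's USED (28804M) is logical data from files that were removed but 
whose objects are still live — either pending purge or still held open by some 
process.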

And previously it seems it could be tuned by adjusting the "mds max purge files" 
and "mds max purge ops" settings:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013679.html

But there are no such options anymore in 
http://docs.ceph.com/docs/luminous/cephfs/mds-config-ref/



So the question is - how to purge deleted data and reclaim free space?

Thank you.



Re: [ceph-users] RDMA support in Ceph

2018-06-28 Thread Kamble, Nitin A


On 6/28/18, 12:11 AM, "kefu chai"  wrote:
> What is the state of the RDMA code in the Ceph Luminous and later releases?

in Ceph, the RDMA support has been constantly worked on. xio messenger
support was added 4 years ago, but i don't think it's maintained
anymore. and the async messenger has IB protocol support; i think that's
what you wanted to try out. recently, we added iWARP support to
the async messenger, see https://github.com/ceph/ceph/pull/20297. that
change also brought better connection management to Ceph by using
rdma-cm. and i believe to get RDMA support we should have
https://github.com/ceph/ceph/pull/14681, which is still pending
review.
>
> When will it be production ready?
the RDMA support in Ceph is completely driven by our community. and we
don't have the hardware (NIC) for testing RoCEv2/iWARP, not to mention
IB. so i can hardly tell from the maintainer's perspective.
> [1]: https://community.mellanox.com/docs/DOC-2721
-- 
Regards
Kefu Chai

Hi Kefu,
  Thanks for the detailed explanation. It looks like we will have to wait a few 
releases to get supported and production-ready RDMA support in ceph.

Thanks,
Nitin


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RDMA support in Ceph

2018-06-26 Thread Kamble, Nitin A
I tried enabling the RDMA support in Ceph Luminous release following this [1] 
guide.
I used the released Luminous bits, and not the Mellanox branches mentioned in 
the guide.
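
For context, the configuration involved is roughly the following ceph.conf 
fragment — a minimal sketch based on that guide; the device name is site-specific 
and the exact option set may differ between releases:

[global]
ms_type = async+rdma
ms_async_rdma_device_name = mlx5_0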

I could see some RDMA traffic in the perf counters, but the ceph daemons were 
still complaining that they were not able to talk to each other.

AFAIK the RDMA support in Ceph is experimental.

I would like to know…
What is the state of the RDMA code in the Ceph Luminous and later releases?
When will it be production ready?

Thanks,
Nitin


[1]: https://community.mellanox.com/docs/DOC-2721
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] a question about ceph raw space usage

2017-11-06 Thread Kamble, Nitin A
Dear Cephers,

As seen below, I notice that 12.7% of raw storage is consumed with zero pools 
in the system. These are bluestore OSDs. 
Is this expected or an anomaly?
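
One way to see where that space sits on an individual OSD — assuming admin socket 
access on the OSD host; counter names vary by release — is to look at the BlueFS 
allocation counters:

ceph daemon osd.0 perf dump bluefs

If the OSDs were created with a separate block.db/WAL device or partition, its 
space is typically reported as used from the start, which can show up as exactly 
this kind of fixed per-OSD overhead even with zero pools.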

Thanks,
Nitin

maruti1:~ # ceph -v
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)
maruti1:~ # ceph -s
  cluster:
id: 37e0fe9e-6a19-4182-8350-e377d45291ce
health: HEALTH_OK

  services:
mon: 1 daemons, quorum maruti1
mgr: maruti1(active)
osd: 12 osds: 12 up, 12 in

  data:
pools:   0 pools, 0 pgs
objects: 0 objects, 0 bytes
usage:   972 GB used, 6681 GB / 7653 GB avail
pgs:

maruti1:~ # ceph df
GLOBAL:
SIZE  AVAIL RAW USED %RAW USED
7653G 6681G 972G 12.70
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
maruti1:~ # ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE  USE    AVAIL %USE  VAR  PGS
0   hdd 0.62279  1.0  637G 82955M  556G 12.70 1.00   0
6   hdd 0.62279  1.0  637G 82955M  556G 12.70 1.00   0
9   hdd 0.62279  1.0  637G 82955M  556G 12.70 1.00   0
1   hdd 0.62279  1.0  637G 82955M  556G 12.70 1.00   0
5   hdd 0.62279  1.0  637G 82955M  556G 12.70 1.00   0
11   hdd 0.62279  1.0  637G 82955M  556G 12.70 1.00   0
3   hdd 0.62279  1.0  637G 82955M  556G 12.70 1.00   0
7   hdd 0.62279  1.0  637G 82955M  556G 12.70 1.00   0
10   hdd 0.62279  1.0  637G 82955M  556G 12.70 1.00   0
2   hdd 0.62279  1.0  637G 82955M  556G 12.70 1.00   0
4   hdd 0.62279  1.0  637G 82955M  556G 12.70 1.00   0
8   hdd 0.62279  1.0  637G 82955M  556G 12.70 1.00   0
TOTAL 7653G   972G 6681G 12.70
MIN/MAX VAR: 1.00/1.00  STDDEV: 0



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Luminous radosgw hangs after a few hours

2017-08-18 Thread Kamble, Nitin A
I see the same issue with ceph v12.1.4 as well. We are not using openstack or 
keystone, and see these errors in the rgw log. RGW is not hanging though.

Thanks,
Nitin


From: ceph-users  on behalf of Martin Emrich 

Date: Monday, July 24, 2017 at 10:08 PM
To: Vasu Kulkarni , Vaibhav Bhembre 

Cc: "ceph-users@lists.ceph.com" 
Subject: Re: [ceph-users] Luminous radosgw hangs after a few hours

I created an issue: http://tracker.ceph.com/issues/20763
 
Regards,
 
Martin
 
From: Vasu Kulkarni 
Date: Monday, July 24, 2017 at 19:26
To: Vaibhav Bhembre 
Cc: Martin Emrich , "ceph-users@lists.ceph.com" 
Subject: Re: [ceph-users] Luminous radosgw hangs after a few hours
 
Please raise a tracker for rgw and also provide some additional journalctl logs 
and info (ceph version, os version etc): http://tracker.ceph.com/projects/rgw
 
On Mon, Jul 24, 2017 at 9:03 AM, Vaibhav Bhembre  
wrote:
I am seeing the same issue on upgrade to Luminous v12.1.0 from Jewel.
I am not using Keystone or OpenStack either and my radosgw daemon
hangs as well. I have to restart it to resume processing.

2017-07-24 00:23:33.057401 7f196096a700  0 ERROR: keystone revocation
processing returned error r=-22
2017-07-24 00:38:33.057524 7f196096a700  0 ERROR: keystone revocation
processing returned error r=-22
2017-07-24 00:53:33.057648 7f196096a700  0 ERROR: keystone revocation
processing returned error r=-22
2017-07-24 01:08:33.057749 7f196096a700  0 ERROR: keystone revocation
processing returned error r=-22
2017-07-24 01:23:33.057878 7f196096a700  0 ERROR: keystone revocation
processing returned error r=-22
2017-07-24 01:38:33.057964 7f196096a700  0 ERROR: keystone revocation
processing returned error r=-22
2017-07-24 01:53:33.058098 7f196096a700  0 ERROR: keystone revocation
processing returned error r=-22
2017-07-24 02:08:33.058225 7f196096a700  0 ERROR: keystone revocation
processing returned error r=-22

The following are my keystone config options:

"rgw_keystone_url": ""
"rgw_keystone_admin_token": ""
"rgw_keystone_admin_user": ""
"rgw_keystone_admin_password": ""
"rgw_keystone_admin_tenant": ""
"rgw_keystone_admin_project": ""
"rgw_keystone_admin_domain": ""
"rgw_keystone_barbican_user": ""
"rgw_keystone_barbican_password": ""
"rgw_keystone_barbican_tenant": ""
"rgw_keystone_barbican_project": ""
"rgw_keystone_barbican_domain": ""
"rgw_keystone_api_version": "2"
"rgw_keystone_accepted_roles": "Member
"rgw_keystone_accepted_admin_roles": ""
"rgw_keystone_token_cache_size": "1"
"rgw_keystone_revocation_interval": "900"
"rgw_keystone_verify_ssl": "true"
"rgw_keystone_implicit_tenants": "false"
"rgw_s3_auth_use_keystone": "false"

Is this fixed in RC2 by any chance?

On Thu, Jun 29, 2017 at 3:11 AM, Martin Emrich
 wrote:
> Since upgrading to 12.1, our Object Gateways hang after a few hours, I only
> see these messages in the log file:
>
>
>
> 2017-06-29 07:52:20.877587 7fa8e01e5700  0 ERROR: keystone revocation
> processing returned error r=-22
>
> 2017-06-29 08:07:20.877761 7fa8e01e5700  0 ERROR: keystone revocation
> processing returned error r=-22
>
> 2017-06-29 08:07:29.994979 7fa8e11e7700  0 process_single_logshard: Error in
> get_bucket_info: (2) No such file or directory
>
> 2017-06-29 08:22:20.877911 7fa8e01e5700  0 ERROR: keystone revocation
> processing returned error r=-22
>
> 2017-06-29 08:27:30.086119 7fa8e11e7700  0 process_single_logshard: Error in
> get_bucket_info: (2) No such file or directory
>
> 2017-06-29 08:37:20.878108 7fa8e01e5700  0 ERROR: keystone revocation
> processing returned error r=-22
>
> 2017-06-29 08:37:30.187696 7fa8e11e7700  0 process_single_logshard: Error in
> get_bucket_info: (2) No such file or directory
>
> 2017-06-29 08:52:20.878283 7fa8e01e5700  0 ERROR: keystone revocation
> processing returned error r=-22
>
> 2017-06-29 08:57:30.280881 7fa8e11e7700  0 process_single_logshard: Error in
> get_bucket_info: (2) No such file or directory
>
> 2017-06-29 09:07:20.878451 7fa8e01e5700  0 ERROR: keystone revocation
> processing returned error r=-22
>
>
>
> FYI: we do not use Keystone or Openstack.
>
>
>
> This started after upgrading from jewel (via kraken) to luminous.
>
>
>
> What could I do to fix this?
>
> Is there some “fsck” like consistency check + repair for the radosgw
> buckets?
>
>
>
> Thanks,
>
>
>
> Martin
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] is docs.ceph.com down?

2017-01-19 Thread Kamble, Nitin A
http://ceph.com opens fine, while http://docs.ceph.com is not opening.

From: Mohammed Naser mailto:mna...@vexxhost.com>>
Date: Thursday, January 19, 2017 at 11:31 AM
To: Nitin Kamble mailto:nitin.kam...@teradata.com>>
Cc: ceph-users mailto:ceph-users@lists.ceph.com>>
Subject: Re: [ceph-users] is docs.ceph.com down?

Loading fine here!
On Jan 19, 2017, at 2:31 PM, Kamble, Nitin A 
mailto:nitin.kam...@teradata.com>> wrote:

My browser is hanging on http://docs.ceph.com

Thanks,
Nitin

___
ceph-users mailing list
ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] is docs.ceph.com down?

2017-01-19 Thread Kamble, Nitin A
My browser is hanging on http://docs.ceph.com

Thanks,
Nitin

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] radosgw setup issue

2017-01-08 Thread Kamble, Nitin A

> On Jan 4, 2017, at 10:12 AM, Orit Wasserman  wrote:
> 
> On Wed, Jan 4, 2017 at 7:08 PM, Brian Andrus  
> wrote:
>> Regardless of whether it worked before, have you verified your RadosGWs have
>> write access to monitors? They will need it if you want the RadosGW to
>> create its own pools.
>> 
>> ceph auth get 
>> 
> 
> I agree, it could be permissions issue

# ceph auth get client.radosgw.gateway
exported keyring for client.radosgw.gateway
[client.radosgw.gateway]
key = XX
caps mon = "allow *"
caps osd = "allow *"

That is not the issue. I have a strong feeling that the issue is with the 
default CRUSH rule, which is used for creating the RGW pools automatically. I 
will verify and update.
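
A quick way to test that suspicion — a sketch assuming admin access; the pool name 
below is just a placeholder — is to dump the CRUSH rules and try creating a small 
pool under the default rule, then check whether its PGs go active+clean:

ceph osd crush rule dump
ceph osd pool create rgw-ruletest 8 8
ceph -s
ceph osd pool delete rgw-ruletest rgw-ruletest --yes-i-really-really-mean-it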

Thanks,
Nitin


> 
>> On Wed, Jan 4, 2017 at 8:59 AM, Kamble, Nitin A 
>> wrote:
>>> 
>>> 
>>>> On Dec 26, 2016, at 2:48 AM, Orit Wasserman  wrote:
>>>> 
>>>> On Fri, Dec 23, 2016 at 3:42 AM, Kamble, Nitin A
>>>>  wrote:
>>>>> I am trying to setup radosgw on a ceph cluster, and I am seeing some
>>>>> issues where google is not helping. I hope some of the developers would be
>>>>> able to help here.
>>>>> 
>>>>> 
>>>>> I tried to create radosgw as mentioned here [0] on a jewel cluster. And
>>>>> it gives the following error in log file after starting radosgw.
>>>>> 
>>>>> 
>>>>> 2016-12-22 17:36:46.755786 7f084beeb9c0  0 set uid:gid to 167:167
>>>>> (ceph:ceph)
>>>>> 2016-12-22 17:36:46.755849 7f084beeb9c0  0 ceph version
>>>>> 10.2.2-118-g894a5f8 (894a5f8d878d4b267f80b90a4bffce157f2b4ba7), process
>>>>> radosgw, pid 10092
>>>>> 2016-12-22 17:36:46.763821 7f084beeb9c0  1 -- :/0 messenger.start
>>>>> 2016-12-22 17:36:46.764731 7f084beeb9c0  1 -- :/1011033520 -->
>>>>> 39.0.16.7:6789/0 -- auth(proto 0 40 bytes epoch 0) v1 -- ?+0 
>>>>> 0x7f084c8e9f60
>>>>> con 0x7f084c8e9480
>>>>> 2016-12-22 17:36:46.765055 7f084beda700  1 -- 39.0.16.9:0/1011033520
>>>>> learned my addr 39.0.16.9:0/1011033520
>>>>> 2016-12-22 17:36:46.765492 7f082a7fc700  1 -- 39.0.16.9:0/1011033520
>>>>> <== mon.0 39.0.16.7:6789/0 1  mon_map magic: 0 v1  195+0+0
>>>>> (146652916 0 0) 0x7f0814000a60 con 0x7f084c8e9480
>>>>> 2016-12-22 17:36:46.765562 7f082a7fc700  1 -- 39.0.16.9:0/1011033520
>>>>> <== mon.0 39.0.16.7:6789/0 2  auth_reply(proto 2 0 (0) Success) v1 
>>>>> 
>>>>> 33+0+0 (1206278719 0 0) 0x7f0814000ee0 con 0x7f084c8e9480
>>>>> 2016-12-22 17:36:46.765697 7f082a7fc700  1 -- 39.0.16.9:0/1011033520
>>>>> --> 39.0.16.7:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0
>>>>> 0x7f08180013b0 con 0x7f084c8e9480
>>>>> 2016-12-22 17:36:46.765968 7f082a7fc700  1 -- 39.0.16.9:0/1011033520
>>>>> <== mon.0 39.0.16.7:6789/0 3  auth_reply(proto 2 0 (0) Success) v1 
>>>>> 
>>>>> 222+0+0 (4230455906 0 0) 0x7f0814000ee0 con 0x7f084c8e9480
>>>>> 2016-12-22 17:36:46.766053 7f082a7fc700  1 -- 39.0.16.9:0/1011033520
>>>>> --> 39.0.16.7:6789/0 -- auth(proto 2 181 bytes epoch 0) v1 -- ?+0
>>>>> 0x7f0818001830 con 0x7f084c8e9480
>>>>> 2016-12-22 17:36:46.766315 7f082a7fc700  1 -- 39.0.16.9:0/1011033520
>>>>> <== mon.0 39.0.16.7:6789/0 4  auth_reply(proto 2 0 (0) Success) v1 
>>>>> 
>>>>> 425+0+0 (3179848142 0 0) 0x7f0814001180 con 0x7f084c8e9480
>>>>> 2016-12-22 17:36:46.766383 7f082a7fc700  1 -- 39.0.16.9:0/1011033520
>>>>> --> 39.0.16.7:6789/0 -- mon_subscribe({monmap=0+}) v2 -- ?+0 
>>>>> 0x7f084c8ea440
>>>>> con 0x7f084c8e9480
>>>>> 2016-12-22 17:36:46.766452 7f084beeb9c0  1 -- 39.0.16.9:0/1011033520
>>>>> --> 39.0.16.7:6789/0 -- mon_subscribe({osdmap=0}) v2 -- ?+0 0x7f084c8ea440
>>>>> con 0x7f084c8e9480
>>>>> 2016-12-22 17:36:46.766518 7f082a7fc700  1 -- 39.0.16.9:0/1011033520
>>>>> <== mon.0 39.0.16.7:6789/0 5  mon_map magic: 0 v1  195+0+0
>>>>> (146652916 0 0) 0x7f0814001110 con 0x7f084c8e9480
>>>>> 2016-12-22 17:36:46.766671 7f08227fc700  2
>>>>> RGWDataChangesLog::ChangesRenewThread: start
>>>>> 2016-12-22 17:36:46.766691 7f084beeb9c0 20 get_system_obj_state:
>>>>> rctx=0x7ffec

Re: [ceph-users] radosgw setup issue

2017-01-04 Thread Kamble, Nitin A

> On Dec 26, 2016, at 2:48 AM, Orit Wasserman  wrote:
> 
> On Fri, Dec 23, 2016 at 3:42 AM, Kamble, Nitin A
>  wrote:
>> I am trying to setup radosgw on a ceph cluster, and I am seeing some issues 
>> where google is not helping. I hope some of the developers would be able to 
>> help here.
>> 
>> 
>> I tried to create radosgw as mentioned here [0] on a jewel cluster. And it 
>> gives the following error in log file after starting radosgw.
>> 
>> 
>> 2016-12-22 17:36:46.755786 7f084beeb9c0  0 set uid:gid to 167:167 (ceph:ceph)
>> 2016-12-22 17:36:46.755849 7f084beeb9c0  0 ceph version 10.2.2-118-g894a5f8 
>> (894a5f8d878d4b267f80b90a4bffce157f2b4ba7), process radosgw, pid 10092
>> 2016-12-22 17:36:46.763821 7f084beeb9c0  1 -- :/0 messenger.start
>> 2016-12-22 17:36:46.764731 7f084beeb9c0  1 -- :/1011033520 --> 
>> 39.0.16.7:6789/0 -- auth(proto 0 40 bytes epoch 0) v1 -- ?+0 0x7f084c8e9f60 
>> con 0x7f084c8e9480
>> 2016-12-22 17:36:46.765055 7f084beda700  1 -- 39.0.16.9:0/1011033520 learned 
>> my addr 39.0.16.9:0/1011033520
>> 2016-12-22 17:36:46.765492 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== 
>> mon.0 39.0.16.7:6789/0 1  mon_map magic: 0 v1  195+0+0 (146652916 0 
>> 0) 0x7f0814000a60 con 0x7f084c8e9480
>> 2016-12-22 17:36:46.765562 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== 
>> mon.0 39.0.16.7:6789/0 2  auth_reply(proto 2 0 (0) Success) v1  
>> 33+0+0 (1206278719 0 0) 0x7f0814000ee0 con 0x7f084c8e9480
>> 2016-12-22 17:36:46.765697 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 --> 
>> 39.0.16.7:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0 0x7f08180013b0 
>> con 0x7f084c8e9480
>> 2016-12-22 17:36:46.765968 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== 
>> mon.0 39.0.16.7:6789/0 3  auth_reply(proto 2 0 (0) Success) v1  
>> 222+0+0 (4230455906 0 0) 0x7f0814000ee0 con 0x7f084c8e9480
>> 2016-12-22 17:36:46.766053 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 --> 
>> 39.0.16.7:6789/0 -- auth(proto 2 181 bytes epoch 0) v1 -- ?+0 0x7f0818001830 
>> con 0x7f084c8e9480
>> 2016-12-22 17:36:46.766315 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== 
>> mon.0 39.0.16.7:6789/0 4  auth_reply(proto 2 0 (0) Success) v1  
>> 425+0+0 (3179848142 0 0) 0x7f0814001180 con 0x7f084c8e9480
>> 2016-12-22 17:36:46.766383 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 --> 
>> 39.0.16.7:6789/0 -- mon_subscribe({monmap=0+}) v2 -- ?+0 0x7f084c8ea440 con 
>> 0x7f084c8e9480
>> 2016-12-22 17:36:46.766452 7f084beeb9c0  1 -- 39.0.16.9:0/1011033520 --> 
>> 39.0.16.7:6789/0 -- mon_subscribe({osdmap=0}) v2 -- ?+0 0x7f084c8ea440 con 
>> 0x7f084c8e9480
>> 2016-12-22 17:36:46.766518 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== 
>> mon.0 39.0.16.7:6789/0 5  mon_map magic: 0 v1  195+0+0 (146652916 0 
>> 0) 0x7f0814001110 con 0x7f084c8e9480
>> 2016-12-22 17:36:46.766671 7f08227fc700  2 
>> RGWDataChangesLog::ChangesRenewThread: start
>> 2016-12-22 17:36:46.766691 7f084beeb9c0 20 get_system_obj_state: 
>> rctx=0x7ffec2850d00 obj=.rgw.root:default.realm state=0x7f084c8efdf8 
>> s->prefetch_data=0
>> 2016-12-22 17:36:46.766750 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== 
>> mon.0 39.0.16.7:6789/0 6  osd_map(9506..9506 src has 8863..9506) v3  
>> 66915+0+0 (689048617 0 0) 0x7f0814011680 con 0x7f084c8e9480
>> 2016-12-22 17:36:46.767029 7f084beeb9c0  1 -- 39.0.16.9:0/1011033520 --> 
>> 39.0.16.7:6789/0 -- mon_get_version(what=osdmap handle=1) v1 -- ?+0 
>> 0x7f084c8f05f0 con 0x7f084c8e9480
>> 2016-12-22 17:36:46.767163 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== 
>> mon.0 39.0.16.7:6789/0 7  mon_get_version_reply(handle=1 version=9506) 
>> v2  24+0+0 (2817198406 0 0) 0x7f0814001110 con 0x7f084c8e9480
>> 2016-12-22 17:36:46.767214 7f084beeb9c0 20 get_system_obj_state: 
>> rctx=0x7ffec2850210 obj=.rgw.root:default.realm state=0x7f084c8efdf8 
>> s->prefetch_data=0
>> 2016-12-22 17:36:46.767231 7f084beeb9c0  1 -- 39.0.16.9:0/1011033520 --> 
>> 39.0.16.7:6789/0 -- mon_get_version(what=osdmap handle=2) v1 -- ?+0 
>> 0x7f084c8f0ac0 con 0x7f084c8e9480
>> 2016-12-22 17:36:46.767341 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== 
>> mon.0 39.0.16.7:6789/0 8  mon_get_version_reply(handle=2 version=9506) 
>> v2  24+0+0 (1826043941 0 0) 0x7f0814001110 con 0x7f084c8e9480
>> 2016-12-22 17:36:46.767367 7f084beeb9c0 10 could not read realm id: (2) No 
>> such file or directory
>> 2016-12-22 17:36:46.767390 7f084beeb9c0  1 -- 39.0.16.9:0/1011033520 --> 
>> 39.0.16.7:6789/0 -- mon_get_version(

[ceph-users] radosgw setup issue

2016-12-22 Thread Kamble, Nitin A
I am trying to set up radosgw on a ceph cluster, and I am seeing some issues 
where google is not helping. I hope some of the developers would be able to 
help here.


I tried to create radosgw as mentioned here [0] on a jewel cluster, and it 
gives the following errors in the log file after starting radosgw.

 
2016-12-22 17:36:46.755786 7f084beeb9c0  0 set uid:gid to 167:167 (ceph:ceph)
2016-12-22 17:36:46.755849 7f084beeb9c0  0 ceph version 10.2.2-118-g894a5f8 
(894a5f8d878d4b267f80b90a4bffce157f2b4ba7), process radosgw, pid 10092
2016-12-22 17:36:46.763821 7f084beeb9c0  1 -- :/0 messenger.start
2016-12-22 17:36:46.764731 7f084beeb9c0  1 -- :/1011033520 --> 39.0.16.7:6789/0 
-- auth(proto 0 40 bytes epoch 0) v1 -- ?+0 0x7f084c8e9f60 con 0x7f084c8e9480
2016-12-22 17:36:46.765055 7f084beda700  1 -- 39.0.16.9:0/1011033520 learned my 
addr 39.0.16.9:0/1011033520
2016-12-22 17:36:46.765492 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== mon.0 
39.0.16.7:6789/0 1  mon_map magic: 0 v1  195+0+0 (146652916 0 0) 
0x7f0814000a60 con 0x7f084c8e9480
2016-12-22 17:36:46.765562 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== mon.0 
39.0.16.7:6789/0 2  auth_reply(proto 2 0 (0) Success) v1  33+0+0 
(1206278719 0 0) 0x7f0814000ee0 con 0x7f084c8e9480
2016-12-22 17:36:46.765697 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 --> 
39.0.16.7:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0 0x7f08180013b0 con 
0x7f084c8e9480
2016-12-22 17:36:46.765968 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== mon.0 
39.0.16.7:6789/0 3  auth_reply(proto 2 0 (0) Success) v1  222+0+0 
(4230455906 0 0) 0x7f0814000ee0 con 0x7f084c8e9480
2016-12-22 17:36:46.766053 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 --> 
39.0.16.7:6789/0 -- auth(proto 2 181 bytes epoch 0) v1 -- ?+0 0x7f0818001830 
con 0x7f084c8e9480
2016-12-22 17:36:46.766315 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== mon.0 
39.0.16.7:6789/0 4  auth_reply(proto 2 0 (0) Success) v1  425+0+0 
(3179848142 0 0) 0x7f0814001180 con 0x7f084c8e9480
2016-12-22 17:36:46.766383 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 --> 
39.0.16.7:6789/0 -- mon_subscribe({monmap=0+}) v2 -- ?+0 0x7f084c8ea440 con 
0x7f084c8e9480
2016-12-22 17:36:46.766452 7f084beeb9c0  1 -- 39.0.16.9:0/1011033520 --> 
39.0.16.7:6789/0 -- mon_subscribe({osdmap=0}) v2 -- ?+0 0x7f084c8ea440 con 
0x7f084c8e9480
2016-12-22 17:36:46.766518 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== mon.0 
39.0.16.7:6789/0 5  mon_map magic: 0 v1  195+0+0 (146652916 0 0) 
0x7f0814001110 con 0x7f084c8e9480
2016-12-22 17:36:46.766671 7f08227fc700  2 
RGWDataChangesLog::ChangesRenewThread: start
2016-12-22 17:36:46.766691 7f084beeb9c0 20 get_system_obj_state: 
rctx=0x7ffec2850d00 obj=.rgw.root:default.realm state=0x7f084c8efdf8 
s->prefetch_data=0
2016-12-22 17:36:46.766750 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== mon.0 
39.0.16.7:6789/0 6  osd_map(9506..9506 src has 8863..9506) v3  
66915+0+0 (689048617 0 0) 0x7f0814011680 con 0x7f084c8e9480
2016-12-22 17:36:46.767029 7f084beeb9c0  1 -- 39.0.16.9:0/1011033520 --> 
39.0.16.7:6789/0 -- mon_get_version(what=osdmap handle=1) v1 -- ?+0 
0x7f084c8f05f0 con 0x7f084c8e9480
2016-12-22 17:36:46.767163 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== mon.0 
39.0.16.7:6789/0 7  mon_get_version_reply(handle=1 version=9506) v2  
24+0+0 (2817198406 0 0) 0x7f0814001110 con 0x7f084c8e9480
2016-12-22 17:36:46.767214 7f084beeb9c0 20 get_system_obj_state: 
rctx=0x7ffec2850210 obj=.rgw.root:default.realm state=0x7f084c8efdf8 
s->prefetch_data=0
2016-12-22 17:36:46.767231 7f084beeb9c0  1 -- 39.0.16.9:0/1011033520 --> 
39.0.16.7:6789/0 -- mon_get_version(what=osdmap handle=2) v1 -- ?+0 
0x7f084c8f0ac0 con 0x7f084c8e9480
2016-12-22 17:36:46.767341 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== mon.0 
39.0.16.7:6789/0 8  mon_get_version_reply(handle=2 version=9506) v2  
24+0+0 (1826043941 0 0) 0x7f0814001110 con 0x7f084c8e9480
2016-12-22 17:36:46.767367 7f084beeb9c0 10 could not read realm id: (2) No such 
file or directory
2016-12-22 17:36:46.767390 7f084beeb9c0  1 -- 39.0.16.9:0/1011033520 --> 
39.0.16.7:6789/0 -- mon_get_version(what=osdmap handle=3) v1 -- ?+0 
0x7f084c8efe50 con 0x7f084c8e9480
2016-12-22 17:36:46.767496 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== mon.0 
39.0.16.7:6789/0 9  mon_get_version_reply(handle=3 version=9506) v2  
24+0+0 (3600349867 0 0) 0x7f0814001110 con 0x7f084c8e9480
2016-12-22 17:36:46.767518 7f084beeb9c0 10 failed to list objects 
pool_iterate_begin() returned r=-2
2016-12-22 17:36:46.767542 7f084beeb9c0 20 get_system_obj_state: 
rctx=0x7ffec2850420 obj=.rgw.root:zone_names.default state=0x7f084c8f0f38 
s->prefetch_data=0
2016-12-22 17:36:46.767554 7f084beeb9c0  1 -- 39.0.16.9:0/1011033520 --> 
39.0.16.7:6789/0 -- mon_get_version(what=osdmap handle=4) v1 -- ?+0 
0x7f084c8f1630 con 0x7f084c8e9480
2016-12-22 17:36:46.767660 7f082a7fc700  1 -- 39.0.16.9:0/1011033520 <== mon.0 
39.0.16.7:6789/0 10  mon_get_version_reply(handle=4 ve

Re: [ceph-users] I need help building the source code can anyone help?

2016-11-01 Thread Kamble, Nitin A
Building ceph is a bit of an involved process.

What version are you trying to build?
For building, are you following the README in the code?

- Nitin
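
The CMake errors below about the missing src/lua, src/rocksdb and 
googletest/googlemock directories usually mean the git submodules have not been 
checked out. Assuming you are building from a plain git clone rather than a 
release tarball, something like this normally takes care of it:

git submodule update --init --recursive
./do_cmake.sh
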
On Oct 28, 2016, at 12:16 AM, 刘 畅 
mailto:liuchang890...@hotmail.com>> wrote:

After I successfully ran install-deps.sh, I tried to run cmake and it returned as 
follows:

ubuntu@i-c9rgl1y5:~/projects/ceph/build$ ls
bin  CMakeCache.txt  CMakeFiles  doc  include  man  src
ubuntu@i-c9rgl1y5:~/projects/ceph/build$ cmake ..
-- /usr/lib/x86_64-linux-gnu/libatomic_ops.a
-- NSS_LIBRARIES: 
/usr/lib/x86_64-linux-gnu/libssl3.so;/usr/lib/x86_64-linux-gnu/libsmime3.so;/usr/lib/x86_64-linux-gnu/libnss3.so;/usr/lib/x86_64-linux-gnu/libnssutil3.so
-- NSS_INCLUDE_DIRS: /usr/include/nss
-- SSL with NSS selected (Libs: 
/usr/lib/x86_64-linux-gnu/libssl3.so;/usr/lib/x86_64-linux-gnu/libsmime3.so;/usr/lib/x86_64-linux-gnu/libnss3.so;/usr/lib/x86_64-linux-gnu/libnssutil3.so)
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython2.7.so (found suitable 
version "2.7.12", minimum required is "2.7")
-- Boost version: 1.58.0
-- Found the following Boost libraries:
--   python
-- Boost version: 1.58.0
-- Found the following Boost libraries:
--   thread
--   system
--   regex
--   random
--   program_options
--   date_time
--   iostreams
--   chrono
--   atomic
--  we have a modern and working yasm
--  we are x84_64
--  we are not x32
--  yasm can also build the isa-l stuff
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython2.7.so (found suitable 
version "2.7.12", minimum required is "2")
--  Using EventEpoll for events.
CMake Error at src/CMakeLists.txt:508 (add_subdirectory):
  The source directory

/home/ubuntu/projects/ceph/src/lua

  does not contain a CMakeLists.txt file.


-- Found cython
CMake Error at /usr/share/cmake-3.5/Modules/ExternalProject.cmake:1915 
(message):
  No download info given for 'rocksdb_ext' and its source directory:

   /home/ubuntu/projects/ceph/src/rocksdb

  is not an existing non-empty directory.  Please specify one of:

   * SOURCE_DIR with an existing non-empty directory
   * URL
   * GIT_REPOSITORY
   * HG_REPOSITORY
   * CVS_REPOSITORY and CVS_MODULE
   * SVN_REVISION
   * DOWNLOAD_COMMAND
Call Stack (most recent call first):
  /usr/share/cmake-3.5/Modules/ExternalProject.cmake:2459 
(_ep_add_download_command)
  src/CMakeLists.txt:655 (ExternalProject_Add)


CMake Error at src/CMakeLists.txt:706 (add_subdirectory):
  add_subdirectory given source "googletest/googlemock" which is not an
  existing directory.


-- Configuring incomplete, errors occurred!
See also "/home/ubuntu/projects/ceph/build/CMakeFiles/CMakeOutput.log".
See also "/home/ubuntu/projects/ceph/build/CMakeFiles/CMakeError.log".
ubuntu@i-c9rgl1y5:~/projects/ceph/build$
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cmake and rpmbuild

2016-07-28 Thread Kamble, Nitin A
Gerard,
   To me this looks like a terribly crippled environment to build ceph in, and I 
won't bother building in such an environment. IMO, it's not worth it for anybody 
to optimize the build process to make it work in such a crippled environment.

And to answer your question, if you cannot add more memory then provide more 
swap space to increase the virtual memory. You can also remove the "-j" 
parameter to "make" in the spec file to avoid creating a lot of processes for 
the parallel build.
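
A minimal sketch of both options, with example sizes and paths that you would 
adjust for your builder:

# add 4G of swap
fallocate -l 4G /swapfile && chmod 600 /swapfile
mkswap /swapfile && swapon /swapfile

# or force a single make job without editing the spec file
rpmbuild -ba ceph.spec --define '_smp_mflags -j1'

(The exact rpmbuild invocation depends on how the make-dist tarball and spec 
file are laid out.)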

Nitin

> On Jul 28, 2016, at 6:50 PM, Gerard Braad  wrote:
> 
> Hi All,
> 
> 
> At the moment I am setting up CI pipelines for Ceph and ran into a
> small issue; I have some memory constrained runners (2G). So, when
> performing a build using do-cmake all is fine... the build might last
> long, but after an hour or two I am greeted with a 'Build succeeded'
> message, I gather the artifacts and all is well.
> 
> But when I do a rpmbuild, I have to rely on doing a make-dist, and
> then issue a rpmbuild targeting this tarball. However, in the same
> runners, this build will fail with "virtual memory exhausted". Is
> there anything I can do that does not immediately involve adding more
> memory?
> 
> If more information is needed, please let me know...
> 
> regards,
> 
> 
> Gerard
> 
> -- 
> 
>   Gerard Braad | http://gbraad.nl
>   [ Doing Open Source Matters ]
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com