[ceph-users] Re: MDS stuck in replay

2022-06-04 Thread Ramana Venkatesh Raja
On Tue, May 31, 2022 at 3:42 AM Magnus HAGDORN  wrote:
>
> So, we are wondering what it is up to. How long it might take. And is
> there something we can do to speed up the replay phase.
>

I'm not sure what can be done to speed up replay for MDSes in your
nautilus cluster since they are already stuck.

Configuring your standby MDSs as standby-replay should decrease
recovery times in your pacific cluster. Setting a very large
"mds_log_max_segments" value has caused users trouble before
(https://tracker.ceph.com/issues/47582), so you'd want to avoid
doing that.
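A minimal sketch of enabling standby-replay — `cephfs` here is a
placeholder for your filesystem name:

```shell
# Allow a standby daemon to continuously follow the active MDS's
# journal ("cephfs" is a placeholder filesystem name).
ceph fs set cephfs allow_standby_replay true

# Verify: a standby should now appear in the "standby-replay" state.
ceph fs status cephfs
```

Each active MDS then gets a dedicated standby that tails its journal,
so there is far less journal left to replay on failover.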

> Regards
> magnus
> The University of Edinburgh is a charitable body, registered in Scotland, 
> with registration number SC005336. Is e buidheann carthannais a th’ ann an 
> Oilthigh Dhùn Èideann, clàraichte an Alba, àireamh clàraidh SC005336.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io



[ceph-users] Re: MDS stuck in replay

2022-06-04 Thread Ramana Venkatesh Raja
On Thu, Jun 2, 2022 at 4:39 AM Magnus HAGDORN  wrote:
>
> at this stage we are not so worried about recovery since we moved to
> our new pacific cluster. The problem arose during one of the nightly
> syncs of the old cluster to the new cluster. However, we are quite keen
> to use this as a learning opportunity to see what we can do to bring
> this filesystem back to life.
>
> On Wed, 2022-06-01 at 20:11 -0400, Ramana Venkatesh Raja wrote:
> > Can you temporarily turn up the MDS debug log level (debug_mds) to
> >
> > check what's happening to this MDS during replay?
> >
> > ceph config set mds debug_mds 10
> >
> >
>
>
> 2022-06-02 09:32:36.814 7faca6d16700  5 mds.beacon.store06 Sending
> beacon up:replay seq 195662
> 2022-06-02 09:32:36.814 7faca6d16700  1 --
> [v2:192.168.34.113:6800/3361270776,v1:192.168.34.113:6801/3361270776]
> --> [v2:192.168.34.179:3300/0,v1:192.168.34.179:6789/0] --
> mdsbeacon(196066899/store06 up:replay seq 195662 v200622) v7 --
> 0x5603d846d200 con 0x560185920c00
> 2022-06-02 09:32:36.814 7facab51f700  1 --
> [v2:192.168.34.113:6800/3361270776,v1:192.168.34.113:6801/3361270776]
> <== mon.0 v2:192.168.34.179:3300/0 230794 
> mdsbeacon(196066899/store06 up:replay seq 195662 v200622) v7 
> 132+0+0 (crc 0 0 0) 0x5603d846d200 con 0x560185920c00
> 2022-06-02 09:32:36.814 7facab51f700  5 mds.beacon.store06 received
> beacon reply up:replay seq 195662 rtt 0
> 2022-06-02 09:32:37.090 7faca4d12700  2 mds.0.cache Memory
> usage:  total 22446592, rss 18448072, heap 332040, baseline 307464, 0 /
> 6982189 inodes have caps, 0 caps, 0 caps per inode
> 2022-06-02 09:32:37.090 7faca4d12700 10 mds.0.cache cache not ready for
> trimming
> 2022-06-02 09:32:38.091 7faca4d12700  2 mds.0.cache Memory
> usage:  total 22446592, rss 18448072, heap 332040, baseline 307464, 0 /
> 6982189 inodes have caps, 0 caps, 0 caps per inode
> 2022-06-02 09:32:38.091 7faca4d12700 10 mds.0.cache cache not ready for
> trimming
> 2022-06-02 09:32:38.320 7faca6515700  1 --
> [v2:192.168.34.113:6800/3361270776,v1:192.168.34.113:6801/3361270776]
> --> [v2:192.168.34.124:6805/1445500,v1:192.168.34.124:6807/1445500] --
> mgrreport(unknown.store06 +0-0 packed 1414) v8 -- 0x56018651ae00 con
> 0x5601869cb400
> 2022-06-02 09:32:39.092 7faca4d12700  2 mds.0.cache Memory
> usage:  total 22446592, rss 18448072, heap 332040, baseline 307464, 0 /
> 6982189 inodes have caps, 0 caps, 0 caps per inode
> 2022-06-02 09:32:39.092 7faca4d12700 10 mds.0.cache cache not ready for
> trimming
> 2022-06-02 09:32:40.094 7faca4d12700  2 mds.0.cache Memory
> usage:  total 22446592, rss 18448072, heap 332040, baseline 307464, 0 /
> 6982189 inodes have caps, 0 caps, 0 caps per inode
> 2022-06-02 09:32:40.094 7faca4d12700 10 mds.0.cache cache not ready for
> trimming
> 2022-06-02 09:32:40.813 7faca6d16700  5 mds.beacon.store06 Sending
> beacon up:replay seq 195663
> 2022-06-02 09:32:40.813 7faca6d16700  1 --
> [v2:192.168.34.113:6800/3361270776,v1:192.168.34.113:6801/3361270776]
> --> [v2:192.168.34.179:3300/0,v1:192.168.34.179:6789/0] --
> mdsbeacon(196066899/store06 up:replay seq 195663 v200622) v7 --
> 0x5603d846d500 con 0x560185920c00
> 2022-06-02 09:32:40.813 7facab51f700  1 --
> [v2:192.168.34.113:6800/3361270776,v1:192.168.34.113:6801/3361270776]
> <== mon.0 v2:192.168.34.179:3300/0 230795 
> mdsbeacon(196066899/store06 up:replay seq 195663 v200622) v7 
> 132+0+0 (crc 0 0 0) 0x5603d846d500 con 0x560185920c00
> 2022-06-02 09:32:40.813 7facab51f700  5 mds.beacon.store06 received
> beacon reply up:replay seq 195663 rtt 0
> 2022-06-02 09:32:41.095 7faca4d12700  2 mds.0.cache Memory
> usage:  total 22446592, rss 18448072, heap 332040, baseline 307464, 0 /
> 6982189 inodes have caps, 0 caps, 0 caps per inode
>
>

"cache not ready for trimming" is logged while the MDS is in the replay
state. It doesn't tell us much about why the MDS is stuck there. The
MDS is probably waiting on something from the OSDs. Maybe check the
objecter requests of the MDS?

ceph tell mds.<name> objecter_requests

If that's not helpful, then try setting `ceph config set mds
debug_objecter 10`, restart the MDS, and check the objecter-related
logs of the MDS.
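Putting those steps together — a sketch, assuming the MDS name
`store06` from the logs above and default log locations on your
deployment:

```shell
# Dump the in-flight OSD requests of the stuck MDS.
ceph tell mds.store06 objecter_requests

# If that's inconclusive, raise the objecter/MDS debug levels ...
ceph config set mds debug_objecter 10
ceph config set mds debug_mds 10

# ... restart the stuck daemon and watch its log during replay.
systemctl restart ceph-mds@store06
tail -f /var/log/ceph/ceph-mds.store06.log
```

Remember to lower the debug levels again afterwards, since level 10
logging is verbose.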

The other MDS, in the resolve state, is waiting for its peer to also
reach up:resolve.

> >
> > Is the health of the MDS host okay? Is it low on memory?
> >
> >
> plenty
> [root@store06 ~]# free
>               total        used        free      shared  buff/cache   available
> Mem:      131939604    75007512     2646656        3380    54285436    52944852
> Swap:      32930300        1800    32928500
>
>
> >
> > > The cluster is healthy.
> >
> > Can you share the output of the `ceph status`, `ceph fs status` and
> > `ceph --version`?

[ceph-users] Re: Help needed picking the right amount of PGs for (Cephfs) metadata pool

2022-06-02 Thread Ramana Venkatesh Raja
On Thu, Jun 2, 2022 at 11:40 AM Stefan Kooman  wrote:
>
> Hi,
>
> We have a CephFS filesystem holding 70 TiB of data in ~ 300 M files and
> ~ 900 M sub directories. We currently have 180 OSDs in this cluster.
>
> POOL             ID  PGS  STORED   (DATA)   (OMAP)   OBJECTS  USED     (DATA)   (OMAP)   %USED  MAX AVAIL
> cephfs_metadata   6  512  984 GiB  243 MiB  984 GiB  903.98M  2.9 TiB  728 MiB  2.9 TiB   3.06     30 TiB
>
> The PGs in this pool (id 6; replicated, size=3, min_size=2) are giving us
> a hard time (again). When PGs get remapped to other OSDs it introduces
> (tons of) slow ops and mds slow requests. Remapping more than 10 PGs at
> a time will result in OSDs marked as dead (iothread timeout). Scrubbing
> (with default settings) triggers slow ops too. Half of the cluster is
> running on SSDs (SAMSUNG MZ7LM3T8HMLP-5 / INTEL SSDSC2KB03) with
> cache mode in write through, the other half is NVMe (SAMSUNG
> MZQLB3T8HALS-7). No separate WAL/DB devices. SSDs run on Intel (14
> cores / 128 GB RAM), NVMe on AMD EPYC gen 1 / 2 with 16 cores 128 GB
> RAM). OSD_MEMORY_TARGET=11G. The load on the pool (and cluster in
> general) is modest. Plenty of CPU power available (mostly idling
> really). In the order of ~6 K MDS requests, ~ 1.5 K metadata ops
> (ballpark figure).
>
> We currently have 512 PGs allocated to this pool. The autoscaler suggests
> reducing this amount to "32" PGs. This would result in only a fraction
> of the OSDs holding *all* of the metadata. I can tell you, based on
> experience, that this is not good advice (the longer story here [1]). At
> the least you want to spread out all OMAP data over as many (fast) disks
> as possible. So in this case it should advise 256.
>

Curious, how many PGs do you have in total in all the pools of your
Ceph cluster? What are the other pools (e.g., data pools) and each of
their PG counts?

What version of Ceph are you using?

> As the PGs merely act as a "placeholder" for the (OMAP) data residing in
> the RocksDB database I wonder if it would help improve performance if we
> would split the PGs to, let's say, 2048 PGs. The amount of OMAP per PG
> would go down dramatically. Currently the amount of OMAP bytes per PG is
> ~ 1 GiB and # keys is ~ 2.3 M. Are these numbers crazy high causing the
> issues we see?
>
> I guess upgrading to Pacific and sharding RocksDB would help a lot as
> well. But is there anything we can do to improve the current situation?
> Apart from throwing more OSDs at the problem ...
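Back-of-the-envelope with the numbers quoted above — ~1 GiB of OMAP
and ~2.3 M keys per PG at 512 PGs — splitting to more PGs shrinks the
per-PG OMAP footprint proportionally. A rough sketch, assuming the
OMAP data is spread evenly across PGs:

```shell
# Per-PG OMAP size and key count at different PG counts, derived from
# ~1 GiB and ~2.3 M keys per PG observed at 512 PGs.
total_omap_mib=$((512 * 1024))   # ~512 GiB of OMAP in total
total_keys_k=$((2300 * 512))     # ~2.3 M keys/PG * 512 PGs, in thousands
for pgs in 512 1024 2048; do
  echo "$pgs PGs: $((total_omap_mib / pgs)) MiB OMAP, $((total_keys_k / pgs)) K keys per PG"
done
```

So at 2048 PGs each PG would carry roughly 256 MiB of OMAP and ~575 K
keys, which should make individual PG remaps and scrubs cheaper;
whether that actually eliminates the slow ops is something only
testing can tell.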
>
> Thanks,
>
> Gr. Stefan
>
> [1]:
> https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/SDFJECHHVGVP3RTL3U5SG4NNYZOV5ALT/
>

Regards,
Ramana



[ceph-users] Re: MDS stuck in replay

2022-06-01 Thread Ramana Venkatesh Raja
On Tue, May 31, 2022 at 3:42 AM Magnus HAGDORN  wrote:
>
> Hi all,
> it seems to be the time of stuck MDSs. We also have our ceph filesystem
> degraded. The MDS is stuck in replay for about 20 hours now.
>
> We run a nautilus ceph cluster with about 300TB of data and many
> millions of files. We run two MDSs with a particularly large directory
> pinned to one of them. Both MDSs have standby MDSs.
>
>  We are in the process of migrating to a new pacific cluster and have
> been syncing files daily. Over the weekend something happened and we
> ended up with slow MDS responses and some directories became very slow
> (as we'd expect). We restarted the second MDS. It came back within a
> minute and the problem disappeared for a little while. The slow MDS
> operations came back and we restarted the other MDS. This one has been
> in replay state since yesterday.
>

Can you temporarily turn up the MDS debug log level (debug_mds) to
check what's happening to this MDS during replay?
ceph config set mds debug_mds 10

Is the health of the MDS host okay? Is it low on memory?

> The cluster is healthy.
>

Can you share the output of the `ceph status` , `ceph fs status`  and
`ceph --version`?

> So, we are wondering what it is up to. How long it might take. And is
> there something we can do to speed up the replay phase.
>
> Regards
> magnus

Regards,
Ramana



[ceph-users] Re: df shows wrong size of cephfs share when a subdirectory is mounted

2022-04-19 Thread Ramana Venkatesh Raja
On Sat, Apr 16, 2022 at 10:15 PM Ramana Venkatesh Raja  wrote:
>
> On Thu, Apr 14, 2022 at 8:07 PM Ryan Taylor  wrote:
> >
> > Hello,
> >
> >
> > I am using cephfs via Openstack Manila (Ussuri I think).
> >
> > The cephfs cluster is v14.2.22 and my client has kernel  
> > 4.18.0-348.20.1.el8_5.x86_64
> >
> >
> > I have a Manila share
> >
> > /volumes/_nogroup/55e46a89-31ff-4878-9e2a-81b4226c3cb2
> >
> >
> > that is 5000 GB in size. When I mount it the size is reported correctly:
> >
> >
> > # df -h /cephfs
> > Filesystem  
> >Size  Used Avail Use% Mounted on
> > 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789:/volumes/_nogroup/55e46a89-31ff-4878-9e2a-81b4226c3cb2
> >   4.9T  278G  4.7T   6% /cephfs
> >
> >
> > However when I mount a subpath /test1 of my share, then both the size and 
> > usage are showing the size of the whole cephfs filesystem rather than my 
> > private share.
> >
> >
> > # df -h /cephfs
> > Filesystem  
> >  Size  Used Avail Use% Mounted on
> > 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789:/volumes/_nogroup/55e46a89-31ff-4878-9e2a-81b4226c3cb2/test1
> >   4.0P  277T  3.7P   7% /cephfs
> >
>
> What are the capabilities of the ceph client user ID that you used to
> mount "/volumes/_nogroup/55e46a89-31ff-4878-9e2a-81b4226c3cb2/test1" ?
> Maybe you're hitting this limitation in
> https://docs.ceph.com/en/latest/cephfs/quota/#limitations ,
> "Quotas must be configured carefully when used with path-based mount
> restrictions. The client needs to have access to the directory inode
> on which quotas are configured in order to enforce them. If the client
> has restricted access to a specific path (e.g., /home/user) based on
> the MDS capability, and a quota is configured on an ancestor directory
> they do not have access to (e.g., /home), the client will not enforce
> it. When using path-based access restrictions be sure to configure the
> quota on the directory the client is restricted too (e.g., /home/user)
> or something nested beneath it. "
>

Hi Ryan,

I think you maybe actually hitting this
https://tracker.ceph.com/issues/55090 . Are you facing this issue with
the FUSE client?

-Ramana

> >
> > I tried setting the  ceph.quota.max_bytes  xattr on a subdirectory but it 
> > did not help.
> >
>
> You can't set quota xattr if your ceph client user ID doesn't have 'p'
> flag in its MDS capabilities,
> https://docs.ceph.com/en/latest/cephfs/client-auth/#layout-and-quota-restriction-the-p-flag
> .
>
> -Ramana
>
> > I'm not sure if the issue is in cephfs or Manila, but what would be 
> > required to get the right size and usage stats to be reported by df when a 
> > subpath of a share is mounted?
> >
> >
> > Thanks!
> >
> > -rt
> >
> >
> > Ryan Taylor
> > Research Computing Specialist
> > Research Computing Services, University Systems
> > University of Victoria
> >



[ceph-users] Re: df shows wrong size of cephfs share when a subdirectory is mounted

2022-04-16 Thread Ramana Venkatesh Raja
On Thu, Apr 14, 2022 at 8:07 PM Ryan Taylor  wrote:
>
> Hello,
>
>
> I am using cephfs via Openstack Manila (Ussuri I think).
>
> The cephfs cluster is v14.2.22 and my client has kernel  
> 4.18.0-348.20.1.el8_5.x86_64
>
>
> I have a Manila share
>
> /volumes/_nogroup/55e46a89-31ff-4878-9e2a-81b4226c3cb2
>
>
> that is 5000 GB in size. When I mount it the size is reported correctly:
>
>
> # df -h /cephfs
> Filesystem
>  Size  Used Avail Use% Mounted on
> 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789:/volumes/_nogroup/55e46a89-31ff-4878-9e2a-81b4226c3cb2
>   4.9T  278G  4.7T   6% /cephfs
>
>
> However when I mount a subpath /test1 of my share, then both the size and 
> usage are showing the size of the whole cephfs filesystem rather than my 
> private share.
>
>
> # df -h /cephfs
> Filesystem
>Size  Used Avail Use% Mounted on
> 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789:/volumes/_nogroup/55e46a89-31ff-4878-9e2a-81b4226c3cb2/test1
>   4.0P  277T  3.7P   7% /cephfs
>

What are the capabilities of the ceph client user ID that you used to
mount "/volumes/_nogroup/55e46a89-31ff-4878-9e2a-81b4226c3cb2/test1" ?
Maybe you're hitting this limitation in
https://docs.ceph.com/en/latest/cephfs/quota/#limitations ,
"Quotas must be configured carefully when used with path-based mount
restrictions. The client needs to have access to the directory inode
on which quotas are configured in order to enforce them. If the client
has restricted access to a specific path (e.g., /home/user) based on
the MDS capability, and a quota is configured on an ancestor directory
they do not have access to (e.g., /home), the client will not enforce
it. When using path-based access restrictions be sure to configure the
quota on the directory the client is restricted too (e.g., /home/user)
or something nested beneath it. "

>
> I tried setting the  ceph.quota.max_bytes  xattr on a subdirectory but it did 
> not help.
>

You can't set quota xattr if your ceph client user ID doesn't have 'p'
flag in its MDS capabilities,
https://docs.ceph.com/en/latest/cephfs/client-auth/#layout-and-quota-restriction-the-p-flag
.
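For example — a sketch in which `cephfs`, `client.ryan`, and the mount
paths are placeholders — granting the 'p' flag and then setting a quota
might look like:

```shell
# Authorize a client with rw access plus the 'p' flag, so it is
# allowed to set layout and quota attributes.
ceph fs authorize cephfs client.ryan / rwp

# On a directory of the mounted share, the quota is set via an xattr:
setfattr -n ceph.quota.max_bytes -v 5368709120 /cephfs/test1   # 5 GiB
getfattr -n ceph.quota.max_bytes /cephfs/test1
```

With the quota in place (and the client able to read the inode it is
set on), df on that subdirectory should report the quota as the size.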

-Ramana

> I'm not sure if the issue is in cephfs or Manila, but what would be required 
> to get the right size and usage stats to be reported by df when a subpath of 
> a share is mounted?
>
>
> Thanks!
>
> -rt
>
>
> Ryan Taylor
> Research Computing Specialist
> Research Computing Services, University Systems
> University of Victoria
>



[ceph-users] Re: Path to a cephfs subvolume

2022-03-22 Thread Ramana Venkatesh Raja
On Tue, Mar 22, 2022 at 11:24 AM Robert Vasek  wrote:
>
> Hello,
>
> I have a question about cephfs subvolume paths. The path to a subvol seems
> to be in the format of /volumes/<group-name>/<subvolume-name>/<uuid>, e.g.:
>
> /volumes/csi/csi-vol-59c3cb5a-a9ee-11ec-b412-0242ac110004/b2b5a0b3-e02b-4f93-a3f5-fdcef80ebbea
>
> I'm wondering about the <uuid> segment. Where is it coming from, why
> is there this indirection?

It is the directory within the subvolume where the user's subvolume
data is stored. When you fetch the mount path of the subvolume using
the `ceph fs subvolume getpath` command, you get the absolute path of a
directory of the form /volumes/<group-name>/<subvolume-name>/<uuid>.
The indirection was introduced to allow storing the subvolume's internal
metadata within /volumes/<group-name>/<subvolume-name>/, and to
support features such as removing a subvolume while retaining its
snapshots. For more details you can see,

https://github.com/ceph/ceph/blob/v16.2.7/src/pybind/mgr/volumes/fs/operations/versions/subvolume_v1.py#L33
https://github.com/ceph/ceph/blob/v16.2.7/src/pybind/mgr/volumes/fs/operations/versions/subvolume_v2.py#L22

The UUID component of a subvolume's mount path is generated here,
https://github.com/ceph/ceph/blob/v16.2.7/src/pybind/mgr/volumes/fs/operations/versions/subvolume_v2.py#L168

> I suppose this means there can be multiple of
> these UUIDs?

No. There is only one such UUID directory that stores the user's
subvolume data during the lifecycle of the subvolume.

> Nowhere in "ceph fs subvolume{,group}" can I find anything to list them
> (without actually traversing /volumes/<group-name>/<subvolume-name>/*) however.
> Or can you give me hints where to look for this in the code?

`ceph fs subvolume ls` is used to list the subvolume names within a
subvolume group, and `ceph fs subvolume getpath` or `ceph fs subvolume
info` is used to fetch the mount path of a subvolume. There is no
single command to list the mount paths of all the subvolumes within a
subvolume group. Looking at
https://github.com/ceph/ceph/blob/v16.2.7/src/pybind/mgr/volumes/fs/volume.py#L368
and 
https://github.com/ceph/ceph/blob/v16.2.7/src/pybind/mgr/volumes/fs/volume.py#L329
should provide clues on how it can be implemented.
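Short of implementing that, the two existing commands can be combined
in a small loop — a sketch, where `cephfs` and `csi` are placeholder
volume/group names and `jq` is assumed to be available:

```shell
# Print the mount path of every subvolume in a subvolume group.
for sv in $(ceph fs subvolume ls cephfs --group_name csi --format json \
            | jq -r '.[].name'); do
  echo "$sv: $(ceph fs subvolume getpath cephfs "$sv" --group_name csi)"
done
```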

>
> Thank you!
>
> Cheers,
> Robert
>

Regards,
Ramana



[ceph-users] Re: are you using nfs-ganesha builds from download.ceph.com

2022-01-12 Thread Ramana Venkatesh Raja
On Wed, Jan 12, 2022 at 10:24 AM Dan van der Ster  wrote:
>
> Dear Ceph Users,
>
> There was a question at the CLT today about the nfs-ganesha builds at:
> https://download.ceph.com/nfs-ganesha/
>
> Are people actively using those? Is there a reason you don't use the
> builds from https://download.nfs-ganesha.org/ (which links to the
> Storage SIG in the case of CentOS and has builds for SUSE, Debian, and
> Ubuntu).
>
> Asking another way -- if we stop building nfs-ganesha and distributing
> them on download.ceph.com -- what would break?
>

The OpenStack Manila CI for the CephFS NFS driver could be using
download.ceph.com for ganesha packages [1]? I see that the Ubuntu
Focal-based manila CI is using ganesha packages from a PPA [2].
Victoria, are any manila CIs using download.ceph.com ganesha packages?

download.ceph.com lets us build nfs-ganesha and its Ceph-related
packages (e.g., nfs-ganesha-ceph, nfs-ganesha-rados-urls) against the
desired version of the Ceph client packages (e.g., master,
latest-pacific, etc.). When nfs-ganesha-ceph, nfs-ganesha-rados-urls
and the CephFS NFS manila driver were under heavy development,
download.ceph.com was crucial: we fixed issues in ganesha/ganesha's
Ceph FSAL/libcephfs, built ganesha packages using download.ceph.com,
and used them in the manila CI. Currently, manila's CephFS NFS driver
is transitioning to use cephadm and the mgr/nfs module. To enable this
we need to fix a NFS-Ganesha Ceph issue,
https://github.com/nfs-ganesha/nfs-ganesha/issues/757 , and possibly
others. If the fixes end up in ganesha and the Ceph client libraries, I
expect that download.ceph.com would build ganesha packages using ceph
packages from latest quincy or latest pacific and not wait for a point
release of Ceph. Can we build such packages for
https://download.nfs-ganesha.org/ ?

Thanks,
Ramana

[1] 
https://github.com/openstack/devstack-plugin-ceph/blob/stable/xena/devstack/lib/ceph#L928
[2] 
https://zuul.opendev.org/t/openstack/build/d398da18d3164da18ca6413cdb9e9c6e/log/controller/logs/devstacklog.txt


> Thanks!
>
> Dan
>



[ceph-users] Re: NFS Ganesha 2.7 in Xenial not available

2020-07-03 Thread Ramana Venkatesh Raja
Hi Victoria and Goutham,

I triggered a jenkins build and the nfs-ganesha packages are up for now.
https://jenkins.ceph.com/job/nfs-ganesha-stable/480/

The cephfs-nfs driver manila job also passed,
https://review.opendev.org/#/c/733161/
https://zuul.opendev.org/t/openstack/build/bf189ae48a234c13abdfa6eb4bc81f3b

Maybe going forward we should switch to using nfs-ganesha bionic
packages hosted in launchpad,
https://launchpad.net/~nfs-ganesha, for devstack-plugin-ceph?

-Ramana


On Tue, Jun 30, 2020 at 2:46 AM Victoria Martinez de la Cruz
 wrote:
>
> Hey Cephers,
>
> Can someone help us out with this? Seems that it could be fixed by just
> rerunning that job Goutham pointed out. We have a bunch of changes waiting
> for this to merge.
>
> Thanks in advance,
>
> V
>
> On Fri, Jun 26, 2020 at 2:49 PM Goutham Pacha Ravi 
> wrote:
>
> > Hello!
> >
> > Thanks for bringing this issue up, Victoria.
> >
> > Ramana and David - we're using shaman to look up appropriate builds of
> > packages on chacra to test Ceph with OpenStack Cinder, Manila, Nova, and
> > Glance in the upstream OpenStack projects.
> >
> > This LRC outage hit us - we're sorted for everything except nfs-ganesha.
> > We're not looking for a "specific" build of nfs-ganesha; our query is this:
> >
> >
> > https://shaman.ceph.com/api/search/?project=nfs-ganesha-stable&distros=ubuntu/bionic&flavors=ceph_nautilus&sha1=latest
> >
> > This gives us the latest build of nfs-ganesha on chacra, and that's gone.
> >
> > I was wondering if just re-running this Jenkins job will make a new
> > package for us on chacra:
> >
> >
> > https://jenkins.ceph.com/job/nfs-ganesha/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=bionic,DIST=bionic,MACHINE_SIZE=huge/1156/
> >
> >
> > I can't re-run it myself of-course, because I don't have permissions - if
> > that theory works, could one of you help us?
> >
> >
> > Thanks,
> > Goutham Pacha Ravi
> >
> >
>


[ceph-users] Re: NFS Ganesha 2.7 in Xenial not available

2020-06-23 Thread Ramana Venkatesh Raja
On Tue, Jun 23, 2020 at 6:59 PM Victoria Martinez de la Cruz
 wrote:
>
> Hi folks,
>
> I'm hitting issues with the nfs-ganesha-stable packages [0], the repo url
> [1] is broken. Is there a known issue for this?
>

The missing packages in chacra could be due to the recent mishap in
the sepia long running cluster,
https://lists.ceph.io/hyperkitty/list/d...@ceph.io/thread/YQMAHTB7MUHL25QP7V5ZUJQSTOGY4GHX/

-Ramana

> Thanks,
>
> Victoria
>
> [0]
> https://shaman.ceph.com/repos/nfs-ganesha-stable/V2.7-stable/1a1fb71cdb811c1bac68f269dfbd5fed69c0913f/ceph_nautilus/128925/
> [1]
> https://chacra.ceph.com/r/nfs-ganesha-stable/V2.7-stable/1a1fb71cdb811c1bac68f269dfbd5fed69c0913f/ubuntu/xenial/flavors/ceph_nautilus/
>