[ceph-users] Re: reef 18.2.3 QE validation status

2024-05-30 Thread Yuri Weinstein
I reran rados on the fix https://github.com/ceph/ceph/pull/57794/commits
and am seeking approvals from Radek and Laure:

https://tracker.ceph.com/issues/65393#note-1

On Tue, May 28, 2024 at 2:12 PM Yuri Weinstein  wrote:
>
> We have discovered some issues (#1 and #2) during the final stages of
> testing that require considering a delay in this point release until
> all options and risks are assessed and resolved.
>
> We will keep you all updated on the progress.
>
> Thank you for your patience!
>
> #1 https://tracker.ceph.com/issues/66260
> #2 https://tracker.ceph.com/issues/61948#note-21
>
> On Wed, May 1, 2024 at 3:41 PM Yuri Weinstein  wrote:
> >
> > We've run into a problem during the last verification steps before
> > publishing this release after upgrading the LRC to it  =>
> > https://tracker.ceph.com/issues/65733
> >
> > After this issue is resolved, we will continue testing and publishing
> > this point release.
> >
> > Thanks for your patience!
> >
> > On Thu, Apr 18, 2024 at 11:29 PM Christian Rohmann
> >  wrote:
> > >
> > > On 18.04.24 8:13 PM, Laura Flores wrote:
> > > > Thanks for bringing this to our attention. The leads have decided that
> > > > since this PR hasn't been merged to main yet and isn't approved, it
> > > > will not go in v18.2.3, but it will be prioritized for v18.2.4.
> > > > I've already added the PR to the v18.2.4 milestone so it's sure to be
> > > > picked up.
> > >
> > > Thanks a bunch. If you miss the train, you miss the train - fair enough.
> > > Nice to know there is another one going soon and that the bug fix is going
> > > to be on it!
> > >
> > >
> > > Regards
> > >
> > > Christian
> > > ___
> > > ceph-users mailing list -- ceph-users@ceph.io
> > > To unsubscribe send an email to ceph-users-le...@ceph.io
> > >
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph Reef v18.2.3 - release date?

2024-05-30 Thread Pierre Riteau
Hi Peter,

The upcoming Reef minor release is delayed due to important bugs:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/FMFUZHKNFH4Z5DWS5BAYBPENHTNJCAYS/

On Wed, 29 May 2024 at 21:03, Peter Razumovsky 
wrote:

> Hello! We're waiting for the brand new minor release 18.2.3 because of
> https://github.com/ceph/ceph/pull/56004. Why? Timing in our work is a
> tough thing. Could you kindly share an estimate of the 18.2.3 release
> timeframe? 16 days have passed since the original tag was created, so I
> want to understand when it will be released, for upgrade time planning.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: CephFS HA: mgr finish mon failed to return metadata for mds

2024-05-30 Thread Patrick Donnelly
The fix was actually backported to v18.2.3. The tracker was wrong.
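
For reference, the commands being discussed look roughly like this (a sketch;
the MDS name is a placeholder, not taken from this thread):

   # query the metadata of a specific MDS daemon
   ceph mds metadata <mds-name>
   # manually fail that MDS; in the report below this made the metadata retrievable again
   ceph mds fail <mds-name>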

On Wed, May 29, 2024 at 3:26 PM  wrote:
>
> Hi,
>
> we have a stretched cluster (Reef 18.2.1) with 5 nodes (2 nodes on each side
> + witness). You can see our daemon placement below.
>
> [admin]
> ceph-admin01 labels="['_admin', 'mon', 'mgr']"
>
> [nodes]
> [DC1]
> ceph-node01 labels="['mon', 'mgr', 'mds', 'osd']"
> ceph-node02 labels="['mon', 'rgw', 'mds', 'osd']"
> [DC2]
> ceph-node03 labels="['mon', 'mgr', 'mds', 'osd']"
> ceph-node04 labels="['mon', 'rgw', 'mds', 'osd']"
>
> We have been testing CephFS HA and noticed that when the active MDS (we have
> two active MDS daemons at all times) and the active MGR (the MGR is either on
> the admin node or in one of the DCs) are in the same DC and we shut down that
> site, the metadata of one of the MDS daemons can no longer be retrieved, which
> shows up in the logs as:
>
> "mgr finish mon failed to return metadata for mds"
>
> After we turn that site back on, the problem persists and the metadata of the
> MDS in question still can't be retrieved with "ceph mds metadata".
>
> After I manually fail the MDS daemon in question with "ceph mds fail", the
> problem is solved and I can retrieve the MDS metadata.
>
> My first question is: would this be related to the following bug
> (https://tracker.ceph.com/issues/63166)? I can see that it is shown as
> backported to 18.2.1, but I can't find it in the release notes for Reef.
>
> My second question is: should this work in the current configuration at all,
> given that the MDS and the MGR are disconnected from the rest of the cluster
> at the same moment?
>
> And finally: what would be the solution here, and is there any data loss when
> this happens?
>
> Any help is appreciated.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>


-- 
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Help needed! First MDs crashing, then MONs. How to recover ?

2024-05-30 Thread Patrick Donnelly
On Tue, May 28, 2024 at 8:54 AM Noe P.  wrote:
>
> Hi,
>
> we ran into a bigger problem today with our ceph cluster (Quincy,
> Alma8.9).
> We have 4 filesystems and a total of 6 MDs, the largest fs having
> two ranks assigned (i.e. one standby).
>
> Since we often have the problem of MDSs lagging behind, we restart
> the MDSs occasionally. That usually helps, with the standby taking over.

Please do not routinely restart MDS. Starting MDS recovery may only
multiply your problems (as it has).

> Today however, the restart didn't work and the rank 1 MDs started to
> crash for unclear reasons. Rank 0 seemed ok.

Figure out why! You might have tried increasing debugging on the mds:

ceph config set mds.X debug_mds 20
ceph config set mds.X debug_ms 1

> We decided at some point to go back to one rank by setting max_mds to 1.

Doing this will have no positive effect. I've made a tracker ticket so
that folks don't do this:

https://tracker.ceph.com/issues/66301

> Due to the constant crashing, rank 1 didn't stop, however, and at some
> point we set it to failed and marked the fs as not joinable.

The monitors will not stop rank 1 until the cluster is healthy again.
What do you mean "set it to failed"? Setting the fs as not joinable
will mean it never becomes healthy again.

Please do not flail around with administration commands without
understanding the effects.

> At this point it looked like this:
>  fs_cluster - 716 clients
>  ==========
>  RANK  STATE    MDS       ACTIVITY     DNS    INOS   DIRS   CAPS
>   0    active   cephmd6a  Reqs: 0 /s   13.1M  13.1M  1419k  79.2k
>   1    failed
>        POOL         TYPE     USED   AVAIL
>  fs_cluster_meta  metadata  1791G  54.2T
>  fs_cluster_data    data     421T  54.2T
>
> with rank1 still being listed.
>
> The next attempt was to remove that failed
>
>ceph mds rmfailed fs_cluster:1 --yes-i-really-mean-it
>
> which, after a short while, brought down three out of five MONs.
> They keep crashing shortly after a restart with stack traces like this:
>
> ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy 
> (stable)
> 1: /lib64/libpthread.so.0(+0x12cf0) [0x7ff8813adcf0]
> 2: gsignal()
> 3: abort()
> 4: /lib64/libstdc++.so.6(+0x9009b) [0x7ff8809bf09b]
> 5: /lib64/libstdc++.so.6(+0x9654c) [0x7ff8809c554c]
> 6: /lib64/libstdc++.so.6(+0x965a7) [0x7ff8809c55a7]
> 7: /lib64/libstdc++.so.6(+0x96808) [0x7ff8809c5808]
> 8: /lib64/libstdc++.so.6(+0x92045) [0x7ff8809c1045]
> 9: (MDSMonitor::maybe_resize_cluster(FSMap&, int)+0xa9e) [0x55f05d9a5e8e]
> 10: (MDSMonitor::tick()+0x18a) [0x55f05d9b18da]
> 11: (MDSMonitor::on_active()+0x2c) [0x55f05d99a17c]
> 12: (Context::complete(int)+0xd) [0x55f05d76c56d]
> 13: (void finish_contexts std::allocator > >(ceph::common::CephContext*,
> std::__cxx11::list >&, int)+0x9d) [0x55f05d799d7d]
> 14: (Paxos::finish_round()+0x74) [0x55f05d8c5c24]
> 15: (Paxos::dispatch(boost::intrusive_ptr)+0x41b) 
> [0x55f05d8c7e5b]
> 16: (Monitor::dispatch_op(boost::intrusive_ptr)+0x123e) 
> [0x55f05d76a2ae]
> 17: (Monitor::_ms_dispatch(Message*)+0x406) [0x55f05d76a976]
> 18: (Dispatcher::ms_dispatch2(boost::intrusive_ptr const&)+0x5d) 
> [0x55f05d79b3ed]
> 19: (Messenger::ms_deliver_dispatch(boost::intrusive_ptr 
> const&)+0x478) [0x7ff88367fed8]
> 20: (DispatchQueue::entry()+0x50f) [0x7ff88367d31f]
> 21: (DispatchQueue::DispatchThread::entry()+0x11) [0x7ff883747381]
> 22: /lib64/libpthread.so.0(+0x81ca) [0x7ff8813a31ca]
> 23: clone()
> NOTE: a copy of the executable, or `objdump -rdS ` is needed 
> to interpret this.
>
> The MDSMonitor::maybe_resize_cluster somehow suggests a connection to the
> above MDS operation.

Yes, you've made a mess of things. I assume you ignored this warning:

"WARNING: this can make your filesystem inaccessible! Add
--yes-i-really-mean-it if you are sure you wish to continue."

:(

> Does anyone have an idea how to get this cluster back together again? Like
> manually fixing the MDS ranks?

You will probably need to bring the file system down but you've
clearly caused the mons to hit an assert where this will be difficult.
You need to increase debugging on the mons (in their
/etc/ceph/ceph.conf):

[mon]
   debug mon = 20
   debug ms = 1

and share the logs on this list or via ceph-post-file.
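
A minimal sketch of that upload step (the log path is an assumption; adjust it
to wherever your mons actually write their logs):

   ceph-post-file -d "mon crash in MDSMonitor::maybe_resize_cluster" /var/log/ceph/ceph-mon.<host>.log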

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: MDS Abort druing FS scrub

2024-05-30 Thread Patrick Donnelly
On Fri, May 24, 2024 at 7:09 PM Malcolm Haak  wrote:
>
> When running a cephfs scrub the MDS will crash with the following backtrace
>
> -1> 2024-05-25T09:00:23.028+1000 7ef2958006c0 -1
> /usr/src/debug/ceph/ceph-18.2.2/src/mds/MDSRank.cc: In function 'void
> MDSRank::abort(std::string_view)' thread 7ef2958006c0 time
> 2024-05-25T09:00:23.031373+1000
> /usr/src/debug/ceph/ceph-18.2.2/src/mds/MDSRank.cc: 938:
> ceph_abort_msg("abort() called")
> [...]

Do you have more of the logs you can share? Possibly using ceph-post-file?

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: How to setup NVMeoF?

2024-05-30 Thread Gregory Farnum
There's a major NVMe effort underway but it's not even merged to
master yet, so I'm not sure how docs would have ended up in the Reef
doc tree. :/ Zac, any idea? Can we pull this out?
-Greg


On Thu, May 30, 2024 at 7:03 AM Robert Sander
 wrote:
>
> Hi,
>
> On 5/30/24 14:18, Frédéric Nass wrote:
>
> > ceph config set mgr mgr/cephadm/container_image_nvmeof 
> > "quay.io/ceph/nvmeof:1.2.13"
>
> Thanks for the hint. With that the orchestrator deploys the current container 
> image.
>
> But: It suddenly listens on port 5499 instead of 5500 and:
>
> # podman run -it quay.io/ceph/nvmeof-cli:latest --server-address 10.128.8.29 
> --server-port 5500 subsystem add --subsystem nqn.2016-06.io.spdk:cephtest29
> Failure adding subsystem nqn.2016-06.io.spdk:cephtest29:
> <_InactiveRpcError of RPC that terminated with:
> status = StatusCode.UNAVAILABLE
> details = "failed to connect to all addresses; last error: UNKNOWN: 
> ipv4:10.128.8.29:5500: Failed to connect to remote host: Connection refused"
> debug_error_string = "UNKNOWN:failed to connect to all addresses; 
> last error: UNKNOWN: ipv4:10.128.8.29:5500: Failed to connect to remote host: 
> Connection refused {grpc_status:14, 
> created_time:"2024-05-30T13:59:33.24226686+00:00"}"
>
> # podman run -it quay.io/ceph/nvmeof-cli:latest --server-address 10.128.8.29 
> --server-port 5499 subsystem add --subsystem nqn.2016-06.io.spdk:cephtest29
> Failure adding subsystem nqn.2016-06.io.spdk:cephtest29:
> <_InactiveRpcError of RPC that terminated with:
> status = StatusCode.UNIMPLEMENTED
> details = "Method not found!"
> debug_error_string = "UNKNOWN:Error received from peer 
> ipv4:10.128.8.29:5499 {created_time:"2024-05-30T13:59:49.678809906+00:00", 
> grpc_status:12, grpc_message:"Method not found!"}"
>
>
> Is this not production ready?
> Why is it in the documentation for a released Ceph version?
>
> Regards
> --
> Robert Sander
> Heinlein Consulting GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> https://www.heinlein-support.de
>
> Tel: 030 / 405051-43
> Fax: 030 / 405051-19
>
> Amtsgericht Berlin-Charlottenburg - HRB 220009 B
> Geschäftsführer: Peer Heinlein - Sitz: Berlin
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] RBD Mirror - Failed to unlink peer

2024-05-30 Thread Scott Cairns
Hi,

Following the introduction of an additional node to our Ceph cluster, we've
started to see unlink errors when taking an rbd mirror snapshot.

We've had RBD mirroring configured for over a year now and it's been working
flawlessly; however, after we created OSDs on a new node we've been receiving
the following error:

librbd::mirror::snapshot::CreatePrimaryRequest: 0x7f60c80056f0 
handle_unlink_peer: failed to unlink peer: (2) No such file or directory

This seemed to appear on around 3 of 150 snapshots on the first night and over 
the weeks has progressed to almost every snapshot.

What's odd, is that the snapshot appears to be taken without any issues and 
does mirror to the DR site - we can see the snapshot ID taken on the source 
side is mirrored to the destination side when checking the rbd snap ls, and 
we've tested promoting an image on the DR site to ensure the snapshot does 
include up to date data, which it does.
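
(For reference, the state on both sides can be inspected with something like
the following; the pool and image names are placeholders, not taken from this
setup.)

   rbd mirror image status <pool>/<image>
   rbd snap ls --all <pool>/<image>
   rbd mirror pool status --verbose <pool>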

I can't see any other errors generated when the snapshot is taken that would
identify which file/directory isn't found - everything appears to be working
okay; it's just generating an error during the snapshot.


I've also tried disabling mirroring on the disk and re-enabling it, however it
doesn't appear to make any difference - there's no error on the initial mirror
image, or on the first snapshot taken after that, but every subsequent snapshot
shows the error again.

Any ideas?

Thanks,
Scott



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: How to setup NVMeoF?

2024-05-30 Thread Robert Sander

Hi,

On 5/30/24 14:18, Frédéric Nass wrote:


ceph config set mgr mgr/cephadm/container_image_nvmeof 
"quay.io/ceph/nvmeof:1.2.13"


Thanks for the hint. With that the orchestrator deploys the current container 
image.

But: It suddenly listens on port 5499 instead of 5500 and:

# podman run -it quay.io/ceph/nvmeof-cli:latest --server-address 10.128.8.29 
--server-port 5500 subsystem add --subsystem nqn.2016-06.io.spdk:cephtest29
Failure adding subsystem nqn.2016-06.io.spdk:cephtest29:
<_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNKNOWN: 
ipv4:10.128.8.29:5500: Failed to connect to remote host: Connection refused"
debug_error_string = "UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: 
ipv4:10.128.8.29:5500: Failed to connect to remote host: Connection refused {grpc_status:14, 
created_time:"2024-05-30T13:59:33.24226686+00:00"}"

# podman run -it quay.io/ceph/nvmeof-cli:latest --server-address 10.128.8.29 
--server-port 5499 subsystem add --subsystem nqn.2016-06.io.spdk:cephtest29
Failure adding subsystem nqn.2016-06.io.spdk:cephtest29:
<_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNIMPLEMENTED
details = "Method not found!"
debug_error_string = "UNKNOWN:Error received from peer ipv4:10.128.8.29:5499 
{created_time:"2024-05-30T13:59:49.678809906+00:00", grpc_status:12, grpc_message:"Method not 
found!"}"


Is this not production ready?
Why is it in the documentation for a released Ceph version?

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: How to setup NVMeoF?

2024-05-30 Thread Dino Yancey
I've never used this feature, but I wanted to point out your command versus
the error message; gateway-name / gateway_name (dash versus underscore)

On Thu, May 30, 2024 at 5:07 AM Robert Sander 
wrote:

> Hi,
>
> I am trying to follow the documentation at
> https://docs.ceph.com/en/reef/rbd/nvmeof-target-configure/ to deploy an
> NVMe over Fabric service.
>
> Step 2b of the configuration section is currently the showstopper.
>
> First the command says:
>
> error: the following arguments are required: --host-name/-t
>
> Then it tells me (after adding --host-name):
>
> error: unrecognized arguments: --gateway-name XXX
>
> and when I remove --gateway-name the error is:
>
> both gateway_name and traddr or neither must be specified
>
> So I am stuck in a kind of a loop here.
>
> Is there a working description for NVMe over TCP available?
>
> Regards
> --
> Robert Sander
> Heinlein Consulting GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> https://www.heinlein-support.de
>
> Tel: 030 / 405051-43
> Fax: 030 / 405051-19
>
> Amtsgericht Berlin-Charlottenburg - HRB 220009 B
> Geschäftsführer: Peer Heinlein - Sitz: Berlin
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>


-- 
__
Dino Yancey
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] RBD images are not shown in the Dashboard: Failed to execute RBD [errno 19] error generating diff from snapshot None

2024-05-30 Thread Maximilian Dauer
Dear Community,

I hope you can guide me to solve this error, or I can assist in solving a bug: 
RBD images are not shown in my Dashboard.

- When accessing the dashboard page (Block -> Images), no images are listed and 
the error "Failed to execute RBD [errno 19] error generating diff from snapshot 
None" is shown.
- When asking for the list via the CLI with "rbd ls", the test image is shown.
- There are no snapshots listed when asking with "rbd snap ls --all test".
- Back to the UI: I can create and use a new image via the UI; it will be shown 
via the CLI, but not in the UI.

The cluster manages several RBD, RGW and CephFS pools successfully with ~13 TiB 
used capacity out of ~18 TiB.

**Environment details**
- Ceph-Cluster with 7 Debian 12 nodes and 3 Ubuntu 22.04 nodes at latest patch 
level.  
- Cluster-Version: 18.2.2 (531c0d11a1c5d39fbfe6aa8a521f023abf3bf3e2) reef 
(stable).  
- Error persists since Quincy.  
- Orchestrator: cephadm.  
- Dashboard is accessed via haproxy, but it's the same behavior when accessing 
it directly via ip.  
- All nodes are updated and rebooted.  
- Dashboard was removed and reactivated.  

**Steps to reproduce**
I need your help to find the root cause.

**Logs**  
Health status is OK. I can't find useful logs except the attached error message 
shown in the Dashboard ("Failed to execute RBD [errno 19] error generating diff 
from snapshot None").

Do you have an idea where to search for further useful logs?
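
A possible starting point (a sketch; the daemon name is a placeholder): the
dashboard runs inside the active mgr, so its traceback usually ends up in that
daemon's log, and the dashboard's own debug mode can be switched on:

   # show full tracebacks in the dashboard and raise its log verbosity
   ceph dashboard debug enable
   # then check the active mgr's log, e.g. in a cephadm deployment:
   cephadm logs --name mgr.<host>.<id>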

Last but not least: Thanks for this great piece of software!

All the best
Max
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: How to setup NVMeoF?

2024-05-30 Thread John Mulligan
On Thursday, May 30, 2024 7:03:44 AM EDT Robert Sander wrote:
> Hi,
> 
> On 5/30/24 11:58, Robert Sander wrote:
> 
> 
> > I am trying to follow the documentation at 
> > https://docs.ceph.com/en/reef/rbd/nvmeof-target-configure/ to deploy an 
> > NVMe over Fabric service.
> 
> 
> It looks like the cephadm orchestrator in this 18.2.2 cluster uses the image
> quay.io/ceph/nvmeof:0.0.2 which is 9 months old.
>
> When I try to redeploy the daemon with the latest image
> ceph orch daemon redeploy nvmeof.nvme01.cephtest29.gookea --image
> quay.io/ceph/nvmeof:latest
> it tells me:
>
> Error EINVAL: Cannot redeploy nvmeof.nvme01.cephtest29.gookea with a new
> image: Supported types are: mgr, mon, crash, osd, mds, rgw, rbd-mirror,
> cephfs-mirror, ceph-exporter, iscsi, nfs
>
> How do I set the container image for this service?
>
> ceph config set nvmeof container_image quay.io/ceph/nvmeof:latest

I haven't tested this but try this instead:

ceph config set mgr mgr/cephadm/container_image_nvmeof quay.io/ceph/nvmeof:latest

Generally it is the cephadm component that controls what images get deployed.
The pattern has been to name a config key within the cephadm mgr module
"container_image_<service-type>".



> 
> does not work with Error EINVAL: unrecognized config target 'nvmeof'
> 
> Regards
> -- 
> Robert Sander
> Heinlein Consulting GmbH
> Schwedter Str. 8/9b, 10119 Berlin
> 
> https://www.heinlein-support.de
> 
> Tel: 030 / 405051-43
> Fax: 030 / 405051-19
> 
> Amtsgericht Berlin-Charlottenburg - HRB 220009 B
> Geschäftsführer: Peer Heinlein - Sitz: Berlin
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: How to setup NVMeoF?

2024-05-30 Thread Frédéric Nass
Hello Robert,

You could try:

ceph config set mgr mgr/cephadm/container_image_nvmeof "quay.io/ceph/nvmeof:1.2.13"

or whatever image tag you need (1.2.13 is the current latest).
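
To double-check what cephadm will use afterwards, something like this should work:

ceph config get mgr mgr/cephadm/container_image_nvmeof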

Another way to run the image is by editing the unit.run file of the service or
by directly running the container with podman run (you'll need to adjust names,
the cluster fsid, etc.):

/usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM 
--authfile=/etc/ceph/podman-auth.json --net=host --init --name 
ceph-aa558815-042c-4fce-ac37-80c0255bf3c0-nvmeof-nvmeof_pool01-test-lis04h02-baakhx
 --pids-limit=-1 --ulimit memlock=-1:-1 --ulimit nofile=10240 
--cap-add=SYS_ADMIN --cap-add=CAP_SYS_NICE --log-driver journald 
--conmon-pidfile 
/run/ceph-aa558815-042c-4fce-ac37-80c0255bf3c0@nvmeof.nvmeof_pool01.test-lis04h02.baakhx.service-pid
 --cidfile 
/run/ceph-aa558815-042c-4fce-ac37-80c0255bf3c0@nvmeof.nvmeof_pool01.test-lis04h02.baakhx.service-cid
 --cgroups=split -e CONTAINER_IMAGE=quay.io/ceph/nvmeof:1.2.13 -e 
NODE_NAME=test-lis04h02.peta.libe.dc.univ-lorraine.fr -e 
CEPH_USE_RANDOM_NONCE=1 -v 
/var/lib/ceph/aa558815-042c-4fce-ac37-80c0255bf3c0/nvmeof.nvmeof_pool01.test-lis04h02.baakhx/config:/etc/ceph/ceph.conf:z
 -v 
/var/lib/ceph/aa558815-042c-4fce-ac37-80c0255bf3c0/nvmeof.nvmeof_pool01.test-lis04h02.baakhx/keyring:/etc/ceph/keyring:z
 -v 
/var/lib/ceph/aa558815-042c-4fce-ac37-80c0255bf3c0/nvmeof.nvmeof_pool01.test-lis04h02.baakhx/ceph-nvmeof.conf:/src/ceph-nvmeof.conf:z
 -v 
/var/lib/ceph/aa558815-042c-4fce-ac37-80c0255bf3c0/nvmeof.nvmeof_pool01.test-lis04h02.baakhx/configfs:/sys/kernel/config
 -v /dev/hugepages:/dev/hugepages -v /dev/vfio/vfio:/dev/vfio/vfio -v 
/etc/hosts:/etc/hosts:ro --mount 
type=bind,source=/lib/modules,destination=/lib/modules,ro=true 
quay.io/ceph/nvmeof:1.2.13

The commands I wrote here [1] in February should still work I believe.

Regards,
Frédéric.

[1] https://github.com/ceph/ceph-nvmeof/issues/459

- On 30 May 2024, at 13:03, Robert Sander r.san...@heinlein-support.de wrote:

> Hi,
> 
> On 5/30/24 11:58, Robert Sander wrote:
> 
>> I am trying to follow the documentation at
>> https://docs.ceph.com/en/reef/rbd/nvmeof-target-configure/ to deploy an
>> NVMe over Fabric service.
> 
> It looks like the cephadm orchestrator in this 18.2.2 cluster uses the image
> quay.io/ceph/nvmeof:0.0.2 which is 9 months old.
> 
> When I try to redeploy the daemon with the latest image
> ceph orch daemon redeploy nvmeof.nvme01.cephtest29.gookea --image
> quay.io/ceph/nvmeof:latest
> it tells me:
> 
> Error EINVAL: Cannot redeploy nvmeof.nvme01.cephtest29.gookea with a new 
> image:
> Supported types are: mgr, mon, crash, osd, mds, rgw, rbd-mirror, 
> cephfs-mirror,
> ceph-exporter, iscsi, nfs
> 
> How do I set the container image for this service?
> 
> ceph config set nvmeof container_image quay.io/ceph/nvmeof:latest
> 
> does not work with Error EINVAL: unrecognized config target 'nvmeof'
> 
> Regards
> --
> Robert Sander
> Heinlein Consulting GmbH
> Schwedter Str. 8/9b, 10119 Berlin
> 
> https://www.heinlein-support.de
> 
> Tel: 030 / 405051-43
> Fax: 030 / 405051-19
> 
> Amtsgericht Berlin-Charlottenburg - HRB 220009 B
> Geschäftsführer: Peer Heinlein - Sitz: Berlin
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Missing ceph data

2024-05-30 Thread Eugen Block

Hi,

I've never heard of automatic data deletion. Maybe just some snapshots  
were removed? Or someone deleted data on purpose because of the  
nearfull state of some OSDs? And there's no trash function for cephfs  
(for rbd there is). Do you use cephfs snapshots?
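
If snapshots are in play, they can be spotted from a client mount (a sketch;
the mount point and directory are placeholders):

   ls /mnt/cephfs/.snap
   ls /mnt/cephfs/<affected-directory>/.snap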



Quoting Prabu GJ:


Hi Team,

We are using the Ceph Octopus version with a total disk size of 136 TB,
configured with two replicas. Currently, our usage is 57 TB, and the
available size is 5.3 TB. An incident occurred yesterday where around 3 TB
of data was deleted automatically. Upon analysis, we couldn't find the
reason for the deletion. All OSDs are functioning properly and actively
running.

We have 3 MDS daemons; we tried restarting all MDS services. Is there any
solution to recover that data? Can anyone please help us find the issue?


cluster:
    id:     0d605d58-5caf-4f76-b6bd-e12402a22296
    health: HEALTH_WARN
            insufficient standby MDS daemons available
            5 nearfull osd(s)
            3 pool(s) nearfull
            1 pool(s) have non-power-of-two pg_num

  services:
    mon: 4 daemons, quorum download-mon3,download-mon4,download-mon1,download-mon2 (age 14h)
    mgr: download-mon2(active, since 14h), standbys: download-mon1, download-mon3
    mds: integdownload:2 {0=download-mds3=up:active,1=download-mds1=up:active}
    osd: 39 osds: 39 up (since 16h), 39 in (since 4d)

  data:
    pools:   3 pools, 1087 pgs
    objects: 71.76M objects, 51 TiB
    usage:   105 TiB used, 31 TiB / 136 TiB avail
    pgs:     1087 active+clean

  io:
    client:   414 MiB/s rd, 219 MiB/s wr, 513 op/s rd, 1.22k op/s wr

ID  HOST            USED   AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  download-osd1   2995G   581G      14    4785k       6    6626k  exists,up
 1  download-osd2   2578G   998G      84    3644k      18    10.1M  exists,up
 2  download-osd3   3093G   483G      17    5114k       5    4152k  exists,nearfull,up
 3  download-osd4   2757G   819G      12     996k       2    4107k  exists,up
 4  download-osd5   2889G   687G      28    3355k      20    8660k  exists,up
 5  download-osd6   2448G  1128G     183    3312k      10    9435k  exists,up
 6  download-osd7   2814G   762G       7    1667k       4    6354k  exists,up
 7  download-osd8   2872G   703G      14    1672k      15    10.5M  exists,up
 8  download-osd9   2577G   999G      10    6615k       3    6960k  exists,up
 9  download-osd10  2651G   924G      16    4736k       3    7378k  exists,up
10  download-osd11  2889G   687G      15    4810k       6    8980k  exists,up
11  download-osd12  2912G   664G      11    2516k       2    4106k  exists,up
12  download-osd13  2785G   791G      74    4643k      11    3717k  exists,up
13  download-osd14  3150G   426G     214    6133k       4    7389k  exists,nearfull,up
14  download-osd15  2728G   848G      11    4959k       4    6603k  exists,up
15  download-osd16  2682G   894G      13    3170k       3    2503k  exists,up
16  download-osd17  2555G  1021G      53    2183k       7    5058k  exists,up
17  download-osd18  3013G   563G      18    3497k       3    4427k  exists,up
18  download-osd19  2924G   651G      24    3534k      12    10.4M  exists,up
19  download-osd20  3003G   573G      19    5149k       3    2531k  exists,up
20  download-osd21  2757G   819G      16    3707k       9    9816k  exists,up
21  download-osd22  2576G   999G      15    2526k       8    7739k  exists,up
22  download-osd23  2758G   818G      13    4412k      16    7125k  exists,up
23  download-osd24  2862G   714G      18    4424k       6    5787k  exists,up
24  download-osd25  2792G   783G      16    1972k       9    9749k  exists,up
25  download-osd26  2397G  1179G      14    4296k       9    12.0M  exists,up
26  download-osd27  2308G  1267G       8    3149k      22    6280k  exists,up
27  download-osd29  2732G   844G      12    3357k       3    7372k  exists,up
28  download-osd28  2814G   761G      11     476k       5    3316k  exists,up
29  download-osd30  3069G   507G      15    9043k      17    5628k  exists,nearfull,up
30  download-osd31  2660G   916G      15     841k      14    7798k  exists,up
31  download-osd32  2037G  1539G      10    1153k      15    3719k  exists,up
32  download-osd33  3116G   460G      20    7704k      12    9041k  exists,nearfull,up
33  download-osd34  2847G   728G      19    5788k       4    9014k  exists,up
34  download-osd35  3088G   488G      17    7178k       7

[ceph-users] Re: How to setup NVMeoF?

2024-05-30 Thread Robert Sander

Hi,

On 5/30/24 11:58, Robert Sander wrote:

I am trying to follow the documentation at 
https://docs.ceph.com/en/reef/rbd/nvmeof-target-configure/ to deploy an 
NVMe over Fabric service.


It looks like the cephadm orchestrator in this 18.2.2 cluster uses the image 
quay.io/ceph/nvmeof:0.0.2 which is 9 months old.

When I try to redeploy the daemon with the latest image
ceph orch daemon redeploy nvmeof.nvme01.cephtest29.gookea --image 
quay.io/ceph/nvmeof:latest
it tells me:

Error EINVAL: Cannot redeploy nvmeof.nvme01.cephtest29.gookea with a new image: 
Supported types are: mgr, mon, crash, osd, mds, rgw, rbd-mirror, cephfs-mirror, 
ceph-exporter, iscsi, nfs

How do I set the container image for this service?

ceph config set nvmeof container_image quay.io/ceph/nvmeof:latest

does not work with Error EINVAL: unrecognized config target 'nvmeof'

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] How to setup NVMeoF?

2024-05-30 Thread Robert Sander

Hi,

I am trying to follow the documentation at 
https://docs.ceph.com/en/reef/rbd/nvmeof-target-configure/ to deploy an 
NVMe over Fabric service.


Step 2b of the configuration section is currently the showstopper.

First the command says:

error: the following arguments are required: --host-name/-t

Then it tells me (after adding --host-name):

error: unrecognized arguments: --gateway-name XXX

and when I remove --gateway-name the error is:

both gateway_name and traddr or neither must be specified

So I am stuck in a kind of a loop here.

Is there a working description for NVMe over TCP available?

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: How to recover from an MDs rank in state 'failed'

2024-05-30 Thread Dhairya Parmar
Hi Noe,

If the MDS has failed and you're sure that there are no pending tasks or
sessions associated with the failed MDS, you can try to make use of
`ceph mds rmfailed`. But beware: make sure this MDS is really doing nothing
and doesn't link to any file system, otherwise things can go wrong and can
lead to an inaccessible file system. More info regarding the command can be
found at [0] and [1].

[0] https://docs.ceph.com/en/quincy/man/8/ceph/
[1] https://docs.ceph.com/en/latest/cephfs/administration/#advanced
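
For completeness, the syntax in question (as used elsewhere in this digest) is
roughly:

   ceph mds rmfailed <fs_name>:<rank> --yes-i-really-mean-it

It refuses to run without --yes-i-really-mean-it, for the reasons above.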
--
*Dhairya Parmar*

Associate Software Engineer, CephFS

IBM, Inc.

On Wed, May 29, 2024 at 4:24 PM Noe P.  wrote:

> Hi,
>
> after our disaster yesterday, it seems that we got our MONs back.
> One of the filesystems, however, seems in a strange state:
>
>   % ceph fs status
>
>   
>   fs_cluster - 782 clients
>   ==========
>   RANK  STATE    MDS       ACTIVITY     DNS    INOS   DIRS   CAPS
>    0    active   cephmd6a  Reqs: 5 /s   13.2M  13.2M  1425k  51.4k
>    1    failed
>         POOL         TYPE     USED   AVAIL
>   fs_cluster_meta  metadata  3594G  53.5T
>   fs_cluster_data    data     421T  53.5T
>
>   STANDBY MDS
>     cephmd6b
>     cephmd4b
>   MDS version: ceph version 17.2.7
> (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
>
>
>   % ceph fs dump
>   
>   Filesystem 'fs_cluster' (3)
>   fs_name fs_cluster
>   epoch   3068261
>   flags   12 joinable allow_snaps allow_multimds_snaps
>   created 2022-08-26T15:55:07.186477+0200
>   modified2024-05-29T12:43:30.606431+0200
>   tableserver 0
>   root0
>   session_timeout 60
>   session_autoclose   300
>   max_file_size   4398046511104
>   required_client_features{}
>   last_failure0
>   last_failure_osd_epoch  1777109
>   compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds
> uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline
> data,8=no anchor table,9=file layout v2,10=snaprealm v2}
>   max_mds 2
>   in  0,1
>   up  {0=911794623}
>   failed
>   damaged
>   stopped 2,3
>   data_pools  [32]
>   metadata_pool   33
>   inline_data disabled
>   balancer
>   standby_count_wanted1
>   [mds.cephmd6a{0:911794623} state up:active seq 44701 addr [v2:
> 10.13.5.6:6800/189084355,v1:10.13.5.6:6801/189084355] compat
> {c=[1],r=[1],i=[7ff]}]
>
>
> We would like to get rid of the failed rank 1 (without crashing the MONs)
> and have a second MDS from the standbys step in.
>
> Anyone have an idea how to do this ?
> I'm a bit reluctant to try 'ceph mds rmfailed', as this seems to have
> triggered the MONs to crash.
>
> Regards,
>   Noe
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Rebalance OSDs after adding disks?

2024-05-30 Thread Robert Sander

On 5/30/24 08:53, tpDev Tester wrote:

Can someone please point me to the docs on how I can expand the capacity of 
the pool without such problems?


Please show the output of

ceph status

ceph df

ceph osd df tree

ceph osd crush rule dump

ceph osd pool ls detail
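
If those outputs show that the balancer module is simply off, the usual knobs
look roughly like this (a sketch, not specific advice for this cluster; upmap
mode assumes sufficiently recent clients):

ceph balancer status
ceph balancer mode upmap
ceph balancer on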


Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Rebalance OSDs after adding disks?

2024-05-30 Thread tpDev Tester

Hi,


I have been curious about Ceph for a long time and now I have started to 
experiment to find out how it works. The idea I like most is that Ceph can 
provide growing storage without the need to move from storage x to storage y 
on the consumer side.


I started with a 3-node cluster where each node got one OSD (2x 500GB, 
1x 1TB). I created the pool and the filesystem, mounted the filesystem 
and filled it with data. All three OSDs filled up evenly, and when the 
pool reached 70%, I added two 1TB OSDs (one on each of the nodes with a 
500GB OSD). Now I expected some activity to rebalance the fill level of 
the OSDs, but nothing special happened. The new OSDs got some data, but 
the 500GB OSDs ran into 95%, the pool reached 100%, and everything got 
stuck with two OSDs filled up to just 20% and 80% free space left.


I searched the documentation for how to initiate the rebalancing/redistribution 
by hand but was unable to find anything.


Can someone please point me to the docs on how I can expand the capacity of 
the pool without such problems?



Thanks in advance

Thomas
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: We are using ceph octopus environment. For client can we use ceph quincy?

2024-05-30 Thread Robert Sander

On 5/27/24 09:28, s.dhivagar@gmail.com wrote:

We are using ceph octopus environment. For client can we use ceph quincy?


Yes.
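
If you want to double-check what the cluster and its connected clients actually
report, these are handy (a sketch, nothing specific to this cluster):

ceph versions
ceph features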

--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] MDs stuck in rejoin with '[ERR] : loaded dup inode'

2024-05-30 Thread Noe P.


Hi,

I'm still unable to get our filesystem back.
I now have this:

fs_cluster - 0 clients
==========
RANK  STATE    MDS       ACTIVITY   DNS    INOS   DIRS   CAPS
 0    rejoin   cephmd4b            90.0k  89.4k  14.7k      0
 1    rejoin   cephmd6b             105k   105k  21.3k      0
 2    failed
      POOL         TYPE     USED   AVAIL
fs_cluster_meta  metadata   288G  55.2T
fs_cluster_data    data     421T  55.2T


Still cannot get rid of the 3rd failed rank. But the other two currently
stay in state rejoin forever. After all clients were stopped, the log
complains about a 'dup inode':

  2024-05-30T07:59:46.252+0200 7f2fe9146700 -1 log_channel(cluster) log [ERR] :
  loaded dup inode 0x1001710ea1d [12bc6a,head] v1432525092 at
  /homes/YYY/ZZZ/.bash_history-21032.tmp, but inode 0x1001710ea1d.head
  v1432525109 already exists at /homes/YYY/ZZZ/.bash_history

Questions:
 - Is there a way to scan/repair the metadata without any MD in 'active' state ?

 - Is there a way to remove (or otherwise fix) the inode in question given the
   above inode number ?

 - Is the state 'rejoin' due to the inode error or because of that 3rd rank ?


Regards,
  N.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Missing ceph data

2024-05-30 Thread Prabu GJ
Hi Team,


We are using the Ceph Octopus version with a total disk size of 136 TB,
configured with two replicas. Currently, our usage is 57 TB, and the available
size is 5.3 TB. An incident occurred yesterday where around 3 TB of data was
deleted automatically. Upon analysis, we couldn't find the reason for the
deletion. All OSDs are functioning properly and actively running.

We have 3 MDS daemons; we tried restarting all MDS services. Is there any
solution to recover that data? Can anyone please help us find the issue?





cluster:
    id:     0d605d58-5caf-4f76-b6bd-e12402a22296
    health: HEALTH_WARN
            insufficient standby MDS daemons available
            5 nearfull osd(s)
            3 pool(s) nearfull
            1 pool(s) have non-power-of-two pg_num

  services:
    mon: 4 daemons, quorum download-mon3,download-mon4,download-mon1,download-mon2 (age 14h)
    mgr: download-mon2(active, since 14h), standbys: download-mon1, download-mon3
    mds: integdownload:2 {0=download-mds3=up:active,1=download-mds1=up:active}
    osd: 39 osds: 39 up (since 16h), 39 in (since 4d)

  data:
    pools:   3 pools, 1087 pgs
    objects: 71.76M objects, 51 TiB
    usage:   105 TiB used, 31 TiB / 136 TiB avail
    pgs:     1087 active+clean

  io:
    client:   414 MiB/s rd, 219 MiB/s wr, 513 op/s rd, 1.22k op/s wr

ID  HOST            USED   AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  download-osd1   2995G   581G      14    4785k       6    6626k  exists,up
 1  download-osd2   2578G   998G      84    3644k      18    10.1M  exists,up
 2  download-osd3   3093G   483G      17    5114k       5    4152k  exists,nearfull,up
 3  download-osd4   2757G   819G      12     996k       2    4107k  exists,up
 4  download-osd5   2889G   687G      28    3355k      20    8660k  exists,up
 5  download-osd6   2448G  1128G     183    3312k      10    9435k  exists,up
 6  download-osd7   2814G   762G       7    1667k       4    6354k  exists,up
 7  download-osd8   2872G   703G      14    1672k      15    10.5M  exists,up
 8  download-osd9   2577G   999G      10    6615k       3    6960k  exists,up
 9  download-osd10  2651G   924G      16    4736k       3    7378k  exists,up
10  download-osd11  2889G   687G      15    4810k       6    8980k  exists,up
11  download-osd12  2912G   664G      11    2516k       2    4106k  exists,up
12  download-osd13  2785G   791G      74    4643k      11    3717k  exists,up
13  download-osd14  3150G   426G     214    6133k       4    7389k  exists,nearfull,up
14  download-osd15  2728G   848G      11    4959k       4    6603k  exists,up
15  download-osd16  2682G   894G      13    3170k       3    2503k  exists,up
16  download-osd17  2555G  1021G      53    2183k       7    5058k  exists,up
17  download-osd18  3013G   563G      18    3497k       3    4427k  exists,up
18  download-osd19  2924G   651G      24    3534k      12    10.4M  exists,up
19  download-osd20  3003G   573G      19    5149k       3    2531k  exists,up
20  download-osd21  2757G   819G      16    3707k       9    9816k  exists,up
21  download-osd22  2576G   999G      15    2526k       8    7739k  exists,up
22  download-osd23  2758G   818G      13    4412k      16    7125k  exists,up
23  download-osd24  2862G   714G      18    4424k       6    5787k  exists,up
24  download-osd25  2792G   783G      16    1972k       9    9749k  exists,up
25  download-osd26  2397G  1179G      14    4296k       9    12.0M  exists,up
26  download-osd27  2308G  1267G       8    3149k      22    6280k  exists,up
27  download-osd29  2732G   844G      12    3357k       3    7372k  exists,up
28  download-osd28  2814G   761G      11     476k       5    3316k  exists,up
29  download-osd30  3069G   507G      15    9043k      17    5628k  exists,nearfull,up
30  download-osd31  2660G   916G      15     841k      14    7798k  exists,up
31  download-osd32  2037G  1539G      10    1153k      15    3719k  exists,up
32  download-osd33  3116G   460G      20    7704k      12    9041k  exists,nearfull,up
33  download-osd34  2847G   728G      19    5788k       4    9014k  exists,up
34  download-osd35  3088G   488G      17    7178k       7    5730k  exists,nearfull,up
35  download-osd36  2414G  1161G      27    2017k      14    7612k  exists,up
36  download-osd37  2760G   815G      17    4292k       5    10.6M  exists,up
37  download-osd38  2679G   897G      12    2610k       5    10.0M  exists,up
38  download-osd39  3013G   563G      18    1804k       7    9235k  exists,up