Landry Breuil <lan...@openbsd.org> writes:

> Hi,
>
> I know we don't really have an NTFS maintainer, and that nobody should
> use NTFS, but interoperability...
>
> I have an 8 TB 'Seagate Backup Plus Hub' appearing as:
>
> uhub8 at uhub1 port 5 configuration 1 interface 0 "Seagate Backup+ Hub" rev 
> 2.10/48.85 addr 2
> umass0 at uhub8 port 1 configuration 1 interface 0 "Seagate Backup+ Hub BK" 
> rev 2.10/1.00 addr 3
> umass0: using SCSI over Bulk-Only
> scsibus4 at umass0: 2 targets, initiator 0
> sd2 at scsibus4 targ 1 lun 0: <Seagate, Backup+ Hub BK, D781> SCSI4 0/direct 
> fixed
>
> which just contains one huge NTFS partition:
>
> # /dev/rsd2c:
> type: SCSI
> disk: SCSI disk
> label: Backup+ Hub BK  
> duid: 0000000000000000
> flags:
> bytes/sector: 512
> sectors/track: 63
> tracks/cylinder: 255
> sectors/cylinder: 16065
> cylinders: 972801
> total sectors: 15628053167
> boundstart: 0
> boundend: 15628053167
> drivedata: 0 
>
> 16 partitions:
> #                size           offset  fstype [fsize bsize   cpg]
>   c:      15628053167                0  unused                    
>   i:           262144               34 unknown                    # /mnt/sd2
>   j:      15627788288           264192   MSDOS              
>
> (with -pG)
> total sectors: 15628053167 # total bytes: 7452.0G
>   c:          7452.0G                0  unused                    
>   i:             0.1G               34 unknown                    # /mnt/sd2
>   j:          7451.9G           264192   MSDOS      
>
> Trying to mount it (/dev/sd2j, of course) immediately panics the kernel
> (trace transcribed by hand):
>
> panic: out of space in kmem_map
> panic
> malloc
> ntfs_calccfree
> ntfs_mountfs
> ntfs_mount
> sys_mount
> syscall
>
> which I suppose points at
> https://github.com/openbsd/src/blob/master/sys/ntfs/ntfs_vfsops.c#L567
>
> So, what can be done to 1) avoid the panic and 2) eventually find a way
> to support those partition sizes?

I guess we should not allocate memory for the whole bitmap file, but
instead read it in chunks.
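
For scale: the $Bitmap file has one bit per cluster, so assuming the
usual 4 KB (8-sector) cluster size on a volume this large, Landry's
partition works out to roughly

    15627788288 sectors / 8 sectors per cluster ~= 1.95e9 clusters
    1.95e9 bits / 8                             ~= 233 MB of bitmap

and a single malloc(9) of ~233 MB is far more than kmem_map can serve,
hence the panic.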

I have only tested with dummy NTFS volumes on vnd devices, but this
fixes Landry's panic and lets him browse his NTFS filesystem.

Concerns:
- what would be a better chunk size than 1 MB?
- uint64_t vs off_t: add more tests

Input welcome.
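
For anyone who wants to poke at the counting logic without a kernel,
here is a rough userland equivalent of the new loop, reading the bitmap
from a plain file instead of through ntfs_readattr (the 1 MB chunk size
mirrors the patch; everything else is just a sketch):

	/* count free clusters in an NTFS $Bitmap dumped to a regular file */
	#include <stdio.h>
	#include <stdlib.h>
	#include <stdint.h>

	#define CHUNKSIZE (1024 * 1024)

	int
	main(int argc, char *argv[])
	{
		FILE *fp;
		uint8_t *tmp;
		uint64_t cfree = 0;
		size_t n, i;
		int j;

		if (argc != 2) {
			fprintf(stderr, "usage: %s bitmap-file\n", argv[0]);
			return 1;
		}
		if ((fp = fopen(argv[1], "rb")) == NULL) {
			perror("fopen");
			return 1;
		}
		if ((tmp = malloc(CHUNKSIZE)) == NULL) {
			perror("malloc");
			return 1;
		}

		/* same counting as the patch: a clear bit means a free cluster */
		while ((n = fread(tmp, 1, CHUNKSIZE, fp)) > 0)
			for (i = 0; i < n; i++)
				for (j = 0; j < 8; j++)
					if (~tmp[i] & (1 << j))
						cfree++;

		printf("%llu free clusters\n", (unsigned long long)cfree);
		free(tmp);
		fclose(fp);
		return 0;
	}

Feed it a dump of the volume's $Bitmap and compare with what the kernel
reports after the patch.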


Index: ntfs/ntfs_vfsops.c
===================================================================
RCS file: /d/cvs/src/sys/ntfs/ntfs_vfsops.c,v
retrieving revision 1.55
diff -u -p -r1.55 ntfs_vfsops.c
--- ntfs/ntfs_vfsops.c  7 Sep 2016 17:30:12 -0000       1.55
+++ ntfs/ntfs_vfsops.c  3 Mar 2017 17:52:18 -0000
@@ -558,22 +558,35 @@ ntfs_calccfree(struct ntfsmount *ntmp, c
        u_int8_t *tmp;
        int j, error;
        cn_t cfree = 0;
-       size_t bmsize, i;
+       uint64_t bmsize, offset;
+       size_t chunksize, i;
 
        vp = ntmp->ntm_sysvn[NTFS_BITMAPINO];
 
        bmsize = VTOF(vp)->f_size;
 
-       tmp = malloc(bmsize, M_TEMP, M_WAITOK);
+       if (bmsize > 1024 * 1024)
+               chunksize = 1024 * 1024;
+       else
+               chunksize = bmsize;
+
+       tmp = malloc(chunksize, M_TEMP, M_WAITOK);
+
+       for (offset = 0; offset < bmsize; offset += chunksize) {
+               if (chunksize > bmsize - offset)
+                       chunksize = bmsize - offset;
+
+               error = ntfs_readattr(ntmp, VTONT(vp), NTFS_A_DATA, NULL,
+                   offset, chunksize, tmp, NULL);
+               if (error)
+                       goto out;
+
+               for (i = 0; i < chunksize; i++)
+                       for (j = 0; j < 8; j++)
+                               if (~tmp[i] & (1 << j))
+                                       cfree++;
+       }
 
-       error = ntfs_readattr(ntmp, VTONT(vp), NTFS_A_DATA, NULL,
-                              0, bmsize, tmp, NULL);
-       if (error)
-               goto out;
-
-       for(i=0;i<bmsize;i++)
-               for(j=0;j<8;j++)
-                       if(~tmp[i] & (1 << j)) cfree++;
        *cfreep = cfree;
 
     out:


-- 
jca | PGP : 0x1524E7EE / 5135 92C1 AD36 5293 2BDF  DDCC 0DFA 74AE 1524 E7EE
