On Thu, 2022-11-10 at 15:19 +0100, DdB wrote:
> Am 10.11.2022 um 14:28 schrieb DdB:
> > Take some time to
> > play with an installation (in a vm or just with a file based pool should
> > be considered).
> 
> an example to show that it is possible to allocate huge files (bigger
> than a single disk's size) from a pool:
> 
> > datakanja@PBuster-NFox:~$ mkdir disks
> > datakanja@PBuster-NFox:~$ cd disks/
> > datakanja@PBuster-NFox:~/disks$ seq -w 0 15 | xargs -i truncate -s 4T disk{}.bin
> > # this creates 16 sparse files (disk00.bin .. disk15.bin) to act as virtual disks
> > datakanja@PBuster-NFox:~/disks$ zpool create TEST raidz3 ~/disks/d*
> > datakanja@PBuster-NFox:~/disks$ zpool list
> > NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
> > TEST  64.0T   314K  64.0T        -         -     0%     0%  1.00x    ONLINE  -
> # 16 * 4 TB = 64 TB raw pool size
> > datakanja@PBuster-NFox:~/disks$ zfs list TEST
> > NAME   USED  AVAIL     REFER  MOUNTPOINT
> > TEST   254K  50.1T     64.7K  /TEST
> # due to redundancy in the pool (raidz3 stores three disks' worth of
> # parity, leaving 13 * 4 TB = 52 TB for data, minus metadata and slop
> # space overhead), the maximum size of a single file is slightly over 50 TB
> 
> # do not forget to clean up (destroy the pool and delete the files)
> 
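To follow up on that cleanup note, here is roughly how I'd verify the
files really are sparse and then tear everything down (untested sketch
using the paths from your example; zpool destroy may need root):

  # logical size vs. what is actually allocated on the host filesystem
  du -h --apparent-size ~/disks/disk00.bin   # ~4.0T (logical size)
  du -h ~/disks/disk00.bin                   # only a few K actually on disk

  # destroy the pool first, then remove the backing files
  sudo zpool destroy TEST
  rm ~/disks/disk*.bin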

Ok, but once the space actually gets allocated, things blow up.  Or what happens
when you use partitions as vdevs and allocate more space than the partitions
actually have?
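A rough way to see what happens (untested sketch; the file name
/TEST/bigfile is made up, and if compression is enabled on the dataset
you'd want /dev/urandom instead of /dev/zero so the writes really
allocate space):

  # fill the pool with data until the host filesystem holding the
  # sparse backing files runs out of space
  dd if=/dev/zero of=/TEST/bigfile bs=1M status=progress

  # in another terminal, watch the backing filesystem fill up and
  # the pool eventually suspend I/O with write errors
  df -h ~/disks
  zpool status TEST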
