The reason /var/pkg/download doesn't seem to get deleted is
apparently that the download occurs in the current boot
environment, but pkg empties it in the new one. So before
reboot it appears to have been deleted, but after reboot it
comes back. As of snv125, the only way to free the space
taken by the cache is to delete the old BE. IMO this is a
bug, but since efforts are under way to put the cache
elsewhere, it may be resolved another way.

On 10/08/09 18:56, Frank Middleton wrote:

... When b125 shows up I'll try it again

Updated SPARC snv124 to snv125 using pkg image-update. Also
did snv124 to snv125 on AMD64. Results on AMD64 and SPARC
were the same, so it isn't a platform issue.

Initially there was just one BE, and no snapshots, and
/var/pkg/download had been deleted. On both systems:

# pkg property
PROPERTY                       VALUE
send-uuid                      True
preferred-publisher            opensolaris.dev
require-optional               False
flush-content-cache-on-success true
display-copyrights             True
pursue-latest                  True
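
For reference, the cache-flush property above can be queried and
changed with pkg(1); `pkg property` and `pkg set-property` are
standard IPS commands, though as noted further down the setting does
not appear to help:

```shell
# Query and (re)set the cache-flush property with pkg(1); guarded so
# the sketch is harmless on systems without IPS.
PROP=flush-content-cache-on-success
if command -v pkg >/dev/null 2>&1; then
  pkg property "$PROP"
  pkg set-property "$PROP" True
else
  echo "pkg not available; would query/set $PROP"
fi
```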

At the end of the update, it said

"Deleting content cache"

but it ran for only a couple of minutes, and there was then no
/var/pkg/download /before/ rebooting, so pkg had indeed
removed it. The following is from SPARC, but AMD64 is
similar:

Before reboot:

# zpool list  tpool
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
tpool  18.6G  15.1G  3.48G    81%  ONLINE  -
r...@apogee6:~# ls /var/pkg
cfg_cache  file  gui_cache  history  index  lost+found  pkg  publisher  state

After reboot, /var/pkg/download was back:

# ls /var/pkg
cfg_cache  file       history  lost+found  publisher
download   gui_cache  index    pkg         state
# du -sh /var/pkg/download
1.3G    /var/pkg/download
# rm -r /var/pkg/download
# zpool list tpool
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
tpool  18.6G  15.2G  3.38G    81%  ONLINE  -
# zfs list -t snapshot | grep tpool
tpool/ROOT/snv...@2009-10-17-23:42:41                       3.91G      -  10.9G  -
tpool/ROOT/snv125/o...@2009-10-17-23:42:41                      0      -    20K  -
# beadm list
BE     Active Mountpoint Space  Policy Created
--     ------ ---------- -----  ------ -------
snv124 -      -          15.83M static 2009-10-17 12:14
snv125 NR     /          20.32G static 2009-10-17 19:42
# zfs destroy -r tp...@2009-10-17-23:42:41
cannot destroy 'tpool/ROOT/snv...@2009-10-17-23:42:41': snapshot is cloned
no snapshots destroyed
# beadm destroy snv124
Are you sure you want to destroy snv124? This action cannot be undone(y/[n]): y
# zpool list  tpool
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
tpool  18.6G  10.1G  8.52G    54%  ONLINE  -

Note that destroying the old BE and all its snapshots doesn't
empty /var/pkg/download in the live BE; you have to do that and
then delete the cache by hand as well.
flush-content-cache-on-success is useless.
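
Put together, the only cleanup that actually freed the space in the
transcript above is destroying the old BE and then removing the live
cache by hand. A sketch, using the BE name from this mail:

```shell
#!/bin/sh
# Manual cleanup matching the transcript above: destroy the old BE
# (snv124 is this mail's example name), then remove the cache in the
# live BE by hand. Guarded so it is a no-op without beadm.
OLD_BE=snv124
if command -v beadm >/dev/null 2>&1; then
  beadm destroy -F "$OLD_BE"     # -F skips the y/[n] confirmation
  rm -rf /var/pkg/download
  zpool list tpool               # verify the space came back
else
  echo "beadm not available; would destroy $OLD_BE and rm the cache"
fi
```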

So I have three questions:

o Can you free up the space taken by /var/pkg/download but keep
  the old BE? I believe the answer is no.

o Should this be reported as a bug/RFE?

o What is currently the minimum amount of free space required to
  (say) image-update from snv125 to snv126?

FWIW the AMD64 image is about 4GB smaller than the SPARC image
of snv125, so in theory you could update the AMD64 image on a
16GB root pool if it doesn't include swap or dump. But it would be
rather tight (assuming you still need 8GB free to do an image-update).
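
As a rough pre-flight check for the third question, one can at least
test whether the pool's FREE column clears a given threshold before
attempting an image-update. The 8GB figure is this mail's assumption,
not a documented requirement, and the K/M/G/T suffix format of
`zpool list -H -o free` output is assumed:

```shell
#!/bin/sh
# has_gb_free FREE MIN_GB: succeed if a zpool FREE value such as
# "8.52G" (as printed by `zpool list -H -o free pool`) is at least
# MIN_GB gigabytes. Integer comparison only; deliberately rough.
has_gb_free() {
  case "$1" in
    *T) return 0 ;;                    # terabytes: plenty
    *G) n=${1%G}; n=${n%%.*}           # "8.52G" -> "8"
        [ "${n:-0}" -ge "$2" ] ;;
    *)  return 1 ;;                    # M or K suffix: too small
  esac
}
# On a live system: has_gb_free "$(zpool list -H -o free tpool)" 8
has_gb_free "8.52G" 8 && echo "8.52G clears 8G"
has_gb_free "3.48G" 8 || echo "3.48G is too tight"
```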

Thanks -- Frank
_______________________________________________
indiana-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/indiana-discuss