I used yum update to simulate a developer install of substantial size - nothing special about it. I wanted to see what would happen if I wrote a lot to it. I gather we are running into different mount points filling up? (Remember, I am a developer, not a Linux guy.) There is room on the stick, just not in /tmp, and that is what filled up?
I have a feeling that I will need to spin my own .iso from the Developer Live CD that includes RHDS and pretty much everything they need, then let them save a few documents in /home to keep the amount of writing to a minimum. Am I heading in the right direction?

MikeD

On Fri, 2008-01-04 at 22:46 -0600, Douglas McClendon wrote:
> Mike Dickson wrote:
> > I am out of the chess game. Any news?
>
> I'm not sure I follow your analogy?
>
> Do you understand how the persistence is achieved? I.e. a devicemapper
> snapshot, where any changed blocks on the root filesystem get written to
> the persistence file. If the same block gets changed more than once,
> that block gets updated in the persistence file, and thus no more space
> is taken.
>
> As a result of this, if you do something like a yum update, which creates
> and deletes a bunch of files on the rootfs (in addition to the ones it
> finally installs and leaves as is), all those changed blocks eat up
> space in the persistence file, and don't get freed or even reused,
> unless and until the filesystem decides to write to the exact same block.
>
> Given this lack of ideal efficiency, the question then becomes: is this
> sufficient for your goals? I can imagine many usage scenarios in which
> it is, and, as mentioned in other mails, many ways to try to mitigate
> the inefficiency.
>
> One thing I'll try when I find the time is something like:
>
> mkdir /dev/shm/tmpspace
> mkdir /dev/shm/tmpspace/vtmp
> mkdir /dev/shm/tmpspace/tmp
> mkdir /dev/shm/tmpspace/fedora
> mkdir /dev/shm/tmpspace/updates
> mount --bind /dev/shm/tmpspace/tmp /tmp
> mount --bind /dev/shm/tmpspace/vtmp /var/tmp
> mount --bind /dev/shm/tmpspace/fedora /var/cache/yum/fedora/packages
> mount --bind /dev/shm/tmpspace/updates /var/cache/yum/updates/packages
>
> before doing a yum install of some small package, and then seeing what
> the difference is in blocks used on the persistence file.
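(For reference, the sequence above wrapped as a small script. This is only a sketch: the yum cache paths are the stock Fedora ones and may differ on a custom spin, and the script just prepares the directories and prints the bind-mount commands, which you would then run as root on the live system.)

```shell
#!/bin/sh
# Prepare tmpfs-backed scratch directories under /dev/shm and print
# the bind-mount commands to run (as root) before a yum install, so
# yum's intermediate files do not consume snapshot blocks.
# TMPSPACE can override the default location for testing.
base=${TMPSPACE:-/dev/shm/tmpspace}
mkdir -p "$base/tmp" "$base/vtmp" "$base/fedora" "$base/updates"
cmds="mount --bind $base/tmp /tmp
mount --bind $base/vtmp /var/tmp
mount --bind $base/fedora /var/cache/yum/fedora/packages
mount --bind $base/updates /var/cache/yum/updates/packages"
echo "$cmds"
```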
> Anyway, beyond that, I do intend to re-add optional unionfs support to
> my VirOS livecd creation toolset, despite the fact that it breaks my
> rebootless installation mechanism. Hopefully that will be done soon,
> but probably not for months, as I have several other higher priorities
> at the moment.
>
> But to be clear, because of all the above, trying to do a yum update,
> even with a 1G persistence file and the above method, is probably not
> really feasible (except maybe the first day or two after a new release).
> Yum updating a single package to get some specific critical bugfix, now
> that might be doable.
>
> The main usage scenario I foresee for the feature is adding users to
> the system, a separate /home in fstab (mounted from a different fsimage
> file on the same usbstick), editing configuration files
> (/etc/dovecot.conf, /etc/sysconfig/*, /etc/rc.d/rc.local, etc.),
> and installing a small number of other packages.
>
> That certainly isn't as nice as being able to do a yum update and end
> up using only the same amount of space on the liveusb as if you respun
> the livecd with the same updates. But if you figure out a way to do
> that, I will give you mad props :)
>
> -dmc
>
> > MikeD
> >
> > On Fri, 2008-01-04 at 02:33 +0000, Mike Dickson wrote:
> >> I just finished trying to download JBoss Developer Studio and
> >> install it on the thumb drive. It filled up again. I then dropped
> >> the .jar on the stick, hoping that I could install from that, and it
> >> filled up again. Checkmate.
> >>
> >> MikeD
> >>
> >> On Thu, 2008-01-03 at 15:30 -0800, Mike Dickson wrote:
> >>> Ran that, and yes, the snapshot area filled up BEFORE the errors.
> >>> Let me know what I can do....
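(For illustration, such a separate /home entry might look like the fragment below. The image path, filesystem type, and mount options are all hypothetical - the toolset does not prescribe these - but a loop mount of an fsimage file from the stick is the general shape.)

```
# hypothetical fstab entry: loop-mount a home fsimage from the usbstick
/mnt/live/home.img  /home  ext3  loop,defaults  0 0
```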
> >>>
> >>> MikeD
> >>>
> >>> On Thu, 2008-01-03 at 16:31 -0600, Douglas McClendon wrote:
> >>>> Mike Dickson wrote:
> >>>> > Guys,
> >>>> >
> >>>> > I got a LiveCD + Persistence usb drive running from your scripts,
> >>>> > but got I/O errors when I tried to do a yum update.
> >>>> >
> >>>> > Before that, I was able to vi test.txt, put some text in, and it
> >>>> > survived a reboot.
> >>>> >
> >>>> > What can I do to address the I/O errors?
> >>>>
> >>>> My first question/explanation would be that you filled up the
> >>>> snapshot device. This is quite possible, as a yum install involves
> >>>> creating several copies of the actual files you end up installing.
> >>>>
> >>>> The way to see if this is what is happening would be to have another
> >>>> terminal open and periodically watch the output of "dmsetup status".
> >>>> As new blocks are written to the rootfs snapshot device, you will
> >>>> see the snapshot filling up.
> >>>>
> >>>> If you get these I/O errors even before the snapshot fills up,
> >>>> please try to post some more detailed output.
> >>>>
> >>>> In general, as discussed, there are pros and cons to this method
> >>>> versus a unionfs method. I do think there are ways to work around
> >>>> the cons of this method so that it is useful. For instance, I'll
> >>>> play around and see if I can prescribe a process of using yum that
> >>>> will get it to create all of its intermediate files in a native
> >>>> tmpfs (/dev/shm or the like) instead of the rootfs, so that they
> >>>> don't eat into the snapshot space. Likewise, now that I have my
> >>>> first actual tester, maybe I'll figure out some other creative ways
> >>>> to improve the method (I have some ideas I need to experiment
> >>>> with...).
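(A quick way to read that output: for a snapshot target, "dmsetup status" reports allocated/total sectors. The sketch below parses a sample line; the device name "live-rw" and the numbers are made up for illustration - on the live system you would substitute real "dmsetup status" output.)

```shell
#!/bin/sh
# Parse a devicemapper snapshot status line of the form:
#   <name>: <start> <length> snapshot <allocated>/<total> <metadata>
# and report how full the persistence overlay is.
# The sample line is illustrative; substitute real "dmsetup status" output.
line='live-rw: 0 8388608 snapshot 341600/1048576 1336'
used=$(echo "$line" | awk '{split($5, a, "/"); print a[1]}')
total=$(echo "$line" | awk '{split($5, a, "/"); print a[2]}')
pct=$((100 * used / total))
echo "snapshot: $used of $total sectors allocated ($pct% full)"
```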
> >>>>
> >>>> Thanks,
> >>>>
> >>>> -dmc
> >>>>
> >>>> > MikeD
> >>>> >
> >>>> > "Message from [EMAIL PROTECTED] at
> >>>> > kernel: journal commit i/o error"
> >>>> >
> >>>> > On Wed, 2008-01-02 at 04:07 -0800, Mike Dickson wrote:
> >>>> >> I have some time now. I am attempting this tonight and tomorrow.
> >>>> >> I will let you know.
> >>>> >>
> >>>> >> MikeD
--
Fedora-livecd-list mailing list
Fedora-livecd-list@redhat.com
https://www.redhat.com/mailman/listinfo/fedora-livecd-list