The distribution vehicles used by FPGA vendors are a few clicks behind best practice. Every year or so there is a major release, usually delivered as a DVD or download containing a big tar file; "big" is nominally 10-20 GB. The user expands that tar file into a tree used by the installer, another 10-20 GB. Then the installer expands the zips and such into the installation path root, which may itself occupy 40 GB. So to say you would need 100 GB or so in such a scenario, per tool, per release, is not a wild overestimate. You repeat a similar process (with smaller footprints) for the minor releases throughout the year, or as needed. Fine, FPGA vendor tools can be large and sprawling; we can't change that much. (A quick way to sanity-check the footprint at each stage is sketched at the end of this note.)

All of this is really not a hassle when you are on native Linux with, say, a 1 TB drive split into a 200 GB OS partition and the remaining 800 GB for user data.

But frequently (right now, for instance) I need a Linux VM to show OpenCPI running under either OSX on a MacBook or XP on a spare desktop. I found that if you build the VM with big stateful disk partitions, it gets very slow to load and suspend. So I have been using modest 20 GB partitions when I create my VMs, then mapping in a mount to my host OS disk. This makes a bad situation better, since 1 TB VM images seem impractical, and I see no noticeable performance degradation from keeping the tools' object code on the host OS. (A sketch of that mapping also follows below.)

It still takes a few hours to install, and another few to upgrade. Grrrr
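For the disk arithmetic above, here is a minimal Python sketch for checking what each stage of a vendor install actually occupies, roughly a "du -s" per stage. The three paths are hypothetical placeholders for illustration, not any real vendor's layout:

#!/usr/bin/env python3
# Rough du-style check: report the footprint of each stage of a
# vendor install. All paths below are hypothetical examples.
import os

def tree_size(root):
    """Total size in bytes of all regular files under root."""
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):
                total += os.path.getsize(path)
    return total

# Hypothetical stages: the downloaded tar, the expanded installer
# tree, and the final installation root.
for stage in ["/scratch/vendor_release.tar",
              "/scratch/vendor_release_expanded",
              "/opt/vendor_tools"]:
    if os.path.isdir(stage):
        size = tree_size(stage)
    elif os.path.isfile(stage):
        size = os.path.getsize(stage)
    else:
        continue
    print("{:40s} {:6.1f} GB".format(stage, size / 1e9))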
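And for the host-disk mapping, a minimal sketch of one way to do it, assuming the host disk shows up in the guest as a shared-folder mount (the /mnt/hgfs/host path and the vendor directory names are assumptions for illustration; the actual mount point depends on the hypervisor). The idea is to install the tools once on the host and symlink them into the guest paths the tools expect:

#!/usr/bin/env python3
# Minimal sketch: make vendor tools that live on the host disk
# appear at their usual guest paths via symlinks. The mount point
# and tool directories below are illustrative assumptions only.
import os

HOST_SHARE = "/mnt/hgfs/host/fpga_tools"   # hypothetical shared-folder mount
GUEST_ROOT = "/opt"                        # where the guest expects the tools

for tool in ["Xilinx", "Altera"]:          # example vendor directories
    src = os.path.join(HOST_SHARE, tool)
    dst = os.path.join(GUEST_ROOT, tool)
    if not os.path.isdir(src):
        print("skipping {}: not found on host share".format(src))
        continue
    if os.path.islink(dst) or os.path.exists(dst):
        print("leaving existing {} alone".format(dst))
        continue
    os.symlink(src, dst)                   # guest path -> host copy
    print("linked {} -> {}".format(dst, src))

This keeps the VM image itself small (only the OS and links live in the guest's 20 GB partition) while the bulky object code stays on the host.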
-Shep
