Re: Creating grub/grub2/grldr.mbr bootrom with makerom

2007-12-22 Thread Yoshinori K. Okuji
On Friday 21 December 2007 20:04, Robert Millan wrote:
 How well does compression work for GRUB 2?  core.img is already compressed
 (with LZO); if LZMA gives better results, perhaps it'd be a good idea to
 switch.

It's not that simple. LZO was chosen instead of gzip because of the size
requirement on PC. To be safe, we need to keep the core part under 31.5KB
(63 sectors).

The size is the sum of the non-compressible bootstrap code, the decompression
code, and the compressed code and data. When I experimented in PUPA, although
gzip had a better compression ratio, LZO won due to its smaller decompression
code.

I don't know precisely, but I suspect that the decompression code for LZMA
would be slightly larger than gzip's (IIRC, a range coder is likely to
require more code and data). So I don't expect that LZMA can replace the
current use of LZO on a normal PC so easily.
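As a back-of-envelope illustration of that budget (a sketch: the per-component sizes below are invented, and only the 63-sector limit comes from the message above):

```python
# Core image budget on PC: the boot path can safely use at most 63 sectors.
SECTOR_SIZE = 512
BUDGET = 63 * SECTOR_SIZE  # 32256 bytes, i.e. 31.5KB

def core_size(bootstrap, decompressor, compressed):
    # Total = uncompressed bootstrap code + decompressor stub + compressed payload.
    return bootstrap + decompressor + compressed

# Hypothetical figures: gzip's decompressor is bigger but compresses better;
# LZO's stub is tiny but its ratio is worse.
gzip_total = core_size(bootstrap=1024, decompressor=4096, compressed=24000)
lzo_total = core_size(bootstrap=1024, decompressor=1500, compressed=26000)

for name, total in (("gzip", gzip_total), ("lzo", lzo_total)):
    print(f"{name}: {total} bytes, fits in budget: {total <= BUDGET}")
```

With numbers like these the comparison hinges on the decompressor stub size, which is exactly the trade-off described above.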

Okuji


___
Grub-devel mailing list
Grub-devel@gnu.org
http://lists.gnu.org/mailman/listinfo/grub-devel


Re: Switching to git?

2007-12-22 Thread Yoshinori K. Okuji
On Tuesday 18 December 2007 13:05, Otavio Salvador wrote:
  - All developers are forced to install new software and learn it (always
  a pain).

 Developers are used to learning new things (or ought to be), since that's
 part of the art of programming. I guess learning wouldn't be a problem.

From a theoretical point of view, you're definitely right, but the reality
looks the opposite to me. For instance, look at developers' unwillingness to
switch editors... Even though Emacs is far superior for writing GNU-style C
code, vi users never try to learn how to use it. When it comes to
command-line utilities vs. graphical applications, the situation is even
worse.

In my experience, developers are (unfortunately) too lazy to change tools.
They change only when they are forced to, or excited for some (geeky) reason.
This includes myself.

  - All local (pending) changes in working copies become very hard to merge
  (extremely painful).

 Just a cvs diff > /tmp/foo ; cd ~/newrepo ; patch -p1 < /tmp/foo
 works in most cases, so it's not a really big problem from my POV.

It is a problem. It is catastrophic, especially when the original repository
is down.

BTW, I have 4 different working copies of GRUB locally, all with small,
different uncommitted changes. Do you think I would be happy to deal with
these changes using a mostly-working solution? If I don't see more benefit
from migrating to another SCM, I really don't want to.

  - It is hard to re-select yet another SCM later, because old software is
  usually better supported for migrations, i.e. it's not cheap to migrate
  back and forth (very painful).

 I guess nobody wants to come back to CVS after getting out from it.

You need it if the new SCM does not have a direct converter.

 Agree on that. However, since git offers a CVS server, this can be
 mitigated a lot, allowing you and anyone else who doesn't want to move
 to git to keep using CVS for hacking.

This is nice.

  Ok, now about git. As Tomáš pointed out, the lack of portability is a
  regression from CVS. If you think, for example, that grub4dos is
  important, how can you choose git?

 Agree on that too.

 It's not that bad[1], and users can use git with Cygwin or via
 git-cvsserver.

 1. http://git.or.cz/gitwiki/WindowsInstall

I can't say whether it is good or not, since I myself do not use Windows at
all these days. I leave the evaluation to someone who uses Windows every day.

 While I agree that it's not the best merging algorithm, I also fail to
 see why it would be a blocker.

 I've been using git for a while and I do not see conflicts very often.
 The Linux kernel also uses it, and I don't see people complaining
 about it.

The problem is not conflicts but merging. Usually, people don't understand
its importance until they get weird merge results and spend several days just
fixing them. However, if you notice a merging problem, you are still lucky;
in particular, when you merge big changes, it is not easy to tell whether the
merge went well. Sometimes, having conflicts is much better, because you get
a chance to see what your SCM thinks. When a merge is done silently, and it
is wrong, the effort of finding the mistakes is tremendous.

 Personally, I don't like Bazaar due to performance problems. It's really
 slow for big projects (not that this matters much, since GRUB is a
 small one) and it changes its data format too often.

Hmm, I thought they had fixed the performance issues already? About the data
format, I have no idea. jbailey, do you have any comment? ;)


 If I were going to choose, I'd go with git or Mercurial.

Mercurial is not bad, except for the 3-way merging.

Okuji




Re: Creating grub/grub2/grldr.mbr bootrom with makerom

2007-12-22 Thread Bean
On Dec 22, 2007 4:06 PM, Yoshinori K. Okuji [EMAIL PROTECTED] wrote:
 On Friday 21 December 2007 20:04, Robert Millan wrote:
  How well does compression work for GRUB 2?  core.img is already compressed
  (with LZO); if LZMA gives better results, perhaps it'd be a good idea to
  switch.

 It's not that simple. LZO was chosen instead of gzip because of the size
 requirement on PC. To be safe, we need to keep the core part under 31.5KB
 (63 sectors).

 The size is the sum of the non-compressible bootstrap code, the
 decompression code, and the compressed code and data. When I experimented
 in PUPA, although gzip had a better compression ratio, LZO won due to its
 smaller decompression code.

 I don't know precisely, but I suspect that the decompression code for LZMA
 would be slightly larger than gzip's (IIRC, a range coder is likely to
 require more code and data). So I don't expect that LZMA can replace the
 current use of LZO on a normal PC so easily.

The decompression code for LZMA is very small. I used the -Os option to
compile LzmaDecode.c, and the result is about 2.8K.

-- 
Bean




Re: Creating grub/grub2/grldr.mbr bootrom with makerom

2007-12-22 Thread Yoshinori K. Okuji
On Saturday 22 December 2007 10:03, Bean wrote:
 On Dec 22, 2007 4:06 PM, Yoshinori K. Okuji [EMAIL PROTECTED] wrote:
  On Friday 21 December 2007 20:04, Robert Millan wrote:
   How well does compression work for GRUB 2?  core.img is already
   compressed (with LZO); if LZMA gives better results, perhaps it'd be a
   good idea to switch.
 
  It's not that simple. LZO was chosen instead of gzip because of the size
  requirement on PC. To be safe, we need to keep the core part under 31.5KB
  (63 sectors).

  The size is the sum of the non-compressible bootstrap code, the
  decompression code, and the compressed code and data. When I experimented
  in PUPA, although gzip had a better compression ratio, LZO won due to its
  smaller decompression code.

  I don't know precisely, but I suspect that the decompression code for LZMA
  would be slightly larger than gzip's (IIRC, a range coder is likely to
  require more code and data). So I don't expect that LZMA can replace the
  current use of LZO on a normal PC so easily.

 The decompression code for LZMA is very small. I used the -Os option to
 compile LzmaDecode.c, and the result is about 2.8K.

Thank you for the information. By my estimate, LZMA will outperform LZO if
its ratio of compressed size to original size is less than 0.52. So LZMA
might win. :)
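The break-even reasoning can be sketched as follows; the LZO figures here are invented, and only Bean's ~2.8K LZMA decoder size comes from the thread:

```python
# LZMA pays for a larger decompressor stub, so it only wins overall if its
# compression ratio is low enough to make up the difference.
def breakeven_ratio(lzo_ratio, lzo_decoder, lzma_decoder, original_size):
    # Largest compressed/original ratio at which LZMA still beats LZO:
    #   lzma_decoder + r * original < lzo_decoder + lzo_ratio * original
    return lzo_ratio - (lzma_decoder - lzo_decoder) / original_size

# Hypothetical LZO figures, plus the ~2.8K decoder size Bean reported:
r = breakeven_ratio(lzo_ratio=0.58, lzo_decoder=1500,
                    lzma_decoder=2800, original_size=48000)
print(f"LZMA wins if its ratio stays below {r:.3f}")
```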

Okuji




Re: embedding filesystems in core image (Re: moving ata initialisation to a command)

2007-12-22 Thread Robert Millan

Here's a new patch, with some cleanup.  The main difference is that memdisk.c
doesn't include any arch-specific code.

I've spotted a memory management problem.  The memdisk image, at the location
where it's usually uncompressed, tends to collide with the payload loading
region (grub_os_area_{addr,size}).

I'm not sure what a good solution to this would be.  Perhaps we could copy it
to dynamic memory in grub_memdisk_init()?  However, if the user plays with
payload images without booting them, and then loads memdisk.mod manually,
things could break.

Then again, embedding a memdisk image without embedding memdisk.mod is kind
of silly; perhaps grub-mkimage shouldn't allow it.
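The collision boils down to an interval-overlap check; as a toy sketch (all addresses below are invented, only the grub_os_area_{addr,size} naming comes from the message):

```python
# The memdisk image is decompressed to a fixed region, which can intersect
# the payload area [grub_os_area_addr, grub_os_area_addr + grub_os_area_size).
def regions_overlap(a_start, a_size, b_start, b_size):
    # Half-open ranges [a_start, a_start+a_size) and [b_start, b_start+b_size)
    # intersect iff each one starts before the other ends.
    return a_start < b_start + b_size and b_start < a_start + a_size

memdisk_addr, memdisk_size = 0x100000, 0x80000    # hypothetical
os_area_addr, os_area_size = 0x140000, 0x200000   # hypothetical

if regions_overlap(memdisk_addr, memdisk_size, os_area_addr, os_area_size):
    print("memdisk image collides with the payload loading region")
```

Copying to dynamically allocated memory would sidestep the check entirely, at the cost of the manual-load corner case described above.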

What do you think?

-- 
Robert Millan

GPLv2 "I know my rights; I want my phone call!"
DRM "What use is a phone call, if you are unable to speak?"
(as seen on /.)




Re: Switching to git?

2007-12-22 Thread Yoshinori K. Okuji
On Saturday 22 December 2007 12:20, Robert Millan wrote:
 On Sat, Dec 22, 2007 at 09:50:50AM +0100, Yoshinori K. Okuji wrote:
   Finally, things like grub4dos should not be forks; they should be
   branches.  This would give them better exposure.  CVS branch support
   is pathetic, and the same applies to Subversion, although for
   different reasons.

 What's wrong with Subversion branching?  Or did you mean merging?

Subversion's branches are as stupid as CVS's, because they don't remember by
themselves what has been merged.

Okuji




Re: Switching to git?

2007-12-22 Thread Robert Millan
On Sat, Dec 22, 2007 at 09:50:50AM +0100, Yoshinori K. Okuji wrote:
  Finally, things like grub4dos should not be forks; they should be
  branches.  This would give them better exposure.  CVS branch support
  is pathetic, and the same applies to Subversion, although for
  different reasons.

What's wrong with Subversion branching?  Or did you mean merging?

-- 
Robert Millan

GPLv2 "I know my rights; I want my phone call!"
DRM "What use is a phone call, if you are unable to speak?"
(as seen on /.)




Re: Switching to git?

2007-12-22 Thread Pavel Roskin

Quoting Robert Millan [EMAIL PROTECTED]:


Maybe you will find it interesting to know that I never use the merging
feature of any RCS at all.  I prefer to extract patches from the RCS and
manage them myself.  I often even manage branches by hand as well.


Just to clear up any misunderstanding: extracting a patch and applying it to
another file is still merging.  Unlike a 3-way merge, the patch command won't
generate a merged file with conflicts that require manual editing.  But the
more trivial kinds of merging will still happen if the patch command
considers them safe.  Applying a patch cleanly doesn't guarantee that the
resulting file will compile and/or work properly.


Some bugs caused by merging can be avoided like other bugs, i.e. by using
sane programming and testing practices.  Some bugs just need to be tracked
down.  That's just a fact of life.  For instance, some code could be
duplicated in one branch and fixed in another, resulting in one copy
remaining unfixed.  A patch could introduce a call to a function that changed
its semantics after the base revision of the patch.  Both can be prevented,
but only to a degree.
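The second scenario can be sketched as a toy example (all names and numbers here are invented for illustration):

```python
# A patch written against an old base revision can apply cleanly yet break,
# because a function it calls changed semantics on the target branch.

# Base revision: price() returns cents.
def price(item):
    return {"apple": 150}[item]

# The patch adds this caller, written assuming price() returns cents:
def total_in_dollars(items):
    return sum(price(i) for i in items) / 100

# Meanwhile the target branch changed price() to return dollars.  The patch
# still applies with no textual conflict, but the math is now off by 100x:
def price(item):
    return {"apple": 1.50}[item]

print(total_in_dollars(["apple"]))  # 0.015 instead of the intended 1.50
```

No merge tool or patch fuzz heuristic catches this; only review and tests do.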


--
Regards,
Pavel Roskin

