Re: packages: areca-cli/areca-cli.spec (NEW) - new; seems redistributable

2012-07-03 Thread Elan Ruusamäe

On 07/03/2012 11:16 PM, glen wrote:

Author: glen Date: Tue Jul  3 20:16:47 2012 GMT
Module: packages  Tag: HEAD
 Log message:
- new; seems redistributable

 Files affected:
packages/areca-cli:
areca-cli.spec (NONE ->  1.1)  (NEW)

 Diffs:


Index: packages/areca-cli/areca-cli.spec
diff -u /dev/null packages/areca-cli/areca-cli.spec:1.1
--- /dev/null   Tue Jul  3 22:16:47 2012
+++ packages/areca-cli/areca-cli.spec   Tue Jul  3 22:16:42 2012
@@ -0,0 +1,55 @@
+# $Revision$, $Date$
+Summary:   Utility to control Areca SATA RAID controllers
+Name:  areca-cli
+Version:   1.86
+Release:   1
+# http://www.areca.us/support/download/RaidCards/Documents/Manual_Spec/Downloadable_Software_Licenses.zip
+# Linux Exception: redistributable if not modified
+License:   License for Customer Use of Areca Software


can somebody read the license and check whether we can repackage and 
distribute it?


--
glen

___
pld-devel-en mailing list
pld-devel-en@lists.pld-linux.org
http://lists.pld-linux.org/mailman/listinfo/pld-devel-en


Re: lvm2 and initrd

2012-07-03 Thread Paweł Sikora
On Monday, 02 July 2012, 12:07:18, Jan Rękorajski wrote:
> On Mon, 02 Jul 2012, Elan Ruusamäe wrote:
> 
> > On 07/02/2012 09:22 AM, Jacek Konieczny wrote:
> > > On Mon, Jul 02, 2012 at 08:45:41AM +0300, Elan Ruusamäe wrote:
> > >> >  anyone interested in working that out (so that udev can again be optional
> > >> >  for rootfs-on-LVM systems)?
> > > Is udev on initramfs that bad that you just don't want to use it? Are
> > > there scenarios where udev just won't work?
> > well, there's a continuous fight to get initrd versions of tools
> > compiled, as new releases tend to break with klibc/uclibc/..., and
> > even if they do compile with some patching, they can crash on some
> > configurations/architectures.
> 
> AFAIR semaphore messages are harmless.
> 
> > this could end up with us having a glibc version of initrd udev, or no
> > initrd version of udev at all, because nobody wants to do the porting to
> > small libcs.
> 
> Porting, and lately even just static linking, becomes a bigger and bigger
> RPITA, so we may have no choice but to have dynamically linked programs
> in the initrd. I just don't see a reason that justifies the amount of work
> one has to put into making klibc/uclibc/.../statically built tools.

we should use the shared libc.so (~1.7MB @ x86-64) in the initrd and build the
essential init tools with -Os optimization. -Os reduces mdadm, for example,
from 448kB to 380kB, and 'upx -9' shrinks binaries by roughly 50% (better than
gzip -9). I vote for dropping any klibc/uclibc/glibc static linking.
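The savings claimed above can be sanity-checked with a bit of shell arithmetic; the numbers are the ones quoted in this post, and the 50% upx figure is an approximation, not a measurement:

```shell
# Back-of-the-envelope check of the size savings claimed in the thread
# (numbers taken from the post; the upx ratio is an approximation).
o2_kb=448                      # mdadm built with default optimization
os_kb=380                      # mdadm built with -Os
upx_kb=$(( os_kb / 2 ))        # 'upx -9' reduces binaries by roughly 50%
echo "mdadm: ${o2_kb}kB -> ${os_kb}kB (-Os) -> ~${upx_kb}kB (upx -9)"
```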



Re: Cluster stuff (cman,dlm,heartbeat,corosync,openais,pacemaker,drbd,lvm)

2012-07-03 Thread Jacek Konieczny
On Tue, Jul 03, 2012 at 10:15:18AM +0200, Tomasz Rutkowski wrote:
> > 
> > What do you mean by '3.1 won't build with pacemaker'? There was no
> > pacemaker dependency in cluster.spec
> > 
> 
> well, if you would like to control the dlm space from within Pacemaker you
> need the dlm_controld.pcmk daemon - this one is built from the cluster3 suite
> with Pacemaker's files; perhaps you have made it build with the 3.1 line,
> I haven't investigated much [at all, I mean] why it's not building...
> (and you need the dlm space for clvmd, as far as I know)

I was working from what I found in cluster.spec… there was no
Pacemaker dependency there, and nothing failed with my Pacemaker.

As far as clvmd is concerned… I was not able to make it work with
Pacemaker/corosync - yes, that was probably the 'dlm_controld.pcmk'
thing, but even Google couldn't help much (the documentation available
for dlm/clvmd is close to nothing). And I don't quite understand how
the packages from cluster.spec are split (a dlm_controld binary is in the
group subpackage, which pulls in cman… but I didn't want cman, and it
didn't work anyway). Still, I was able to run clvmd with
Pacemaker/openais/corosync, and it seems that will be a good enough
solution for me. openais does not look as scary as the stuff from
cluster.spec ;)

I can see room for improvement, though.

Greets,
Jacek


Re: lvm2 and initrd

2012-07-03 Thread Michael Shigorin
On Mon, Jul 02, 2012 at 09:53:17AM +0300, Elan Ruusamäe wrote:
> this could end up with us having a glibc version of initrd
> udev, or no initrd version of udev at all, because nobody wants
> to do the porting to small libcs.

A colleague of mine is experimenting with musl and says that so far
things look pretty good.

In the meantime ALT has moved to a glibc-based initrd, but there's some
hope of moving it to musl (and maybe mdev, though I'm not sure about that):
http://en.altlinux.org/Make-initrd
http://www.altlinux.org/Make-initrd [ru]

-- 
  WBR, Michael Shigorin 
  -- Linux.Kiev http://www.linux.kiev.ua/


Re: lvm2 and initrd

2012-07-03 Thread Elan Ruusamäe

On 02.07.2012 09:53, Elan Ruusamäe wrote:
and the initrd gets bigger and bigger as more tools are added to it, so
older kernels (for newer ones the compiled-in default is increased, so it
*could* fit) need a ramdisk_size command-line override, and thus this
can't be automated (it must be verified manually that the initrd still fits)


a big initrd is also a problem for existing installations where /boot is
small; I have installations where it is 32MB or 64MB, and the custom is to
have at least two kernel packages installed, the current and the future kernel.
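To make the "does it still fit" concern concrete, here is a toy shell check under assumed sizes; all numbers are made-up illustrations, not measurements from a real PLD installation:

```shell
# Toy check: will two kernel+initrd pairs (current + future kernel)
# fit into a small /boot? All sizes below are hypothetical.
kernel_kb=4096                            # one vmlinuz
initrd_kb=12288                           # one (large) initrd
boot_kb=$(( 32 * 1024 ))                  # a 32MB /boot partition
need_kb=$(( 2 * (kernel_kb + initrd_kb) ))
if [ "$need_kb" -le "$boot_kb" ]; then
    echo "fits: ${need_kb}kB of ${boot_kb}kB"
else
    echo "too big: ${need_kb}kB > ${boot_kb}kB"
fi
```

With these illustrative sizes the two pairs fill the 32MB partition exactly, which is precisely the kind of margin that has to be re-checked by hand after every initrd growth.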


--
glen



Re: Cluster stuff (cman,dlm,heartbeat,corosync,openais,pacemaker,drbd,lvm)

2012-07-03 Thread Tomasz Rutkowski
On Sunday, 2012-07-01, at 22:13 +0200, Jacek Konieczny wrote:
> On Sun, Jul 01, 2012 at 05:36:04PM +0200, Tomasz Rutkowski wrote:
> > cluster.spec was my work on the 3rd generation of the Red Hat cluster suite
> > (dlm+cman+gfs2+rgmanager); there is a bcond in lvm2 to complete the stack,
> > but it's already outdated (needs polishing :))
> 
> I have replaced this bcond with something more up to date. Currently the
> '3rd generation Red Hat cluster suite' is itself close to being outdated.
> 
> Keeping old cman and dlm as the default backend for clvmd just does not
> make any sense now.
> 

sorry, I'm on vacation right now and family comes first... :)
true, the cluster2 line is outdated enough

> 
> What do you mean by '3.1 won't build with pacemaker'? There was no
> pacemaker dependency in cluster.spec
> 

well, if you would like to control the dlm space from within Pacemaker you
need the dlm_controld.pcmk daemon - this one is built from the cluster3 suite
with Pacemaker's files; perhaps you have made it build with the 3.1 line,
I haven't investigated much [at all, I mean] why it's not building...
(and you need the dlm space for clvmd, as far as I know)


> 
> I have also played with pacemaker 1.1 (currently on a branch) - after a
> bit of patching it works with corosync 1.4.3 for me, so I think it could
> go to HEAD, unless there is a reason to keep pacemaker 1.0.x. Though,
> I don't remember why I use 1.1 and not 1.0 ;)
> 

me neither :)

-- 

Tomasz Rutkowski , +48 604 419 913 , e-mail/xmpp: aluc...@nospheratu.net

