Re: [DNG] Drive-by critique

2018-12-12 Thread Roger Leigh

On 12/12/2018 11:14, Rick Moen wrote:

A key part of the basis of Eric's argument is irritatingly and obviously
untrue:

   If you want to have more devs, you need to attract a larger userbase.

I'm surprised to see Eric advance this non-sequitur.


I don't think it's a non sequitur; I think it's as true as your points 
are true.  I agree with both, and don't find them incompatible.


Good, loyal developers are people who use your project and need more 
from it.  Attracting more users doesn't /necessarily/ attract 
developers, but it's still a /prerequisite/ to attracting developers. 
It increases the likelihood that out of the total userbase, some 
fraction of that userbase will have the desire to contribute something 
to the project, and it also increases the /value/ of contributions if 
they are widely used.


There are several reasons why this doesn't always hold true.  Some 
projects are hostile to outside contributions.  Others are so complex 
that there's a technical barrier to effective contributions.  Others 
have cliques which don't welcome outsiders, or baroque submission 
processes which sap the will of any potential contributor.  Or they lack 
the infrastructure and manpower to handle contributions effectively.  So 
in all these cases, increasing the number of users doesn't help much. 
But this is a self-inflicted failure.


But if a project is open to new contributors and welcomes and reviews 
patches in a timely manner, there's a positive feedback loop: new users 
are empowered and encouraged to contribute, which increases the number 
of users and developers at the same time.


GNOME is an example of the former.  I've had good quality, tested 
patches sit in its bugzilla for over a decade without review before they 
were summarily closed.  This is a project which did not value 
contributions from outsiders.  In the case of libgnomecanvas, they 
wasted the time of multiple developers writing at least six slightly 
different forks rather than review and apply contributed fixes to the 
canonical implementation to make it usable, featureful and performant. 
All of these forks, plus the original, are now basically dead with no 
users.  They did not foster contributions, and this led directly to a 
decline in both contributions and use of the libraries.  It was issues 
like this that killed GNOME's prospects for commercial software 
development in the mid 2000s, because we couldn't rely on problems 
being resolved in an effective and timely manner.


CMake is an example of the latter.  Turnaround time between submission 
and review is usually under 24 hours.  It's been as little as 20 
minutes.  After addressing reviewers' comments and waiting for CI 
testing to complete, the typical time between submission and merging for 
me has been between 24 and 36 hours depending upon the complexity, and 
for simple fixes as little as 60 minutes.  This is a project which 
values code contribution from third parties, helps familiarise new 
developers with the project's codebase and policies, and this both 
encourages repeat contributions as well as grows the user base and 
developer base due to the utility of all this extra work going in over 
time.  As a developer, it means I get bugfixes and new features into my 
users' hands immediately from git, or by the next routine point release.


I and many other Debian developers got started because we needed new 
software packaging, or existing packages fixing or updating, and we got 
stuck in.  For myself, I started by packaging my own upstream software 
releases for projects I belonged to, and went on from there to do much 
much more.  We were users who became developers over time.  While the 
new maintainer process is somewhat lengthy, more users means more people 
who find unmet needs that becoming a developer can fulfill.  I started 
the process because other DDs were getting fed up with reviewing and 
uploading my work, and strongly encouraged it.  I do believe the same 
underlying needs and motivations hold true for Devuan or any other 
distribution.


I don't think Eric's points about ease of installation should be quite 
so trivially dismissed.  It's not like these points haven't been raised 
and discussed at length by the debian-installer team for many years. 
Any barrier which prevents a user from completing an installation and 
getting a working system will result in lost users who can't get over 
that hurdle: from difficulties finding the correct image, to making the 
bootable installation medium, to successfully completing the 
installation process.  They all matter.


This doesn't mean that you have to dumb things down to the lowest common 
denominator.  But it does mean that you have to look at what is 
unnecessary complexity vs necessary complexity, and try to minimise the 
former.  Reducing the number of installer images is beneficial so long 
as you don't sacrifice hardware support, as is having the most up to 
date

Re: [DNG] /usr to merge or not to merge... that is the question

2018-11-29 Thread Roger Leigh

On 24/11/2018 15:08, k...@aspodata.se wrote:

Roger:
...

There's no clean separation between the "base" system and "everything else".

...

I think my urge to have a separate /usr is that I want such a
separation and there isn't a clear other place to have it.


Is there an underlying rationale for why you want this separation?


The other part of the scenario you mentioned here is "doesn't want to
use an initramfs".  Why?

...

I skip initrd to keep the boot setup simple.


I touched on this in another reply with regard to the relative number of 
people testing a separate /usr, but I wanted to come back to this point 
because there's a very important consequence to note here.


It's absolutely true that direct booting without an initrd is simple. 
You're cutting out a relatively large amount of complexity, and you're 
mounting the rootfs directly as a device from the kernel command-line. 
You can't get much simpler than that.


However, there is a cost to bear in mind.  Because this is a relatively 
uncommon usage scenario (most people do use the initramfs since it's the 
default), it's much, much less tested.  There are a lot of special cases 
in many of the early boot scripts which are only run when direct 
booting, and they are not exercised by the vast majority of systems out 
there.  By choosing to use the least-tested uncommon option, you are 
bearing a certain risk.


The standard initrd generated by initramfs-tools is simply a busybox 
shell and a few shell scripts.  It also copies in a few tools like udev 
and an fsck for your filesystem if needed.  There is certainly some 
complexity here, but it's not really all that complicated; no more so 
than the early init scripts.  You could read through the scripts in a 
few minutes, and you can even step through them by hand with the break= 
option on the kernel command-line.
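
For example, booting with something like this appended to the kernel 
command-line (a sketch; the root device is a placeholder, and the 
available break points depend on your initramfs-tools version):

  root=/dev/sda1 ro break=premount

drops you into the busybox shell just before the rootfs is mounted; 
exiting the shell resumes the boot.  Other documented break points 
include top, modules, mount, bottom and init.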



Regards,
Roger


Re: [DNG] /usr to merge or not to merge... that is the question

2018-11-29 Thread Roger Leigh

On 29/11/2018 13:44, Olaf Meeuwissen wrote:

Q: Isn't there some filesystem type that supports settings at a more
  granular level than the device?  Like directory or per file?
A: Eh ... Don't know.  Haven't checked ...
Solution: Go fish!

# I haven't gone fishing yet but a vague recollection of Roger's post
# where he mentioned ZFS seemed promising ...


You could set them at the directory level if you wanted with ZFS, by  
creating a filesystem per directory.  But that might get a bit  
unmanageable, so doing it in a coarser-grained way is usually  
sufficient.  Let me show some examples.
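
For example (a sketch with invented dataset names; the properties shown 
are standard ZFS ones):

% zfs create red/projects
% zfs create -o compression=lz4 red/projects/src
% zfs set atime=off red/projects

Each dataset is a separate filesystem with its own properties, but they 
all allocate from the same pool.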


Firstly, this is an example of a FreeBSD 11.2 ZFS NAS with a few  
terabytes of HDD storage.  There's a dedicated system pool, plus this  
data pool:


% zpool status red
  pool: red
 state: ONLINE
  scan: scrub repaired 0 in 4h12m with 0 errors on Wed Nov 28 19:51:31 2018
config:

        NAME                         STATE     READ WRITE CKSUM
        red                          ONLINE       0     0     0
          mirror-0                   ONLINE       0     0     0
            gpt/zfs-b1-WCC4M0EYCTAZ  ONLINE       0     0     0
            gpt/zfs-b2-WCC4M5PV83PY  ONLINE       0     0     0
          mirror-1                   ONLINE       0     0     0
            gpt/zfs-b3-WCC4N7FLJD34  ONLINE       0     0     0
            gpt/zfs-b4-WCC4N4UDKN8F  ONLINE       0     0     0
        logs
          mirror-2                   ONLINE       0     0     0
            gpt/zfs-a1-B492480446    ONLINE       0     0     0
            gpt/zfs-a2-B492480406    ONLINE       0     0     0

errors: No known data errors

This is arranged as a pair of mirrors, referenced by GPT labels.  Each 
of the mirror "vdev"s is RAID1, and then writes are striped across 
them.  It's not /actually/ striping, it's more clever: it biases the 
writes to balance throughput and free space across all the vdevs, but 
it's similar.  The last vdev is a "log" device made of a pair of 
mirrored SSDs.  This "ZIL" is basically a fast write cache.  I could 
also have an SSD added as an L2ARC read cache, but it's not necessary 
for this system's workload.  No errors have occurred on any of the 
devices in the pool.
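
For reference, a pool of this shape could be created along these lines 
(a hedged sketch; the GPT labels here are placeholders, not the real 
ones above):

% zpool create red \
    mirror gpt/zfs-b1 gpt/zfs-b2 \
    mirror gpt/zfs-b3 gpt/zfs-b4 \
    log mirror gpt/zfs-a1 gpt/zfs-a2
% zpool add red cache gpt/zfs-ssd      # optional L2ARC read cache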


All filesystems allocate storage from this pool.  Here are a few:

% zfs list -r red | head -n6
NAME             USED  AVAIL  REFER  MOUNTPOINT
red             2.57T  1.82T   104K  /red
red/bhyve        872K  1.82T    88K  /red/bhyve
red/data         718G  1.82T   712G  /export/data
red/distfiles   11.9G  1.82T  11.8G  /red/distfiles
red/home         285G  1.82T    96K  /export/home

There are actually 74 filesystems in this pool!  Because the storage is 
shared, there's no limit on how many you can have.  So unlike 
traditional partitioning, or even LVM (without thin pools), it's 
massively more flexible.  You can create, snapshot and destroy datasets 
on a whim, and even delegate administration of parts of the tree to 
different users and groups.  So users could create datasets under their 
home directory, snapshot them, and send them to and from other systems. 
You can organise your data into whatever filesystem structure makes 
sense.
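
Delegation and snapshot transfer look something like this (a sketch; 
the user, host and dataset names are invented):

% zfs allow -u alice create,mount,snapshot red/home/alice
% zfs snapshot red/home/alice@2018-11-29
% zfs send red/home/alice@2018-11-29 | ssh backuphost zfs receive tank/alice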


Let's look at the pool used on the Linux system I'm writing this email on:

% zpool status rpool
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0h5m with 0 errors on Mon Nov 26 17:15:55 2018
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          sda2      ONLINE       0     0     0

errors: No known data errors

This is a single-SSD "root pool" for the operating system, with data in 
other pools not shown here.


% zfs list -r rpool
NAME                 USED  AVAIL  REFER  MOUNTPOINT
rpool               45.4G  62.2G    96K  none
rpool/ROOT          14.7G  62.2G    96K  none
rpool/ROOT/default  14.7G  62.2G  12.0G  /
rpool/home          3.18M  62.2G    96K  none
rpool/home/root     3.08M  62.2G  3.08M  /root
rpool/swap          8.50G  63.2G  7.51G  -
rpool/var           5.72G  62.2G    96K  none
rpool/var/cache     5.32G  62.2G  5.18G  /var/cache
rpool/var/log        398M  62.2G   394M  /var/log
rpool/var/spool     8.14M  62.2G  8.02M  /var/spool
rpool/var/tmp        312K  62.2G   152K  /var/tmp

These datasets comprise the entire operating system (I've omitted some  
third-party software package datasets from rpool/opt).


% zfs list -t snapshot -r rpool
NAME                             USED  AVAIL  REFER  MOUNTPOINT
rpool@cosmic-post                  0B      -    96K  -
rpool/ROOT@cosmic-post             0B      -    96K  -
rpool/ROOT/default@cosmic-post  2.66G      -  12.3G  -
rpool/home@cosmic-post             0B      -    96K  -
rpool/home/root@cosmic-post        0B

Re: [DNG] /usr to merge or not to merge... that is the question

2018-11-29 Thread Roger Leigh

On 29/11/2018 12:57, k...@aspodata.se wrote:

Roger:
...

Note that to return to the pre-merge policies would be an exercise in
futility.  It was already an exercise in futility back in 2011 because
the number of libraries which /could/ be moved to /lib is an unbounded
set.  There's always another tool which /might/ be required, which pulls
in yet more libraries, and their dependencies, and the dependencies of
their dependencies and so on.  It's a Sisyphean task.


It wouldn't be for a subset of booting setups. It is perfectly possible
for a certain set. If I make a kernel package, I can certainly
say that this package supports direct boot to disk without initrd
and with a separate /usr, with the constraint that it doesn't support
e.g. lvm and other setups.

Why not agree on a certain set of programs/libs that should be in /bin,
/sbin, and /lib, just to not break my and others' packages. That set
doesn't need to cover all booting possibilities.


This is what we used to try to do, but we failed at it.  Even when the 
policy was to not allow any library dependencies in /usr for binaries in 
/, there were many dozens that crept in unnoticed.  The fact that they 
crept in is also part of the problem; a separately-mounted /usr was 
sufficiently niche that it didn't get enough testing to pick up these 
problems and resolve them before they caused trouble.  That alone was a 
warning sign that this option wasn't well supported and was subject to 
breakage as a result, and is also a factor in its demise.


The problem is agreeing on what is essential for booting, and what is 
not.  There's so much variety in differing requirements that satisfying 
everyone is impossible.  For example, ext[234], md and lvm2 might be a 
common requirement, maybe xfs and btrfs-tools as well.  How about ZFS? 
Ceph?  GlusterFS?  Or proprietary third-party filesystems like IBM 
GPFS?  LDAP, encryption, networking, CA certificates, SSL/TLS.  There's 
always "just one more" requirement, which critically breaks someone's 
system if it's not specially catered for.  This is why the problem is 
unbounded and this situation was untenable.  It made a separate /usr 
completely unusable for a good number of perfectly legitimate setups, 
many of which were hard requirements for working in large corporate or 
academic IT environments, where it had to inter-operate well with the 
existing infrastructure.  The fact that Debian was hideously broken by 
default for all these use cases had been an embarrassment for a number 
of years which we needed to solve.


We could certainly pick an essential subset of tools and filesystems, 
and state that this combination is allowed for mounting /usr.  But this 
is basically drawing a line in the sand and saying that outside those 
essential packages, everything else is a second-class citizen which 
isn't properly supported.  One of the points which has been made 
multiple times is the need for freedom for the admin to configure their 
systems as they like.  It's obviously important.  But by drawing this 
line we are saying that you /can't/ have the freedom to use anything 
outside this essential set because it will break.  Forget about 
experimenting with new and interesting filesystems.  And forget about 
integrating with the infrastructure required by your company.


And lest anyone forget history, it was once impossible to use md or lvm 
(or btrfs etc.) for booting, before all that special-casing and 
bootloader support was in place.  I was one of the first people to try 
all that out and identify the problems when it was brand spanking new. 
The same restrictions apply to any new technologies which come along in 
the future.  We don't want to deliberately limit future possibilities.


Those factors had to be weighed against the need to mount a separate 
/usr without an initramfs.  The number of people doing this was a niche 
subset of an already very small subset.  We support booting without an 
initramfs.  We support booting with a separate /usr.  But not both 
together.  There are limits to what is possible and reasonable to 
support, and that use case was judged to be so tiny that the other use 
cases were of greater benefit.


No matter how you slice it, restrictions were going to be placed upon 
the use of a separate /usr for one subset of users or another.  The 
status quo was that it worked without an initramfs, but was broken for a 
large number of other scenarios.  We stopped it working without an 
initramfs but made it work for **all** of the other scenarios.  So in 
terms of allowing a separate /usr, we actually /increased/ its 
flexibility and utility at the expense of this one use case.  I tried my 
hardest to avoid any breakage at all, but I couldn't figure out a 
supportable way to do this.  If anyone can figure out a solution, then 
I'd be very happy.  However, given that the solution to the problem is 
to simply use an initramfs, that's the official solution here.  Keep a 
separate /usr, but use the init

Re: [DNG] /usr to merge or not to merge... that is the question

2018-11-29 Thread Roger Leigh
On 29/11/2018 10:21, Arnt Karlsen wrote:

> On Wed, 28 Nov 2018 15:26:59 +0000, Roger wrote in message
> <38625727-08b2-2816-85b0-8f57d6796...@codelibre.net>:


This isn't a bug, or even a feature.  It's a deliberate design
decision which affects the functioning of the system as a whole.  I
understand your concerns, and I even sympathise with them, but the
decision to do this was taken over six years ago after extensive
discussion,


..links to the important parts of those discussions would be nice
and very helpful.


I don't have the time to dig through all the separate mailing lists, IRC 
logs, bug tickets and all the rest.  It's scattered too widely and over 
too long a period of time.  However, I have dug up some public mailing 
list threads over the time period concerned which might be informative.


https://lists.debian.org/debian-devel/2011/12/threads.html
  Red Hat is moving from / to /usr/
  from / to /usr/: a summary

https://lists.debian.org/debian-devel/2011/12/thrd2.html
  #652011
  #652275

https://lists.debian.org/debian-devel/2012/01/threads.html
  from / to /usr/: a summary

https://lists.debian.org/debian-devel/2012/09/msg00010.html
  Stuff from /bin, /sbin, /lib depending on /usr/lib libraries

https://lists.debian.org/debian-devel/2012/08/thrd2.html
  Stuff from /bin, /sbin, /lib depending on /usr/lib libraries

https://lists.debian.org/debian-devel/2012/09/threads.html
  Stuff from /bin, /sbin, /lib depending on /usr/lib libraries

https://lists.debian.org/debian-devel/2012/12/threads.html
  Stuff from /bin, /sbin, /lib depending on /usr/lib libraries

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=652459
  Contains most of the implementation discussion and iterative 
refinement of the patchset.



and it's been implemented and released six and four years ago,
respectively.


...apparently with trumpian eptitude, running Karl's usr.pl on my
"life boat" install of devuan_ascii_2.0.0_amd64_desktop-live.iso,
I found 20 programs in /bin or /sbin in 12 packages that might need
a fix.


Interestingly, if you read some of these threads, you'll see people 
doing exactly this auditing back in 2011-12 because this was already 
causing breakage back then, even before the mounting of /usr in the 
initramfs.



..I put Karl's usr.pl in /sbin and ran it first bare, then counted
with: usr.pl |grep ^/ |wc -l
and: dpkg -S $(usr.pl |grep ^/ ) |cut -d":" -f1 |sort |uniq |wc -l

..fixing 2 or 3 packages a year, is very doable. ;o)

..now, do we return to the pre-merge policies 6 and 4 years ago?
Yes, as a first step, I believe we should[…]


Note that to return to the pre-merge policies would be an exercise in 
futility.  It was already an exercise in futility back in 2011 because 
the number of libraries which /could/ be moved to /lib is an unbounded 
set.  There's always another tool which /might/ be required, which pulls 
in yet more libraries, and their dependencies, and the dependencies of 
their dependencies and so on.  It's a Sisyphean task.


On 29/11/2018, 06:22 Steve Litt wrote:

> I see the fingerprints of Redhat, Poettering and Freedesktop all over
> such encouragement, and highly doubt it's for technical reasons.

If you read the threads above, you will see that RedHat were planning to 
merge /usr at this point.  It's no secret that there is some 
cross-pollination of ideas between distributions, and that there is also 
some degree of pressure for conformity to benefit interoperability. 
However, it's also /not/ a conspiracy in any way.  This was all 
discussed and implemented in public, not done in secret.  There were 
solid technical reasons for doing this, and it's likely this would have 
happened with or without any degree of influence from RedHat.  The fact 
that it was actually implemented by the sysvinit maintainers should be a 
big hint that we saw a good deal of value in it which would greatly 
improve the flexibility and maintainability of the initscripts for the 
benefit of our users.  And also, without it, there would have been /even 
more/ pressure to adopt systemd, since it removed the arguments about 
sysvinit/initscripts being unable to handle a number of scenarios which 
it previously couldn't cope with.



Regards,
Roger


Re: [DNG] /usr to merge or not to merge... that is the question

2018-11-28 Thread Roger Leigh

On 28/11/2018 11:36, Rick Moen wrote:

Quoting Didier Kryn (k...@in2p3.fr):


IIUC, your argument boils down to "depending on /usr for early boot is
a *bug*", while Roger told us why it has become a *feature* (~:


My view, which I expressed in detail prior to Roger joining the thread,
is that it's vital to the most vital function of the root filesystem's
maintenance software directories (/sbin, /bin, /lib, /lib64) that their
binaries function _even_ if /usr is not (or at the moment cannot be)
mounted -- because the most vital function of those subtrees is backup,
restore, repair, maintenance (functions that might be required to
recover/fix /usr).


Prior to wheezy, this was considered vital and was explicitly required 
and enforced.  From wheezy onward, this requirement was deliberately and 
intentionally dropped.


If you want to backup, restore, or repair your system, you can still do 
so.  But you have to do so with /usr present either as part of the 
rootfs or mounted as a separate filesystem.


This isn't a bug, or even a feature.  It's a deliberate design decision 
which affects the functioning of the system as a whole.  I understand 
your concerns, and I even sympathise with them, but the decision to do 
this was taken over six years ago after extensive discussion, and it was 
implemented and released six and four years ago, respectively.



I addressed the main part of Roger's initial post upthread, and don't
care to revisit that discussion, except to mention that he dismissed the
use-case cited above, which is the traditional Unix use-case, in wording
that didn't address the substance of those concerns.


I tried to explain the rationale for the decision in a reasonable amount 
of detail, along with some of the competing concerns which affected it. 
I'm not trying to convince you that your points are not important for 
your needs or that they are irrelevant.  But the decision was already 
made and implemented for quite a long time now.  I'm simply trying to 
explain why things are the way they are today.


I'm also not trying to say that I'm fully happy with the decision or its 
consequences.  Like many decisions, it arose as an imperfect compromise 
which considered many different competing use cases and requirements, 
and tried to pick the path which provided the most benefit for the least 
harm.  We did the best we could, and if we had to do it over again, I 
don't think much would be materially different.  We did a pretty good 
job given the constraints.  Limitations on the use of a fully separate 
mountable /usr were the price we had to pay for a whole host of benefits 
which were considered to be worth that cost.



It would certainly be possible to move all applications and dynamic
libraries needed for early boot from the /usr tree to /bin, /sbin and
/lib, but Debian has made a different choice.


Debian did try this choice, for most of the 2000s.  We repeatedly moved 
libraries and tools from /usr to the rootfs when they were needed for 
early boot.  But the number of libraries and tools which *might* be 
needed was ever increasing.  The approach was neither scalable nor 
tractable for a general-purpose distribution.


We only switched to the current solution when it became clear that this 
approach wasn't working and that there were a number of problems which 
couldn't be resolved by this approach.  A lot of effort was put into 
trying to make this approach work.


The current solution *is* scalable and tractable for pretty much any 
esoteric boot requirements you might have.  Because it doesn't depend 
upon having a pre-approved shortlist of early-boot packages treated 
specially, it works for all packages.



(This is why I tend not to waste time hyperventilating about dumb distro
policy decisions:  Submit a bug.  If it's rejected or never acted on,
just make a local configuration that works around the stupid distro
action, and move on to more rewarding parts of life.  If moved to
public-spiritedness, also publish one's fix as part of a third-party
package repo, pro bono publico, to help others benefit from your work
without needing to replicate it.)


This is good advice.  But for the problem in question, it's not a bug 
which can be worked around this way, it's a fairly fundamental design 
choice which just isn't fixable with small patches here and there.  It's 
systemic.



Regards,
Roger


Re: [DNG] /usr to merge or not to merge... that is the question

2018-11-24 Thread Roger Leigh

On 24/11/2018 02:45, Steve Litt wrote:

On Wed, 21 Nov 2018 12:17:21 +
Roger Leigh  wrote:



Some general points to consider:

1) A separate /usr serves no practical purpose on a Debian/Devuan
system

 Historically, /usr was separately mountable, shareable over NFS.
With a package manager like dpkg, / and /usr are an integrated,
managed whole.  Sharing over NFS is not practical since the managed
files span both parts, and you can't split the package database
between separate systems.


What I hear in the preceding paragraph is that dpkg considers /
and /usr a package deal (no pun intended), and so can't abide an NFS
mounted /usr. Telling people to merge / and /usr for this reason is
fixing the symptom and letting the root cause remain. That's usually
not a good idea, but perhaps in this case fixing dpkg would be too
difficult...


This part isn't a problem with the dpkg tool itself.  It's down to the 
content of packages not having a clean separation between / and /usr. 
Programs, libraries and data on both sides of the split are mutually 
interdependent.  KatolaZ illustrated this nicely in his last post with 
library dependencies across the divide.  (This was forbidden before the 
MountUsrInInitramfs work.)


There's no clean separation between the "base" system and "everything else".

You can absolutely mount /usr over NFS.  But it's not very practical to 
share that filesystem between multiple systems.  And it isn't a very 
useful configuration choice compared with other possibilities.  If you 
want to use NFS, you can mount the rootfs over NFS (including /usr). 
It's a simpler, more practical arrangement, and it's exactly what tools 
like debian-live do (for example).


Contrast this with the BSDs where there's a defined "base system" and 
then a separate and largely independent collection of packages under 
/usr/local.  But even on the BSDs, the primary split is between / and 
/usr/local, not / and /usr.  /usr/local/etc and /usr/local/var exist, 
while /usr/etc and /usr/var do not exist on Debian (or FreeBSD); they go 
on the rootfs, which is one of the causes of the tight coupling.  And 
it's not necessarily a bad thing.  It's simply a part of the basic 
design of Debian that we've accepted for over two decades.


  Take a copy of e.g. 
http://mirrorservice.org/sites/ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.0-RC1/base.txz


The "base" is the content of / and /usr from a single build of the base 
source tree.  While there's a separate static /rescue and it's 
technically possible to mount /usr separately (there's similar 
convoluted logic in the initscripts to what Debian used to have), the 
system as provided is a collection encompassing both.


Because it's not (yet) managed by a true package manager, you could 
actually set this up in the traditional way if you wished, and share a 
static /usr between several systems.  But it might still interact badly 
with freebsd-update (I've not tested it myself), and it's planned to come 
under the remit of the pkg package manager in time, so like Debian it 
may run into logistical problems due to the package management.



Modern disk sizes make partitioning a
separate /usr unnecessary and undesirable.  (Both are of
course /possible/, but there is precious little to gain by doing so.)


Well, *you* might not want a mounted /usr, and *I* certainly don't want
a mounted /usr, but we don't speak for every user in every context, and
we can't anticipate every context. So "serves no purpose" as a blanket
statement is false if we find one person using or wanting to use a
separate /usr on a De??an system.


Absolutely.  However, "wanting" to use a separate /usr doesn't imply 
that the reasons for that desire are sound or reasonable.  This is why I 
very much would like to know the underlying rationale for each use. 
Some may be genuinely valid.  Others may not be.  But we need to be able 
to objectively evaluate each one to determine this.


If you look back in the debian-devel and other list archives 5-7 years 
back or more, this was discussed on several occasions, and it was 
increasingly a struggle to identify genuinely valid use cases.  Some 
were bad.  Others were valid, but... pointless.  Over the years, the 
need for a separate /usr has weakened.  Most of the time, a single root 
partition is just fine, and this is the default for the installer, and 
the way the vast majority of people run their systems.  For these 
people, a separate /usr doesn't solve any of their problems.


Several of the uses are borne out of habit rather than necessity.  The 
sharing of /usr over NFS is an instructive one.  In discussions a few 
years back, this was brought up as a valid use case.  One or two users 
and developers brought this possibility up.  Some people claimed to be 

Re: [DNG] /usr to merge or not to merge... that is the question

2018-11-23 Thread Roger Leigh

On 22/11/2018 18:21, Roger Leigh wrote:

Before I follow up on any of the points you (and others) made in 
response, let me begin with some history you may be unaware of.  It 
actually predates systemd, and is largely unrelated to systemd.


I just rediscovered

  https://wiki.debian.org/ReleaseGoals/MountUsrInInitramfs

which has some additional details.  It actually contains less detail 
overall than my email yesterday, but there are some additional small 
bits of information there.  It never got fully updated after completion 
of the goal, but it might be of general interest.


  https://wiki.debian.org/UsrMerge

Looking at the /usr merge in more detail: if you're currently running 
with a separate /usr, this shouldn't have any impact on booting via the 
initramfs with initramfs-tools.  The only consideration is that you 
would need sufficient disc space under /usr for the files being moved 
over from the rootfs.
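
A quick way to sanity-check the space beforehand (a sketch; the 
directory list is illustrative, not exhaustive):

% df -h / /usr
% du -shx /bin /sbin /lib

The du output gives a rough idea of how much will be moved across to /usr.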


So if you're insistent upon retaining a separate /usr, that shouldn't be 
a problem.  Though it will definitely break booting of / without an 
initramfs; you won't be able to do a direct boot of the rootfs since all 
the binaries will be missing.



Regards,
Roger


Re: [DNG] /usr to merge or not to merge... that is the question

2018-11-23 Thread Roger Leigh

On 22/11/2018 22:24, Alessandro Selli wrote:

On 22/11/18 at 19:21, Roger Leigh wrote:

On 21/11/2018 16:11, Alessandro Selli wrote:

On 21/11/18 at 13:17, Roger Leigh wrote:

Hi folks,

I've been following the discussion with interest.



    No, you definitely have not followed it.  In fact you are
disregarding
all the points that were expressed against the merge.


Let me begin by stating that I found your reply (and others) to be
rude, unnecessarily aggressive, and lacking in well-reasoned objective
argument.



   Oh poor show flake, did I hurt your tender feelings when I state facts?


I did find your reply unacceptably rude.  This had nothing to do with 
the "facts" (which were largely absent) or the points you were trying to 
make, and everything to do with the manner in which you said it.


Impolite and intemperate language does not add anything of value to the 
points you are making.  It rather detracts from them and leads the 
reader to devalue and dismiss what you are saying as a result.  This 
doesn't just apply to you, but to several of the other posters over 
the last few days.  Productive technical discussion is impeded by 
treating others with contempt and accusations of bad faith.


On a personal note: I spent well over a decade working on different 
parts of Debian and loved being a part of it, and had many friends 
there.  I left the Debian project after enduring over two years of 
abusive and disrespectful communications, primarily with regard to the 
systemd "debate" becoming overly personal, rather than sticking to 
technical facts.  It got to the point that I dreaded opening my mail 
client to see what new abuse had been sent.  Over the course of several 
months I became thoroughly stressed out, demoralised and demotivated by 
the continual barrage of negativity and disrespect.  It caused real 
depression which seriously affected my wellbeing in the real world. 
Coupled with severe RSI problems (which might not have been unrelated), 
I decided to leave the project.  Not because I wanted to, but for my 
physical and mental wellbeing.  When you're saddled with a huge 
responsibility for keeping everyone's systems working, and have an 
unwritten obligation to do so, and you have to spend a huge fraction of 
your unpaid off-work time on it, and that time has become unpleasant, 
stressful, physically damaging due to the RSI and no longer provides 
even the last bit of enjoyment you once derived from it, it's time to 
call it quits.  It was the RSI which forced the issue; I could no longer 
physically meet my duties and commitments as a developer while also 
doing my day job.  But I'd been desperately unhappy for many, many 
months as well.


The words you write from the anonymity and comfort of your home or 
office do have an effect upon those who receive them.  I'm not a 
snowflake.  I'm a 39-year-old professional software developer with over 
18 years of experience in several different fields, and a science PhD as 
well.  I'm very happy to engage in robust technical debate, but that 
isn't what you're doing here.  If people aren't prepared to attempt a 
minimum amount of politeness and respect for their fellow human beings, 
I simply move to work on other projects which have more mature people 
working on them.  Life is too short to tolerate or suffer from 
unnecessary abuse.


In short: Don't remove the joy people get out of participating in free 
software projects by being abusive.  We can all be better than that.



   Again I express something so simple I'm really beginning to lose my mind:


   I do not intend to deprive anyone with the freedom to merge /usr to /,
damn it!

   I want to preserve *MY* freedom of choice, I want to be able to split
from / anything that is not required on *MY* systems or that will never
be required on any system (Java, Apache, Squid, Xorg, LibreOffice etc).

   I am not fighting against people who want to introduce different
filesystem layouts because of their special needs, I WANT THEM TO STOP
FORCING THEIR DESIGN CHANGES TO ME FOR NO OTHER REASON THAT TO THEIR
SYSTEMS THEIR LAYOUT MAKES MORE SENSE!


The individual components of the operating system are not developed in a 
vacuum.  They are developed in collaboration with and consideration of 
the rest of the system they reside within and interoperate with.


For the /usr mounting in initramfs history I posted, this body of work 
took over a year to finish.  Not because of technical complexity, but 
because it required precise coordination between several different parts 
of the system, including: initramfs-tools, initscripts, grub and others 
(including systemd).  And it needed staging in a fixed order to avoid 
breakage.  All this required coordination and collaboration between 
multiple groups of people.  Each group needed to understand the big 
picture issues, as well as how their part was affected i

Re: [DNG] /usr to merge or not to merge... that is the question

2018-11-22 Thread Roger Leigh

On 21/11/2018 16:11, Alessandro Selli wrote:

On 21/11/18 at 13:17, Roger Leigh wrote:

Hi folks,

I've been following the discussion with interest.



   No, you definitely have not followed it.  In fact you are disregarding
all the points that were expressed against the merge.


Let me begin by stating that I found your reply (and others) to be rude, 
unnecessarily aggressive, and lacking in well-reasoned objective 
argument.  It's poor communication like this which caused me to 
unsubscribe from the Debian lists, and also to this list a good while 
back (I only read the digest summary on occasion, and rarely 
participate).  I find it fosters an unfriendly, unpleasant and 
unproductive environment which I don't enjoy working in.  When you're 
doing this type of work as a part-time volunteer, it's extremely 
demotivating and disheartening to be treated this way.  It would be 
unacceptable in a professional setting, and it's equally unacceptable 
here.  Please do think about what you have written before sending it; it 
costs nothing to be nice, even when you are in disagreement with someone.



Before I follow up on any of the points you (and others) made in 
response, let me begin with some history you may be unaware of.  It 
actually predates systemd, and is largely unrelated to systemd.



6-7 years ago, back when I was one of the Debian sysvinit maintainers, 
we had a problem.  The problem was that an increasing number of systems 
could no longer be booted up successfully.  The reason for this was that 
the boot process was becoming increasingly complex.  The process could 
be summarised like this with an initramfs:


  - mount / in initramfs
  - [early boot]
  - mount /usr and other filesystems
  - [late boot]

or directly, without an initramfs

  - mount /
  - [early boot]
  - mount /usr and other filesystems
  - [late boot]

The problems arose from the mounting of /usr part way through the boot 
process.  An increasing number of system configurations required tools 
and libraries from /usr, before it was mounted.  These could be NSS 
modules, LDAP stuff, dependencies of network filesystem tools, or 
others.  In some cases, this was solved by moving individual tools and 
libraries from /usr to /[s]bin and /lib.  But this became increasingly 
untenable as the requirements became more and more complex.  Not only 
did we have to move individual tools and libraries and datafiles from 
/usr to the root filesystem, we also had to move every dependency as 
well for them to be functional.  Also, service dependencies wanted 
services starting before /usr was mounted, moving chunks of the 
dependency graph into the early boot stage.  This was a losing battle, 
since we couldn't move /everything/ to the root filesystem.  Or could we?


It was due to logistical challenges like this that we first considered 
the merge of / and /usr.  Once low-level tools start requiring 
interpreters like Perl or Python, or libraries like libstdc++, or 
datafiles like timezone data, it was clear we needed a more general 
solution which would solve the problems for the long term.


The question arose of how we might handle the migration, or avoid the 
need for a migration entirely.  As the sysvinit maintainer, I did most 
of the investigation and implementation of this work, and you're likely 
using that solution right now yourself.  The solution I chose was one 
which would allow for making /usr available in early boot without the 
need for physically merging the filesystems, so that it wouldn't break 
any of the installed systems on upgrade.  We would add to the initramfs 
the ability to mount additional filesystems other than the rootfs, 
directly from the rootfs fstab.  And we would cater for local and NFS 
filesystems just as we do for the rootfs.  This was one of the more 
costly solutions (in terms of implementation complexity and testing 
needed), but it retained the flexibility some people required.  This was 
implemented 5 years back, and the result is this with an initramfs:


  - mount / and /usr in initramfs
  - [early boot]
  - mount other filesystems
  - [late boot]

or directly, without an initramfs:

  - mount /
  - [early boot]
  - mount other filesystems
  - [late boot]

Thus we could guarantee the availability of files in /usr from the 
moment the system starts, independently of the init system in use.
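
Concretely, the initramfs reads the rootfs's /etc/fstab and mounts /usr 
itself before handing over to init.  A sketch of such an fstab (the 
UUIDs are placeholders):

  UUID=...  /     ext4  errors=remount-ro  0  1
  UUID=...  /usr  ext4  defaults           0  2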


The tradeoff is that we no longer supported direct booting of a system 
with a separate /usr; you had to use an initramfs.  You could still boot 
directly, but / and /usr had to be on the same filesystem to guarantee 
the availability of /usr.  But with this solution in place, all stages 
of the boot could rely on tools, libraries and datafiles present in /usr.


This has been in production use since wheezy, and because it was so 
transparent, very few people would even realise that the filesystems had 
been (effectively) unified since then, because I took gre

Re: [DNG] /usr to merge or not to merge... that is the question

2018-11-21 Thread Roger Leigh

Hi folks,

I've been following the discussion with interest.  It's certainly not a 
new discussion, since I remember debating it a good few years back, but 
there are still the same opinions and thoughts on the topic that I 
remember from back then.


Some general points to consider:

1) A separate /usr serves no practical purpose on a Debian/Devuan system

   Historically, /usr was separately mountable, shareable over NFS. 
With a package manager like dpkg, / and /usr are an integrated, managed 
whole.  Sharing over NFS is not practical since the managed files span 
both parts, and you can't split the package database between separate 
systems.  Modern disk sizes make partitioning a separate /usr 
unnecessary and undesirable.  (Both are of course /possible/, but there 
is precious little to gain by doing so.)


   Other systems, like the BSDs, have the effective split between base 
(/) and ports (/usr/local).  / and /usr are still effectively a managed 
whole containing the "base system".


   With those points considered, merging / and /usr would make sense. 
Though equally, keeping the separation doesn't hurt *if they are on the 
same filesystem*.  If they are to be merged, then there are two 
possibilities: moving /usr to / or the contents of /* to /usr.


   The point about /usr being a good place for "static" content is a 
reasonable one.  But for better or worse, / is "the system".  It's still 
part of the managed whole, and hiving off a static /usr rather than 
hiving off the variable, changing content isn't really doing more than 
adding extra unnecessary complexity.


2) Moving the content to /usr doesn't preclude moving it to / later

   RedHat/systemd have decided to move everything to /usr, and Debian 
have decided to copy this as they have for most systemd-dictated 
changes.  I'd prefer it to be the other way around; it's cleaner and 
it's what we would choose given a clean slate.  However, when multiple 
filesystems are in use, /usr is often larger and this is potentially the 
safer move *for upgrades*.


   dpkg permits any directory in the filesystem hierarchy to be 
replaced by a symbolic link.  If the contents of /bin, /lib etc. are 
moved to /usr/bin, /usr/lib etc., they can be replaced by symlinks so 
that /bin/sh and /lib/ld.so and other fixed paths continue to work 
correctly.
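
On a merged system the result looks like this (a sketch of the layout, 
not output from a real machine):

% ls -ld /bin /lib
lrwxrwxrwx 1 root root 7 Jan  1  2018 /bin -> usr/bin
lrwxrwxrwx 1 root root 7 Jan  1  2018 /lib -> usr/lib

so /bin/sh and /lib/ld.so resolve into /usr without any package changing 
its paths.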


   Conversely, /usr can be symlinked to /.  This permits /usr/bin/perl 
to continue to work even if the interpreter is in /bin.


   However, dpkg must compare canonical paths rather than the 
package-provided paths, to detect file conflicts between packages using 
/ vs /usr paths.  I'm not sure if it does this already or not.


   There are two parts to the unification:

 a) Cleaning up all packages such that there are no conflicts 
between the contents of /bin and /usr/bin

 b) Moving the files and creating the symlinks

   The important point to note is that once the cleanup is done, the 
symlinks can be made to support either scenario.  dpkg doesn't care, so 
long as there are no duplicate files in either location.  You could do a 
migration to /usr on upgrade (for safety) and make /usr a symlink to / 
on fresh installs.  The important part is (a).  (b) is policy, which can 
be changed at will as distribution defaults or local choice.


3) Upgrade incompatibility

   The point made about the kmod developers switching to /usr/lib makes 
no sense.  If the migration is done correctly, it should be *seamless*. 
Because /lib should point to /usr/lib, any existing users of /lib should 
retain using that path for compatibility purposes.  Indefinitely, if 
they cared about doing it properly.  No user of /lib should transition 
to /usr/lib, just like no user of /var/run should have transitioned to 
/run.  The important part of being compatible across filesystem layout 
changes is not breaking *anything* before or after the unification.


4) None of it actually matters

   The whole discussion is based on the premise that they are separate. 
 In practice, the vast majority of us have them on the same filesystem, 
given that there is no practical purpose to the separation as in (1) above.


   If you are using a container-based system like Docker, or a virtual 
machine image, or a live image, they will be a single filesystem.  If 
you're doing a standard installation, they will be a single filesystem. 
Also, if you're using a modern filesystem like ZFS, on Linux:


% zfs list -r rpool
NAME                 USED  AVAIL  REFER  MOUNTPOINT
rpool               51.0G  56.5G    96K  none
rpool/ROOT          14.6G  56.5G    96K  none
rpool/ROOT/default  14.6G  56.5G  12.0G  /
rpool/home          3.18M  56.5G    96K  none
rpool/home/root     3.08M  56.5G  3.08M  /root
rpool/opt           16.3G  56.5G  9.63G  /opt
rpool/opt/clion     1.19G  56.5G   616M  /opt/clion
rpool/opt/qt        4.34G  56.5G  4.34G  /opt/qt
rpool/swap          14.3G  62.6G  5.82G  -
rpool/var

Re: [DNG] Making sense of C pointer syntax.

2016-03-28 Thread Roger Leigh

On 28/03/2016 15:35, Steve Litt wrote:

On Mon, 28 Mar 2016 06:03:13 -0400
Boruch Baum  wrote:


Why on this list, of all the possible places in Creation? It's a great
and important topic, but have you found no other, more appropriate
forum?


> Because we're developing software.

I'd have to say that while I'm mostly just an observer here, I do 
consider this thread close to noise with precious little of value in it.


While it's certainly true that the list is related to software 
development (or would that be better stated as distribution development, 
which is somewhat higher-level?), it's also true that there are plenty 
of more appropriate forums for basic and intermediate C questions, and 
also a vast wealth of books and other materials which cover it in great 
detail.



Kind regards,
Roger


Re: [DNG] Giving Devuan sans-initramfs capabilities

2016-01-03 Thread Roger Leigh

On 03/01/2016 17:11, Simon Hobson wrote:

Roger Leigh  wrote:


The *real* goal here is something rather simpler: having both / and /usr 
mounted in the initramfs.  The primary reason for this is that there are 
genuine problems with stuff on / needed in early boot having library 
dependencies located in /usr.  Libraries were moved to / on a case-by-case 
basis, but it's really just the tip of the iceberg.  E.g. PAM modules needing 
openssl, datafiles from /usr/share, etc.  It becomes a nightmare to coordinate 
and manage, particularly when you also have to consider third-party stuff out 
of your control.  Simply ensuring that /usr is available in the initramfs 
solves all these problems.

...

Thanks for that explanation


See https://wiki.debian.org/ReleaseGoals/MountUsrInInitramfs
Looks like they crippled the /usr mount for the non-systemd case for no good 
reason though.


Well it would be easy to put it down to malice, but rationally it's more likely 
incompetence. Given how many people seem to have gone into "systemd or on your 
own" mode, it's likely that they have probably not considered the combination (or 
just don't care if it's broken for non-systemd users).


I would think it's primarily an omission since sysvinit is no longer 
cared about, so it only needed to work for systemd.  Fixing it to work 
in Devuan would be pretty simple.



Re: [DNG] FW: support for merged /usr in Debian

2016-01-03 Thread Roger Leigh

On 03/01/2016 14:23, Edward Bartolo wrote:

Hi,

People don't understand: it is bad to manage their own OS. That is the
work for EXPERTS who know BETTER. Users should only be allowed to move
the mouse, click its cool buttons, and sometimes use DVDs (CDs) to run
recovery software. Other than that, it is bad management.

[/sarcasm]


Sorry, but I don't understand the point you're trying to make here and 
how it relates in any way to what I wrote.  Perhaps if you state it 
clearly and obviously, and without sarcasm, I'll actually understand it.


Also, there's no need to send me a copy of the reply off-list; I am 
subscribed.



Thanks,
Roger


Re: [DNG] FW: support for merged /usr in Debian

2016-01-03 Thread Roger Leigh

On 03/01/2016 14:33, Svante Signell wrote:

Hi Roger,

Could you please send your text below to debian-devel too? Here at
Devuan most people are aware of the issues. Unfortunately many of the
Debian developers/maintainers aren't :(


None of this is new.  It's merely repeating the details of discussions 
from years ago which IIRC were on -devel.  The patches I mentioned were 
merged for jessie.  The /usr filesystem should be mounted in the 
initramfs from jessie onwards.


See https://wiki.debian.org/ReleaseGoals/MountUsrInInitramfs
Looks like they crippled the /usr mount for the non-systemd case for no 
good reason though.  Likely needs a tweak to the initscripts which they 
didn't apply.  Either way, the initramfs logic is there.  The missing 
bits are mentioned on the page.



Regards,
Roger


Re: [DNG] FW: support for merged /usr in Debian

2016-01-03 Thread Roger Leigh

On 02/01/2016 23:39, Arnt Karlsen wrote:

On Sat, 02 Jan 2016 05:50:15 -0500, Mitt wrote in message
:


Not sure about poetteringisation (or how should this be spelled?)
but take a look at this link:
http://www.freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge/

and this
https://fedoraproject.org/wiki/Features/UsrMove (see owners)

and even this
http://lists.busybox.net/pipermail/busybox/2010-December/074114.html

Simplification? Heh.

The same thing: why change something that has been working
(flawlessly?) for four decades.


..to subvert and scuttle the competition.


It's not something that nefarious.  It's about inexperienced people 
looking at historical practices/mistakes and trying to "correct" them. 
That said, you can't ever remove something as entrenched as /usr, at 
least, not without widespread breakage.  No one who cared about 
compatibility would consider that.


The *real* goal here is something rather simpler: having both / and /usr 
mounted in the initramfs.  The primary reason for this is that there are 
genuine problems with stuff on / needed in early boot having library 
dependencies located in /usr.  Libraries were moved to / on a 
case-by-case basis, but it's really just the tip of the iceberg.  E.g. 
PAM modules needing openssl, datafiles from /usr/share, etc.  It becomes 
a nightmare to coordinate and manage, particularly when you also have to 
consider third-party stuff out of your control.  Simply ensuring that 
/usr is available in the initramfs solves all these problems.  The 
requirements coming from systemd/freedesktop about this are essentially 
the same, but in a typically inflexible and our-way-only manner.  The 
actual problem here predates systemd by many years.


Note that I wrote the patches which mount /usr in the initramfs in 
addition to /.  This gives all the primary benefits of the /usr merge 
without actually changing anything.  It means that from this point on, 
you can freely make use of programs, libraries and datafiles in /usr 
during early boot.  I.e. the only change is a guarantee of when things 
are available.  The exception is that you can no longer boot a system 
with a split /usr unless you have an initramfs.



When you look at the /usr merge stuff which can follow on from this, 
there are several steps one might take:

- having / and /usr on the same filesystem
- then symlink programs from /usr/bin to /bin (or vice versa) (and lib etc.)
- or symlink the whole of /usr/bin to /bin (or vice versa) (and lib etc.)
- or symlink /usr to /
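
Reduced to shell commands, the symlink variants amount to something like 
this (a sketch only; a real migration is done by tooling, not by hand):

% ln -s usr/bin /bin     # after moving the contents into /usr
% ln -s usr/lib /lib

or the inverse, with everything on the rootfs:

% ln -s . /usr           # so /usr/bin resolves to /bin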

RedHat chose to move the whole of / to /usr, and symlink to it from the 
old locations.  It's kind of backward.  It was done since /usr was often 
bigger than /, so makes sense for upgrades where migrating the other way 
would fail.  But as the long-term goal and for fresh installs, it's not 
really solving any real-world problem.  Just having everything on a 
single filesystem (or mounting everything in the initramfs) already 
solved those.


Historically, GNU/Hurd symlinked /usr to /.  It would have been a good 
goal for Debian GNU/Linux.  It needed an audit of potentially 
conflicting files to ensure the package dependencies were OK, but would 
otherwise be doable simply by making /usr a symlink in base-files (for 
new installs).


Whichever way you do the merge, hardcoded paths like /bin/sh and 
/usr/bin/whatever are part of the platform.  They will be required to be 
accessible using the old names indefinitely.  So the actual *need* for 
the merge is moot.


Regarding the comments people made about having separate / and /usr 
filesystems.  While it was common historically, there is little or no 
practical benefit to doing so in 2016.  Storage sizes make it 
unnecessary for pretty much all practical scenarios.  The two are 
managed by dpkg as a coherent whole; they are logically inseparable. 
They serve the same purpose.  Do reconsider whether it's actually 
necessary for you to do this, or whether it's merely habit.  Some 
historical practices continue to have value; others, including this one, 
do not.



Regards,
Roger


Re: [DNG] Debianising my uploaded version of netman.

2015-12-16 Thread Roger Leigh

On 16/12/2015 20:03, Edward Bartolo wrote:

On 15/12/15 21:00, Rainer Weikusat  wrote:

Some more remarks on the packaging efforts so far: The rules file in the
git repository is



override_dh_auto_clean:
  dh_auto_clean


I am very hesitant about deleting this particular line as it has been
added to clean the sources after using dpkg-buildpackage.


Are you *sure* about that?  You're overriding the default clean action, 
and then setting the custom action to run the default action... 
Deleting these two lines should be harmless, since it will default to 
running dh_auto_clean /anyway/, i.e. the lines here are redundant.



Regards,
Roger


Re: [DNG] gcc error: "error: unknown type name ‘GtkObject’"

2015-11-29 Thread Roger Leigh

On 29/11/2015 16:51, Edward Bartolo wrote:

Hi Aitor,

Thanks for taking some time to answer my question. I did as you
instructed me and successfully compile and run the test program.
However, the test program failed to close even though I closed its
window. I had to kill it using Ctrl + C.


g_signal_connect(G_OBJECT (window), "hide", G_CALLBACK (gtk_main_quit), NULL);


Terminates the main loop when the window is closed; replace "window" 
with your toplevel window here.  If you need to prompt to save unsaved 
state, then implement your own handler here which does that before 
quitting in place of gtk_main_quit.
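
A minimal sketch of such a handler (GTK+ 2.x; on_window_hide and the 
save logic are illustrative, not from your code):

/* Prompt to save any unsaved state, then terminate the main loop. */
static void on_window_hide(GtkWidget *widget, gpointer data)
{
    /* ...ask the user about unsaved state here... */
    gtk_main_quit();
}

/* Connected in place of gtk_main_quit: */
g_signal_connect(G_OBJECT(window), "hide",
                 G_CALLBACK(on_window_hide), NULL);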



https://github.com/rleigh-codelibre/ogcalc/blob/master/gtk/C/gobject-glade/ogcalc-main.c#L40

This is part of the "ogcalc" GTK+ (and now Qt) tutorial I wrote a decade 
ago.  It was on my people.debian.org page, but I've moved it to github 
since that's no longer available. 
https://github.com/rleigh-codelibre/ogcalc/  You might find the tutorial 
itself useful.  It's GTK+ 2.x only though; I haven't felt it worthwhile 
to update for 3.x given the breakage in backward compatibility, and I 
might drop GTK+ entirely in the future.  Run "make html" or "make 
latexpdf" in the doc directory to get the full tutorial text.  I 
converted it from LaTeX to Sphinx; you'll need python-sphinx installed. 
 I'll publish this online properly at some point.


Since I moved on from GTK+ several years ago, I re-wrote the tutorial 
examples with plain Qt, with UI files (like glade) and with PyQt.  I'd 
definitely recommend these over GTK+.


Also, given the debate recently in this thread regarding GNU Make, the 
GNU Autotools and CMake, readers of the list might be interested to see 
that I wrote the Qt examples to use all three systems separately so you 
can directly compare them, e.g.

  https://github.com/rleigh-codelibre/ogcalc/tree/master/qt/plain/make
  https://github.com/rleigh-codelibre/ogcalc/tree/master/qt/plain/automake
  https://github.com/rleigh-codelibre/ogcalc/tree/master/qt/plain/cmake
[note: my early days with cmake--it can be simpler than this!]


Regards,
Roger


Re: [DNG] lightweight graphics toolkits (was Experiencing with GtkBuilder)

2015-11-28 Thread Roger Leigh

On 28/11/2015 18:46, Godefridus Daalmans wrote:

If your programs depend on CDE, you could try to compile them against
lesstif2,
that's an LGPL implementation of Motif, on top of just the X libraries.

I don't know if it's binary-compatible or if it's actively maintained.


Not sure of its current status, but given the need for Motif is now very 
low, I would imagine it's not too active.


The other thing to point out is that the real Motif toolkit was opened 
up with the relicensing of CDE.  There was some initial activity to make 
it build and run on modern systems, but I'm not sure what the current 
state of some things is, e.g. client-side fonts and Unicode (previously 
it was old-style XLFD with no Unicode).  So the need for lesstif is now 
moot, unless there are lesstif bits to merge into motif.



Regards,
Roger


Re: [DNG] Experiencing with GtkBuilder

2015-11-27 Thread Roger Leigh

On 26/11/2015 22:50, Rainer Weikusat wrote:

Svante Signell  writes:

On Thu, 2015-11-26 at 19:36 +, Roger Leigh wrote:

On 26/11/2015 17:53, Svante Signell wrote:

On Thu, 2015-11-26 at 17:04 +, Roger Leigh wrote:

On 26/11/2015 15:00, Svante Signell wrote:

On Thu, 2015-11-26 at 15:33 +0100, aitor_czr wrote:



Hi, what's wrong with plain GNU make, and the GNU auto-tools?



Then you are a happy user of cmake. I'm working on porting packages for
GNU/Hurd, and every time I encounter packages using cmake the confusion
increases. It seems almost impossible to find out where to make changes,
and the build process is not traceable. (Maybe it's just me :( )


Have you looked at CMakeFiles/CMake(Output|Error).log?  Most stuff should be
logged there.  And if you need to trace, message(WARNING) will print a
trace along with the diagnostic.


Well, as long as you work at the configure.ac, Makefile.am and configure.h.in
level you won't have any problems with make/autotools. The rest is
mostly hidden (and by now stable) from a user perspective.


From a user perspective, auto* is a major PITA because ten open source
projects using auto* will require some power-of-ten combinations of
mutually incompatible versions of autothis, autothat and
autosomethingelse (but for as long as the developers don't have to read
the make docs, who cares!) and this doesn't even enter into "independent
developer trying to use the code" territory where things start to become
really nasty.

In case this still happens to be different for CMake, it will fix itself
over time as more (mutually incompatible) versions of that end up being
used by more software.


They already thought about this, and have a versioning and policy 
mechanism in place for forward and backward compatibility.  You specify 
the minimum required version, and you can also toggle deprecated or new 
features on or off on a per-feature basis.


While the autotools require the upstream developer to have all the tools 
installed at the correct versions in order to embed the generated build 
files in the release tarfile, cmake instead requires a recent enough 
version to be installed on the machine of the person building the software.


In practice, what this means is that the developer needs to pick a 
lowest common denominator cmake version for the platforms they wish to 
support, and if you use an older system you need to get a newer version 
of cmake.  As an example, I currently use 2.8.6 as a minimum.  It's 
supported by anything from the last few years, but I'm going to update 
to 3.2 in the next few weeks since I now need some newer features.  As a 
developer, the tradeoff between the features you can use and the version 
you require is yours to make though.
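
To illustrate the mechanism (the version and policy numbers here are 
only examples):

# First line of the top-level CMakeLists.txt: the oldest cmake version
# the project is prepared to support.
cmake_minimum_required(VERSION 2.8.6)
project(example CXX)

# Policies toggle old/new behaviour per feature; guard newer ones so
# older cmake versions still parse the file.
if(POLICY CMP0022)
  cmake_policy(SET CMP0022 NEW)
endif()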



Regards,
Roger



Re: [DNG] Experiencing with GtkBuilder

2015-11-26 Thread Roger Leigh

On 26/11/2015 20:02, KatolaZ wrote:

On Thu, Nov 26, 2015 at 07:14:35PM +, Roger Leigh wrote:

[cut]



That's correct.  You can put the build directory anywhere you like.
But, you do need to chdir to the build directory before running
"cmake /path/to/source", just as you do when running
"/path/to/configure" with autoconf.  You can of course run "cmake
.", and have the source and build directories be the same, but then
there's no separation and you end up writing the cache file to the
source tree (which can cause problems for any future out-of-source
builds since it will be reused).



I am pretty sure I have compiled stuff with cmake, with separate build
and source directories, and without the need to manually chdir
anywhere. I remember it was possible to set a cmake variable for this
purpose (was it anything like EXECUTABLE_OUTPUT_PATH?!?!?), but I
might be wrong on this. Maybe I am too accustomed with autotools :)


It's possible there's some variable which has the same effect, but I'm 
not aware of it myself (and I can't find it in the docs).


Stuff like EXECUTABLE_OUTPUT_PATH sets where executables are placed 
inside the build tree, e.g. a bin/ subdirectory instead of the default 
(the directory defining the target).


The default behaviour of both of these is identical to autoconf+automake!


Regards,
Roger


Re: [DNG] Experiencing with GtkBuilder

2015-11-26 Thread Roger Leigh

On 26/11/2015 17:53, Svante Signell wrote:

On Thu, 2015-11-26 at 17:04 +, Roger Leigh wrote:

On 26/11/2015 15:00, Svante Signell wrote:

On Thu, 2015-11-26 at 15:33 +0100, aitor_czr wrote:



Hi, what's wrong with plain GNU make, and the GNU auto-tools?


Nothing is wrong with "plain make", providing that it meets your needs.
   But often you want more than plain make can offer.  There's plenty to
criticise with the autotools, the baroque complexity being the primary
one.  CMake is a big upgrade from the autotools; it's vastly more
featureful and powerful, has better portability for modern systems, and
still works with make when generating Makefiles.  The autotools have
failed to keep up to date with modern portability requirements; the
capabilities CMake has to offer are unmatched at the present time,
though it also has its own warts.  After 15 years of autotools use, I
converted all my stuff to CMake over the last two years, and I'm not
looking back.


Then you are a happy user of cmake. I'm working on porting packages for GNU/Hurd,
and every time I encounter packages using cmake the confusion increases. It
seems almost impossible to find out where to make changes, and the build process
is not traceable. (Maybe it's just me :( )


Have you looked at CMakeFiles/CMake(Output|Error).log?  Most stuff should be 
logged there.  And if you need to trace, message(WARNING) will print a 
trace along with the diagnostic.
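
For example (the variable is illustrative):

# Prints the diagnostic plus a backtrace of the CMakeLists call stack:
message(WARNING "at this point FOO=${FOO}")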


I do think there is a lot of confusion about CMake.  But I don't think 
this is due to any intrinsic fault of the tool; it's more due to a 
general lack of familiarity with how it works (and why), since it's newer 
than the autotools, which we're all familiar with.  I remember how 
confused I was by the autotools back in the late '90s.  After spending 
days reading about all of the internals (autoconf, m4, make, automake, 
libtool), I finally got how it all fit together and was really pleased 
when I got my first project working with the autotools.  But that 
success took a *significant* investment of time and effort, and I 
continued to use and contribute to these tools for several projects over 
the next 15 years, primarily out of inertia and because they were "good 
enough" once you'd got past all the complexity.


A couple of years back, starting a new C++ project at work, I had to 
decide on a build system to satisfy all our requirements.  The autotools 
would have been a candidate were it not for also needing to build on 
Windows, so I took a fresh look at everything out there, including 
scons, cmake and several others.  CMake was the best of the lot.  As with 
the autotools, I spent several days learning it.  I took an existing 
autotooled project (schroot) which used a lot of autoconf/automake 
features including custom stuff, and converted it to use cmake over the 
course of a week.  After that investment of effort, I was 90% there in 
terms of understanding it.


For anyone at home with m4/autoconf/automake/libtool, there's a great 
deal of similarity with cmake.  The main difference is that it's vastly 
simpler--instead of having several tools, each using a different 
language and its own set of configuration files and macros, you have one 
language and one file (which can include others).  Other than that, most 
of the concepts (feature testing, defining targets) are exactly what you 
already know, just done slightly differently.  When working in a team, 
it's much better, since team members can make changes without being gurus, and 
*everything* is in the manual--it's much easier to get into than the 
autotools.



Regards,
Roger


Re: [DNG] Experiencing with GtkBuilder

2015-11-26 Thread Roger Leigh

On 26/11/2015 17:16, KatolaZ wrote:

On Thu, Nov 26, 2015 at 05:04:41PM +, Roger Leigh wrote:

On 26/11/2015 15:00, Svante Signell wrote:

On Thu, 2015-11-26 at 15:33 +0100, aitor_czr wrote:

I agree with you: using "cd build; cmake ../" with *the final purpose* of
installing the spinner in the system is contortionism.


Not really, it's directly analogous to VPATH builds with make (and
configuring from a separate build dir with the autotools).  It lets
you cleanly separate source and build (and have multiple build
trees).  It also makes cleaning the build tree nothing more than
removing the build tree.

(I use this feature of cmake all the time--source in a git tree on
an NFS-exported filesystem, with build trees on several different
systems so I can build and test on multiple platforms at once.)



Sorry but I don't understand. Are you suggesting that the only way to
separate build and source with cmake is to ask the user to manually
create the "build" directory, chdir into it and launch "cmake ../"? In
other words, that cmake does not allow you to do otherwise?


That's correct.  You can put the build directory anywhere you like. 
But, you do need to chdir to the build directory before running "cmake 
/path/to/source", just as you do when running "/path/to/configure" with 
autoconf.  You can of course run "cmake .", and have the source and 
build directories be the same, but then there's no separation and you 
end up writing the cache file to the source tree (which can cause 
problems for any future out-of-source builds since it will be reused).
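
Concretely, the whole workflow is just (paths illustrative):

  mkdir ~/build-myproj && cd ~/build-myproj   # build dir anywhere you like
  cmake ~/src/myproj                          # configure; cache written here
  make                                        # build entirely in the build dir
  cd ~ && rm -rf build-myproj                 # "cleaning" == removing the tree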



Regards,
Roger



Re: [DNG] Experiencing with GtkBuilder

2015-11-26 Thread Roger Leigh

On 26/11/2015 15:00, Svante Signell wrote:

On Thu, 2015-11-26 at 15:33 +0100, aitor_czr wrote:

I agree with you: using "cd build; cmake ../" with *the final purpose* of
installing the spinner in the system is contortionism.


Not really, it's directly analogous to VPATH builds with make (and 
configuring from a separate build dir with the autotools).  It lets you 
cleanly separate source and build (and have multiple build trees).  It 
also makes cleaning the build tree nothing more than removing the build 
tree.


(I use this feature of cmake all the time--source in a git tree on an 
NFS-exported filesystem, with build trees on several different systems 
so I can build and test on multiple platforms at once.)



Although any user on planet Earth will install the spinner in any OS. With or
without systemd.

But, what happens in the case of a developer? For each failed attempt, all the
garbage generated by cmake must be sent to the trash by hand, because there is
no 'cmake clean'.

So, keeping it separately makes sense in such cases.


Hi, what's wrong with plain GNU make, and the GNU auto-tools?


Nothing is wrong with "plain make", providing that it meets your needs. 
 But often you want more than plain make can offer.  There's plenty to 
criticise with the autotools, the baroque complexity being the primary 
one.  CMake is a big upgrade from the autotools; it's vastly more 
featureful and powerful, has better portability for modern systems, and 
still works with make when generating Makefiles.  The autotools have 
failed to keep up to date with modern portability requirements; the 
capabilities CMake has to offer are unmatched at the present time, 
though it also has its own warts.  After 15 years of autotools use, I 
converted all my stuff to CMake over the last two years, and I'm not 
looking back.



Regards,
Roger


Re: [DNG] Experiencing with GtkBuilder

2015-11-24 Thread Roger Leigh

On 23/11/2015 20:43, Rainer Weikusat wrote:

Roger Leigh  writes:

On 23/11/2015 13:50, Rainer Weikusat wrote:

Roger Leigh  writes:

On 23/11/2015 11:49, Nate Bargmann wrote:

* On 2015 23 Nov 00:53 -0600, aitor_czr wrote:

In my opinion, using C with lists will be the most suitable.


Have you looked at what glib provides?  It is an underlying library of
GTK and seems to contain many such solutions.


Using GLib for structures like linked lists (GList) etc. is a much
better solution than reinventing them unnecessarily.


I beg to differ here: Using a lot of complicated code in order to
accomplish something simple, ie, management of linked lists, may be
regarded as advantageous, eg, because it enables avoiding a (typically
negligible) amount of work or because it's more politically feasible but
$code doesn't become a 'better' solution for some $problem just because
it can be downloaded for free from the internet.


This is true up to a point.  Note the "unnecessarily" qualifier in
what I wrote--sometimes there might be a justification for reinventing
the wheel,


"The wheel" (leaving the issue that wheels are being 're-invented', ie,
new kinds of wheels are being developed all the time, aside) is a
technical device which has been in use without changes to the basic
design for a couple of thousands of years. In contrast to this, most
other human inventions, say, steam-powered locomotives, delay line
memory or CP/M, are very short-lived. This makes it a particularly
unsuitable analogy here.


OK, "unnecessary reimplementation" then.  Reimplementing basic stuff is 
wasteful on many levels.



1) Use GLib
2) Use a linked list implementation from another library
3) Write your own
4) Use C++
5) Use another language

As you say (1) isn't necessarily ideal, and this also potentially
applies to (2) depending upon its quality of implementation and how
well it matches your needs.  Where I would disagree is that (3) has a
"typically negligable" cost.  A linked list is conceptually simple,
and yes, not that much effort to write.


One thing I always liked about Jeff Bonwick's 'Slab Allocator' paper was
that he apparently didn't even think about implementing a generalized
library for handling doubly-linked lists instead --- he just wrote the
code manipulating the link pointers as needed.


Well, inside the guts of an allocator is exactly where direct pointer 
usage is required for the needed performance and flexibility.  But for a 
list in a frontend GUI, not so much.  It would be a waste of valuable 
time and effort when there are easier and simpler alternatives.  The 
goal is to write a GUI, not mess around with list implementation details.



If you take approach (4), and use a standard container type, the
problem is solved immediately.  "#include <list>", "std::list<std::string>
mylist". Job done.


One of the reasons why I stopped using C++ around 2001/2 (a lesser
reason; the more important one was that it was neither a requirement nor
particularly helpful) was that I always regarded it as a very nice
language with the millstone of an atrociously implemented standard
library around its neck, while I came to realize that a certain Mr
Stroustrup seemed to regard it as a rather atrocious language he could
only sell because of the wonderful library requiring it.


So this is pre-Standard C++ given the timeframe?  It was particularly 
bad around this time, and it took several releases of GCC3.x before the 
bugs in libstdc++ were shaken out (so ~2004-5 when it became widely 
available).  Not that the C++98 standard library is without its warts, 
but it's perfectly functional.  With C++11, the library became properly 
usable--direct initialisation of containers makes it vastly better.


If you came to me with a problem, and that required maps, lists etc. to 
solve, I would nowadays automatically discount C.  I'd look at C++ or 
Python first.  I'd have an implementation done and tested well before 
I'd even got started on it in C--where I'd be obliged to create all the 
low-level pieces before I even got started on the problem itself. 
There's no advantage to using C in a situation like this--other than for 
masochistic bragging rights--it doesn't do a better job, and it takes 
longer, requires more effort and will be more likely to contain bugs.


std::vector<std::string> list{"foo", "bar", "baz"};

Just works.  How much low-level C would be required to implement that? 
Lots.  Would it be worth the cost?  Almost certainly not.



Regards,
Roger


Re: [DNG] Experiencing with GtkBuilder

2015-11-23 Thread Roger Leigh

On 23/11/2015 18:13, Edward Bartolo wrote:

Hi All,

The backend, in a way, already handles lists. All I need to do is
extract the code and put it in a struct. This 'reinventing of the
wheel' will avoid having to content ourselves with what libraries
offer, whatever that may be. With netman, the goal was to avoid as many
dependencies as possible: using huge libraries defies that purpose.


Avoiding unnecessary dependencies makes sense.  But this thread was 
about GtkBuilder for the frontend, with some suggestions regarding GTKmm.


If you're using GTK+, you already have a transitive dependency upon 
GLib.  In consequence, using GLib features is not adding any additional 
dependency.  Not using e.g. GList doesn't reduce your dependencies--you 
are linking against GLib whether you want it or not, so if using it 
helps, it doesn't add any extra cost.


And if you're using GTKmm, you're going to be linking against the C++ 
standard library libstdc++ by default, so using the standard C++ 
containers like std::vector, std::list is also essentially "free".  And 
these integrate transparently with the GTKmm API, so you can use them 
directly in the frontend.



Regards,
Roger


Re: [DNG] Experiencing with GtkBuilder

2015-11-23 Thread Roger Leigh

On 23/11/2015 13:50, Rainer Weikusat wrote:

Roger Leigh  writes:

On 23/11/2015 11:49, Nate Bargmann wrote:

* On 2015 23 Nov 00:53 -0600, aitor_czr wrote:

In my opinion, using C with lists will be the most suitable.


Have you looked at what glib provides?  It is an underlying library of
GTK and seems to contain many such solutions.


Using GLib for structures like linked lists (GList) etc. is a much
better solution than reinventing them unnecessarily.


I beg to differ here: Using a lot of complicated code in order to
accomplish something simple, ie, management of linked lists, may be
regarded as advantageous, eg, because it enables avoiding a (typically
negligible) amount of work or because it's more politically feasible but
$code doesn't become a 'better' solution for some $problem just because
it can be downloaded for free from the internet.


This is true up to a point.  Note the "unnecessarily" qualifier in what 
I wrote--sometimes there might be a justification for reinventing the 
wheel, but that shouldn't be the norm.  The options here are likely 
limited to:


1) Use GLib
2) Use a linked list implementation from another library
3) Write your own
4) Use C++
5) Use another language

As you say (1) isn't necessarily ideal, and this also potentially 
applies to (2) depending upon its quality of implementation and how well 
it matches your needs.  Where I would disagree is that (3) has a 
"typically negligable" cost.  A linked list is conceptually simple, and 
yes, not that much effort to write.  But... this is code which needs 
writing, testing, debugging and then maintaining and which is not the 
core purpose of your program.  You need to reinvent this before you can 
even get to the meat of the problem you are trying to solve.  This is 
wasted and unnecessary effort, spent on something which is boring and 
counterproductive--it's just low-level infrastructure.  If it was just 
about one linked list, it might not be such a problem, but as you 
continue with the application, you'll need some other container, 
structure and/or algorithm which you'll /also/ need to reimplement, and 
again over and over until you've eventually got your own GLib 
equivalent.  A limited language like C results in this wheel reinvention 
due to its inherent lack of generality.  If you can reuse an existing 
tested and functional implementation, that's for the best.  It's a 
disservice to recommend pointless wheel reinvention--most C programs 
suffer from it to some extent, but that's not in any way an endorsement 
of the practice.  So I would say "use GLib GList if the cost of doing so 
is less than the cost of reimplementing it yourself".  And if the cost 
is greater, then reconsider the use of C before anything else.


If you take approach (4), and use a standard container type, the problem 
is solved immediately.  "#include <list>", "std::list<std::string> mylist". 
Job done.  On to the next problem.  This would be my recommendation 
over wasting time and effort working around the limitations of C.  Your 
goal is to write an application: just write it without being diverted 
down unproductive tracks which aren't needed to achieve that goal.  Using 
a linked list is not the goal, it's a trivial detail.  If your goal is 
to write the whole thing in C, that's fine, but do so understanding that 
it's going to take a lot more time and effort, and the likelihood of 
bugs creeping in will be much higher.
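
As a complete, minimal sketch of option (4):

#include <iostream>
#include <list>
#include <string>

int main()
{
    // The container problem is solved before you start: nothing to
    // write, test, debug or maintain.
    std::list<std::string> mylist{"wlan0", "eth0"};
    mylist.push_back("lo");

    for (const std::string& item : mylist)
        std::cout << item << '\n';
}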


If you use (5), e.g. Python, Perl or some other language with native 
lists/arrays, this becomes a non-issue--you just use the standard type 
for that language.



Regards,
Roger


Re: [DNG] Experiencing with GtkBuilder

2015-11-23 Thread Roger Leigh

On 23/11/2015 11:48, Hendrik Boom wrote:

On Sun, Nov 22, 2015 at 01:06:04PM +0100, Edward Bartolo wrote:

Hi All,

Is it possible to use classes and objects while using gtk2/3? I
noticed that only functions are used and that class use is avoided by
prepending functions with a group string. As far as I know, C can
still use structs that are attached to a set of data and a set of
functions, but without inheritance and polymorphism.

Edward


I believe gtk has its own run-time system of classes and inheritance
built into it in its C version.  The C++ version is called gtk+, and
pastes C++ inheritance on top of this.

(not sure of the details, but it's worth looking up the details in case
I get some of it wrong)


It's Gtkmm:

  https://developer.gnome.org/gtkmm/2.24/classGtk_1_1Widget.html

The C "interitance" is awful to use and awful to create your own classes 
with.  In all seriousness, you should never ever use it.  The GTK 
developers themselves invented a whole new language ("Vala") to 
preprocess into C simply to avoid its horrors.  It only took 15 years 
for them to realise that you can't robustly implement OO in plain C, but 
you can /generate/ it.  Doing it by hand is a maintenance nightmare. 
Note, this isn't an endorsement of Vala either, which has its own 
(different) set of problems, particualarly when it comes to debugging 
(since debugging generated code is always painful, and even more so if 
the generator itself has bugs).


Gtkmm is a thin wrapper around the C "classes", but it makes them into 
first-class C++ classes, along with all the properties, signals etc.  If 
you want to create your own classes, this is the way to do it.  "class 
MyWindow : public Gtk::Window".  Job done.  Override the parent's 
virtual methods, add your own properties and signals, it's simple and 
just works.  And it's debuggable.  You can break in the debugger and get 
a meaningful stacktrace showing exactly where you are in your own code, 
through the C++ wrappers to the C functions (and back again via signals).
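
A minimal sketch of such a subclass (gtkmm 2.x; the class and its 
behaviour are illustrative):

#include <gtkmm.h>

class MyWindow : public Gtk::Window
{
public:
    MyWindow() { set_title("Example"); }

protected:
    // Override a parent virtual: called when the user closes the window.
    virtual bool on_delete_event(GdkEventAny *event)
    {
        /* ...prompt to save unsaved state here... */
        return Gtk::Window::on_delete_event(event);
    }
};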


After using plain GTK+ and several of its bindings (Perl, Python, C++), 
I'd say GTKmm is the best binding in terms of its quality of 
implementation and ease of use; the Python bindings are a close second. 
 But if you're using C++, I'd suggest you look at Qt5 which is vastly 
better than GTK+, especially with the advent of GTK+3.  It's even nicer 
to use, and doesn't have the added baggage of being a wrapper.



Regards,
Roger


Re: [DNG] Experiencing with GtkBuilder

2015-11-23 Thread Roger Leigh

On 23/11/2015 11:49, Nate Bargmann wrote:

* On 2015 23 Nov 00:53 -0600, aitor_czr wrote:

In my opinion, using C with lists will be the most suitable.


Have you looked at what glib provides?  It is an underlying library of
GTK and seems to contain many such solutions.


Using GLib for structures like linked lists (GList) etc. is a much 
better solution than reinventing them unnecessarily.  That said, many of 
them are poorly implemented--e.g. GList appending is O(n) rather than 
O(1) if you only keep track of the first element; there's no "list head" 
to manage this (a major omission on their part).  So then you need to 
keep track of both ends by hand and keep them both up-to-date... and it 
ends up creating additional overhead on your part.  So it's far cruder 
than e.g. C++ std::vector<T> or std::list<T>.  If you're going to do a 
lot of insertion/deletion/traversal then you'll find the C++ containers 
both simpler and more efficient in most cases, not to mention more 
robust.  If you use e.g. std::vector<T> you can still use it with C 
functions which want a T[] or T* via its data() method.
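
To illustrate the append cost, the usual GLib workaround (a sketch) is 
to prepend in O(1) and reverse once at the end:

#include <glib.h>

/* g_list_append() walks the whole list on each call (O(n) per append),
 * so build the list backwards and reverse it once at the end. */
GList *build_list(guint n)
{
    GList *items = NULL;
    for (guint i = 0; i < n; i++)
        items = g_list_prepend(items, GUINT_TO_POINTER(i));
    return g_list_reverse(items);
}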



Regards,
Roger


Re: [DNG] Detailed technical treatise of systemd

2015-10-16 Thread Roger Leigh

On 16/10/2015 23:09, Neo Futur wrote:

A couple days back, I was playing with Trinity on a PCLinuxOS live CD.
Starting the applications **from the CD** was faster than doing the same
from a KDE4 desktop *from an SSD*.  At the time, I recall GNOME2 and KDE3
being slower than their earlier incarnations, but the sheer bloat and
inefficiency of the current forms of all these desktops is incredible.  In
Trinity, I was shocked that I could click on System->Konsole and get a
terminal... not in a second or two, or even half a second, but right there
and then.

  Confirmed!  And there's another important point: usability and
intuitiveness. I very often install linux for new users who have never
used linux before. Every time I installed KDE4 for them, they were
completely lost at first, then tried to customize it their way and broke
it all; then I put in KDE3/Trinity and they're happy and never need me again ;)


I'd go as far as to say that KDE3 was for me the pinnacle of desktop 
usability.  It was clean, discoverable, configurable, and was a coherent 
and consistent interface from the window manager to the file manager and 
all the applications.  It was very professionally done, a real credit to 
all the developers who worked on it, and an absolute pleasure to use. 
It's quite sad that it's all been downhill from this point on, not just 
for KDE but for all of its contemporary peers.


I switched to KDE4 with Debian, but it felt subjectively far slower, and 
I never got the point of activities, hated akonadi, and while the UI 
looked nice it never felt quite as right as KDE3, which fit like a 
glove.  The current stuff is almost unusable.



Regards,
Roger


Re: [DNG] Detailed technical treatise of systemd

2015-10-16 Thread Roger Leigh

On 16/10/2015 20:39, Rainer Weikusat wrote:

Neo Futur  writes:

I pretty much stopped reading after the following line in the
composition:
==
Fourthly, I will only be dealing with systemd the service manager (of
which the init is an intracomponent subset, and also contains several
other internal subsystems and characteristics which will prove of
paramount importance to the analysis), and to a lesser extent journald.
==

  Same here. If systemd was just an init system, I'd probably still
avoid it and fight it, but the main problem is that it's much more than
that, eating everything around it (
http://neofutur.net/local/cache-vignettes/L200xH133/arton19-b28db.gif
), and that is the main problem, for sure.


In case you like a nice piece of irony: Both GNOME and KDE perform like
shit. According to the opinion of both the GNOME and the KDE developers,
the reason for this must be somewhere in all the code they didn't
write. Hence, it has to be replaced. Especially considering that it's
all "But that's not how Microsoft does it!" stuff --- and you can't get
more fishy than that, can you?


The performance of GNOME3, KDE4 and Unity are all terrible.  Too much 
shiny bling and too little care for *real* usability.


A couple days back, I was playing with Trinity on a PCLinuxOS live CD. 
Starting the applications **from the CD** was faster than doing the same 
from a KDE4 desktop *from an SSD*.  At the time, I recall GNOME2 and 
KDE3 being slower than their earlier incarnations, but the sheer bloat 
and inefficiency of the current forms of all these desktops is 
incredible.  In Trinity, I was shocked that I could click on 
System->Konsole and get a terminal... not in a second or two, or even 
half a second, but right there and then.  That's how bad the current 
desktops are.  I shouldn't have been surprised at being reminded how 
snappy a user interface could be--it should be a standard expectation. 
I'm not even using low-end hardware; it's an 8-core 4GHz CPU with 16GB 
RAM and a 4GB GPU!  Using a current KDE system, I found the amount of 
sliding-fading-semi-transparent bling really got in the way of using the 
thing.  When every hovering popup on the taskbar slides in from a random 
direction as you move the mouse around, I found this massively 
distracting, and that's only the start of it.  The other major flaw is 
the use of animations and transitions; they typically only start after 
you initiate an action, leading you to wait until they complete to avoid 
getting confused as to what will happen; previously such actions were 
immediate.  The most jarring example I can think of is the alt-tab 
switching animation, where you have to wait while there's visible 
movement of the selection, but the modern kickoff menu is also a victim 
of this, and it's seen in many other places.  These little details all make 
the system less efficient and less predictable--they make you 
second-guess what will happen, since you're left waiting for the 
animations/transitions to catch up with your input.



Roger


Re: [DNG] libpam-xdg-support / libpam-systemd

2015-09-17 Thread Roger Leigh

On 17/09/2015 12:29, Daniel Reurich wrote:

On 17/09/15 21:01, tilt! wrote:

Hi,

On 09/17/2015 10:12 AM, Roger Leigh wrote:
 > On 11/09/2015 02:33, Daniel Reurich wrote:
 >> We could either use $USER_$SESSIONID or $USER/$SESSIONID to implement
 >> multiple sessions per user.
 >
 > This is definitely possible.  It would probably need some thought on
 > how to determine which "session" you are in when cleaning them up via
 > PAM or whatever.  Especially since it's not tied to the PAM session.

Sorry, i still don't get it. What session?


For an X Session (which is the scope of this use case), the client_id as
chosen by the X Session Manager.

  - See  https://en.wikipedia.org/wiki/X_session_manager and
ftp://ftp.x.org/pub/X11R7.0/doc/PDF/xsmp.pdf

That said, it should be general enough to cover any service that wants
to provide a session management capability.


We're working at a lower level than X session management though, aren't 
we?  While you can layer X session management on top of the lower-level 
stuff, for better or worse the XDG runtime stuff works for all login 
types, not just X.  So it needs to work for console logins, SSH, su/sudo 
etc. via PAM.



Regards,
Roger



Re: [DNG] libpam-xdg-support / libpam-systemd

2015-09-17 Thread Roger Leigh

On 17/09/2015 10:01, tilt! wrote:

Hi,

On 09/17/2015 10:12 AM, Roger Leigh wrote:
 > On 11/09/2015 02:33, Daniel Reurich wrote:
 >> We could either use $USER_$SESSIONID or $USER/$SESSIONID to implement
 >> multiple sessions per user.
 >
 > This is definitely possible.  It would probably need some thought on
 > how to determine which "session" you are in when cleaning them up via
 > PAM or whatever.  Especially since it's not tied to the PAM session.

Sorry, i still don't get it. What session?


The "session" terminology is ambiguous.  There are several:

1) The PAM session

   Created and ended with pam_open_session/pam_close_session.  A PAM 
session is typically tied to the lifetime of a process.  For example 
login/su/sudo/ssh where the process handling auth will open the session, 
fork off the user shell/program, and then when the user process ends, 
will close the session.  It can also run in separate processes (schroot 
does this), but this is less typical.  For example:


   login
      pam_open_session()  [ create xdg runtime dir / increment usage count ]
      fork()
        --> bash
      wait()
      pam_close_session() [ decrement usage count / delete xdg runtime dir ]


2) The XDG session

   The XDG spec only supports one session existing at a time, due to 
their shortsightedness when writing the spec.  But more than one could 
in theory be supported.


Currently, the XDG session, including the runtime dir, can be shared by 
multiple logins, i.e. separate PAM sessions.  This means that when you 
are running pam_close_session, the PAM module handling XDG cleanup needs 
to be able to tie itself back to the XDG session to be able to clean it 
up.  This may be non-trivial, though if 
pam_open_session/pam_close_session are running in the same process, 
you'll have this information in the environment.  If they run in 
separate processes, it will become hard, but it's a case which needs 
handling even if it only means "skip cleanup in this case".  This is 
where the "only one XDG session" limitation may have arisen--a cop-out!


So the outstanding question is: when my XDG cleanup PAM module is 
invoked via pam_close_session, how do I

1) know it is safe to clean up the runtime dir
2) know the location of this runtime dir

In the logind case, it's handling the usage count for (1) and ignores 
(2) since the location is hard-coded.


If every PAM session creates a new XDG session (i.e. a new runtime dir 
per PAM session), then you could use the PID of the login process as the 
unique session ID and use this in the runtime dir path.  This avoids any 
requirement to have a usage count, but at the expense of not sharing 
session state with any other XDG session.  [Not a big loss IMO, but 
worth mentioning.]


Some of the other posts in the thread mentioned the awfulness of 
logind/policykit/PAM and how we would be better off without them.  While 
I wouldn't argue over logind and policykit, PAM is essential and is used 
for all login/auth on Debian systems--it must be supported.  PAM has its 
limitations, but it's generally a well thought-out system which does its 
job well.



Regards,
Roger





Re: [DNG] libpam-xdg-support / libpam-systemd

2015-09-17 Thread Roger Leigh

On 17/09/2015 10:31, Tomasz Torcz wrote:

On Thu, Sep 17, 2015 at 11:23:10AM +0200, Jaromil wrote:

If rundirs are on non-volatile storage, my implementation can delete
them at system shutdown (assuming an orderly shutdown is performed).


if you use the /tmp prefix as default then you don't even need to handle
such a deletion in the default case: it all falls back into the handling
of /tmp, simplifying the site configuration for those not willing
to go into details, plus we avoid unlinks in /var/run


   But XDG_RUNTIME_DIR files should stay for the duration of the user's session.
If a user stays logged in for a couple of weeks, /tmp cleanup removing
files older than 1 month can remove user files mid-session.


This is why, as mentioned in my first reply, the tmpreaper needs 
configuring to explicitly *not remove* files under this location.




Re: [DNG] libpam-xdg-support / libpam-systemd

2015-09-17 Thread Roger Leigh

On 11/09/2015 02:33, Daniel Reurich wrote:

On 10/09/15 23:46, Roger Leigh wrote:

On 10/09/2015 12:11, tilt! wrote:


Since i already use $HOME/.config for configuration data,
which more precisely is the default setting of XDG_CONFIG_HOME
(according to [1]), i would like to consider the counterpart
XDG_RUNTIME_DIR for the tempfile i have described.

Unfortunately, the specification [1] does not provide a default
for XDG_RUNTIME_DIR as it does for XDG_CONFIG_HOME.

In Ubuntu, there used to be libpam-xdg-support (see [2]). It
sets up a directory in "/run/user", if necessary, at login
time of the user. More recently, this task has been assumed by
pam-systemd (see [3]).

Question open for debate:

On a systemd-free system, should an alternative exist which
assumes the task of initializing XDG envvars as described by
[1] in the way done by [3]?


This part of the XDG specification is pretty terrible.  It's poorly
specified, and the behaviour as specified is only implementable by
systemd (i.e. its lifetime by refcounting all the logins/logouts).  It
also precludes having more than one session per user.  By design...  It
wouldn't have required much effort to make this more flexible, but the
authors of this "specification" don't generally bother with
forward-looking flexible design.


We could either use $USER_$SESSIONID or $USER/$SESSIONID to implement
multiple sessions per user.


This is definitely possible.  It would probably need some thought on how 
to determine which "session" you are in when cleaning them up via PAM or 
whatever.  Especially since it's not tied to the PAM session.



There's no technical reason for /run/$user to live under /run.  It could
just as easily live under /tmp (or /var/tmp).  So you could default
it to use /tmp/$user or /tmp/xdg-runtime/$user or whatever and make this
the default.


Why not /var/run/xdg-runtime/$USER - it's a better place and more likely
to have quotas enabled than /tmp


If it's in /var/run it's in /run by default since /var/run is just a 
symlink to /run.


If you meant /var/tmp, this won't be cleaned on reboot, while /tmp will 
be.  Given the ephemeral nature of the user session data, /tmp is 
therefore preferable to /var/tmp.



So my recommendation here would be to
- place /run/user in a subdirectory of /tmp
- configure XDG_RUNTIME_DIR to use this location either in a PAM module,
or even by hardcoding the default to use this location--the
specification might not provide this default, but an implementation
certainly can.


indeed, although I'd argue that /var/run//$USER or possibly
/var/lib/xdg_runtime/$USER would be better than anything in /tmp.


I think the expected lifetime of the data would make these locations 
sub-optimal, as mentioned above.



Regards,
Roger




Re: [DNG] libpam-xdg-support / libpam-systemd

2015-09-10 Thread Roger Leigh

On 10/09/2015 20:33, tilt! wrote:

Hi,


in Ubuntu defaults are in X11/Xsession.d/60x11-common_xdg_path
which is shipped by x11-common and sourced by XSession

 >

BTW there is some XDG_* env setting also in X11/Xsession.d/00upstart


Ok, now if only we knew what to actually use as a default for
XDG_RUNTIME_DIR; it is a per-user setting, and defining the
prefix to be /tmp/run is not enough.

Come to think of it, my choice of

${XDG_RUNTIME_DIR:-/tmp/run/$USER}

is no good, because, just as an example, if $USER comes from
an AD domain or LDAP it might contain '\'; at the least,
$USER would have to undergo some transformation (escaping?) to ensure
that it's filesystem-safe.

No wonder XDG issues no default value, and it all disappears
into implementation code, it's a potato of above-average warmth
  - makes me feel sorry i brought it up in the first place :-D


You can always use the uid instead of the name?  One saving grace of 
this facility is that since it's entirely defined by XDG_RUNTIME_DIR, 
you can construct the path however you see fit.  The name is purely an 
internal implementation detail.  You could even do it by some other 
means such as the login number since startup e.g. /tmp/session/1.  Or a 
timestamp and PID: /tmp/session/201509102055-9230.  Or a random 
mkdtemp(3) directory.  My point is simply that the path doesn't need to 
contain the username, so you can use whatever makes most sense to make 
it unique.  Depending upon your chosen level of conformance with the XDG 
spec, you might also want to make provision for sharing it between 
sessions, so a basic uid would give you that, but in combination with a 
random part would allow you to have separate sessions (which isn't 
covered by the spec).
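
A sketch of the random-directory variant (locations illustrative; 
/tmp/session would need to exist already):

#define _XOPEN_SOURCE 700
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* mkdtemp() replaces the XXXXXX, creating a unique dir, mode 0700. */
    char dir[] = "/tmp/session/XXXXXX";
    if (mkdtemp(dir) == NULL) {
        perror("mkdtemp");
        return 1;
    }
    setenv("XDG_RUNTIME_DIR", dir, 1);
    printf("XDG_RUNTIME_DIR=%s\n", dir);
    return 0;
}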



Regards,
Roger


Re: [DNG] libpam-xdg-support / libpam-systemd

2015-09-10 Thread Roger Leigh

On 10/09/2015 12:11, tilt! wrote:


Since i already use $HOME/.config for configuration data,
which more precisely is the default setting of XDG_CONFIG_HOME
(according to [1]), i would like to consider the counterpart
XDG_RUNTIME_DIR for the tempfile i have described.

Unfortunately, the specification [1] does not provide a default
for XDG_RUNTIME_DIR as it does for XDG_CONFIG_HOME.

In Ubuntu, there used to be libpam-xdg-support (see [2]). It
sets up a directory in "/run/user", if necessary, at login
time of the user. More recently, this task has been assumed by
pam-systemd (see [3]).

Question open for debate:

On a systemd-free system, should an alternative exist which
assumes the task of initializing XDG envvars as described by
[1] in the way done by [3]?


This part of the XDG specification is pretty terrible.  It's poorly 
specified, and the behaviour as specified is only implementable by 
systemd (i.e. its lifetime by refcounting all the logins/logouts).  It 
also precludes having more than one session per user.  By design...  It 
wouldn't have required much effort to make this more flexible, but the 
authors of this "specification" don't generally bother with 
forward-looking flexible design.


There's no technical reason for /run/$user to live under /run.  It could 
be just as easily live under /tmp (or /var/tmp).  So you could default 
it to use /tmp/$user or /tmp/xdg-runtime/$user or whatever and make this 
the default.


[I argued for doing this originally, since /run/user would allow one to 
easily harm the system or other users by filling /run and/or /run/user 
depending on how the mounts are set up, which would prevent other users' 
sessions and system services working properly, but I was told this was 
not a problem.  And also, that /tmp could not be used because of 
tmpreaper.  However, it's in reality another case of RedHat-specific 
constraints and workarounds being used to dictate policy.  They have 
tmpreaper running by default, and don't have it set to ignore certain 
directories.  In their world, this means using /tmp is an unreliable 
nightmare.  However, using /tmp is entirely possible, and it's also 
possible even when tmpreaper is installed if it is configured 
appropriately (they considered this impossible...).  Obviously 
configuring an optional service is preferable to a poorly-configured 
default influencing your system design, but very little these people do 
makes much objective sense.]


So my recommendation here would be to
- place /run/user in a subdirectory of /tmp
- configure XDG_RUNTIME_DIR to use this location either in a PAM module, 
or even by hardcoding the default to use this location--the 
specification might not provide this default, but an implementation 
certainly can.



Regards,
Roger


Re: [DNG] Systemd Shims

2015-08-20 Thread Roger Leigh

On 20/08/2015 11:27, Rainer Weikusat wrote:

Roger Leigh  writes:

On 19/08/2015 17:39, Rainer Weikusat wrote:


[...]


static void saveFile(char* essid, char* pw) //argv[1], argv[2]
{
char *path;
FILE *fp;
unsigned p_len, e_len;

p_len = strlen(IFACES_PATH);
e_len = strlen(essid);
path = alloca(p_len + e_len + 2);

strcpy(path, IFACES_PATH);
path[p_len] = '/';
strcpy(path + p_len + 1, essid);

fp = fopen(path, "ab+");
fprintf(fp, IFACE_TMPL, essid, pw);
fclose(fp);
}

int main(int argc, char **argv)
{
saveFile(argv[1], argv[2]);
return 0;
}


I'm not picking on this post in particular out of the rest of today's
thread, but I did think this was a good example.  While I don't want
to act like a rabid C++ zealot, stuff like this really makes me
shudder due to the fragility and unnecessary complexity for something
which is really trivial.

While the relative safety and security of C string handling can be
debated, I do think the question needs asking: Why not use a language
with proper safe string handling and avoid the issue entirely?
It's only "safe" until it's refactored to break the existing
assumptions and make it accidentally unsafe.  The constants such as 2,
1 plus the strlen() calls are prime candidates for future bugs.  It's
not like this /needs/ to be done in C.


The 'constant 2' follows from the fact that the length of the result
string is the length of the path plus the length of the essid string
plus two more bytes/ characters, namely, the '/' and the terminating
zero.

The 'constant 1', in turn, follows from the fact that the essid has to
be appended after the string made up of the path and the added '/'.

That's how string processing in C happens to look like because of how
the language (library, actually) defines a string and because of the
operation supposed to be performed here (combine three different parts,
one with constant length).

Could you perhaps elaborate on what you were writing about? Minus trying
to start a language war, that is. The original author chose C. Because
of this, I wrote and posted a simple example how to do string processing
in C without relying on 'large enough' fixed size buffers.


The rationale for the use of the constants is fine.  But consider that 
the code does not document where those numbers come from, and fact that 
code to calculate the buffer size and the code to copy data into the 
buffer are separate steps.  This is where problems can occur.  Maybe not 
right now, after all you got it right when you wrote it, one would hope. 
 But when it comes to future modifications, you must update all the 
size calculations and constants in line with any changes to how the 
buffer is filled, and this is a prime candidate for mistakes and 
consequent crashes and/or buffer overflows.  When it comes to making 
modifications, you, or whoever is making the change, need to work out 
exactly what the intent of the original code was--i.e. re-derive all the 
constants and re-compute them correctly to match the new behaviour. 
This is often non-trivial depending on the nature of the string 
manipulation.


The fact that C code is a continual source of such quality and security 
defects is a clear indication that in general people *don't* get this 
right even when they think they are geniuses who know their own code. 
This example is short, but people continue to routinely screw up even 
stuff this simple, and it only becomes more likely with more complexity.


IMO, stuff like this just doesn't belong in any program that claims to 
be secure.  Especially in one that's setuid root doing privileged 
operations.


When I wrote the schroot(1) tool, which is setuid-root out of necessity 
since chroot(2) is privileged, I originally wrote it in C, but converted 
it to C++ primarily due to the security considerations, of which 
avoiding insecure and buggy string handling was the prime one. 
 It removes this entire class of bugs and security exploits at a 
stroke.  And if I .reserve() space in a string for efficiency, it's as 
efficient as all the manual C manipulations, but safe from overflow, and 
if I get it wrong it'll merely perform another allocation rather than 
being exploitable.  i.e. it's as efficient as string manipulation in C, 
only vastly less complex and with safety factored in.
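
As a sketch of the .reserve() point, reusing the variables from the 
earlier saveFile example:

// Pre-size the string for efficiency; the hint is only an optimisation.
std::string path;
path.reserve(strlen(IFACES_PATH) + 1 + essid.size());
path = IFACES_PATH;
path += '/';
path += essid;
// If the hint is too small, the string reallocates -- never overflows.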


All too often the attitude of C programmers is "it'll be perfect and 
efficient since I won't screw up".  But we all screw up at some point. 
And the question then is, "what are the consequences of screwing up". 
And here, the consequences are severe.  But with std::string, they are 
entirely avoidable.  So we both reduce the chance of a screwup by making 
the API match our intent exactly.

Re: [DNG] Systemd Shims

2015-08-19 Thread Roger Leigh

On 19/08/2015 17:39, Rainer Weikusat wrote:


#define IFACE_TMPL \
"auto lo\n" \
"iface lo inet loopback\n\n" \
"iface wlan0 inet dhcp\n" \
"wpa-ssid %s\n" \
"wpa-psk \"%s\"\n"

#define IFACES_PATH "/tmp"

static void saveFile(char* essid, char* pw) //argv[1], argv[2]
{
char *path;
FILE *fp;
unsigned p_len, e_len;

p_len = strlen(IFACES_PATH);
e_len = strlen(essid);
path = alloca(p_len + e_len + 2);

strcpy(path, IFACES_PATH);
path[p_len] = '/';
strcpy(path + p_len + 1, essid);

fp = fopen(path, "ab+");
fprintf(fp, IFACE_TMPL, essid, pw);
fclose(fp);
}

int main(int argc, char **argv)
{
saveFile(argv[1], argv[2]);
return 0;
}


I'm not picking on this post in particular out of the rest of today's 
thread, but I did think this was a good example.  While I don't want to 
act like a rabid C++ zealot, stuff like this really makes me shudder due 
to the fragility and unnecessary complexity for something which is 
really trivial.


While the relative safety and security of C string handling can be 
debated, I do think the question needs asking: Why not use a language 
with proper safe string handling and avoid the issue entirely?  It's 
only "safe" until it's refactored to break the existing assumptions and 
make it accidentally unsafe.  The constants such as 2, 1 plus the 
strlen() calls are prime candidates for future bugs.  It's not like this 
/needs/ to be done in C.


#include <fstream>
#include <string>

#include <boost/format.hpp>

void saveFile(const std::string& essid, const std::string& pw)
{
  std::string path(IFACES_PATH);
  path += '/';
  path += essid;

  std::ofstream fp(path);
  if (fp)
  {
    boost::format fmt(IFACE_TMPL);
    fmt % essid % pw;
    fp << fmt.str() << std::flush;
  }
}

No leaks.  No buffer overflows.  Safe formatting.  No file handle leaks. 
 And it's readable--the intent is obvious since there's no extraneous 
buffer memory management.  And it will compile down to something 
equivalent or even more efficient.


If you use std::string you can still work directly with C functions 
using "const char *"--just use the .c_str() method and you get a 
suitable pointer.


In my own code I use boost.filesystem, so rather than using "std::string 
path" you could then do


path p = path(IFACES_PATH) / essid;

and have the path concatenation handled directly, and then use 
p.string() to get a string back.  Even safer and more maintainable--work 
with path components directly rather than mangling strings.


void saveFile(const std::string& essid, const std::string& pw)
{
  path p = path(IFACES_PATH) / essid;
  std::ofstream fp(p.string());
  if (fp)
  {
boost::format fmt(IFACE_TMPL);
fmt % essid % pw;
fp << fmt.str() << std::flush;
  }
}

This is obviously easier and faster to write and maintain, so your 
energies are spent productively on the problem at hand, rather than 
faffing around with manual buffer management.


And if efficiency isn't the prime consideration (and given the context, 
it isn't), then an interpreted language is likely an even better choice.



Regards,
Roger


Re: [DNG] Devuan and upstream

2015-08-15 Thread Roger Leigh

On 15/08/2015 05:57, T.J. Duchene wrote:

On Fri, 14 Aug 2015 22:38:35 -0700
Isaac Dunham  wrote:




To elaborate on this, GCC 5.1 (I think) has changed the ABI for C++11
support.
Packages using C++11 need to be rebuilt with the new library;
libreoffice has already been rebuilt, but not KDE.


That's a very good point, Isaac.  C++11 is a very interesting revision,
although C++14 is technically the highest available standard. I'm never
a fan of rapidly changing standards, because they tend to be a mess,
poorly considered. I understand they plan another revision for 2017,
and I think they are nuts.


I don't.  I write C++ code for my day job, and I'd have to say that 
these revisions make C++ better than ever to write.  It's cleaner, 
simpler, and more maintainable.  Just last week I wrote some prototype 
code in C++11, and later had to change it to use C++98 features to 
comply with the project's requirements.  It doubled the line count and 
made it vastly less readable, and this was using only two features: auto 
types and range-based for loops.  The benefits it provides are not 
insignificant.
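
To make the comparison concrete (items and use() are illustrative 
stand-ins, not from the original code):

// C++11: auto and a range-based for loop.
for (const auto& item : items)
    use(item);

// The C++98 equivalent of the same loop.
for (std::vector<std::string>::const_iterator it = items.begin();
     it != items.end(); ++it)
    use(*it);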


When you say they are "nuts", are there any changes in C++14 or C++17 
which you have found to be ill-considered?  While no standard is ever 
"perfect", I have no complaints about C++11 or C++14.  Since these are 
ISO standards, the realities of the process means there's little scope 
for pushing in lots of poorly thought out changes at the last 
minute--most of the changes have been planned and implemented for many 
years.  There's only one feature I can think of which was bad--template 
export--and this was in C++98; and I think they learned their lesson 
from that one--never put in a standard a features which hasn't been 
implemented and tested in the real world.



Regards,
Roger


Re: [DNG] Devuan and upstream

2015-08-15 Thread Roger Leigh

On 15/08/2015 05:38, Isaac Dunham wrote:

On Fri, Aug 14, 2015 at 02:42:14PM +0200, Adam Borowski wrote:

On Fri, Aug 14, 2015 at 02:02:22PM +0200, Didier Kryn wrote:

 Seems to me there's something weird, both, in libreoffice depending on
just one single version of libstdc++, and in libklabxml being broken by this
version of libstdc++, be it the fault of kde or libstdc++ developpers.


That's the GCC-5 transition, unstable is broken as it's supposed to.  You
can use testing if you're bothered by uninstallable packages (or any other
form of the sky falling).


To elaborate on this, GCC 5.1 (I think) has changed the ABI for C++11
support.
Packages using C++11 need to be rebuilt with the new library; libreoffice
has already been rebuilt, but not KDE.


Technically, there's no ABI break.  Since GCC5, libstdc++ uses symbol 
versioning to provide the old and new std::string etc.  So existing C++ 
code will continue running without any changes.


The need to rebuild is to avoid incompatibilities due to transitive 
dependencies between libraries using the old and new ABIs, which means 
that in practice all C++ libraries need rebuilding using the new ABI.
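
For reference, the dual ABI in GCC 5's libstdc++ is selectable per
translation unit via a macro.  A sketch:

// Select the old (pre-C++11) std::string/std::list ABI for this
// translation unit; GCC 5 and later default to 1 (the new ABI).
#define _GLIBCXX_USE_CXX11_ABI 0
#include <string>

// std::string here refers to the old basic_string layout; both
// variants coexist in libstdc++.so, so old binaries keep working.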



Regards,
Roger
___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [DNG] Devuan and upstream

2015-08-15 Thread Roger Leigh

On 15/08/2015 00:19, James Powell wrote:

Slackware is maintained by 3 core people with extra help as needed. The
rest of the packages are pushed by the community at large contributing.
Devuan doesn't have to maintain every package possible. That's ludicrous
to think so.

Debian got in over its head by allowing this. Thousands upon thousands
of packages that required a committee to oversee.


While this might be true to an extent, you can also view it this way: 
the organisation and structure of Debian, the project, allowed it to 
scale to 1000 developers who could maintain 2 packages without 
stepping on each other's toes too often.  That's a considerable 
achievement--how many other projects have been able to achieve an 
equivalent scale?


Now I think there may have been benefits to separating the "base" system 
from "everything else", and indeed it was discussed on several occasions 
in Debian, but this never happened for various reasons.  In retrospect, 
I think it would have been a good move.



Honestly, what is needed by a distribution? Look at Slackware or BLFS
that's a lot of packages, but it's manageable by a small team. Why can't
Devuan follow suit? There doesn't need to be a bagillion packages
maintained by the main devs. If the rest need to be passed back to the
community at large, then do it. This also hits the point of why do we
need 5 different for things like, for example, SDL packages for -bin,
-lib, -doc, -dev, -src, and such? One package is easier to maintain than
five individual ones. It lightens the load significantly, especially for
the poor soul having to make 5 or more different scripts for 5 packages
from the same source tarball. Yes, it's nice to have small packages for
embedded stuff and small disks, but do you really want to raise the
workload of the maintainer that much?


I think this is barking up the wrong tree.  Maintaining multi-binary 
source packages isn't the huge burden you're making it out to be.  There's just 
one script to build all the binary packages, so the overhead on that 
side is unchanged.  The separate packages are generally just lists of 
files/directories to be copied into each package, and this is fairly 
easy to maintain.
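
As an illustration, for a hypothetical libfoo source package the
per-package lists are just debhelper .install files:

# debian/libfoo1.install -- the shared library package
usr/lib/*/libfoo.so.*

# debian/libfoo-dev.install -- headers and development symlink
usr/include/*
usr/lib/*/libfoo.so
usr/lib/*/pkgconfig/*

One debian/rules build produces all the binary packages; each .install
file merely lists which built files land in which package.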


Having everything in a single package is inflexible and wastes disc 
space.  Undoing all the effort that went into this in the first place 
would be a significant regression.


That said, I do think multi-binary package generation could be automated 
for the common cases.  It's pretty trivial to distinguish headers, 
libraries, documentation, translations, source, debug symbols from the 
filesystem locations they live in.  This is also something which has 
been discussed over the years.  The tool changes to accommodate it are 
not insignificant though.



Regards,
Roger

___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [DNG] A better default windows manager

2015-07-25 Thread Roger Leigh

On 25/07/2015 10:53, James Powell wrote:

CDE was the defacto desktop for many UNIX branded systems like IRIX,
Solaris, HP-UX, and others until many replaced it with Gnome2, Xfce,
KDE, and others.

Sun/Oracle replaced CDE with Java Desktop Environment back on Solaris 10
I believe when OpenSolaris was still being developed. I think Solaris
uses a more traditional DE now though.


I used CDE on Solaris back in 1997-98, and found it to be pretty usable. 
 It was the default DE on the university's UNIX systems, including all 
their HP-UX nodes which were essentially dumb X terminals (via remote X 
to a bigger Solaris system).  Not much different to XFCE to be honest in 
terms of its panel, though it did include a Glade-like UI development 
application, a file manager and other facilities as well.  At the time I 
thought it quite heavyweight, but today it's probably much smaller than 
even XFCE.


Solaris was using GNOME2 in the mid 2000s, which is why GNOME became 
much more polished and usable while the Sun usability folk were 
involved; it took a big dive in usability with GNOME3, when they were 
replaced by hipsters who only cared about mobile phones.  Looks like 
Solaris is still using GNOME2 today.


The main sticking point for CDE on modern systems is its lack of support 
for UTF-8 font handling.  IIRC its maintainers were working on that, but 
I'm not aware of its recent status.



Regards,
Roger
___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [DNG] GTK (was Will there be a MirDevuan "WTF"?)

2015-07-25 Thread Roger Leigh

On 25/07/2015 10:23, Jaromil wrote:

On Fri, 24 Jul 2015, Roger Leigh wrote:


I imagine the reason why Glib was written in C is because binding to
other languages is easier with C than C++.


I expect so. C is fairly straightforward.


This was certainly the original intent.  But having used the bindings,
my experience was that they were of wildly variable quality, had
varying degrees of wrapper coverage, and some were not exactly usable.
All in all, the number of production quality bindings could be counted
on one hand with some fingers to spare.  Maybe it's improved since I
last looked.


I share most of the criticism to GTK in this thread. I think the best
thing it really produced was glade/libglade, but then is it worthed?


Glade and libglade were very nice, and I used to use these extensively. 
 However, I still ran into nasty bugs here.  For example, if you load a 
glade widget tree and reparent into a custom widget, it loses all the 
accelerators (keyboard bindings).  And there were issues with the 
libglade bindings as well, both bugs and defects such as not being able 
to use signal autoconnection.  And later they made an incompatible break 
with GtkBuilder, and then with GTK+3, which was the final straw in 
dropping GTK+ for good.



Regards,
Roger
___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [DNG] GTK (was Will there be a MirDevuan "WTF"?)

2015-07-25 Thread Roger Leigh

On 24/07/2015 23:58, T.J. Duchene wrote:



On 7/24/2015 3:57 AM, Roger Leigh wrote:

First, thank you for the reply, Roger. I supremely appreciate it.


The C++ compiler objectively does a better job here.  It's simpler,
quicker to write, safer, easier to refactor.  And when I've done a
full conversion of GObject-based C code to C++, the C++ compiler has
picked up on real bugs which the C compiler hadn't picked up on
(because the type information had been deliberately cast away).  GTK+
programs can be riddled with bugs, without the developer being any the
wiser in the absence of compiler diagnostics.


That is true to some degree. I don't agree entirely.  It really depends
on the compiler and what settings are being used.  GCC is not the
world's greatest compiler by any stretch of the imagination, and there
are a lot of extraordinarily lazy FOSS programmers who ignore warnings,
and do not initialize variables before using them.


It's worse than this, and is nothing to do with the compiler and 
everything to do with unsafe practices in the libraries.  With 
GTK/GObject, suppose I use one of the macro casts for polymorphism such 
as GTK_WIDGET().  This macro casts a pointer to (GtkWidget*) with a 
little runtime checking.  But it's essentially just the same as a plain 
(GtkWidget*) cast.
This "works" if you use the correct cast macro with a variable of the 
correct type.  BUT.  What if you get it wrong?  You have explicitly cast 
away the real type and told the compiler to use a completely different 
type.  There is *no way* the compiler can issue a diagnostic if you got 
it wrong.  In C++ this simply doesn't happen; upcasting is completely 
transparent, downcasting with dynamic_cast is completely safe.  This can 
lead to long standing latent bugs in the codebase that are well hidden.
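
A minimal sketch of the contrast, using hypothetical widget classes
rather than the real GTK/gtkmm types:

#include <iostream>

struct Widget { virtual ~Widget() = default; };
struct Button : Widget {};
struct Window : Widget {};

int main()
{
  Button b;
  Widget *w = &b;                    // upcast: implicit, always safe

  // Downcast checked at runtime; it cannot lie about the type.
  if (dynamic_cast<Window *>(w))
    std::cout << "is a window\n";
  else
    std::cout << "not a window\n";   // taken: no silent type confusion

  // The C-style equivalent of a GTK cast macro is essentially
  // (Window *)w, which the compiler accepts however wrong it is.
}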


That's the start.  Now consider stuff like signals and closures, 
g_signal_connect, g_cclosure_invoke etc.  What happens to type-safety 
here?  Again, it's almost always completely lost.  What happens if you 
screw up?  Nothing.  You would get no diagnostic from the compiler, and 
this would result in bad things at runtime.  In C++ with libsigc++, 
Boost.Signals etc. the signals and handlers are templated and specific 
for the signal and handler types, even going so far as to be able to 
rebind and elide arguments.  If you get it wrong it will fail to 
compile, not crash and burn at runtime.  And with the gpointer (void *) 
data pointer in the signal, used to pass arbitrary data or object to the 
handler, that's completely unsafe--the handler has no guarantee it's 
going to get what it expected.  Again, the C++ handlers don't do 
that--it's completely safe and checked at compile time.
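
A minimal sketch with Boost.Signals2, one of the libraries mentioned
above:

#include <boost/signals2.hpp>
#include <iostream>

int main()
{
  boost::signals2::signal<void(int)> value_changed;

  // The handler's type is checked against the signal's signature at
  // compile time.
  value_changed.connect([](int v)
    { std::cout << "new value: " << v << '\n'; });

  value_changed(42);  // invoke all connected handlers

  // value_changed.connect([](const char *) {});  // would not compile
}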


This is not about "programmer laziness"--they were not in a position to 
be notified of any mistakes they made, at all.



No one can blame you
for disliking the fact that GTK tends to leak references  (at least from
warnings I seen on the console) when using Linux.  That's entirely out
of your hands.


No.  It means the programmer screwed up the reference counting.  *Every* 
instance is programmer error.  Since different object types handle 
reference counting differently and inconsistently, this is a very 
difficult thing to get right.  Do you get a newly-created object with a 
refcount of 1?  With a floating reference?  Do you need to sink it? 
It's a nightmare.  This is one reason the GTKmm bindings are nicer--they 
handle it correctly for you and give you a consistent and transparent 
way to handle it (Glib::RefPtr).  With no manual refcounting and no 
inconsistency, it's impossible to screw up.
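
A sketch of the gtkmm side (assuming the gdkmm pixbuf API):

#include <gdkmm/pixbuf.h>

void example()
{
  // Reference counting is automatic: copies share ownership, and the
  // object is released when the last RefPtr goes out of scope.
  Glib::RefPtr<Gdk::Pixbuf> pb =
    Gdk::Pixbuf::create_from_file("icon.png");
  Glib::RefPtr<Gdk::Pixbuf> copy = pb;  // refcount handled for us
}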



If I had to critique GTK based on what little you have
told me, I'd say it was probably a bad design, trying to force OOP in C.


It's definitely both of these things.  C is entirely unsuited to OOP if 
you care about the result being correct and robust.



That is not to say that OOP cannot be done in C, but as a programming
paradigm, I personally feel that OOP is a very flawed idea.  I've been
programming for a long time, and to be perfectly honest, I have seen
very few cases where OOP methodology actually makes things "better".
There are some nice features, but I don't buy the whole idea as actually
making software development easier. I'd sacrifice most of those features
in a heartbeat not to have to deal with exceptions.


I think this is at odds with general consensus to be honest.  OOP can 
certainly be overused where other approaches would be more appropriate, 
but in the general case it provides many concrete benefits, so long as 
you don't try to do polymorphism in C.



This too also has its limits, but put it this way: I haven't had a
memory leak in C++ code in a decade due to the use of shared_ptr, and
this isn't something I can say for my GTK+ C code, or GTK+ C code in
general.


I would  qualify th

Re: [DNG] A better default windows manager

2015-07-24 Thread Roger Leigh

On 24/07/2015 23:48, James Powell wrote:

CDE is a classic UNIX desktop, but it has long been since viable for
modern usages.

Xfce, in truth, was a modern replacement for it using Xforms since Motif
was, at the time, under a different license. It bears the same classic
layout minus some differences.

However, last I had heard CDE was still unstable with some operations.


My first thought on reading this was that this sounds just like CDE 
used to be!  (I was a CDE user back in the mid-late '90s.)


I used to really like CDE, and am very tempted to try it out again 
sometime soon.

___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [DNG] GTK (was Will there be a MirDevuan "WTF"?)

2015-07-24 Thread Roger Leigh

On 24/07/2015 23:24, T.J. Duchene wrote:



On 7/24/2015 5:03 AM, Didier Kryn wrote:


Hey T.J., you seem to contradict yourself when saying "C and C++
are strongly typed" and "Type checking is never C's job." :-)

Actually, yes, C and C++ are typed, but weakly. They silently do
type conversion in pretty much every instruction. One assumes the
programmers knows it and takes care...


I stand corrected.  You are quite right, it is "weakly" typed.  I assume
that everyone could figure out what I meant anyway.  It's been a long
time since language theory. I think the error came from spending too
much time reading about languages like C#.  Even Einstein made major
errors, so at least I am in good company.


This really only applies to the grandfathered-in C numeric types subset 
of C++.  As soon as you use your own types (or wrap the primitives), you 
have control over the conversions and can define your own policies.
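
A minimal sketch of the kind of conversion policy meant here
(hypothetical types):

struct Metres
{
  explicit Metres(double v) : value(v) {}  // no silent double->Metres
  double value;
};

void move_by(Metres m) { (void)m; }

int main()
{
  move_by(Metres(3.0));  // fine: the conversion is spelled out
  // move_by(3.0);       // would not compile: no implicit conversion
}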



Yes, encapsulation in scope is a good thing, but much of that can be
achieved in other ways without using objects. When you take languages
like C++, while the language itself is standardized, the name mangling
that the compilers introduce is not standardized at all.  That makes
prepackaged libraries and sometimes linking questionable unless you are
using the same version and compiler.  It is enough of a problem every
version of Linux recommends not updating your installation piecemeal (eg
if I update XFCE, I better recompile everything that is linked to XFCE -
even if everything is backward compatible).


While this was historically the case, this hasn't been true for GCC for 
many years.  The ABI has been stable for all of the 4.x series, and it's 
unbroken in 5.x (though 5.x does introduce a C++11 ABI in parallel). 
I've been producing C++ shared libraries since GCC 3.x times, and have 
yet to run into any breaking problems.



Microsoft got around it
somewhat by using their COM specification.


To an extent.  But if you want to actually expose a C++ interface in a 
DLL you're stuck--every Visual Studio version breaks the ABI by having a 
new runtime and no compatibility guarantees.  The PE-COFF binary format 
is so dated it's basically unusable for C++ interfaces (try freely using 
templated class members, which works fine on ELF and Mach-O platforms).



The major reason I object to OOP is exceptions.  They make it impossible
to write code with 100% predictable exit points.   When code can exit at
any point above and below your own, that means that things can leak.
"Add a garbage collector," you might say.  What happens when the
collector has a flaw and you do not know about it? Leak again.


Exceptions are not really related to OOP though; you can use them 
without OOP being involved.  With modern C++ and RAII, code is made 
exception-safe by releasing resources in destructors so leaks are not 
possible.  Before RAII and memory management techniques such as 
shared_ptr and unique_ptr became commonplace, leaks and exception-unsafe 
code were quite common, but that's now a rarity except in legacy code.
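
A minimal sketch of the idiom:

#include <memory>
#include <stdexcept>

void risky() { throw std::runtime_error("boom"); }

void f()
{
  auto buf = std::make_unique<int[]>(1024);  // owned by this scope
  risky();                                   // may throw at any point
}   // ~unique_ptr runs during unwinding: no leak on any exit path

int main()
{
  try { f(); } catch (const std::exception&) {}
}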



 I imagine the reason why Glib was written in C is because binding
to other languages is easier with C than C++.


I expect so. C is fairly straightforward.


This was certainly the original intent.  But having used the bindings, 
my experience was that they were of wildly variable quality, had varying 
degrees of wrapper coverage, and some were not exactly usable.  All in 
all, the number of production quality bindings could be counted on one 
hand with some fingers to spare.  Maybe it's improved since I last looked.


Regards,
Roger
___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [DNG] GTK (was Will there be a MirDevuan "WTF"?)

2015-07-24 Thread Roger Leigh

On 24/07/2015 05:14, T.J. Duchene wrote:



On 7/23/2015 10:41 PM, Isaac Dunham wrote:



Now then, as for Roger's comments, I find them confusing.

[snip]

The C API is overly complex and fragile.  You don't want to base your
project on a sandcastle.  And the expertise required to use it is
very high. Implementing dynamic typing and manual vtable construction
rather than using C++ which does proper type checking?  Just use C++!


C and C++ are both strongly typed, so I am assuming that he must be
referring to GTK using a pointer in C presumably to dynamically handle
function names and data for polymorphism.  He can't help it if GTK is
sloppy, but I can't make sense of his grievance either. Type checking is
never C's job, it's his!  That is explicit in the design of the
language.  Everyone who uses C knows that.  C++ is the same for that
matter.  Neither language checks your variable data for type.


I'm referring to the *GTK* "C API" here.  Not C in general.  If I create 
a GObject-based "class", either as a subclass of a GTK class or as an 
independent class subclassed from the root GObject, I have to *by hand* 
construct a virtual call table and *by hand* register all the class and 
object construction and release functions, along with the instance and 
class structures with the GType type system.  This is work which the C++ 
compiler would have done for me, with no added complexity, and no chance 
for me to introduce bugs into the overly complex mess.  On top of that, 
the C++ class would be completely type-safe in all respects where the 
GType system requires almost all type safety to be discarded to 
implement polymorphism.
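
By contrast, a C++ sketch of the same thing, where declaring a virtual
function is the entire "registration" step (hypothetical classes):

struct Widget
{
  virtual ~Widget() = default;
  virtual void draw() = 0;  // a slot in the compiler-generated vtable
};

struct Button : Widget
{
  void draw() override {}   // 'override' is verified at compile time
};

int main()
{
  Button b;
  Widget& w = b;
  w.draw();  // dispatched through the vtable the compiler built
}

The vtable construction, base-class chaining and type registration that
GObject requires by hand all happen invisibly, and correctly, here.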


The C++ compiler objectively does a better job here.  It's simpler, 
quicker to write, safer, easier to refactor.  And when I've done a full 
conversion of GObject-based C code to C++, the C++ compiler has picked 
up on real bugs which the C compiler hadn't picked up on (because the 
type information had been deliberately cast away).  GTK+ programs can be 
riddled with bugs, without the developer being any the wiser in the 
absence of compiler diagnostics.



I've always noted the GTK code tends to leak.  If  programmers with
experience can't be bothered to clean up after themselves, I'm glad GTK
is dying off.


It's really not so much "can't be bothered" as much as it's very 
difficult to manage all the manual reference counting.  In C++ with 
GTKmm it's all handled with smartpointers, making it impossible to screw 
up.  In regular C++ you have unique_ptr, shared_ptr etc. to do it for 
you as well.


The main point of my original post and in the above comments isn't that 
GTK+ and C can't work.  They can.  It's that the effort required to 
achieve a bug-free program is absurdly high, and in practice it's not 
even possible to verify that your code is actually bug free, and this is 
a problem that becomes increasingly intractable as the program size and 
complexity increases.  This is because the underlying unwritten 
assumption is that "the programmer will be perfect and not make 
mistakes".  It's quite obvious that no programmer is perfect, even great 
programmers make mistakes.  The contrast with C++ is that the assumption 
here is that the compiler will shoulder the burden and will fail to 
compile if there are any mistakes, since the presence of strong typing, 
RAII and smartpointers will guarantee correctness.  This too also has 
its limits, but put it this way: I haven't had a memory leak in C++ code 
in a decade due to the use of shared_ptr, and this isn't something I can 
say for my GTK+ C code, or GTK+ C code in general.



In fact, you have to be an expert in C++ compiler internals just to be
able to understand and use it effectively.

[snip]


That's somewhat true, but if you write C++ code PROPERLY - i.e. make
sure you references are clean, resources released, and you aren't
leaving any hanging exceptions, what he is claiming is pretty much a
non-issue in the context of any OOP language.  A C++ compiler is no more
mysterious that any of the other OOP crap we are forced to endure. C++
code is simply not as robust as C.   You can mitigate a lot of the
annoyance;  like exceptions that cause premature exits - but you are
never really rid of it.


This is not what I meant.

The sentence above should read "you have to be an expert in C++ compiler 
internals just to be able to understand and use [the GTK/GObject type 
system] effectively".  I'm referring to the fact that you have to 
construct GObject virtual call tables by hand, something which the C++ 
compiler does for you, and which a C++ programmer doesn't in fact need 
to care about (it being an internal implementation detail of the 
compiler's code generation).  That is to say, to effectively use a C 
library you have to be more familiar with the internal details of C++ 
than most C++ programmers, and for absolutely zero benefit.



I think a lot of GTK's problems could be solved if it were rewritten to
take advant

Re: [DNG] GTK (was Will there be a MirDevuan "WTF"?)

2015-07-24 Thread Roger Leigh

On 24/07/2015 06:37, Didier Kryn wrote:

Le 24/07/2015 04:52, Jude Nelson a écrit :


I don't care for it myself - because it is C++.

Minor correction:  GTK is written in C, and relies on GLib, which is
also written C.  However, it's open to debate as to how
similar/different C-plus-GLib is to C++ in practice.


It's quite clear what the differences are.  GLib broadly provides C 
equivalents for what you get in the C++ standard library for free.
  However, the GLib implementations are often naive, inefficient, not 
type-safe and are awkward and complex to use.  Doing all the manual 
type-casting, refcounting and manual memory management etc. is necessary 
in C for generic containers, but completely unnecessary in C++.


Consider something trivial, a vector of strings:

std::vector<std::string> list({"item1", "item2", "itemn"});
list.emplace_back("newitem");
for (const auto& item : list)
  std::cout << "Item: " << item << '\n';

How much code is required to implement the equivalent with GLib strings 
and a list container?  It's a good bit longer and with much more 
complexity to achieve the same thing.  And as we get more complex with 
stuff like GObject, it becomes much, much worse.



Regards,
Roger
___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [DNG] ZFS root help needed.

2015-07-05 Thread Roger Leigh

On 05/07/2015 00:38, James Powell wrote:

Hey guys. I know this is out of the norm, and yes I know all about the
CDDL, Btrfs, and other things, but I'm working with ZFS and need some help.

I'm try to get my root file system which is at mountpoint=legacy with
zpool ztank/linux/root dataset able to boot with the kernel.

I have spl and zfs built into the kernel with spl and zfs 0.6.4.2
against linux-4.1.1, and I'm booting with syslinux.

I don't know what to set on the APPEND line for the kernel options to
make my zfs zpool readable by the kernel to boot the system. I also am
uncertain as to what to set my fstab as though currently is it set to
the zpool dataset at / with file system type ZFS in rw mode.

Does anyone have any experience in this area to help me out?


I've only used ZFS root on FreeBSD; I've not set up booting from ZFS on 
Linux myself.


Did you work through 
https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem 
at least for ideas?  Note the "mountpoint" and "bootfs" properties in 
particular--mountpoint=legacy is probably wrong; you don't need entries 
in fstab for zfs, and this is likely to be particularly important during 
early boot.  It also gives guidance for configuring GRUB.



Regards,
Roger
___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [DNG] Dng Digest, Vol 10, Issue 12

2015-07-04 Thread Roger Leigh

On 04/07/2015 12:23, Nate Bargmann wrote:

* On 2015 04 Jul 05:02 -0500, John Jensen wrote:



A lot of software is built using GNU Autotools.  It is a very extensive
system that has a very steep learning curve in proportion with its
power.  The GNU documentation serves more as a reference manual than a
HOWTO, however, one site I found very useful was the Autotools Myth
Buster:

https://autotools.io/index.html

More packages are using Cmake, but unless the package you're interested
in is using it, you can safely avoid its details for now.  Also, if
you're developing in Qt you'll need to be familiar with Qmake.  These
are just the more frequently found alternatives to the Autotools.  The
alternative is writing Makefiles by hand.


Mostly agreed on all the points you made.  But WRT the autotools, they 
are such a baroque collection of tools, requiring knowledge of a minimum 
of five languages to use effectively (Bourne shell, m4, make, autoconf 
and automake), I can't really recommend learning them over learning 
CMake.  CMake is not the cleanest scripting language either, but you 
only need to learn one rather than five; on top of that, it's portable 
to more systems, more powerful and vastly simpler to learn.  Unless 
you're heavily invested in existing autotools-using projects, I don't 
think it's worth the pain, to be honest.  [I say this as an autotools 
user of 15 years, who switched his projects to CMake over the last two 
years.]


It's not that the autotools don't work, but they are still focussed on 
solving the portability problems of two decades back; CMake is much 
better at solving the portability problems of the present.
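
For comparison, a complete CMake build description for a hypothetical
single-file project is just:

cmake_minimum_required(VERSION 3.2)
project(hello CXX)
add_executable(hello hello.cpp)

The autotools equivalent needs configure.ac and Makefile.am at a
minimum, plus the generated machinery to ship with the tarball.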


And with respect to learning C, it's certainly useful.  However, I would 
highly recommend also learning other languages such as C++ and Python. 
In the free software world, C use is still widespread, but it's a 
45-year-old language which has been improved upon many times over by other 
languages, but despite that we continue to use it in situations where 
it's inappropriate.  Don't limit yourself.  If you've not used C++ 
before, try out C++11/14 and look at something like 
http://shop.oreilly.com/product/0636920033707.do -- it's a much nicer 
language than it used to be, and you can be massively more productive in 
it than with C.



Regards,
Roger
___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [DNG] [Dng] epoch feature request

2015-06-17 Thread Roger Leigh

On 16/06/2015 19:47, Anto wrote:

On 16/06/15 20:42, Roger Leigh wrote:

On 16/06/2015 18:14, Anto wrote:

I was not really sure if script similar to update-rc.d would be relevant
to epoch as the way the runlevel is being managed in epoch is different
from sysvinit. That is why I am looking for other options.


update-rc.d is an *interface* to update service registration by the
packaging system (or the admin).  It doesn't matter if you don't use
rc.d/init.d directories; it's just a name resulting from historical
practice.  You *do* need to use this mechanism to integrate into the
system.  Ignore its name, and just think of it as the public interface
to the init system service management (which is init-system-agnostic).

You can basically ignore the runlevel stuff (it's delegated to the LSB
headers for sysv-rc, so only of historical interest).  It basically
has four actions:
- defaults (add [with default runlevels])
- remove
- enable
- disable
So long as you cater for this, that's sufficient.  With sysv-rc,
insserv processes the LSB headers for runlevel information and
dependencies; you change them there if you want to adjust them. epoch
can deal with that side of things however it sees fit (it's entirely
an implementation detail)



Thanks a lot Roger for your explanation.

However, I still fail understand how to implement what you explained
without changing anything on any other packages that have daemons to be
managed by epoch. As I mentioned on one of my emails on this thread, the
implementation has always been to include the files specific to the init
systems into those packages. I still can not believe that this is the
only way. Could we not have some kind of man-in-the-middle (I believe
the programming term is API) to be used by all packages including the
init systems that are totally independent, to talk to each other? I am
sorry for those silly questions, but it would be great if you could
explains the reasons why the implementation is always like that and the
impacts if we would divert from that.

I think for me that would also explain why there is no other way to
avoid lock-in to systemd rather than forking Debian.


Historically, every new init system (or at least, new in Debian) has 
strived for some degree of compatibility with sysvinit.  Both upstart 
and systemd will run traditional init.d scripts, and both plug into 
update-rc.d to act on changes.  Had we been able to go forward with 
openrc support, we would have done the same there as well (I think 
patches were made for this already).  upstart didn't make use of the LSB 
dependencies; systemd does to an extent but in practice it doesn't work 
as well as it could.


In short, if epoch can't make use of the existing init scripts provided 
by all packages, it's going to have a very big struggle.  It absolutely 
must be a drop-in replacement for the existing system in order to be 
viable.  There's over two decades of accumulated knowledge in the 
existing scripts (not for all daemons, but particularly initscripts and 
other core scripts).  Witness the large number of regressions and now 
unsupported configurations with the systemd switch--this is largely due 
to dropping all that without reimplementing it properly.  Now systemd 
will use its own configuration files preferentially, and use the init 
scripts where necessary, so from its POV it's a relatively smooth 
transition where individual packages can add systemd units relatively 
easily.  epoch must consider doing something similar, since having a flag 
day where every package adopts a brand new format is simply logistically 
impractical.



Regards,
Roger
___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [DNG] [Dng] epoch feature request

2015-06-16 Thread Roger Leigh

On 16/06/2015 18:14, Anto wrote:

I was not really sure if script similar to update-rc.d would be relevant
to epoch as the way the runlevel is being managed in epoch is different
from sysvinit. That is why I am looking for other options.


update-rc.d is an *interface* to update service registration by the 
packaging system (or the admin).  It doesn't matter if you don't use 
rc.d/init.d directories; it's just a name resulting from historical 
practice.  You *do* need to use this mechanism to integrate into the 
system.  Ignore its name, and just think of it as the public interface 
to the init system service management (which is init-system-agnostic).


You can basically ignore the runlevel stuff (it's delegated to the LSB 
headers for sysv-rc, so only of historical interest).  It basically has 
four actions:

- defaults (add [with default runlevels])
- remove
- enable
- disable
So long as you cater for this, that's sufficient.  With sysv-rc, insserv 
processes the LSB headers for runlevel information and dependencies; you 
change them there if you want to adjust them.  epoch can deal with that 
side of things however it sees fit (it's entirely an implementation detail)


Just as a side note about epoch in general.  It looks like it's using 
numerical dependencies.  While this does in one sense give control over 
ordering to the admin, in reality numerical ordering is a real PITA.  We 
spent half a decade moving sysv-rc away from that to a dependency-based 
system due to all the horrible problems numerical ordering caused. 
Everything the numerical ordering does can be expressed more 
descriptively via dependencies as in the LSB headers, since numbers on 
their own don't describe why it's at that particular position in 
relation to the rest, and you quickly run out of "gaps" in the numbering 
system as you add more services.  The dependency information in practice 
gives more control to the admin, so long as it's possible to dump the 
ordered list so you can see how the final ordering was derived.
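
For reference, the LSB dependency headers in question look like this
(a hypothetical "mydaemon" service):

### BEGIN INIT INFO
# Provides:          mydaemon
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Example daemon managed by this script
### END INIT INFO

insserv reads these to build the dependency graph, rather than relying
on hand-assigned sequence numbers.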



Regards,
Roger

___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [Dng] printing (was Re: Readiness notification)

2015-06-15 Thread Roger Leigh

On 15/06/2015 14:57, Steve Litt wrote:


Just so I understand your answer in relation to my question, you're
saying that "Start after" means "start sometime after", not "start
immediately after". Right?


Yes, exactly this.  It's just a prerequisite condition: "b must start 
after a is started" and the converse requirement "b depends on a" is "a 
must be started before b", which in the graph would be


  b -> a

for both cases.  If other things happen between a and b being started 
and stopped, that's totally fine.


___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [Dng] printing (was Re: Readiness notification)

2015-06-15 Thread Roger Leigh

On 15/06/2015 14:35, Steve Litt wrote:

On Mon, 15 Jun 2015 08:46:13 +0100
Arnt Gulbrandsen  wrote:



I really appreciate upstart's way of declaring "start x after y". (I
believe systemd does the same, which I would like if it weren't one
of 500 features.)


I've been confused about this for a long time.

I know that every service has a "provides", that basically gives the
service a uniformly agreed upon name. And it has zero to many
"requires", which I believe means that the current service (call it A),
requires another service (call it B), so it won't start A unless B is
started. But then what does "after" mean? Does that mean *immediately*
after, or does that mean *sometime* after, and if the latter, how is
that different than "requires"?


It's not that much different AFAIK.

The LSB header specification also had an extension to do this 
(X-Start-Before and X-Stop-After).  These are no different to 
Required-Start/Required-Stop except for the fact that the dependency is 
reversed.  When it comes to constructing the directed dependency graph, 
these edges are inserted backwards so they end up being semantically the 
same--just a different way of having a different package provide the 
same input to the graph.  When you flatten the graph to get the 
ordered/parallelised lists, it's all the same.



Regards,
Roger

___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [Dng] OT: separate GUI from commands

2015-05-27 Thread Roger Leigh

On 27/05/2015 14:54, Laurent Bercot wrote:

On 27/05/2015 12:12, Hendrik Boom wrote:

I'm in the process of writing (yet) a(nother) editor and output
formatter,
and on reading this, I started to wonder -- just how could one separate
a command-line version from the UI?  I can see that the output
formatter can be so separated (and very usefully), but the actual
editing?


  Maybe I'm misunderstanding the question, as I'm unfamiliar with the
internal of editors. But IIUC, an editor is, by definition, a user
interface, so the command-line / UI separation looks impossible or
irrelevant to me here.


No, it's still very much needed here as well.

You have the data model (buffer of file contents, likely as separate 
lines) and the view of the data (presentation in UI) in any editor worth 
its salt.  As an example, take Emacs where you can have multiple 
windows/split windows with one or several views of a single file, each 
with its own current position and view in the file.  Doing this requires 
the data be completely separated from the view.  And this then lets 
macros manipulate the underlying data without any need for the UI--which 
is exactly what all the Emacs macros are doing under the hood e.g. M-x 
replace-regexp etc.  They might take input from the UI--current point, 
selection range etc., but the underlying implementation of the macro 
functionality is then operating on the data model.
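
A minimal sketch of that split (hypothetical types, not Emacs
internals):

#include <algorithm>
#include <cctype>
#include <cstddef>
#include <string>
#include <vector>

struct Buffer                  // data model: the file contents only
{
  std::vector<std::string> lines;
};

struct View                    // presentation: one per window
{
  Buffer *buffer;              // many views can share one buffer
  std::size_t point = 0;       // this view's own cursor position
  std::size_t top_line = 0;    // this view's own scroll position
};

// A "macro" needs only the model; no UI involvement at all.
void upcase_line(Buffer& b, std::size_t n)
{
  std::string& s = b.lines.at(n);
  std::transform(s.begin(), s.end(), s.begin(),
                 [](unsigned char c) { return std::toupper(c); });
}

int main()
{
  Buffer b{{"hello", "world"}};
  View v1{&b}, v2{&b};  // two views of the same buffer
  upcase_line(b, 0);    // both views now present "HELLO"
}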


Even a humble terminal emulator has (or should have) a model/view in its 
implementation, called the data layer and presentation layer, 
respectively.  See e.g. ECMA-35/48.  This is used for example to render 
and manipulate bidirectional script--the data layer is the logical 
order; the presentation layer is the rendered visible order.  Commands 
allow navigation in either layer with the cursor.



Regards,
Roger
___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [Dng] A novice attempt to speed up Devuan development

2015-05-17 Thread Roger Leigh
On 17/05/2015 22:38, Isaac Dunham wrote:
> On Sun, May 17, 2015 at 12:38:59PM -0400, Jaret Cantu wrote:
>
>> PATH="/usr/sbin:/usr/bin:/sbin:/bin"
>>
>> Probably not a wise decision.  I just threw in the one absolute path
>> since that is what was stopping me from booting (since it appeared
>> before the modified PATH, which I had added earlier).
>
> While that probably works for most people, it isn't quite up to the level
> that Debian expected formerly:
>
> Anything before mountall, networking, and mountnfs.sh should not use
> paths under /usr, because /usr could be a (possibly remote) mount.
> Configuring /dev with something in /usr is particularly bad;
> coincidentally, eudev was forked partly because systemd-udev started
> complaining about configurations with a separate /usr.
>
> On configuration files:
> Roger Leigh would probably know the proper way to handle this.
> You might try looking at the "microcode.ctl" package source, or sysvinit.

Note that a goal for jessie was to have /usr mounted before init starts. 
 /usr should be mounted in the initramfs; if you're not using an 
initramfs, you can't use a separate /usr any more.


I did write a set of patches for initramfs-tools and initscripts to make 
/usr get mounted in the initramfs, which are these:


https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=652459
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=697002
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=757083

These did get applied for Jessie, though I wasn't involved in doing 
this, but there was a screwup and I think it's currently broken if 
you're not running systemd:


https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=763157

If this isn't working for you, it's likely because in the above bug it 
was explicitly disabled for the sysvinit/initscripts case, rather than 
fixing whatever the problem was properly.  So really, that's what needs 
addressing I think.  In jessie you should be able to use /usr under all 
circumstances, and in the original patchset I wrote this is exactly what 
you got.



Regards,
Roger
___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [Dng] What do you guys think about "Suggest" and "Recommends" dependency?

2015-04-06 Thread Roger Leigh
On Thu, Apr 02, 2015 at 07:43:26PM -0500, T.J. Duchene wrote:
> 
> 
> > -Original Message-
> > From: Franco Lanza [mailto:next...@nexlab.it]
> > Sent: Thursday, April 2, 2015 5:36 PM
> > To: dng@lists.dyne.org
> > Subject: [Dng] What do you guys think about "Suggest" and "Recommends"
> > dependency?
> > 
> > Personally on debian i was using from date
> > 
> > APT::Install-Recommends "0";
> > APT::Install-Suggests "0";
> > 
> > in all my install apt.conf.
> > 
> > I don't like apt downloading and installing things that are not required
> but
> > just recommended or suggested, expecially in server or embedded envs, but
> > also on my desktop.
> > 
> > What do you think if we make this the default in devuan?
> 
> 
> [T.J. ] Personally, I love the idea.  However, in certain instances, like
> "-dev" packages or "build environments", where the recommended should really
> be followed.  I'd follow it wherever Perl or Python is involved as well,
> even if you aren't working on code, just to make sure everything works
> smoothly.

No, this is fundamentally incorrect.

Building should always be deterministic.  This means never ever using
Suggests or Recommends.  sbuild, for example, always sets
APT::Install-Recommends=false.  In addition, it will also drop
conditional dependencies so that only the first will be used, again
for determinism.

Less strict behaviour is fine when installing packages for a developer
to use on their development machine, but for automated/final builds
for deployment elsewhere, such as Debian package building, it's
essential that the necessary packages are completely and
unambiguously specified with plain Depends.
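
For reference, the equivalent knobs (system-wide, or per invocation):

# /etc/apt/apt.conf.d/99norecommends
APT::Install-Recommends "false";
APT::Install-Suggests "false";

# or for a single command:
#   apt-get install --no-install-recommends <package>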


Regards,
Roger

-- 
  .''`.  Roger Leigh
 : :' :  Debian GNU/Linux    http://people.debian.org/~rleigh/
 `. `'   schroot and sbuild  http://alioth.debian.org/projects/buildd-tools
   `-    GPG Public Key      F33D 281D 470A B443 6756 147C 07B3 C8BC 4083 E800
___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng