Hello, and I apologize for the very long message; it contains replies to three people. The responses were appreciated and I have replied where replies were warranted.
On Fri, Jul 28, 2017 at 2:39 AM, Joachim Durchholz <[email protected]> wrote:
> On 28.07.2017 at 04:35, R0b0t1 wrote:
>>> The earliest versions of the Rakudo Star build system started out by
>>> trying to use Git submodules to manage packages, but it quickly
>>> proved to be unwieldy and almost impossible to understand and
>>> maintain. Perhaps the submodule ecosystem has changed since then,
>>> though.
>>
>> Can you give an example of how submodules were insufficient?
>
> I don't know what was unwieldy for the Perl6 guys, but having to manage
> multiple repositories for a given task is always some extra steps when
> synchronizing new code to the public repositories. If one of those
> steps is forgotten, everybody will see repositories that won't work, or
> show weird problems.
>
> Submodules are built for the use case that repositories evolve
> independently of each other, subtrees for the case that they evolve in
> sync. It's possible that submodules were the wrong approach, or that
> subtrees didn't work well enough at the point in time, or that nobody
> found the time to set everything up well enough to make it really work,
> or for lack of knowledge of how to get submodules to work well.
>
> Since everybody's time is constrained, and Perl6 is still a work in
> progress, there is a long list of things that could be improved, so
> it's no surprise to see defects. The more important question is whether
> the defects are important.

That is a decent enough explanation. If anyone can chime in with specifics I am still interested, as I don't see how learning the dance for submodules is any different from learning the dance for Git in general. That explanation makes sense if submodules were adopted without being investigated in depth first. Something being hard to teach people to use is a valid concern, but I am still surprised at the amount of effort spent replacing submodules. I still think there is something I am missing.
>> Most issues I have seen that arise with submodules come from people
>> trying to treat the submodule directory in a way that is different
>> than other objects tracked by Git. If you treat it like a source file
>> you're tracking, most problems should disappear, at least in theory.
>
> Dunno what the problems were for Perl6.
>
> Versioning generated (non-source) files can indeed create problems, but
> that's independent of whether it's a submodule or not, so I don't know
> what problem you're seeing here.

No, I meant that I think people were expecting symlink- or directory-like behavior from submodules, based on the criticism I was reading. Submodules behave more like a tracked file and are fairly opaque.

>> There's still some unfortunate submodule command names.
>
> git's ergonomics are generally not that good. Projects still stick with
> it; I have several theories about the reasons but don't know which of
> them apply.
>
>> NQP? I was told that has bytecode in it. If possible I would request
>> that this is changed in the future.
>
> NQP is just the language that large parts of the Perl6 compiler are
> written in.
>
> Bytecode is what the AST is compiled to. One could write a different
> backend, translating either AST or bytecode down to machine code; it's
> just that nobody did it yet. It would be a pretty large project, so I
> don't count on that happening anytime soon (but then the Perl community
> can be pretty amazing and incredible things happen on a semi-regular
> basis, so I'm not counting on that *not* happening either).

I wasn't referring to the use of bytecode, but to the inclusion of bytecode blobs in the distribution. If there are handwritten pieces that must remain as blobs, then I would hope they are commented.
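To make the "treat it like a tracked file" point concrete, here is a minimal sketch of the submodule dance (the `component` and `super` repositories are invented stand-ins, and a real setup would point at a remote URL rather than a local path). The superproject records the submodule as a single gitlink entry, updated with the same add-and-commit steps as any other tracked object:

```shell
set -e
workdir="$(mktemp -d)"; cd "$workdir"

# A local stand-in for an independently evolving dependency.
git init -q component
git -C component -c user.email=demo@example.org -c user.name=Demo \
    commit -q --allow-empty -m "initial"

# The superproject records the submodule as a gitlink: one tracked
# entry whose "content" is a commit hash, much like a tracked file.
git init -q super
cd super
git -c protocol.file.allow=always \
    submodule add "$workdir/component" lib/component
git -c user.email=demo@example.org -c user.name=Demo \
    commit -q -m "add component submodule"

# Mode 160000 marks a gitlink (a commit pointer), not a directory
# or a symlink.
git ls-tree HEAD lib/component
```

Updating the dependency is the same dance again: pull inside `lib/component`, then `git add lib/component` and commit the new pointer in the superproject.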
On Fri, Jul 28, 2017 at 3:57 AM, Steve Mynott <[email protected]> wrote:
>> On 27 July 2017 at 09:13, R0b0t1 <[email protected]> wrote:
>>
>> Of course there is still the problem of communicating the release keys
>> to someone in the first place, but if the release key is on a public
>> keyserver and its ID is referenced on the project site somewhere, that
>> typically works well enough.
>
> The main problem is management of the *private keys* and passphrases.
> You shouldn't keep private keys on shared servers but on secured
> personal laptops, which are exactly the sort of systems which suffer
> data loss due to a lack of secure backups.

If you have no interest in security then you really have no business developing for a large, widely consumed project, and you are doing all of your users a disservice.

At the risk of sounding paranoid, I think it is very important to point out the likely target audiences of Perl 6: developers with expensive workstations, researchers using high-performance computing clusters, and web developers running very capable servers with high-bandwidth connections. Hopefully these people and machines register as attractive targets to a capable attacker. A capable attacker will have no trouble using Perl 6 as an infection vector, especially if they can also leverage content servers beyond your control that host your signatureless binaries.

Taking this further, I honestly expect that all Perl 6 developers who use Windows, or who do not take basic security precautions, have already been compromised by a non-state actor. You have giant targets painted on you. Perl 6 is not the only project for which this is a problem, although for most of the other projects I have found problems in, the issue was much subtler. Haskell only recently fixed the issue I already referenced, where it would download code over HTTP and in some cases execute it as root.
All of its binaries available for download had signatures, and it looked like the developers were security conscious. At least with Perl 6 I didn't find that the issue was already known and had been ignored.

> It's hard enough to create a Rakudo Star release with the present
> process (as you yourself have shown), and putting more obstacles in the
> way of people to do this by requiring keys isn't progress. I'd rather
> see an easier, simpler, more robust process. I'd rather see more
> people forking the repo and creating their own distros rather than
> rubberstamping an official release. Centralised, authorised releases
> aren't really in the Perl spirit.

Yes, this is what I am trying to address. My view is that the build process is convoluted and that simpler methods are within reach. I am trying to figure out why the build process is the way it is so that I, or someone else, might be able to simplify it. I think some of the friction comes from the fact that someone (I have no idea who), quite unfortunately, spent a lot of effort on something that could, or should, simply be deleted. I'm trying to figure out whether that is the case. The potentially grating title is due to my annoyance with things reliably refusing to work because most programs, even with a Unix focus, are not very portable. I apologize.

> We can sign tags, but if we aren't creating new tags like 2017.07.1 for
> NQP and are releasing 2017.07 with known issues, this really isn't good
> enough. I'd rather use working code which isn't signed than broken
> code that is.

If you don't want to do this then I think it is safe to call you lazy. Many projects sign every released patch, even when patches are released individually. Doing this inside of Git has the bonus of enforcing good project maintenance. To keep reality in line with your expectations, all you need to do is delay a release until the code is working (this already seems to be part of the release instructions).
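Per-release tag signing is a small amount of tooling, not a large obstacle. A sketch of what it looks like, using a throwaway passphrase-less GnuPG key and invented names (a real release process would use the project's actual key and tag scheme, and the key would live on the release manager's machine, not be generated on the fly):

```shell
set -e
# Throwaway keyring so the demo key never touches a real one.
export GNUPGHOME="$(mktemp -d)"; chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Release Manager <release@example.org>" ed25519 sign 0

# A stand-in repository for the release being tagged.
workdir="$(mktemp -d)"
git init -q "$workdir/release-demo" && cd "$workdir/release-demo"
git -c user.email=release@example.org -c user.name="Release Manager" \
    commit -q --allow-empty -m "2017.07.1 release"

# Sign the release tag; anyone holding the public key can verify it.
git -c user.email=release@example.org -c user.name="Release Manager" \
    -c user.signingkey=release@example.org \
    tag -s 2017.07.1 -m "signed release tag"

git verify-tag 2017.07.1 && echo "tag signature verified"
```

Consumers then run `git verify-tag` (or `gpg --verify` on a detached tarball signature) against the published public key before building.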
I would suggest that, among people who use Windows or Linux for more than tinkering with languages and toy projects, security is an integral part of the definition of "working."

> Signing isn't a Magic Security Bullet and coming to Open Source
> projects saying "Sign All The Things!" isn't really helpful.

Unfortunately you are wrong, and signing is a magic bullet. More properly, asymmetric-key cryptography is the magic bullet, for without it it would be exceedingly hard to ensure that broadcast media (like distributed software) have been received untampered with. Should the releases not be signed, my only adequate method of ensuring that I am receiving the release as the developers made it is to visit one of them personally; it goes further, because I am then also relying on the developers all visiting each other personally. Otherwise the project could have been tampered with during any of the exchanges.

>> What I want is to verify the code I run before I run it. From my
>> position the easiest way to do this was to try to grab code from Git.
>> The star repository was the only one that looked like it had
>> everything in one place and a way to use those things.
>>
>>>> Are there any signed releases, or do I have to do the equivalent of
>>>> curl|sudo?
>>>
>>> Extracting a tarball and manually running scripts isn't the same as
>>> running curl|sudo, since you have the chance to read what the scripts
>>> do before running them.
>>>
>>> There is one subsystem where this isn't the case and it's left as an
>>> exercise for the reader :-)
>>
>> NQP? I was told that has bytecode in it. If possible I would request
>> that this is changed in the future.
>
> Yes, we ship binary blobs to bootstrap. I spent a day trying to
> reproduce the current binary blobs and it's not possible.
> Well, maybe it is possible if you reproduce the exact directory
> structure (Windows) of the original build system and spoof the system
> clock, but it's an incredibly difficult process and quite frankly a
> waste of time right now, since there are more pressing issues like bug
> fixing, speed increases and wider adoption of perl 6.

I am glad that reproducible builds have been considered. Sadly, like security, this is something most easily accomplished when it is designed in from the start. If there is no time for it now, there may never be time for it. If you would like wider adoption of Perl 6, one way to achieve that is to show security-conscious programmers that Perl 6 cares about security. Reproducible builds could be a large part of this; there isn't more talk of them only because they are various levels of impossible for other projects. It sounds like Perl 6 is close. Please do not give up.

>>> The whole rakudo (and star) build process doesn't fit in well with
>>> any third-party systems (gentoo, cmake, whatever). Having Perl 5 as a
>>> dependency for Perl 6 seems to be more reasonable than using anything
>>> else.
>>
>> Why are those things unsuitable? I have used them enough to know that
>> they cannot solve every problem, but it has eventually become clear
>> to me that it is almost always easiest to add whatever functionality
>> is necessary to an already existing build system rather than trying to
>> manage it myself.
>
> Yes, and our existing build system is the one we have now. The way to
> fix it is gradually, by submitting small fixes, not by talking about
> replacing it with your pet build system.

If you think Git is my pet project, I am afraid you have mistaken me for Linus Torvalds. I am at the stage where I am trying to figure out what the build system does. I think I understand it, but if I do, then there are some silly things happening, and large projects typically do not do silly things, so I thought it best to check.
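On the reproducibility point above: most of the work comes down to pinning every source of nondeterminism, chiefly timestamps, file ordering, and ownership. A toy sketch of the idea using GNU tar (the artifact and timestamp are invented; the real NQP/MoarVM bootstrap would need the same treatment applied to its actual build steps):

```shell
set -e
workdir="$(mktemp -d)"; cd "$workdir"
printf 'pretend this is a build artifact\n' > artifact.txt

# SOURCE_DATE_EPOCH is the conventional way to pin embedded timestamps.
export SOURCE_DATE_EPOCH=1500000000

# Pin entry order, ownership, and mtimes; these are the usual sources
# of nondeterminism in packaged build output.
pack() {
    tar --sort=name --owner=0 --group=0 --numeric-owner \
        --mtime="@${SOURCE_DATE_EPOCH}" -cf "$1" artifact.txt
}

pack build1.tar   # "first build"
pack build2.tar   # "independent rebuild"

# Bit-identical output is the definition of reproducibility.
cmp build1.tar build2.tar && echo "reproducible"
```

Once two independent builders get bit-identical output, the binary blobs no longer have to be taken on faith; anyone can rebuild and compare.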
>> Please understand that even should I have the time to contribute, I
>> still have to read and understand the project. My message is more a
>> question of why the build system is the way it is.
>
> MoarVM is used to build NQP (a subset of Perl 6), which is used to
> build Rakudo (Perl 6).
>
> These sorts of technical issues are not really explainable in emails
> or IRC. You actually have to use it to understand it. Start with
> building R* from a tarball and then try using the github version to
> build the tarball. It's probably easier to understand if you start
> with 2017.01, for example, since the last two versions used awful
> hacks with NQP and MoarVM. You are probably safest using Debian stable
> (or oldstable). It will probably be hard and frustrating and not easy.
> You will probably end up spending more time on it than you thought.
> You will at least end up with more knowledge of the sort of things you
> are asking people to spend their free time on.

I'm not entirely sure I need an explanation, and I think I understood what was happening before I was able to see the process through to completion in a virtual machine. Most of the exposition in your previous message wasn't helpful, because what I asked was "Why is this the way it is?" and you responded with what it is, not why. In any case, I got it to work in Kubuntu and will now try to generate R* per the star repository, but if hacks like the one used now are to go away, then I think it is important to figure out what is actually needed. As far as I can tell, Git's submodules would fix everything, and they are a drop-in replacement; I am not sure why they are not being used. I guess at this point my explanations are failing me and I will have to demonstrate what I mean, which is regrettable, as I will likely not have the time for a while.
It was partially my hope that, if anything needed to be changed, the necessary changes would be so obvious that they would eventually be made piecemeal, as it makes sense to fix smaller portions.

> Security is desirable but not currently a main objective of this
> project, which is more about creating a reasonably fast and very
> expressive computer language. If your main interest is security then
> you might be better off with a project like OpenBSD, helping update
> their Rakudo ports. MoarVM has problems with their W^X system if you
> run certain of the roast tests, which needs investigation.

Security is something that needs to be designed in from the start; otherwise it can be almost impossible to accomplish. It is unfortunate that security and usability are often at odds, but contemporary software engineering practice does dictate some level of security as part of usability. If your hardware isn't available because someone else is using it without your consent, it's not there for you to use, is it?

The reliance on W^X-violating behavior is something I would like to see removed, as in every case I am aware of, what is wanted can be accomplished in some other way. In the simplest of cases you just make sure the memory is never writable and executable at the same time, which is all W^X requires; W^X doesn't prevent you from generating code, making it read-only, and then making it executable. Only under very restrictive mandatory access control systems are there blanket bans on executing memory that was at some point writable.

Someone in #perl6 helped me track down the most immediate issue, which is in the dyncall library. The implementation of dyncall does not need to be as it currently is (this is stated in its comments), but fixing it would be a contribution to dyncall.

I can say with no small amount of surety that if an OpenBSD developer ever read what you just said about security, they would laugh in your face and call you an ignorant boob.
Please try to separate what I am saying from any personal attack; I use strong language to indicate how severely misinformed you are. On another note, I am pleased that I remind you of an OpenBSD developer, though that discounts the amount of expertise they have. I have not contributed to the project due to a lack of hardware support. Should it ever gain better hardware support I would be ecstatic; I could finally quit trying to harden my Linux installations. I will need to remember to check up on its virtualization support periodically.

On Fri, Jul 28, 2017 at 4:26 AM, Aleks-Daniel Jakimenko-Aleksejev <[email protected]> wrote:
>
> TL;DR complain in a useful way by creating tickets or writing plans on
> how to improve things

I think the discussion occurring here is an even worse fit for a ticketing system than it is for a mailing list. This topic was started because what I wanted to suggest was very hard to define, and I thought I needed more information.

R0b0t1.
