Re: [Discuss] Recent discussions and our list archives
On Fri, Sep 20, 2019 at 10:46 AM Daniel Barrett wrote:
>
> We've had some discussion recently that speculated about an
> individual's psychological state. Just a friendly reminder that our
> discussions are archived and visible to the public and include the
> writer's name and email address.

Fortunately for the authors, said individual probably doesn't have the resources to be litigious. (To the best of my knowledge, they have been involved in very few lawsuits in their primary area of interest. I doubt this would bother them.) Still, your reminder is warranted as a general warning.

On a techno-humour bent, the emails weren't PGP signed and we all know how easy it is to forge email headers, plausible deniability, etc. Good luck explaining that to a judge and jury...

Bill Bogstad
(I don't even have a PGP key so I can deny everything.)

___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Cron fail lesson of the day
On Sun, Sep 15, 2019 at 12:49 PM David Kramer wrote:
>
> TFW your script works perfectly from the command line every damn time
> but fails when called from cron, even after you put the full path for
> all commands.
>
> ... then you remember cron may not be running the same shell

Your web site hosting account doesn't allow shell logins, but does allow cron jobs (it emails output back to you). You enter a job to run 2 minutes in the future and wait to be emailed the results. No output received. You try just running the date command. No output received. You give up and go to bed. And find the output in your email the next morning.

You live in the Eastern timezone and the server lives in the Pacific timezone. Instead of 2 minutes in the future, you were requesting 3 hours and 2 minutes.

It's been too long since I did remote system administration...

Bill Bogstad
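The gotcha above is easy to check with GNU date: convert the intended local run time into the server's zone before writing the crontab entry. A minimal sketch, using standard tzdata zone names and the date of the post:

```shell
# A cron entry runs on the SERVER's clock. 22:00 Eastern on the night in
# question is only 19:00 on a Pacific-time server, so "2 minutes from now"
# must be computed in the server's zone, not yours. (GNU date syntax.)
TZ=America/Los_Angeles date -d 'TZ="America/New_York" 2019-09-15 22:00' +%H:%M
# prints 19:00 -- the 3 hour offset behind the overnight surprise
```

In practice, running `date` once via the hosting provider's cron (as the poster eventually did) tells you the server's zone directly.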
Re: [Discuss] Reliable external HD enclosure for Linux?
On Sat, Aug 31, 2019 at 1:45 AM Marco Milano wrote:
>
> On 8/29/19 9:21 PM, Steve Litt wrote:
> >>
> >> Thanks,
> >
> > What makes you sure it's the enclosure and not the drive? As long as I
> > can remember, Seagate drives had the reputation of unreliability.
>
> That may have been the case maybe 10/15 years ago, but not anymore.
> Their high end and high capacity drives are very reliable.

Data, not anecdotes. Six years of data on hard drive reliability by a large-scale user:

https://www.backblaze.com/b2/hard-drive-test-data.html
Re: [Discuss] full disk backups
On Wed, Aug 21, 2019 at 1:39 PM Kent Borg wrote:
>
> On 8/18/19 1:36 AM, Steve Litt wrote:
> > When I reformat my thumb drives as EXT4 they're much more reliable.
>
> Something I've noticed recently: When my link-dest rsync encrypted back
> up to a recent model USB-3 WD disk is done, unmounted, and ready to be
> unplugged...I can still feel it vibrating and doing something.
>
> I've decided this must be some useful housekeeping, and have waited
> until it quieted down.
>
> Anyone know what it is?

Guessing... Can you tell if this is caused simply by the spindle rotating, or are the heads actively seeking?

If the drive has some kind of auto spindown after going unused for a while, then it is likely to rotate for quite a while after you unmount it. That may be what you are seeing. Unplugging it early shouldn't be a problem.

If it is actively seeking, that seems a little strange. It is unlikely that the drive's write cache can hold very much data. Still, it is possible that if the final writes were not contiguous, it might take a little while (maybe a second or two?). Another thought is that it might still be actively trying to remap bad blocks. That could take a while (multiple seconds?) if it is having to retry.

If you aren't already monitoring the drive for bad blocks (smartctl), you probably should. It is harder to do on a USB drive; but if you fiddle with the right options to the software, you can usually make it work. You really want to know if your backup drive is failing...

Good Luck,
Bill Bogstad
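The "right options" for smartctl on a USB disk usually mean telling it about the USB-SATA bridge. A hedged sketch (the device node /dev/sdX is a placeholder; the command is printed rather than executed, since it needs root and a real disk):

```shell
# Many USB-SATA bridges hide SMART data unless smartctl is told to use
# SCSI/ATA Translation (-d sat). -H prints overall health, -A the
# attribute table (reallocated/pending sector counts, etc.).
dev=/dev/sdX                      # placeholder: your USB disk's node
cmd="smartctl -d sat -H -A $dev"
echo "$cmd"                       # run the printed command as root
```

If `-d sat` doesn't work, `smartctl -d test /dev/sdX` can report which bridge type smartctl detects.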
[Discuss] couple of cheap (free?) Asterisk telephony cards
So a local call center near me closed down and was giving away all of their office furniture for free. By the time that I got there, they seemed to have gotten rid of any interesting tech. There were a couple of add-in cards lying around that they said that I could have. I think they are some form of quad-span T1 cards. Markings are:

Front side of board: Compuamt 2011/4/EC
Back side of board: Compuamt 2011/4/EC
  Asterisk 2-4 Port E1/T1/J1 PRI PCIE Card
  www.asterisk.com

The boards appear to be identical, and unlike most of the boards that I can find online they have 4 DIP switches labeled JPO on the front. I know some people on this list work with Linux-based PBX systems, so maybe someone familiar with the product line can clarify what they are.

I have no use for them and would be happy to pass them on to a good home. I only picked them up because I thought that they might be useful and they were going to end up in the trash if I didn't take them. I have no idea if they work, so buyer/taker beware.

Bill Bogstad
Re: [Discuss] Flash player on Google Chrome and Fedora 30
On Thu, May 16, 2019 at 10:37 AM Jerry Feldman wrote:
>
> Good idea

Not if you want to protect her from malicious web sites (or just ad networks). All of the major browser vendors are slowly deprecating Flash. Even Adobe is planning to stop putting out patches at the end of 2020.

https://theblog.adobe.com/adobe-flash-update/

Rather than trying to use a general purpose browser to access apps that require Flash, I suggest that you install an ESR release of Firefox which still supports Flash. Tell your wife to use that browser only for the app that requires Flash. You can install the current Firefox at the same time and have her use it for everything else. You might as well go back to Fedora 30 while you are at it.

I did this for a while, both to continue Java plugin support (printer/scanner support) as well as old Firefox extensions that I was still using. I've since eliminated both use cases, but it can work for you.

Bill Bogstad
Re: [Discuss] WSL 2
On Mon, May 6, 2019 at 9:43 PM Bill Bogstad wrote:
>
> On Mon, May 6, 2019 at 5:40 PM Rich Pieri wrote:
> >...
> > WSL 2 will ship with a fully GPL compliant (including patches),
> > reasonably current Linux kernel running in a lightweight virtual
> > machine. Reasons cited are better performance, particularly filesystem
> > performance, and native Docker capability.
> >
> > https://arstechnica.com/gadgets/2019/05/windows-10-will-soon-ship-with-a-full-open-source-gpled-linux-kernel/

This 60 minute video from Microsoft answers a number of my questions:

https://www.youtube.com/watch?v=lwhMThePdIo

The answers to my questions are inline below...

> Kind of short on details... Is this going to be some kind of highly
> optimized Hyper-V type environment with a customized
> Linux kernel?

Yes. Using an optimized Hyper-V system developed for servers.

> Is it going to still have full filesystem visibility
> from both operating systems? How?

Both Windows and Linux will support the Plan 9 file server protocol as both a server and as a client. This gives full filesystem visibility from both sides. For example, their init automounts /mnt/c via this mechanism.

> Will I still be able to run
> Windows binaries from a Linux command line?

Yes. They use the Linux kernel's ability to provide user-mode interpreters (binfmt?) to start the Windows binaries and connect IO between the two sides. This works for both command line and graphical Windows programs. You can safely edit files in either direction.

> Will I be able to run
> all standard system daemons without hacks? (i.e.
> will init actually be systemd (or at least be replaceable with systemd
> or at least some Linux native init)).

They are still using their own init and nothing was said about doing anything else.
They mentioned that the way they are running multiple simultaneous distributions is by using Linux's privileged container system, with a single instance of the Linux kernel supporting all of them, which probably requires the special init they provide. As they are privileged containers, they said that you could break out into other containers (underlying Linux system?). So starting a new Linux distro in WSL2 is like starting a distro via lxc in Ubuntu.

They also said that they are using ext4 with a virtual HD for the root filesystem. There is even a way to move your distro back and forth between WSL1 and WSL2 and everything should work more or less the same (except for the stuff that WSL1 just can't do: some system calls, no FUSE filesystems, etc.).

Bill Bogstad
Re: [Discuss] WSL 2
On Mon, May 6, 2019 at 10:30 PM Rich Pieri wrote:
>
> On Mon, 6 May 2019 21:50:22 -0400
> Bill Bogstad wrote:
>
> > It looks like both the current emulated environment as well as full
> > Linux kernel will be usable on the same system.
>
> Yup. Which, amusingly enough, grants end users a freedom that is absent
> from many contemporary Linux distributions: running with or without
> systemd and all of its hard-coded interdependencies.

I am ambivalent about systemd. I just want things to work when I install them, which for the most part is the case with systemd at this point. On the other hand, I don't like WSL1's bare shell environment either. Even the current incarnation, which allows daemon processes to continue to run after you close your bash window, is a pain. I had to write my own shell script to start "necessary" daemons (cron, rsyslog, sshd, cups), which I start from a Windows batch file when I log in to Windows. I never log out of Windows, so I effectively get the benefits of the automated system upkeep which my Linux distribution (Ubuntu) provides. Unfortunately, some of the standard cron scripts on Ubuntu check the system for the status of systemd before running, and since WSL1 doesn't use systemd, they fail.

I'm hoping that WSL2 will allow a "full" Ubuntu distribution to be run so I can concentrate on getting work done rather than figuring out why "standard" things don't work. If they are going to be supporting a full Linux kernel environment, it seems to me that it is a pretty short step to allowing an arbitrary init process, whether it be systemd, SysV init, or something else.

Bill Bogstad
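The kind of "start the daemons by hand" script described above might look like the sketch below. The service names are Ubuntu's (note the init script is `ssh`, not `sshd`); `echo` is left in so the sketch is harmless as written — drop it to actually start things:

```shell
#!/bin/sh
# Poor-man's init for a systemd-less WSL1 environment: start each daemon
# via its SysV init script. 'service' works here precisely because it
# falls back to /etc/init.d when systemd is absent.
services="cron rsyslog ssh cups"
for svc in $services; do
    echo sudo service "$svc" start    # remove 'echo' to really run it
done
```

A Windows batch file can then invoke this at login with something like `wsl.exe sh /path/to/start-daemons.sh` (exact invocation depends on your WSL setup).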
Re: [Discuss] WSL 2
On Mon, May 6, 2019 at 5:40 PM Rich Pieri wrote:
>...
> WSL 2 will ship with a fully GPL compliant (including patches),
> reasonably current Linux kernel running in a lightweight virtual
> machine. Reasons cited are better performance, particularly filesystem
> performance, and native Docker capability.
>
> https://arstechnica.com/gadgets/2019/05/windows-10-will-soon-ship-with-a-full-open-source-gpled-linux-kernel/

Kind of short on details... Is this going to be some kind of highly optimized Hyper-V type environment with a customized Linux kernel? Is it going to still have full filesystem visibility from both operating systems? How? Will I still be able to run Windows binaries from a Linux command line? Will I be able to run all standard system daemons without hacks? (i.e. will init actually be systemd, or at least be replaceable with systemd or at least some Linux-native init?)

[I know you don't know the answers, but that article is pretty vague...]

Bill Bogstad
Re: [Discuss] WSL 2
On Mon, May 6, 2019 at 9:43 PM Bill Bogstad wrote:
>
> On Mon, May 6, 2019 at 5:40 PM Rich Pieri wrote:
> >...
> > WSL 2 will ship with a fully GPL compliant (including patches),
> > reasonably current Linux kernel running in a lightweight virtual
> > machine. Reasons cited are better performance, particularly filesystem
> > performance, and native Docker capability.
> >
> > https://arstechnica.com/gadgets/2019/05/windows-10-will-soon-ship-with-a-full-open-source-gpled-linux-kernel/

I just found a Microsoft dev blog which has more info about this:

https://devblogs.microsoft.com/commandline/announcing-wsl-2/

It looks like both the current emulated environment as well as the full Linux kernel will be usable on the same system.

Bill Bogstad
Re: [Discuss] ssh_dispatch_run_fatal error with FIOS router?
On Fri, Dec 7, 2018 at 8:53 PM Daniel Barrett wrote:
>
> Anybody use Verizon FIOS with the old ActionTEC MI424WR router? If so,
> are you seeing the following fatal error if you scp large files from a
> remote internet host?
>
> ssh_dispatch_run_fatal: Connection to xx.xx.xx.xx port xxx: message
> authentication code incorrect
>
> I can scp huge files with no problems between computers on my home
> network, but whenever I try to scp something large (multi gigabytes)
> from remote internet Linux hosts, scp dies with the above error(s). I
> don't know what else to blame than my Verizon router, unless the
> problem is outside my home.
>
> The OpenSSH team says this isn't an SSH bug but is caused by certain
> buggy network hardware:
>
> https://bugzilla.mindrot.org/show_bug.cgi?id=2941
>
> I know that Verizon has a newer router now (and tried to force
> everyone to upgrade about a year ago), but at the time I didn't feel
> like redoing my whole incoming network/firewall setup from scratch.
>
> Thanks for any insights.

I vaguely recall hearing once of a network card with bad memory buffers, where certain bit patterns in certain buffers couldn't be successfully written and then read back.

If you have the ability to set up an encrypted/compressed VPN from your home to the outside world, you might try doing your copies over that link. You might get lucky and not hit whatever is causing your problem.

Bill Bogstad
Re: [Discuss] Feedspot and other RSS Readers
On Tue, Nov 6, 2018 at 7:24 AM Dan Ritter wrote:
>
> Bill Bogstad:
> > On Mon, Nov 5, 2018 at 8:53 AM Nancy Allison wrote:
> > >
> > > Hi, all.
> > >
> > > What do you use to aggregate the things you read? I've stumbled upon
> > > Feedspot, which costs $$, and I'm wondering if it is necessary.
> >
> > I've never seen the point in using an external web site as an RSS feed
> > aggregator.
>
> There are two particularly useful bits.
>
> 1. It operates with a daemon that collects feeds around the clock,
> so everything is at your fingertips immediately.

Perhaps I subscribe to too many feeds, but I never run out of things that I could read. I leave Firefox open on my desktop all the time, and the extension is currently set to scan every 30 minutes. That's plenty fast enough for me. Your requirements might be different.

> 2. It's consistent when you access it through different clients (at home,
> on your phone, at work...) so you don't find yourself re-reading articles.

The previous extension that I used for Firefox (Sage) kept things in bookmarks, which could be synced using Mozilla's bookmark syncing mechanism. This sufficed to prevent me from re-reading stuff. Even with my current setup, which doesn't sync, I can usually remember what I've read in the past 24 hours sufficiently well that the subject line/teaser text is enough that I don't end up rereading stuff. I don't even use the same client on my phone that I do on my desktop, and I still manage to avoid rereading stuff almost all the time.

Bill Bogstad
Re: [Discuss] Feedspot and other RSS Readers
On Mon, Nov 5, 2018 at 8:53 AM Nancy Allison wrote:
>
> Hi, all.
>
> What do you use to aggregate the things you read? I've stumbled upon
> Feedspot, which costs $$, and I'm wondering if it is necessary.

I've never seen the point in using an external web site as an RSS feed aggregator. At the moment, I use the "Feedbro" extension for Firefox.

Bill Bogstad
Re: [Discuss] Linux has 100% of Market Share.
On Mon, Jun 11, 2018 at 5:59 AM Marco Milano wrote:
>
> On 06/09/2018 12:11 PM, Bill Bogstad wrote:
> > On Sat, Jun 9, 2018 at 11:11 AM Bill Ricker wrote:
> >>
> >> the Top500 statistics include Historical charts. You can see how recently
> >> the last 2 AIX systems were pushed out of Top500, how "mixed OS" had a
> >> brief surge early last decade, etc.
> >>
> >> https://www.top500.org/statistics/overtime/
> >
> > Thanks for the pointer to those charts. 100% Linux was apparently
> > reached in last November's list, so that is a very recent thing.
> > The dominance of Linux goes back well over a decade, as I was already aware.
>
> Most of the computational power is coming from the GPU units.
> You can probably use almost any OS and still come out on top on those
> benchmarks as long as you have a sufficient number of GPU units.
> I don't think the top500 is as relevant as before; it is more like a pissing
> match between the USA and China these days.

I think it is just as relevant as it ever was. Systems on this list have always been unusual in some way. And the uses to which they are put are far from typical as well. Massive simulations of various sorts remain very important in science, technology, biology, medicine, etc., and even the latest entries on these lists still aren't able to do as much as some researchers would like. Whether research in these areas is relevant is undoubtedly a question of values.

Bill Bogstad
Re: [Discuss] Linux has 100% of Market Share.
On Sat, Jun 9, 2018 at 11:11 AM Bill Ricker wrote:
>
> the Top500 statistics include Historical charts. You can see how recently
> the last 2 AIX systems were pushed out of Top500, how "mixed OS" had a
> brief surge early last decade, etc.
>
> https://www.top500.org/statistics/overtime/

Thanks for the pointer to those charts. 100% Linux was apparently reached in last November's list, so that is a very recent thing. The dominance of Linux goes back well over a decade, as I was already aware.
[Discuss] Linux has 100% of Market Share.
in the TOP 500 Supercomputer list

So I was reading an article about how a new supercomputer at Oak Ridge is going to put the USA back on the top of the world supercomputer list:

https://www.theregister.co.uk/2018/06/08/us_summit_supercomputer/

The article included a link to where to see the list and, after some looking around, I found the following, which lets you look at pie charts of various characteristics of the 500 entries on the list: manufacturer, country, architecture, etc. If you select "operating system family", you will find that 100% of the TOP 500 supercomputers run Linux.

https://www.top500.org/statistics/list/

I knew that Linux was big in supercomputing clusters, but I didn't realize that it now owned that market.

Bill Bogstad
[Discuss] Fwd: [Ietf-hub-boston] Meeting SOON: June 12 5-7pm at Volta Networks (Central Square)
I know there are some Linux/Unix users/admins who also do networking and might be interested in this upcoming IETF-Boston meeting. Unfortunately, I will be out of town; but hopefully other BLU members will be able to get something out of this meeting.

Bill Bogstad

-- Forwarded message -
From: Alia Atlas
Date: Thu, Jun 7, 2018 at 9:58 AM
Subject: [Ietf-hub-boston] Meeting SOON: June 12 5-7pm at Volta Networks (Central Square)
To:

Our next IETF Boston Local Community meeting is next Tuesday, June 12, 5-7pm. Many folks do go out for an informal dinner afterwards - to continue conversations, catch up, and get to know each other better.

PLEASE do send out information on your social media or relevant mailing lists and try to come. I'd like to see us do a better job of outreach to folks who haven't come or are fairly new to IETF.

Please RSVP at: https://goo.gl/forms/Vb0yC5ItzwdK4odJ3

We are hosted by Dean Bogdanovic at Volta Networks. I believe that he is generously sponsoring beer. We are meeting at: The Engine, 501 Massachusetts Avenue, Cambridge

All the details for the meeting are on the wiki at:
https://trac.ietf.org/trac/edu/wiki/BostonLocal#June2018Mtg

The agenda is:
Discussion on Network Automation and Intent-Based Networking by Dean Bogdanovic (45 minutes)
Writing internet-drafts in markdown by Dan York (20 minutes)
Routing Around DMARC censorship by John Leslie (40 minutes)

As always, these will have a shorter presentation part and lots of discussion. I look forward to seeing you there!

___
Ietf-hub-boston mailing list
ietf-hub-bos...@ietf.org
https://www.ietf.org/mailman/listinfo/ietf-hub-boston
Re: [Discuss] LibreOffice and .docx files (Crossover 17)
On Mon, Dec 11, 2017 at 11:48 PM, Shirley Márquez Dúlcey <m...@buttery.org> wrote:
> The other option is to run Office under Wine or CrossOver. (The latter
> is a commercially supported non-free offshoot of Wine.) I haven't
> tried it but it's supposed to work reasonably well, because that's one
> of the software packages that they concentrate on supporting.

For those not keeping track, Codeweavers just announced Crossover 17. They claim to support Microsoft Office 2016 for the first time. If you own a copy of Office, you can download a time-limited copy of Crossover 17 and test out their claims.

If you do, please post back here about your results, as some people I know might want to do this, but I don't have a copy of Office 2016 to use for testing purposes.

Bill Bogstad
Re: [Discuss] LibreOffice and .docx files
On Tue, Dec 5, 2017 at 7:48 PM, Nancy Allison <nancythewrit...@gmail.com> wrote:
> Thanks, all.
>
> I appreciate the background information on why compatibility is not assured
> or easy. I will upgrade to 5.4x when the time is right (when I have at
> least one grizzled Linux veteran nearby to help out if the unexpected
> occurs) ...

Upgrading LibreOffice is very easy once you know the process. Here is a link which appears to have correct info:

http://www.omgubuntu.co.uk/2017/07/how-to-install-libreoffice-5-4-on-ubuntu

Reverting back is harder, but it is certainly doable. Just having a Linux person available by phone is probably all you would need.

Bill Bogstad

> --Nancy
>
> On Tue, Dec 5, 2017 at 11:15 AM, Derek Martin <inva...@pizzashack.org> wrote:
>
>> On Mon, Dec 04, 2017 at 08:25:45PM -0500, Nancy Allison wrote:
>> > Is there a Word-compatible open source program that truly does not trash
>> > the formatting? Or is it simply not possible to pass documents back and
>> > forth between Word and an open source program like LibreOffice or
>> > OpenOffice? I used Open Office years ago, haven't used it recently, it
>> > used to trash the files, too.
>>
>> The Open Office project still exists with different leadership (it's
>> now an Apache project), and while my understanding is on the whole
>> Libre Office's Word compatibility is better, it may be possible that
>> a recent version of Open Office handles your particular document
>> better, depending on exactly what features it is using. Could be
>> worth a try. As others said, also try upgrading your Libre Office
>> version...
>>
>> --
>> Derek D. Martin    http://www.pizzashack.org/    GPG Key ID: 0xDFBEAD02
>> -=-=-=-=-
>> This message is posted from an invalid address. Replying to it will
>> result in undeliverable mail due to spam prevention. Sorry for the
>> inconvenience.
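For the record, the PPA route the linked article describes boils down to a few commands. A hedged sketch for Ubuntu (printed with echo so nothing runs by accident; run them with sudo on the real machine):

```shell
# Upgrade LibreOffice via its official PPA, per the linked article.
echo "add-apt-repository ppa:libreoffice/ppa"
echo "apt-get update && apt-get dist-upgrade"
# Reverting: ppa-purge disables the PPA and downgrades its packages
# back to the distro's versions -- the "harder but doable" part.
echo "ppa-purge ppa:libreoffice/ppa"
```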
Re: [Discuss] Future-proofing a house for networking -- what to run?
On Tue, Sep 12, 2017 at 1:19 PM, Bill Ricker <bill.n1...@gmail.com> wrote:
>
> Why is conduit everywhere not an option?
> Cost of material? Time-consuming bending & fitting?
> Does Code there require _steel_ conduit for low-voltage DATA cables, or can
> you use certain plastics?
> (Plastics are nasty when they burn, but w/o power lines inside, fire is
> coming from outside; by the time the fire gets to it, it's pretty much over
> already. Allowed for plumbing, so why not for data?)

Your comment about "code" caused me to go out and do a Google search on:

low voltage conduit residential code

A few of the interesting results:

https://www.bicsi.org/pdf/bicsinews/2008/JanuaryFebruary2008.pdf
http://www.sdmmag.com/articles/84601-what-technicians-need-to-know-about-cable-the-nec

They are more oriented towards commercial buildings, but it seems like "code" is routinely violated for low voltage cable installs. I expect that residential installs are even worse. Something to consider, I suppose.

Bill Bogstad
Re: [Discuss] CrashPlan Home is discontinued - what's next?
On Fri, Sep 1, 2017 at 2:53 AM, Rich Braun <ri...@pioneer.ci.net> wrote:
>
>> That would be etckeeper which I've used for some time.
>
> If you're still editing /etc config files, consider taking the time to learn
> how to administer them in a centralized revision-controlled manner.

This is for a home environment where I will NEVER have more than 2 or 3 systems to manage, and I am the only person who changes system configuration. Between etckeeper and nightly incremental backups, I feel that I am adequately covered. While I completely understand the utility of configuration management systems for larger (or potentially larger) installations, I just don't think they are needed for my use case. The extra resources (both human and system) will never get paid back as far as I can see.

Maybe if I used such systems on a daily basis in a job, I would feel differently. However, I stopped doing professional system management before configuration management became ubiquitous. In addition, it seems like every couple of years the "correct" CM package changes. I keep hoping that the market will eventually stabilize.

Alternatively, if one of the major Linux distributions were to adopt a particular open source CM system as part of their default install and document using it for system configuration, I would probably switch to that distribution and start using CM. Unfortunately, it seems like all of the major distributions managed by commercial entities have made CM an extra-cost addon. The ones managed by non-commercial entities seem uninterested in picking a single CM and just going with it. Instead they make all of them available, but don't set up any of them.

Bill Bogstad

P.S. If I'm wrong about the state of CMs or Linux distributions, please let me know.
Re: [Discuss] Crashplan is discontinued
On Thu, Aug 31, 2017 at 10:02 PM, Mike Small <sma...@sdf.org> wrote:
> John Abreau <abre...@gmail.com> writes:
>
>> I've heard of tools using MD5 or SHA1 hashes to identify duplicates, and
>> potential issues with hash collisions causing false positives.
>
> By accident or maliciously? The numbers seem off for accidental
> collisions. An md5 sum is a 128-bit (32 hex digit) number. That gives
> 340282366920938463463374607431768211456 potential hash sums (or does the
> algorithm offer only a smaller subset?). I'm not going to bother to
> compute the probability of a collision. It's a very remote possibility,
> yes? For the malicious case, if someone's able to mess with the hashes
> used by deduplication code in your file system or in your hopefully
> almost as good userland equivalent (which of course must use git in some
> way or another for reasons that are not clear to me) you have unsolvable
> problems.

Does git only compare the checksum, or does it also compare file size? I would think that comparing file size would make it even harder to get a collision. The only duplicate checksum that I've ever seen in practice was on 0-length files. Zero-length files are, of course, all perfect duplicates of each other... :-)

Bill Bogstad
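To put a number on the accidental-collision worry quoted above: by the birthday bound, the chance of any collision among n uniformly random 128-bit hashes is roughly n^2 / 2^129. A back-of-the-envelope estimate (my arithmetic, not from the thread):

```shell
# Birthday-bound estimate of an accidental collision among a billion
# distinct files hashed with a 128-bit hash: P ~= n^2 / 2^129.
awk 'BEGIN { n = 1e9; printf "%.2e\n", n * n / 2^129 }'
# prints 1.47e-21 -- vanishingly small for the accidental case
```

This says nothing about the malicious case: MD5 collisions can be constructed deliberately, which is a separate threat model.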
Re: [Discuss] CrashPlan Home is discontinued - what's next?
On Thu, Aug 31, 2017 at 9:47 PM, Dale R. Worley <wor...@alum.mit.edu> wrote:
> Mike Small <sma...@sdf.org> writes:
>> How do you handle file permissions?
>
> It doesn't do any more than Git does normally. OTOH, I've never used it
> for bulk restoration, either.

Do you actually put the entire subtree under your home directory into Git? My home directory has lots of pictures, movies, ISOs, etc. in there. Where do you put that kind of thing?

Bill Bogstad
Re: [Discuss] CrashPlan Home is discontinued - what's next?
On Thu, Aug 31, 2017 at 12:50 PM, Mike Small wrote:
> wor...@alum.mit.edu (Dale R. Worley) writes:
>
>> I have a cron job which commits my home directory into a Git repository
>> every minute. Surprisingly, this puts no noticeable load on the
>> computer.
>
> How do you handle file permissions? E.g. .ssh directory contents or PGP
> key files having restricted permissions, and a git checkout pulling them
> out of the repository with more lax default permissions based on your
> umask (at least I think that's what it does). IIRC Joey Hess wrote some
> kind of tool to use git to track /etc changes and had to add something
> on top to deal with permissions and file ownership. I'd think you'd need
> to do something similar.

That would be etckeeper, which I've used for some time. It does have a "-d" argument that can be used to specify directories other than /etc. I'm not sure what would happen if you pointed it at your home directory.

Bill
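The permissions concern in the quoted text is easy to demonstrate: git records only the executable bit, so a checkout recreates files with umask-default modes. A small self-contained sketch using a throwaway repo (GNU stat):

```shell
# Show that git does NOT restore restrictive file modes on checkout.
umask 022                                    # pin the umask so the result is predictable
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email demo@example.com       # throwaway identity for the commit
git config user.name demo
touch secret && chmod 600 secret             # e.g. an ssh private key
git add secret && git commit -qm 'add secret'
rm secret && git checkout -- secret          # simulate a restore from the repo
stat -c %a secret                            # prints 644, not the original 600
```

This is exactly the gap tools like etckeeper paper over by recording modes/ownership in a metadata file alongside the repo.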
[Discuss] open RFP/business idea for open source security business
I wrote the text below in response to a comment on Slashdot about how open source doesn't help the non-programmer have more secure software. I suggest a reason why that might not be the case and a potential business opportunity for someone who wants to make it a reality. I'm posting it here in the hopes that someone with the skills/initiative to make it happen will take up this idea. I welcome discussion of potential problems with the idea. Please feel free to forward it on to other communities/individuals who might find it interesting and might act on it.

Just so it's clear, I don't have the combined skills/drive to want to work on this. I'm hoping that someone else will take it up. I would, of course, enjoy hearing about any efforts to make it happen.

Thanks,
Bill Bogstad
bogs...@pobox.com

===

If the US, Russian, Chinese, North Korean governments, and the EFF were all to certify a particular piece of open source software, then I would say that I am pretty safe in not having to analyze it myself. Clearly this hasn't happened yet, but open source at least makes it possible. It even makes it easy for outside experts (governmental or otherwise) to do their analysis, which means that I might be able to pick and choose from a large set of outside experts that I trust. This is because any private or governmental entity could trivially set itself up to be such an expert.

With efforts like Debian's reproducible builds, I may not even have to compile it myself. I can just verify the appropriate checksum(s)/signature(s) on the binaries that I downloaded from some random web site.

I can even see this as a commercial service. The equivalent of the current anti-virus industry (with yearly subscriptions) would probably be viable. They could compete on how fast they analyze new releases and how many bugs (security or otherwise) they find in the code.
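The verification step that reproducible builds make possible is the ordinary checksum-list mechanism. A minimal self-contained demonstration (in real use, the SHA256SUMS file would come from the project and be PGP-signed; here we fabricate one locally):

```shell
# Minimal demonstration of checksum verification of a downloaded binary.
tmp=$(mktemp -d) && cd "$tmp"
printf 'pretend binary\n' > download.bin
sha256sum download.bin > SHA256SUMS      # publisher side: publish the list
sha256sum -c SHA256SUMS                  # verifier side: prints "download.bin: OK"
# With a real project you would first check the list's authenticity:
#   gpg --verify SHA256SUMS.sign SHA256SUMS   (filenames vary by project)
```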
It would probably be necessary to embargo their reports on new releases for a short period to maintain an incentive for subscription and to give time for the original developers to fix the problem, but much like the anti-virus industry they would want to publicly release their results as well for PR purposes. Any large entity that used open source and didn't subscribe to some of these services would probably be considered negligent by its customers and might even be considered legally negligent as well. Obviously, not every piece of open source software would be considered important enough to draw such scrutiny, but I suspect that all of the major network-facing open source software (server or client) would be viable for such treatment. The above seems so obvious to me in retrospect that I wonder why it hasn't already happened. Perhaps there is a chicken-and-egg problem? There would be a fairly large up-front cost for the initial checking of a major piece of software and no certainty that there would be a sufficient level of subscriptions to justify this cost (or pay for the lower costs of checking future releases). One solution might be to do a Kickstarter campaign. I would be happy to contribute a modest sum ($100) if someone with expertise were to agree to check all releases of a major open source program for a year. It wouldn't even have to be a program that I used for that first year, as I would want to encourage the creation of an industry of this type. Now you might argue that I should just give my money to the actual developers of the program. The problem with that is that I may be happy with the current feature set of a program, but would like more emphasis on checking for security problems (or QA in general). Nor would this allow me to select the people doing the checking so they were less likely to be in a position to be influenced by other organizations. If there are any security experts reading this, please consider trying this out.
Other than the time to write up a proposal with your qualifications, it seems to me like you would have little to lose. [Oh, I would also support a similar campaign to write documentation for a major open source software package (say LibreOffice) if there are any documentation writers out there.] ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Eclipses Re: Great talks last night, however...
On Mon, Jul 24, 2017 at 12:10 AM, grg <grg-webvisible+...@ai.mit.edu> wrote: > On Sun, Jul 23, 2017 at 04:59:08PM -0400, Richard Pieri wrote: >> On 7/23/2017 3:42 PM, grg wrote: >> The ground can hold a lot of heat energy but it doesn't conduct it much. >> That's why a GHP spreads its ground loop system out across a large area. >> You're not getting that from burying big battery packs unless you also >> install the same kind of extensive ground loop system which costs to >> install and maintain. > > Look at it this way: if you put a battery in the ground underneath a solar > panel, the warming of the ground from that battery is going to be strictly > less than the warming of the ground from the sunlight hitting it directly > before the solar panel was installed. With an 85% charge/discharge > efficiency, the ground is being warmed only 15% as much as under direct > sunlight. Since there wasn't runaway heat buildup under sunlight, only 15% > of that amount of heating is also not going to exceed the earth's ability > to sink the heat away. Grg, I suspect that the above section is correct overall; but you do seem to be assuming that the absorption/radiation characteristics of a solar panel installation across the entire spectrum are the same as bare ground. Of course, you have the advantage that a nominal 85% of the energy you put into the batteries is going to be delivered back out to some "remote" location so that energy isn't going to be warming up the local battery environment anyway. That probably provides sufficient breathing room to make it even less of an issue for small scale installations. For larger utility grade systems, we might be looking at flow batteries in the long run anyway and there is no reason that the storage system needs to be that close to the generation system. Build them wherever cooling/heating/physical space/proximity to load considerations make the most sense. 
This article from ars technica: https://arstechnica.com/business/2017/07/german-energy-company-wants-to-build-flow-batteries-in-old-natural-gas-caverns/ talks about a commercial project to do just that in Germany as well as other projects elsewhere. Without any pricing info, it is difficult to say if this is viable, but it seems like a number of groups think that it might be. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Eclipses Re: Great talks last night, however...
On Thu, Jul 20, 2017 at 6:22 PM, Richard Pieri <richard.pi...@gmail.com> wrote: > On 7/20/2017 4:03 PM, Bill Ricker wrote: >> I wouldn't worry about solar power lost to the coming eclipses. Over the >> next 10 or 100 years, you'll lose far more to thunderstorms blotting out >> the sky; they cast bigger shadows more frequently. > > Yeah. And I'm even more concerned with the 10-14 hour (or more in some > parts of the world) solar outages we experience every day. Or night if > you prefer. For reals. Because despite Elon Musk's assertions, chemical > batteries are terrible ways to store electricity. They're inefficient > (~90% waste as heat). They're dirty (strip mining for rare metals, > hazardous chemicals needed to manufacture). They're non-renewable (while > some of the plastics and rare metals can be reclaimed, most of a > worn-out battery is landfill). Without an affordable, efficient, clean, > renewable and scalable way to store electricity, ground-based solar > can't be a solution. [sometimes stating the obvious below] So let's say that I accept everything you say about both the inefficiency and unclean characteristics of solar PV + battery storage. Are the current incumbent solutions (Oil, Coal, Natural Gas) any better on either characteristic? When doing your efficiency calculations, please don't cheat, i.e. do a total life-cycle analysis back to when the material was first buried underground. I suspect that even turning corn into ethanol is more energy efficient than the process that created fossil fuels. From an economic perspective, it is beginning to look like residential solar + batteries might be preferable in the near future to current fossil fuel based grid power. Or at least that is the argument that many people are starting to make. Are they wrong? If they aren't wrong, is there some reason other than economics why switching from fossil fuels to solar + batteries would be a bad idea?
I suspect you have some other energy solution in mind than the ones that have been mentioned so far on this thread. Care to share? Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Great talks last night, however...
On Thu, Jul 20, 2017 at 2:51 PM, Shirley Márquez Dúlcey <m...@buttery.org> wrote: > If you want to use PowerPoles for something other than DC power > connections in the 12V neighborhood, you should make them distinct > from the ham power connection standard in some way. Use different > colors, a different configuration of jacks, whatever. That way nobody > will plug your other thing into a 12V power supply. The above seems to confirm my impression from looking at the web sites mentioned. PowerPoles are current-rated and (perhaps?) have a maximum voltage rating. Unfortunately, I could see having 5V, 12V, 24V, and 48V DC in one installation. Am I wrong about this? Are there well-known conventions for idiot-proof use of these connectors? Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
[Discuss] Great talks last night, however...
I have to repeat my negative reaction to the idea that one would consider using polarized AC plugs/sockets for low voltage DC interconnects. That is probably as bad an idea as the AC Power over Ethernet adapter in the attached JPEG. Other examples of this class of devices can be found at: http://www.fiftythree.org/etherkiller/ Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
[Discuss] Is there a supported browser for Linux that still runs Java applets? (also Flash)
As the web marches on, older technologies fall by the wayside. Java applets seem to be one of them. All the major browsers seem to have stopped supporting them at this point. Unfortunately, this can result in broken systems. For me, my biggest use case is an older all-in-one printer from Lexmark that uses a Java applet to enable its scan-to-PC functionality in a browser. I seem to recall that Java applets were even used to support enterprise-level hardware at one point. Is this an issue that other people are having? Any suggestions on how to maintain browser/Java applet support going forward on a Linux system? I'm not looking for a solution that will let me use the same browser for all of my web browsing. Just something which will continue to work as I apply "mandatory" security upgrades to my systems. (i.e. the latest browser updates) Oh, same question about Flash. It isn't quite as dead as Java applets, but it is pretty clear that it is on its way out. There is lots of kid-oriented content out there that was created using Flash and that is going to die when it goes away. My biggest use case is MIT's Scratch project. They switched to implementing Scratch in Flash some years ago. My kids are avid Scratchers and I'm worried that they may lose access in the near future. Unfortunately, I've been unable to find anything on the Scratch web site about plans to deal with Flash becoming unusable. In the modern Scratch world everything is done on their web site, so this could be a real problem for their kid/teen-oriented user base. Thoughts/solutions? Thanks, Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] etckeeper (tool to store /etc/ in version control)
On Sun, Apr 2, 2017 at 9:56 PM, David Kramer <da...@thekramers.net> wrote: > https://opensource.com/article/17/3/etckeeper-version-control > > Sounds good, and I can't think of any downsides. I think the value add of > etckeeper over just adding /etc to git is that it ties into package > management and every change in /etc/ caused by a package install gets > committed to git as such. I'm not sure if those silent commits are a good > thing or not. > > Thoughts? I've been using etckeeper on one-off/personal systems for years now. I just checked and one system has a git log that goes back to 2011. I think I started on Ubuntu 10.04. At the time, I didn't see the need for a configuration management system for an environment that was never going to scale very large. I mentioned it on this list in 2012 and 2014 in the context of other discussions. I still think it is particularly good for my use case, but the article you link to also points out that many configuration management systems only track explicitly stated files. Etckeeper tracks everything unless told not to do so. I can see that as useful even when configuration management is being used. Unfortunately :-), I have been lucky and have never had to use etckeeper to save my bacon. To be fair, etckeeper does have issues. It doesn't provide loss protection as a result of hardware failure. CMS systems pretty much always have a separate server which acts as a backup for your systems' configurations. If you use git with etckeeper you can push changes to a remote repository. Alternatively, you can always add the entirety of /etc to your backup process. It's not like it is that much data. /etc on the system that has had etckeeper since 2011 is under 280Meg. The other problem with etckeeper is that not all configuration lives in /etc and it really doesn't handle this case at all. 
I think there might be some patches floating around to let it handle more than one directory, but it hasn't been a big enough issue for me to figure them out. Crontab files in /var are the particular class of configuration file that comes to mind for me. You might have others. Still, for minimal effort, etckeeper provides me with a lot of peace of mind. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] deadmanish login?
On Thu, Feb 2, 2017 at 4:38 PM, Richard Pieri <richard.pi...@gmail.com> wrote: > On 2/2/2017 2:51 PM, Kent Borg wrote: >> Does have 40-bits of entropy, that is. > > Not really: > https://www.schneier.com/blog/archives/2014/03/choosing_secure_1.html Yes, really, IF the word selection is based on a random process. Schneier is correct that if a human selects the words (either you personally or by quoting from another source) then you lose entropy. If you are just writing down a 40-bit random number by encoding it into words, there is no problem (modulo offline vs. online attacks). Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] deadmanish login?
On Wed, Feb 1, 2017 at 12:03 PM, Richard Pieri <richard.pi...@gmail.com> wrote: > On 1/31/2017 8:48 AM, Kent Borg wrote: >> "15-ladder-bamboo-sierra" is an easy password to remember and type, yet >> it has 40-bits of entropy. Even if some bizarrely configured sshd > > It also uses dictionary words. Using dictionary words (read: not random) > reduces the effective entropy of the key. My quick estimate is that just the 3 words in his password give him something close to 40 bits. That's assuming a dictionary size of roughly 10,000 words. If you assume that an attacker has to do a rate-limited on-line attack to search that 40-bit space, that seems adequate to me. On the other hand, if you allow for the possibility of an attacker obtaining the password hash file and attacking it offline, then maybe that isn't enough. Kent's concern seems to be that because your SSH private key file is encrypted, many people will put it in lots of places where they shouldn't. If just one of those places is compromised, even briefly, the attacker can do an off-line attack against the key file. Aside, since others have noted their non-standard security procedures... I regularly reuse passwords between different systems. Specifically, systems/web sites in which I have no significant stake. I really don't care if someone who manages to crack the InfoWorld web site can then read the NY Times using the same credentials. Each financial and email account on the other hand gets a different password. Bill Bogstad > > -- > Rich P. > ___ > Discuss mailing list > Discuss@blu.org > http://lists.blu.org/mailman/listinfo/discuss ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
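For what it's worth, the back-of-the-envelope estimate is easy to reproduce, assuming each word really is drawn uniformly at random from the word list (the 10,000-word figure is the rough size implied by 40 bits over 3 words, not a property of any particular list):

```python
import math

def passphrase_bits(dictionary_size, num_words):
    """Entropy in bits of a passphrase built by drawing num_words
    uniformly at random from a dictionary of dictionary_size words.
    Each word contributes log2(dictionary_size) bits."""
    return num_words * math.log2(dictionary_size)

# Three words from a 10,000-word list:
print(passphrase_bits(10_000, 3))  # ~39.9 bits
```

Which is the whole online/offline distinction in a nutshell: 2^40 guesses is trivial against a fast offline hash, but enormous against a rate-limited login prompt.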
Re: [Discuss] disappearing memory
On Tue, Jan 3, 2017 at 4:17 PM, Laura Conrad <lcon...@laymusic.org> wrote: > > My system has been running really slow lately, and I have been swearing > at firefox and websites that load 200 ads before showing any content. > > But then I noticed that I was using only 4G of memory, and I knew I had > more than that. > > Does anyone have any ideas? Do you have integrated graphics in your computer or a separate graphics chip? With integrated graphics, the system grabs some memory from main memory for exclusive use by the graphics system. I'm not sure how that would affect meminfo output and it would probably depend on whether your BIOS grabbed the memory or the Linux kernel/X server did. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Weird new non-empty Dell disk
On Fri, Oct 28, 2016 at 5:08 PM, Kristian Erik Hermansen wrote: > Lol you should have made a copy and posted it somewhere ;) I guess it wasn't obvious in my note, but I haven't overwritten the disk yet. As for posting it, I think I'm okay against claims of copyright violation if it was installed on a product they sold me. However, if I was to post it, Dell might be unhappy. :-) Bill ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
[Discuss] Weird new non-empty Dell disk
Summary: Dell shipped me a no-OS server with a pile of factory diagnostics software "hidden" on the disk. Was this a mistake at the factory? Has anybody had something like this happen before? Any curiosity about what a top 10 ODM (Wistron) uses for testing/configuration? Other thoughts?

Bill Bogstad

Details: I bought a low-end server directly from Dell (PowerEdge T20) without an OS, but with a hard drive. This was to replace an older system. When the new system arrived, I ran the UEFI-based diagnostics on it and in the process noticed that the drive seemed to have no partitions or OS (as expected). I physically moved the boot drive containing an Ubuntu installation from the old system to the new T20. I've done this kind of thing before with little or no difficulty and was able to quickly bring up the new computer with the old system install/configuration.

WEIRDNESS STARTS here: Before formatting the drive that came with the new system, I decided to check and see if there was anything on it. When I found that it wasn't blank but clearly had software on it, I verified that it had no partition table and used testdisk: http://www.cgsecurity.org/wiki/TestDisk to recover the "lost" partitions. TestDisk found two partitions and reinitialized the MBR with the following results:

# fdisk -l /dev/sda

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x

   Device Boot      Start         End      Blocks  Id  System
/dev/sda1   *           63    22619519   11309728+   c  W95 FAT32 (LBA)
Partition 1 does not start on physical sector boundary.
/dev/sda2         22619520  1953520064  965450272+   7  HPFS/NTFS/exFAT

I now had two partitions which basically cover the whole disk.
I mounted them read-only:

# mount | grep /mnt/tmp
/dev/sda1 on /mnt/tmp type vfat (ro)
/dev/sda2 on /mnt/tmp2 type fuseblk (ro,nosuid,nodev,allow_other,blksize=4096)

# df /mnt/tmp /mnt/tmp2
Filesystem      1K-blocks    Used Available Use% Mounted on
/dev/sda1        11298672  379824  10918848   4% /mnt/tmp
/dev/sda2       965450272 7822948 957627324   1% /mnt/tmp2

In total, over 8 Gigs of files, software, etc. are on the two partitions. The first one is an MS-DOS 5.0 bootable partition while the second seems to be some version of Windows. They are both filled with what appears to be testing and system management software. There are Windows WIM system image files, Norton Ghost, management utilities written in Perl and Python (with the required DOS/Windows-based interpreters), etc. There are references to Wistron (a Taiwanese ODM for Dell). Most files have dates going back years, but there are some from September, which appears to be when the system was tested. Some of those have the unique Dell Service Tag # of the computer in them. I tried booting off of the drive and it booted into some kind of graphics environment with a command prompt window running scripts. I chickened out before it got too far as I hadn't disconnected my Ubuntu data/OS drives. I'm guessing that this is a test image used by the manufacturer. Rather than wiping the whole drive, they just zeroed out the partition table after testing. It seems that lots of it has nothing to do with my system, but it was just easier for them to include everything they might need in their test image than it would be to strip it down for the particular production run. I bought two T20s at the same time. I'll let people know if the second system has the same thing on its empty drive or if I figure more things out. ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] EOMA68 Computer
On Wed, Jul 20, 2016 at 8:42 PM, Rich Pieri <richard.pi...@gmail.com> wrote: > On 7/20/2016 3:26 PM, Steve Litt wrote: >> Also, the computers of 2000-2008 were much more robust than today's >> thin, light, cut glass ornaments that pass for laptops. I bought two > > Certainly, most of what's out there is pretty flimsy but it has nothing > to do with being thin and sleek. It's the BGA (ball grid array) surface > mountings which has replaced PGA (pin grid array) in consumer > electronics. BGA is crap. It has a lot of mechanical problems which make > it significantly less durable and much harder to inspect for defects > than PGA mountings. Even something that looks like it can take a beating > probably can't due to the BGA mountings. Main board gets flexed? Dead > computer. Actually everything is worse. I bought a $300 special for occasional use with Windows two years ago and the keyboard gave out after a year. Amazingly, I was able to order a compatible replacement from China, but the touchpad stopped working shortly after I installed the keyboard. Although I could have used it with an external mouse, I ended up retiring it from even that use. I booted it up yesterday to get my "free" Windows 10 license and the touchpad is working again. I'm glad that I'm not depending on it for my day-to-day computing. Bill Bogstad > > -- > Rich P. > ___ > Discuss mailing list > Discuss@blu.org > http://lists.blu.org/mailman/listinfo/discuss ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Whence distributed operating systems?
On Thu, Apr 28, 2016 at 10:41 PM, Rich Pieri <richard.pi...@gmail.com> wrote: > On 4/28/2016 8:30 PM, Bill Bogstad wrote: >> The fact >> that this was done via a fibre data bus vs. a faster local bus would >> seem to me to be an implementation detail. It still sounds like a >> NUMA with three levels of memory access (CPU local, QBB local, remote >> QBB). But all of the memory transparently visible in a program's >> address space. > > You might think that but it wasn't. The fibre bus never actually worked > well. It was too slow mixed with PCI, with too much latency. It got > congested and bogged down under even moderate loads. I still see that as an attempt at NUMA that didn't work because of the latency problems. Not really surprising, actually. We all know what happens when a program/system tries to actually use more memory than is physically installed (i.e. actually using paging). The latency difference between local and remote QBB RAM might not have been as bad as that between RAM and disk, but it sounds like it was more than enough to make it not work. I suspect that if they hadn't insisted on making the differences in latency completely invisible to applications, it is possible that it would have worked better. A way for a program to provide hints might have helped. Something like pin(address, length)/unpin(address, length). I think this would have been better from a programming perspective than having to explicitly manage moving data between local RAM and remote RAM/disk. You would still have a single flat memory space programming model which would always work (albeit slowly) and then you could throw in judicious pin()/unpin() function calls to help the OS keep active memory local. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
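To make the pin()/unpin() idea concrete, here is a toy model of what such hints would buy you. This is entirely hypothetical (no such kernel API existed on Galaxy; the closest single-machine analog today is mlock(2), which similarly exempts pages from paging): a two-tier memory where the OS evicts least-recently-used pages to the slow remote tier, except for pages the application has pinned.

```python
from collections import OrderedDict

class TwoTierMemory:
    """Toy model of NUMA-style tiering: a small fast local tier plus a
    slow remote tier.  Touched pages are kept local in LRU order;
    pinned pages are never evicted to the remote tier."""

    def __init__(self, local_capacity):
        self.local_capacity = local_capacity
        self.local = OrderedDict()   # page -> data, least recent first
        self.remote = {}             # evicted pages
        self.pinned = set()

    def pin(self, page):
        """Application hint: keep this page local."""
        self.pinned.add(page)
        self.touch(page)             # fault it in now if it isn't local

    def unpin(self, page):
        self.pinned.discard(page)

    def touch(self, page):
        """Simulate the program accessing a page."""
        if page in self.local:
            self.local.move_to_end(page)
        else:
            self.local[page] = self.remote.pop(page, None)
            self._evict_if_needed()

    def _evict_if_needed(self):
        while len(self.local) > self.local_capacity:
            for victim in self.local:            # oldest first
                if victim not in self.pinned:
                    self.remote[victim] = self.local.pop(victim)
                    break
            else:
                break   # everything local is pinned: over-committed
```

The point of the sketch: the program never manages placement explicitly, it just touches memory in a flat address space, and pin() merely biases the eviction policy, exactly the "always works, albeit slowly" property described above.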
Re: [Discuss] Whence distributed operating systems?
On Thu, Apr 28, 2016 at 3:17 PM, Rich Pieri <richard.pi...@gmail.com> wrote: > On 4/28/2016 4:28 AM, Bill Bogstad wrote: >> So memory was shared between the QBBs? This sounds more like a NUMA >> architecture environment. What would you say are the differences >> between this definition >> of SSI and NUMA? > > In a NUMA machine, memory is directly attached to the CPUs but not all > of that memory is local to each CPU. Galaxy wasn't NUMA. Each QBB was > NUMA: 4 processors with 512MB local to each with the rest being > non-local but still directly attached. Memory in one QBB was not > directly attached to the processors in other QBBs; it was shared via > software over a fibre data bus. Can you clarify what you mean by "shared via software"? Did the virtual memory system page-fault data in from remote QBBs as needed, or was there a fibre bus transaction every time a local process accessed remote memory? I understood your original note to mean that from a program's perspective it could allocate/use memory in other QBBs transparently (except for possible performance differences). The fact that this was done via a fibre data bus vs. a faster local bus would seem to me to be an implementation detail. It still sounds like a NUMA with three levels of memory access (CPU local, QBB local, remote QBB). But all of the memory transparently visible in a program's address space. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Whence distributed operating systems?
On Sun, Apr 24, 2016 at 10:35 AM, Rich Pieri <richard.pi...@gmail.com> wrote: > nb: I received this Sunday morning. Not sure where the delay was. > > On 4/21/2016 10:14 AM, Jack Coats wrote: >> In many ways, we do have a single system signon, with a 'single system >> image' for well developed systems today. > > That's not single system image, though, even if it presents the > illusion. Lemme give you a concrete example, one of the last serious > attempts at practical, commercial SSI that I'm aware of: Digital's > Galaxy architecture. > > Galaxy was an evolution of VAX/VMS clustering. The idea was to be able > to construct arbitrarily large VAX clusters from what they called quad > building blocks or QBBs. Each QBB was 4 Alpha CPUs and 2GB RAM (IIRC) > and was plugged into a fast fibrecchannel backplane. The prototype I saw > in Nashua was a 4 QBB system. > > Galaxy had two configurations. The first was a traditional VAX cluster > with each QBB operating as a single node, so the prototype was a 4 node > cluster with 4 processors and 2GB RAM each. Traditional VAX clusters are > sometimes seen as partial SSI due to the cluster-aware file system (one > of the first ever) and live process migration capabilities (again one of > the first ever). This is in fact what I think of when I hear the term SSI. In the Unix/Linux world, there was Locus (and IBM's AIX for 370 & PS/2s). You could even have both 370s and PS/2s in the same SSI. Obviously processes couldn't migrate between them, but otherwise they were treated the same. I worked with this system a bit many years ago. There is also Mosix which I believe may still be around and had a Linux based version. No experience with this > The second was the namesake galaxy configuration. All of the QBBs were > glommed together into a single large computer. In this configuration, > processes on the prototype saw 1 processor with 16 streams and 8 GB RAM > rather than a 4-node cluster. 
A process could allocate more than one > QBB's worth of RAM and not hit swap. It could have threads running on > any or all of the QBBs without having to be programmed specifically for > it. Galaxy was full SSI. So memory was shared between the QBBs? This sounds more like a NUMA architecture environment. What would you say are the differences between this definition of SSI and NUMA? Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] I need some dual-booting advice
On Mon, Apr 11, 2016 at 11:44 AM, Rich Pieri <richard.pi...@gmail.com> wrote: > I have a couple of workstations I need to get set up. The owner wants to > have both Windows and RHEL6 on them. Either dual-boot or running Windows > in a virtual machine on Linux hosts is acceptable to the owner. Both > operating systems need to be available to multiple users on the console > (not simultaneously) and if Windows is virtualized then automatic USB > pass through from the console is required. > > These last two have me stumped for the best way to get these set up. My > first attempt was with KVM but USB pass through is a manual process that > has to be performed by an administrator for each device. This is not > viable. My second attempt was with a VMware workstation shared virtual > machine which I discovered the hard way doesn't do USB pass through at all. > > Is there some virtalization trick that I'm missing, something that will > do a shared virtual machine with automatic USB pass through or is > dual-booting the only way to get this working? At one time, I used VirtualBox's USB guest access to allow the Windows software from TomTom to update my GPS. This was a number of years ago and I haven't used it since then. As I recall, it was possible to specify particular USB ids which were "captured" by a virtual machine so that the host OS (some version of Ubuntu at the time) didn't try to use them. It did work, but my memory is that it was quite slow. This was probably 3 or 4 years ago and it is likely that things have changed (improved?) since then. I don't recall if there was a way to have all USB devices passed through to a VM so this may or may not satisfy your use case. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Profiting from GPL software
On Tue, Nov 10, 2015 at 12:04 PM, Rich Pieri <richard.pi...@gmail.com> wrote: > On 11/10/2015 11:57 AM, Chuck Anderson wrote: >> >> The latest OEM firmware downloads from >> http://www.linksys.com/us/support-article?articleNum=148550 for both >> hardware versions appear to show that they still use Linux: > > > Okay. I'll concede that point. Linksys still have one product that ships > with GPL software. At the page: http://www.linksys.com/us/wireless-routers/c/wrt-wireless-routers/ You can find 4 WRT models which you can buy directly from Linksys' web site and which they are promoting as "Open Source Ready" on this page: http://www.linksys.com/us/c/wireless-routers/ Linksys' GPL page has software downloads for 3 of the 4 models sold as "Open Source Ready" http://www.linksys.com/us/support-article?articleNum=114663 with the fourth being a hardware upgrade that probably uses the same firmware as its slower twin. Now it is possible that these 4 models don't actually ship with GPLed software and are just sold as compatible, but I doubt that is the case. Looking instead at their "newest arrivals", I find that the E1700 ($59.99) and EA9200 ($299.99) have GPLed firmware releases listed. Now given how fast and loose vendors tend to be in the consumer space with keeping the same model number while completely changing the hardware/software, that doesn't prove that what they are currently selling ships with GPLed software. For me at least, it seems likely that Linksys continues to be happy to ship products using GPLed software when it suits their business interests. And that remains true as of today. I'm not surprised by this and don't understand why anyone would think anything different. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Local ISP Recommendations?
On Mon, Jan 18, 2016 at 10:10 AM, Kent Borg <kentb...@borg.org> wrote: > I am looking for ISP recommendations. In Somerville. > > > > Suggestions? You seem to want unfiltered IP access and good bandwidth in a residential setting. In general, that doesn't seem to be available. My suggestion is to buy your bandwidth and unfiltered IP access from different sources. i.e. Go with whatever residential service gives you the bandwidth you want at a good price and then run a VPN over that to someplace that will give you unfiltered IP access with whatever number of static addresses you want. For bandwidth, your choices seem to be Comcast and RCN. For VPN service, you might consider one of those services that let people avoid geo-restrictions on web sites or try a DIY setup via a hosting service that gives you full OS & Internet access with real IP addresses. My guess is this will easily cost more than $100 a month. Good Luck, Bill Bogstad > > > Thanks, > > -kb, the Kent who wishes Google Fiber would come to town, or our > aren't-I-so-terribly-hip Mayor Curtatone would find us some good bits. > ___ > Discuss mailing list > Discuss@blu.org > http://lists.blu.org/mailman/listinfo/discuss ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Experiences virtualizing: Linux hosted in Windows vs Windows hosted in Linux
On Sat, Nov 14, 2015 at 9:53 PM, Rich Pieri <richard.pi...@gmail.com> wrote: > On 11/14/2015 11:56 AM, Steve Litt wrote: >> So would that be a "yes"? > > Will it work? Yes. > > Will it work well? Not really. Memory management for Windows 95/98/Me is > a mess and it noticeably hinders performance in virtual machines > compared to Windows 2000 and XP with the same virtual resources. Of course modern machines are 10x (or more) faster than the machines that Win 9X was originally used with, so that may not matter depending on your application. In the end, the only way to know for sure is to try it. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Dropping obsolete commands (Linux Pocket Guide) (dump/restore)
On Tue, Nov 10, 2015 at 11:24 AM, Chuck Anderson <c...@wpi.edu> wrote: > On Tue, Nov 10, 2015 at 12:06:03PM +, Edward Ned Harvey (blu) wrote: >> If you want to backup the entire filesystem in such a way that all the above >> is unnecessary - you instead boot from rescue media, partition & format the >> hard disk, and simply run "restore" and boot back into the restored system >> as if no problem had ever occurred, then assuming you're using ext >> filesystems, you need dump & restore. (Or you need storage on some >> snapshotting storage system external to the system you're restoring.) > > According to Ted Ts'o (filesystem developer), it is NOT a recommended > way to backup your filesystem: > > http://www.gossamer-threads.com/lists/linux/kernel/1197768 > > "It does read the mounted block device directly, and so it's certainly > not a _recommended_ way to back up your ext4 filesystem. It should > work, though, since it just uses the high-level libext2fs functions > --- and a while back, I think I did a quick test and found that it > really did work. So I'm not sure what broke, but it might not be that > hard to fix. That being said, it may not be worth it to fix it, since > with delayed allocation, backups using dump will be even more > unreliable than they were before. " - Ted Ts'o Note that this is from 2010 AND it was for a live (mounted) filesystem. I've used the rsync method myself to copy a system disk, but I've always been worried that if I didn't get the options just right I might lose an ACL or some other extended attribute and not know it. "Runs fine" doesn't mean some subtle problem (possibly security related?) hasn't been created. For stuff in /home, I worry much less about this and see no reason not to use rsync. I'm about to add an SSD to a system with an HD and I'm going to give "dump | restore" a try. One interesting feature of the Linux dump is that you can specify inodes not to back up, and if an inode is a directory the whole subtree will not be copied. 
The system in question has /, /var, and /home all on one partition and I'm going to split them up in the new configuration so this will be helpful. /home is going to stay on the HD while / is moving to the SSD. Not sure about /var yet. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
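The dump feature described above might be used roughly like this. This is an untested sketch: /mnt/ssd-root is an assumed mount point for the new SSD filesystem, and the dump/restore package must be installed. dump's -e option excludes inodes by number, which is how a whole subtree like /home can be skipped.

```shell
# Find the inode number of /home; dump -e excludes that inode and,
# because it is a directory, everything underneath it.
home_inode=$(stat -c %i /home)

# Level-0 dump of / piped straight into restore running on the new
# (already formatted and mounted) SSD root filesystem.
dump -0 -e "$home_inode" -f - / | ( cd /mnt/ssd-root && restore -rf - )
```

On the rsync worry above: rsync's -A and -X flags preserve ACLs and extended attributes respectively (and -H preserves hard links), so `rsync -aAXH` addresses most of the "did I lose an attribute" concern, though it is still easy to forget one of them.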
Re: [Discuss] Fwd: Hey FCC, Don't Lock Down Our Wi-Fi Routers | WIRED
On Sat, Nov 7, 2015 at 7:03 PM, Rich Pieri <richard.pi...@gmail.com> wrote: > On 11/7/2015 5:08 PM, Bill Bogstad wrote: >> >> With the exception of Apple and their AirPort products, I'm not aware >> of any "manufacturer" of products of this type who doesn't frequently >> sell products using the Linux kernel (i.e. GPLed software). > > > Cisco. Netgear. Linux shipping on their devices is a rare exception last I > knew. Apparently we are ignoring the fact that the Linksys brand/company was owned by Cisco from around 2003 to 2013. I'm pretty sure they sold lots of home routers during that time period under the "Linksys by Cisco" brand name. Given the wide range of products that Cisco sells, it is certainly the case that most of them don't use Linux. Of course most of those products have nothing to do with the home networking market. In that market, "Linksys by Cisco" was (as far as I know) the bulk of their offerings and Linux was common on those devices. More recently, their IOx efforts to marry IOS and Linux seem to indicate that Cisco isn't afraid of selling Linux based products. As for Netgear, at least some of their ReadyNAS product line apparently use Linux, and this web page of source code for firmware for various Netgear products shows many products using GPLed (usually? linux) software: http://kb.netgear.com/app/answers/detail/a_id/2649/~/netgear-open-source-code-for-programmers-%28gpl%29 Since neither one of us has defined "frequently" or "rare exception", I suppose we can both argue that we are right. I would willingly acknowledge, however, that networking products targeted at businesses (particularly large enterprises) are less likely to use Linux/GPL based firmware than SOHO targeted products. Why that is I couldn't say for sure. Perhaps market segmentation efforts? "You don't want to run your enterprise network on commodity software. Instead use our homegrown software." >> supporting new hardware. > > ^^^ > > That's the answer I already gave you. 
> It's phrased differently but it's the same answer. It all comes down to who supports what. Fine with this. Time to market/ability to use the cheapest chips as fast as possible for SOHO products is certainly a reason to not use OpenWRT. But I don't see that as having anything to do with OpenWRT being GPLed. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Fwd: Hey FCC, Don't Lock Down Our Wi-Fi Routers | WIRED
On Sat, Nov 7, 2015 at 12:04 AM, Rich Pieri <richard.pi...@gmail.com> wrote: > On 11/6/2015 10:40 PM, Bill Bogstad wrote: >> >> Given that the majority of cheap home routers do ship with GPL'ed >> software (i.e. the Linux kernel), I am having a real problem >> understanding this argument. Some GPL'ed code is okay whereas other >> GPL'ed code is toxic? Or maybe it has something to do with code that >> they change vs. simply compile? Okay, I'm going to stop guessing >> now. Please explain if possible. > > > Because you're on an unrelated tangent? The point was about the OEMs which > don't open things up, not the ones which do (and yes, I realize just how > incorrect I was for naming D-Link and TP-Link). There are lots of cheap routers out there and their relationship to GPLed software is certainly varied. It ranges from having no GPLed software at all (for example using a VXworks RTOS) to a fully open system for which the manufacturer provides full and complete source for all? software as well as making it easy to flash users' own firmware images. With the exception of Apple and their AirPort products, I'm not aware of any "manufacturer" of products of this type who doesn't frequently sell products using the Linux kernel (i.e. GPLed software). That's not to say that they don't put proprietary software on top of the Linux kernel or even always make the required sources for the Linux kernel available. From what I can tell, the GPLed Linux kernel doesn't scare them enough to keep them from shipping it if it allows them to ship faster with the features they want. What they don't do is take the extra step to make it easy (possible?) for end users to build and install customized firmware images. If you have examples, besides Apple, of companies who don't sometimes ship products in this space with Linux kernels, then your arguments about liability issues of GPLed software or FSF? lawsuits would make more sense to me. 
As it is, it seems to me that there must be some other reason that they don't just ship something like OpenWrt. I have my own theories involving the code they get from the chip manufacturers and their unwillingness to wait for OpenWrt to get around to supporting new hardware. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Debian adds another systemd dependency, Busybox drops it
On Sat, Nov 7, 2015 at 2:43 AM, Mike Small <sma...@sdf.org> wrote: > On Fri, Nov 06, 2015 at 10:47:30PM -0500, Bill Bogstad wrote: >> On Fri, Nov 6, 2015 at 9:10 AM, Rich Pieri <richard.pi...@gmail.com> wrote: >> > Tangentially, we've had genuinely unprivileged X servers for a long time. >> > VNC's standalone X servers do not require root and to the best of my >> > knowledge never have. Combined with DirectVNC, a Linux framebuffer VNC >> > client, and you can have X without root without systemd hackery. >> >> True. But I think most people want X servers that take advantages of >> all the graphics acceleration features in modern graphics cards. >> Those X servers have in my experience usually required running them as >> root. > > OpenBSD's privilege separated X uses acceleration though doesn't > yet support as many graphics chipsets as X on Linux. E.g. Nouveau > (for nvidia) hasn't made it over yet, but perhaps that will change > now that someone at NetBSD is working on it. Interesting, maybe X Window System developers for Linux systems didn't care enough about the potential issues of privileged X servers to spend the time. That wouldn't be surprising. Most Linux users are probably going to buy their graphics hardware based on performance/support not security concerns so said developers would have little pressure to change their priorities. I confess that I haven't really thought about it myself. Given that I run Linux rather than OpenBSD, I've already made the decision to value something else more than ultimate security. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Fwd: Hey FCC, Don't Lock Down Our Wi-Fi Routers | WIRED
On Thu, Nov 5, 2015 at 2:33 PM, Rich Pieri <richard.pi...@gmail.com> wrote: > On 11/5/2015 1:21 PM, Shirley Márquez Dúlcey wrote: >> >> A problem for the makers of high priced products. But if you are a >> commodity router maker, your value is in price and reliability, not >> software uniqueness. So why not open up? > > > There are a few reasons I'm aware of. The most practical one is simply that > they can't afford it. Every state in the Union has implied warranty laws > that manufacturers must comply with. Some states, including Massachusetts, > outright prohibit "as-is" sales. Meanwhile, most GPL software, including the > Linux kernel, expressly comes "as-is" with no warranty whatsoever. Google > and Verizon can afford to eat the costs of adding warranty support to > unsupported code; not so much D-Link and TP-Link. Given that the majority of cheap home routers do ship with GPL'ed software (i.e. the Linux kernel), I am having a real problem understanding this argument. Some GPL'ed code is okay whereas other GPL'ed code is toxic? Or maybe it has something to do with code that they change vs. simply compile? Okay, I'm going to stop guessing now. Please explain if possible. As for lawsuits by the FSF (from your previous note), just because code is covered by the GPL doesn't mean that the FSF has any rights over it. The Linux kernel and Moodle are two examples of large packages that use GPL that as far as I know have no connection to the FSF at all. But maybe you meant more generically that they could be sued by whoever happened to own the copyright. But again that hasn't precluded them from using the Linux kernel. Nor has it stopped manufacturers of Android smartphones. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Debian adds another systemd dependency, Busybox drops it
On Fri, Nov 6, 2015 at 9:10 AM, Rich Pieri <richard.pi...@gmail.com> wrote: > On 11/3/2015 10:31 AM, Thompson, David wrote: >> >> Systemd drama aside, an unprivileged Xorg sounds very useful. > > > Tangentially, we've had genuinely unprivileged X servers for a long time. > VNC's standalone X servers do not require root and to the best of my > knowledge never have. Combined with DirectVNC, a Linux framebuffer VNC > client, and you can have X without root without systemd hackery. True. But I think most people want X servers that take advantage of all the graphics acceleration features in modern graphics cards. Those X servers have, in my experience, usually required running as root. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Fwd: Hey FCC, Don't Lock Down Our Wi-Fi Routers | WIRED
On Mon, Oct 5, 2015 at 1:00 PM, Edward Ned Harvey (blu) <b...@nedharvey.com> wrote: >> From: Discuss [mailto:discuss-bounces+blu=nedharvey@blu.org] On >> Behalf Of Shirley Márquez Dúlcey >> >> A router locked down in that way could not incorporate any GPLv3 code. > > I don't see any reason locked-down firmware would violate GPLv3. As long as > you announce what code you're using, and distribute the code. Actually one of the changes in GPLv3 was to add requirements for installation instructions for certain classes of products. From section 6 of: http://www.gnu.org/licenses/gpl-3.0.html === A “User Product” is either (1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, “normally used” refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. “Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. 
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), *** the Corresponding Source conveyed under this section must be accompanied by the Installation Information. *** But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). === So basically under GPLv3, if a product is sold to consumers and the firmware is updatable, then the end user has to be given all information required to update the firmware. So WiFi routers, cell phones, TVs, streaming media devices, etc., if they contain GPLv3 covered source code, must provide installation instructions/keys. Of course, the Linux kernel is under GPLv2 which doesn't have this provision. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Fwd: Hey FCC, Don't Lock Down Our Wi-Fi Routers | WIRED
On Mon, Oct 5, 2015 at 11:39 AM, Rich Pieri <richard.pi...@gmail.com> wrote: > On 10/5/2015 3:20 AM, Bill Bogstad wrote: >> >> So what does it mean when the FCC's own documents suggest otherwise? >> For example, the document at: > > > What it means is that you are taking one document to be something it isn't. > > FCC guidelines are not rules. They are not requirements. They are not even > recommendations. They are suggestions as to what vendors can do to ensure > compliance -- even when they're laced with a lot of "MUST" clauses. > > The vendors know this. It's not a big deal for them; it's business as usual. > The ones that have been locking their devices all along will continue to do > so. The ones that have not will implement other mechanisms to ensure > compliance. Vendors will take the path of least resistance. Tivo and others have already shown how to only allow signed firmware updates. Vendors don't want people to be able to replace the firmware on their products anyway, as that just means people won't buy more expensive/newer products as often. The only reason all vendors haven't gone to requiring signed firmware is the additional cost. If the least expensive way to fulfill FCC requirements is to lock down the hardware, that's what they will do. They certainly won't spend extra money to continue to allow end users to update upper level software while at the same time locking down radio parameters. So while FCC rules might not mandate such a lockdown, that is going to be the almost inevitable result of the compliance requirements. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Cloud-backup solutions for Linux?
On Mon, Sep 28, 2015 at 12:35 PM, Rich Pieri <richard.pi...@gmail.com> wrote: > On 9/28/2015 8:24 AM, Edward Ned Harvey (blu) wrote: >> >> something. The guest storage device is a precise snapshot of what the >> guest storage would have looked like at the instant that the storage >> snapshot occurred. > > > No, it isn't. Issuing a flush command on the host does not flush guests' > buffers so what resides on the host's file system will not include what is > buffered and not committed in the guests. If you make a snapshot of the host > file system in this state then the guest file systems on the snapshot will > be inconsistent and possibly unusable depending on the nature of the data. I think you are both right. Ed is right because it is a precise image (snapshot) of an albeit inconsistent filesystem, while you are correct that the filesystem may be inconsistent, with unpredictable results. While filesystem based snapshotting systems generally give you snapshots that are file consistent, disk image based systems typically do not, and neither of them guarantees that your applications left their data in a consistent state. The appropriate way to backup/snapshot/whatever a system depends on what you care about and exactly how your apps/filesystems/os do things, etc. In the old days, people would run dump on filesystems which were actively being modified and they got away with it (most of the time...). What is your risk tolerance? Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Cloud-backup solutions for Linux?
On Mon, Sep 28, 2015 at 8:24 AM, Edward Ned Harvey (blu) <b...@nedharvey.com> wrote: >> From: Bill Bogstad [mailto:bogs...@pobox.com] >> >> > 2- Use a snapshotting filesystem like btrfs or zfs in the host, so the >> > host can >> replicate the guest storage to another location seamlessly. >> >> I don't see how this can work in a way that would be useful. >> Filesystem snapshots of your emulated disk images by the host OS may >> give you a single point in time copy, but they don't guarantee that >> the copy is in any way consistent. > > This is one of my favorite modes of operation. I run a ZFS host, and have > guest VM's residing in zvol's, which get snapshotted and replicated > periodically to additional attached storage, and offsite and offline. > > If something happens, like the whole machine explodes or whatever, then I > restore the guest snapshot, and power it on. The behavior of the guest > machine is exactly as if the guest machine had been running and then suddenly > the guest power was yanked or kernel panic or something. The guest storage > device is a precise snapshot of what the guest storage would have looked like > at the instant that the storage snapshot occurred. Well, we are on the same page on that anyway. It is exactly like pulling the plug on a running system. > If you're running an OS or some daemon that can't survive a power > interruption, time to find a new OS or switch to a different daemon. While some OSes/filesystems handle power interruption well at this point, it seems to me that there are lots of apps/servers which do not and which people still need to use. Particularly in a VM environment where you might be running legacy OS/app combinations because you can't replace them, it seems to me that suggesting this method as a generic way to back up VMs is not really appropriate. Sure, we should all replace our old software systems with ones that use transactions to protect against this kind of failure, but I don't think we are there yet. 
Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
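In outline, the snapshot-and-replicate workflow Ed describes might look like this; the pool, dataset, snapshot names, and backup host below are invented for illustration.

```shell
# Snapshot the dataset (or zvol) backing the guest's disk. The
# snapshot is atomic: a point-in-time, crash-consistent image.
zfs snapshot tank/vms/guest1@2015-09-28

# Replicate only the delta since the previous snapshot to another
# machine; a full initial send (no -i) is needed the first time.
zfs send -i tank/vms/guest1@2015-09-27 tank/vms/guest1@2015-09-28 | \
    ssh backuphost zfs receive backup/guest1
```

Restoring is then a matter of rolling back or cloning the snapshot and booting the guest, with exactly the pulled-plug consistency caveats discussed in this thread.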
Re: [Discuss] Cloud-backup solutions for Linux?
On Mon, Sep 28, 2015 at 3:37 PM, Rich Pieri <richard.pi...@gmail.com> wrote: > On 9/28/2015 2:56 PM, John Abreau wrote: >> >> I'm not familiar with MongoDB, but I would be surprised if it didn't have >> a >> similar option to dump its data to a text file. > > > Be surprised. MongoDB lacks the tools to do full text dumps. It has a > limited export function which is useless for production backups because it > can't export everything and a dump function that dumps to a binary format. This web page makes for some "fun" reading on how to back up MongoDB: http://docs.mongodb.org/manual/core/backups/ But at least the binary format (BSON) is documented, so you might be able to create a BSON to text converter for migration to some other database system. Still, it makes it very clear to me that one-size-fits-all backup strategies don't really exist. Unless you are willing to have backup time windows where you shut everything down, you are going to have to really dig into the details of your apps/databases to figure out how to do consistent backups. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
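On the BSON-to-text point, the standard MongoDB tooling can actually get partway there already: bsondump, which ships alongside mongodump, renders BSON dump files as JSON text. A sketch, with database and collection names invented for the example:

```shell
# Dump one database to BSON files, one file per collection.
mongodump --db mydb --out /backup/dump

# Render one collection's BSON as JSON text, e.g. for inspection
# or as a starting point for migration to another database.
bsondump /backup/dump/mydb/mycollection.bson > mycollection.json
```

This doesn't change the consistency problem discussed in the thread: mongodump against a live server still needs care (or a quiesced/secondary node) to get a coherent snapshot.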
Re: [Discuss] Cloud-backup solutions for Linux?
On Sun, Sep 27, 2015 at 8:16 AM, Edward Ned Harvey (blu) <b...@nedharvey.com> wrote: >> From: Discuss [mailto:discuss-bounces+blu=nedharvey@blu.org] On >> Behalf Of Daniel Barrett >> >> One piece I've never fully worked out is backing up the live VM's >> (VMware Workstation) running on my Linux box. > > For VM's, you only have three choices: > > 1- Install backup software or something in the guest, let the guest back > itself up. > 2- Use a snapshotting filesystem like btrfs or zfs in the host, so the host > can replicate the guest storage to another location seamlessly. I don't see how this can work in a way that would be useful. Filesystem snapshots of your emulated disk images by the host OS may give you a single point-in-time copy, but they don't guarantee that the copy is in any way consistent. Now if you are willing to shut down your guest and then take the snapshot, it would work and downtime of the guest OS would be minimal. Or perhaps I'm misunderstanding what you are suggesting. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
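For what it's worth, there is a middle ground between live snapshots and a full guest shutdown on some hypervisors. On a KVM/libvirt host with the QEMU guest agent installed in the guest (an assumption — this doesn't apply to VMware Workstation as in Dan's case), the guest's filesystems can be quiesced for the instant of the snapshot:

```shell
# Freeze guest filesystems via the guest agent, take a disk-only
# snapshot, then thaw. "guest1" and "backup-snap" are placeholder
# names; --quiesce fails cleanly if no guest agent is running.
virsh snapshot-create-as guest1 backup-snap --disk-only --quiesce
```

This gives filesystem-consistent images with negligible downtime, though applications that keep state in memory still need their own quiescing.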
Re: [Discuss] Cloud-backup solutions for Linux?
On Thu, Sep 24, 2015 at 1:55 PM, Edward Ned Harvey (blu) <b...@nedharvey.com> wrote: >> From: Discuss [mailto:discuss-bounces+blu=nedharvey@blu.org] On >> Behalf Of Jack Coats >> >> Syncing is a form of backup IMHO. > > The reason why syncing is not a backup, is because if you delete a file, and > the deletion gets replicated, you cannot recover the deleted file. > > Ability to recover deleted files (or old versions of files that have been > overwritten) is a pretty important characteristic of a backup system. Some people would call that an archival system. While it is true that most backup systems allow you to recover deleted/old versions of files, it's not clear that is a required part of a backup system in the strictest sense. Still, it comes in handy and has been a typical feature of most backup systems, so being aware of when it isn't available is well worth knowing. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
[Discuss] possible use case for hacked WiFi accessible SD cards/(Federico's talk on July 15th)
I went home after Federico's talk yesterday and tried to come up with some possible use cases for hacking the Linux running on the WiFi accessible SD cards. One issue with using these devices is that there doesn't seem to be an easy way to add sensors/external controls to the device. There is an onboard serial port, but many sensors don't do serial output. On the other hand, there ARE some rather cheap and interesting devices that do. For example, this web page: http://www.instructables.com/id/8-GPS-Receiver-Hack/ talks about using an $8 GPS receiver with an Arduino. Normally, it is used via a mini-PCI-e interface, but it turns out that it also has a serial port over which an Arduino can communicate. It seems likely that this could be interfaced with the serial port on a WiFi-SD card as well. By combining the WiFi-SD card and the GPS receiver, a cheap device could be created to log position information which could then be retrieved via WiFi. Plant it on someone's car once and you never have to touch the vehicle again to retrieve the logged tracking information. For a more benign use case, add a rechargeable battery and solar cell and it might be useful for naturalists who want to monitor the comings and goings of animals in the wild. Admittedly, there might be issues of range on the WiFi connection here, but a directional antenna on the data retrieval system might be sufficient. If there are frequently used game trails/watering holes, just set up your retrieval system near one of them and retrieve the newest logs whenever the animal passes by that location. In the local urban area, where xfinitywifi SSIDs are common, you could let the device use your Comcast account to upload the logs whenever it happens to see a strong WiFi signal. If you have it log whatever SSIDs it sees during its travels, you might not even need a GPS receiver. 
I think Android devices can use the SSIDs they see to retrieve coarse-grained location information, and if you can get access to the same database, you might be able to figure out location information without the GPS. Does anybody know of other cheap sensors which happen to have serial ports included? If a device is dealing with digital data, having a serial port for testing seems to be a very common practice, so it seems likely that others exist. Of course once you add external devices, you are going to have higher power requirements, but it still might be manageable... Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
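Reading such a GPS from a serial port could be sketched like this. The device node and baud rate are guesses (check the module's documentation), and the whole thing is wrapped in an existence check so it simply does nothing on a machine without the hardware:

```shell
GPS_DEV=/dev/ttyUSB0    # assumed device node; adjust for your setup
if [ -e "$GPS_DEV" ]; then
    # Put the port in raw mode at the module's (assumed) baud rate.
    stty -F "$GPS_DEV" raw 9600
    # Keep only the $GPGGA fix sentences from the NMEA stream and
    # append them to a log for later retrieval over WiFi.
    grep --line-buffered '^\$GPGGA' < "$GPS_DEV" >> gps-fixes.log
fi
```

$GPGGA is the standard NMEA "fix data" sentence (time, latitude, longitude, altitude), which is usually all a position logger needs to keep.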
Re: [Discuss] NAS: lots of bays vs. lots of boxes
On Fri, Jul 10, 2015 at 5:41 PM, Rich Braun ri...@pioneer.ci.net wrote: As drive capacities increased, transfer speed also did. (You have updated your motherboards to 6G SATA, right?) I posted here not too long ago that I'd suffered a triple-disk failure, which forced me to buy into the current crop of magnetic media rather than wait see what the SSD market looks like in 2017. There is a big difference between the bus speed and the actual speed that data can be read from/written to a disk. Hard drive manufacturers love to talk about the bus speed; but for something like a RAID rebuild, the sustained transfer speed is what matters. And that hasn't been going up that much. I think even the best enterprise hard drives max out at less than 150Mbytes/second and most drives are well under 100. That's one way in which SSDs are better. They can actually make use of the faster bus speeds. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
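As a back-of-envelope check on that point (drive size and sustained rate below are assumed round figures, not measurements of any particular drive):

```shell
# Time to read one whole drive at sustained media speed -- the floor
# for any rebuild -- regardless of the 600 MB/s 6G SATA bus rate.
drive_mb=$((4 * 1000 * 1000))   # a 4 TB drive, expressed in MB
sustained=150                   # MB/s, near the best for spinning disks
echo "full read at ${sustained} MB/s: $(( drive_mb / sustained / 3600 )) hours minimum"
# → full read at 150 MB/s: 7 hours minimum
```

So even under ideal conditions a rebuild touching a full 4 TB drive takes most of a working day, and real rebuilds competing with live I/O take considerably longer — the bus is nowhere near the bottleneck.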
Re: [Discuss] stumped: machine won't boot
On Sat, Jun 27, 2015 at 7:20 AM, Laura Conrad su...@laymusic.org wrote: Bill == Bill Bogstad bogs...@pobox.com writes: Bill On Fri, Jun 26, 2015 at 8:39 AM, Laura Conrad su...@laymusic.org Bill wrote: Bill My desktop is a lenovo I bought last fall. It has only ubuntu Bill installed Bill on it. There is a 500M boot partition, which seems to have been Bill clobbered. The / partition looks fine, and in fact includes a Bill /boot Bill directory that looks normal to me. Bill I rebooted it a couple of days ago because of a new kernel, and it Bill wouldn't boot. Bill It says it can't find an OS. I have tried boot-repair, and it Bill thinks Bill it has repaired things, but I still get the message about not Bill being able Bill to find an os. Bill Can you give the exact error message? Error 1962: No operating system found. Bill Can't find an OS type messages could even be from the BIOS Bill which might mean that your boot block is corrupted or your Bill boot partition isn't marked bootable. I assume it is from the bios, because Grub never comes up. I have been trying via gparted and boot-repair to fix this, but it still give me the message. I'm going to try Richard's suggestion of SystemRescueCD next, but I don't have a lot of hope. I assume this is UEFI or GPT or one of the other things that makes installing linux harder than it used to be. Could be. Googling for that exact error message and ubuntu finds quite a few references about legacy BIOS vs. UEFI booting for a wide range of Lenovo systems. At least one suggested that a BIOS upgrade might help. You might try reading some of the discussions found by doing the search and see what makes sense to you. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] stumped: machine won't boot
On Fri, Jun 26, 2015 at 8:39 AM, Laura Conrad su...@laymusic.org wrote: My desktop is a lenovo I bought last fall. It has only ubuntu installed on it. There is a 500M boot partition, which seems to have been clobbered. The / partition looks fine, and in fact includes a /boot directory that looks normal to me. I rebooted it a couple of days ago because of a new kernel, and it wouldn't boot. It says it can't find an OS. I have tried boot-repair, and it thinks it has repaired things, but I still get the message about not being able to find an os. Can you give the exact error message? Can't find an OS type messages could even be from the BIOS which might mean that your boot block is corrupted or your boot partition isn't marked bootable. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] SSD Drives
On Tue, Jun 23, 2015 at 10:30 PM, Brandon Vogel bvo...@accessint.com wrote: FYI I downloaded the ISO and booted with that. You don't need Windows, just the ability to burn an ISO to CD/DVD/whatever. I have the EVO 840 in my laptop as well and after booting with the cd burned from the ISO file, the software told me it wasn't needed, so I must be running the latest firmware. hdparm -I /dev/sd gives me the firmware revision along with a bunch of other stuff for hard drives. Can't recall if I've used it with SSDs, but I suspect SATA based SSDs should respond. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] memory management
On Sun, Jun 21, 2015 at 1:10 PM, Jerry Feldman g...@blu.org wrote: You can override its behavior by modifying the desktop file (/usr/share/applications/firefox.desktop in Gnome 3) The statement 'Exec=firefox %u' is the line to modify. You could place your modified copy in ~/.local/share/applications I have not tried this, but it should work. Instead of su -, use 'sudo -u user firefox', and update /etc/sudoers not to require a password for this. For instance: you user = NOPASSWD: /usr/bin/firefox I use multiple Firefox user profiles instead. Some of them allow cookies/javascript and others do not. This probably doesn't help memory usage, but it does allow some (small?) security benefits. I'm curious, though, how this other user account gains access to your X server. Allowing other user ids to write on your screen/capture key/mouse events seems to me to be a potential issue. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] memory management
On Sun, Jun 21, 2015 at 4:19 PM, Richard Pieri richard.pi...@gmail.com wrote: On 6/21/2015 9:18 AM, Bill Bogstad wrote: I use multiple Firefox user profiles instead. Some of them allow cookies/javascript and others do not. This probably doesn't help memory usage, but it does allow some (small?) security benefits. Or use a script blocker like NoScript or uBlock. These offer significant security benefits and significantly reduce memory footprint. I do that as well. Some of my Firefox profiles have NoScript and others do not. I also have a junk profile which has nothing installed and allows everything, but discards all history/cookies/etc. when I exit it. I'm curious though, how this other user account gains access to your X server. Allowing other user ids to write on your screen or capture key/mouse events seems to me to be a potential issue. May need to use xhost to allow the second user access to the X server, something like this: xhost +SI:localuser:myffuser sudo -u myffuser /usr/bin/firefox xhost -SI:localuser:myffuser It's not an issue on a single user box; it's the same user (human) with a different UID. This is where I disagree. If it doesn't increase security over using the same UID, why bother? And I'm not sure it really increases security all that much. For example, breaking out of a browser to run arbitrary code on the same box as my real user id is still a potential security problem. Any OS level bugs that aren't network exploitable are now in play. A bit like having a guest account on the machine. Not something that most people do anymore. Second, if that user id has the privileges to pop up windows on the same X server as my real user id, I might get spoofed, or have my screen or even possibly my keystrokes captured. It will depend on how my X server is set up (and its security). While it isn't a bad idea to run the browser as a different user, I think it is more like a speed bump or a chain link fence than a vault door. 
Better might be a chrooted environment, a Linux container (Docker?), or even a VM. Now, I have to say that I'm not paranoid enough to bother with this. I guess it depends on why you do it. If it is for user-tracking control, I think different user profiles are sufficient. If the intent is better security, I'm not sure it is an improvement. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] About to rip out systemd and start over
On Thu, May 21, 2015 at 10:50 PM, Rich Braun ri...@pioneer.ci.net wrote: For the umpteenth time, this morning found myself at the console of a dead Linux box, unable to bring the system up because of unreconciled or circular or otherwise out-of-sequence dependencies in systemd. An hour and a half later, during which I had none of the usual tools available to examine the system (because it wouldn't come up), I just wound up reverting a couple of tiny edits to dependencies that I'd made in the weeks since last reboot--and wound up with the same 5-manual-steps restart procedure to get past busted startup sequencing that I've endured for years. Perhaps you could provide some details for people who may not yet have run into these issues? Clearly every useful system requires some software/configuration changes not automatically provided by the installation system. I'm guessing that these kinds of problems are more likely to show up the farther away you get from default configs/software installs. Or the more upgrades that have been done since the initial install. Hints on what it takes to break systemd would be a service to everyone. Thanks, Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] interesting discussion on silverlight
On Sun, May 3, 2015 at 11:07 PM, Richard Pieri richard.pi...@gmail.com wrote: On 5/3/2015 4:52 PM, Bill Bogstad wrote: GIF or it didn't happen. By which I mean, show me the contract where that is one of the stipulations. NDA Then it didn't happen. It must be nice to live in such a simple world. No complications. ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] OSX Mavericks root exploit, and Safari
On Fri, Apr 17, 2015 at 8:13 PM, Richard Pieri richard.pi...@gmail.com wrote: On 4/17/2015 9:26 AM, Edward Ned Harvey (blu) wrote: I'd like to alert people that OSX Mavericks has a root exploit that will not be fixed. All Mac users must immediately update to Yosemite in order to maintain any semblance of security. Cutting through the hyperbole It's a local privilege escalation vulnerability nicknamed rootpipe. It can be mitigated by doing one thing: don't run as an administrator account. Standard user accounts cannot be used to exploit this vulnerability. From the Ars Technica article linked from the original email: ... The researcher continued to experiment with the flaw until he found a way to elevate privileges even from standard OS X accounts, which give users considerably less control. To Kvarnhammar's amazement, he was able to expand the attack by sending what's known as a nil to the OS X mechanism that performs the elevation authorization. A nil is a zero-like value in the Objective-C programming language that represents a non-existent object. Sounds like your info might be out of date. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] OT: Do CS grads need calculus?
On Tue, Apr 14, 2015 at 2:39 AM, Richard Pieri richard.pi...@gmail.com wrote: On 4/13/2015 7:36 PM, Bill Bogstad wrote: Exactly, and why should Calculus be what everyone takes after their HS Algebra sequence? Algebra teaches you one way to approach solving problems. Calculus immediately after algebra teaches you a different way to approach solving problems while the first is still fresh in your head. The sequence teaches you to think. It teaches you to think outside the box. On the other hand, if your goal is to be a diploma mill code monkey then thinking outside the box is detrimental to your chosen career path and you don't need calculus. You probably don't need anything more complex than basic arithmetic. Do you not believe that other branches of mathematics can teach you how to think differently? Take the Monty Hall problem for example. So counterintuitive that quite a few people with Ph.D.s in engineering and mathematics got it wrong when they saw it. Perhaps there is something to even basic probability after all. I will state it again: YES, Calculus is good. It stretches your mind. The specific things it teaches you may or may not be relevant to what you plan to do. Other branches of mathematics also stretch your mind, and they may be more relevant. Is it really so hard to accept the theoretical possibility that the above might be true? Not for all people, but for some. To all of those who don't believe the above: Is it possible that this is some kind of hazing thing? You had to take Calculus and it was hard, therefore everyone should take it. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
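The Monty Hall claim above is easy to check empirically. Here is a quick simulation (my own sketch, not from the thread) showing why switching doors wins about 2/3 of the time while staying wins about 1/3:

```python
import random

def monty_hall(trials, switch, rng):
    """Play the Monty Hall game `trials` times; return how often the player wins."""
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)       # door hiding the car
        pick = rng.randrange(3)      # player's initial choice
        # Host opens a door that is neither the player's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins

rng = random.Random(1)
stay = monty_hall(10_000, switch=False, rng=rng)
swap = monty_hall(10_000, switch=True, rng=rng)
print(f"stay: {stay / 10_000:.3f}  switch: {swap / 10_000:.3f}")
```

The counterintuitive part is that the host's door-opening leaks information: switching loses only when the first pick was already the car, which happens 1/3 of the time.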
Re: [Discuss] OT: Do CS grads need calculus?
On Mon, Apr 13, 2015 at 7:44 PM, Derek Martin inva...@pizzashack.org wrote: On Tue, Apr 07, 2015 at 01:05:43PM -0400, Bill Ricker wrote: If you're programming Video Games, real Physics is VERY useful, and knowing enough Calculus to make good approximations too. If you're in the guts of a graphics rendering engine, Trig (and approximations) wins big. If you're straddling EE and CS, you need at least a little Calc to do the electronics. But that's not every programmer. At age 22, when most people earn their bachelor degree, do you have any idea what kind of programmer you ultimately will become? I sure as hell didn't imagine I'd be doing what I'm doing today, and I graduated later than most... The more you know, the more opportunities you will have. Learning calc exposes your brain to a way of thinking you likely hadn't seen before that. It expands your mind and makes thinking about certain classes of problems easier/more familiar. Arguing against it suggests to me narrow-mindedness and/or laziness. I don't use calc in my day-to-day work but I have used it on a few occasions to simplify certain problems. That wouldn't have been possible for me had I not studied it. Admittedly, if I needed to use it now I would fail--it's been way too long. Or at least, I would need a refresher. Nobody is saying that calculus isn't a useful subject or that learning new things is bad. They are just suggesting that priorities might be different for CS majors than for Physics or EE majors where math is concerned. I was an EE major and Calculus was absolutely necessary for my EE classes. I took a bunch of CS classes as well and Calculus wasn't needed for them, but graph theory, combinatorics, etc. would have been helpful. We should be changing the core math curriculum for HS/College (for non-Physics/Engineering majors) to make better citizens: Probability, Statistics, Risk Management; Discrete Math. Those are more useful to Applied Computer Science students than Calculus too. 
My high school offered all of those; but there's only so much math you can fit into a high school education. I ended up having all of those Exactly, and why should Calculus be what everyone takes after their HS Algebra sequence? Just because that's what was done in the past? Sixty years ago when this was all laid out (post WWII/start of the space race/etc.), it was mostly true that the professions/areas of study that needed math past Algebra all needed Calculus. It is not clear that is still true. Or that they don't need some other math even more. in college, and I dare say that my classical mathematical education in HS served me better, at that particular period of time. YMMV. In my experience, most people who weren't bound for some form of sciences in college took basic math in HS... and all too often did poorly at even that. Prob/stats was a niche course taken mostly by people who *liked math* and were bound for college but not for science. A rare few indeed. Which would now be a mistake. You basically can't do science any more without probability and statistics. But you can do plenty of science without Calculus. And you can at least get a basic understanding of both without Calculus. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] OT: Do CS grads need calculus?
On Sat, Apr 11, 2015 at 6:20 PM, Daniel Barrett dbarr...@blazemonger.com wrote: Programmers definitely benefit from higher mathematics, but there's nothing magical about calculus in this regard. I majored in math in college (and have a Ph.D. in computer science) and learned more about proofs and rigorous thinking in my junior year algebra courses, studying groups, rings, and fields, than in my calculus classes. So true. Much later in life I took some undergraduate algebra courses and I have to say that they had way more rigor/proofs than any of the high school or college calculus classes that I ever took. At times, I felt like my brain was oozing out of my ears. And it didn't require Calculus to get there. I would still push for Discrete Math (which includes basic combinatorics and graph theory) and discrete Probability rather than the Calculus-based continuous Probability that is normally emphasized in Math departments. I think college algebra (at least as I experienced it) should only be used if you need a weed-out class. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] OT: Do CS grads need calculus?
On Tue, Apr 7, 2015 at 7:58 PM, Mike Small sma...@panix.com wrote: Daniel Barrett dbarr...@blazemonger.com writes: ... was. Just cheesy. If CS programs do a really pretty discrete math course and/or other math courses I don't know about and do it as a proper university course, great. Tufts has an Undergraduate Discrete Math course jointly taught by the Math and CS departments: http://math.tufts.edu/courses/courses.htm Math 61 - Discrete Mathematics (formerly Math 22; cross-listed as Comp Sci 22) Every semester. Sets, relations and functions, logic and methods of proof, combinatorics, graphs and digraphs. Recommendations: MATH 32 (formerly MATH 11) or COMP 11 or permission of instructor. It is a requirement for the CS major. Math 32 is Calculus I. Comp 11 is the Introduction to Computing class for prospective CS majors. My wife is a CS professor at Tufts with a Math Ph.D. from MIT and frequently teaches that course. She definitely teaches it as a proper university course. The CS department needs its majors to understand that stuff and how to do proofs. So they have a strong incentive to make sure that it is done right. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] pulse files in /tmp on RHEL 6 - followup
On Sat, Mar 21, 2015 at 4:11 PM, Bill Ricker bill.n1...@gmail.com wrote: On Sat, Mar 21, 2015 at 8:02 AM, Jerry Feldman g...@blu.org wrote: The root passwords on these systems were expired and this prevents cron from executing on a user with an expired password. Ouch. Yet another reason to avoid having 'root' have a password. Is it even possible to configure RH for root to a UID but not an interactive account, like in Ubuntu ? In what sense is root not an interactive account on Ubuntu? The shell entry for root is an interactive shell. Ubuntu simply doesn't give root a password which can be used to login and makes sure that a non-root user name can sudo to root. I believe there are other things as well, but those are the basic differences. You could do the same thing manually on any relatively modern Linux system. The really interesting thing for me here is how our modern world of PAM authentication interacts with things that I don't normally think require authentication. When I saw Jerry's original note, I did some googling and found that this can cause problems with ssh key-only logins as well. If you look at /etc/pam.d, it seems lots of programs use PAM for authorization/authentication and I suspect that there are other surprises waiting there. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Most common (or Most important) privacy leaks
On Wed, Feb 18, 2015 at 4:15 AM, Richard Pieri richard.pi...@gmail.com wrote: So. Someone replied directly to me instead of the list suggesting that character length is an important factor in password security. Letter count is a pointless factor in password security. Four score and seven years ago is 30 characters and still trivially vulnerable to dictionary attacks. We hold these truths to be self-evident is 40 characters and it is just as weak as the first example. Password reform starts with abandoning password rules and policies. Rules and policies are bad. Every policy that you enforce makes it easier for attackers to analyze passwords. If you have a policy that enforces a 15 character minimum then an attacker knows to ignore everything that is 14 or fewer characters, and given human nature he can ignore everything over about 20 characters for most passwords. If you have a policy that enforces the use of at least one number then an attacker has 10 known possible plaintexts in every password. At least one capital letter is 26 known possible plaintexts. And so forth. The problem with this is that if you don't enforce a minimum length on passwords a significant number of your users will use something that is probably less than 6 characters long. Of course, many of those would fall to a dictionary attack as well. And the same users are going to use Four score if you require longer passwords, so you lose anyway. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
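For what it's worth, the "attacker knows to ignore short passwords" half of the argument is easy to quantify. This is my own back-of-the-envelope arithmetic (not from the thread), assuming a 95-character printable-ASCII alphabet and the ~20-character practical ceiling mentioned above: the strings a 15-character minimum gives away are a vanishing fraction of the search space.

```python
ALPHABET = 95  # printable ASCII characters

def keyspace(min_len, max_len):
    """Count the strings with lengths from min_len through max_len."""
    return sum(ALPHABET ** n for n in range(min_len, max_len + 1))

everything = keyspace(1, 20)  # no length policy, anything up to 20 characters
skipped = keyspace(1, 14)     # what a 15-character minimum lets an attacker skip

fraction = skipped / everything
print(f"fraction of the keyspace an attacker gets to skip: {fraction:.1e}")
```

Each extra character multiplies the space by 95, so everything shorter than the maximum length is rounding error; the real damage from policies comes from how humans respond to them, not from the lengths they rule out.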
Re: [Discuss] os x = poop?
On Fri, Feb 13, 2015 at 1:16 AM, Richard Pieri richard.pi...@gmail.com wrote: On 2/12/2015 4:16 PM, Martin Owens wrote: No, that's what the general public does. Apple is anti-foss, not just neutral to FOSS when you tot up their track record. https://www.apple.com/opensource/ http://www.opensource.apple.com/ The first link is a list of all of the FOSS projects that Apple ships with OS X. Some like GCC and LLVM are projects to which Apple has contributed improvements. Others like CUPS are projects that Apple has opened up from within. The second link is the top level reference to the source code to the core Unix OS (Darwin) for every OS X and iOS release ever published. Apple isn't anti-FOSS. Apple is anti-GPLv3, and I for one don't blame them for that. You've been drinking the Apple kool-aid, Rich. While Apple hasn't been 100% anti-FOSS, I would suggest that they are not only anti-GPLv3; they have historically been anti-GPLv2 as well. Looking at CUPS (http://en.wikipedia.org/wiki/CUPS): It had been around for 3 years (and GPLed) before Apple even started using it. It was about 8 years after CUPS was first available that Apple essentially bought the main CUPS programmer (and all his rights to the software). I'm not sure how you can interpret that as Apple opening up CUPS from the inside. It seems to me that they found themselves in a situation where they had become dependent on a GPLed piece of software and essentially bought themselves out of having to publicly release their internal changes due to GPLv2 requirements. If they had simply wanted specific changes made to the software, they could have commissioned the developer to make the changes and there would have been no need to buy all rights to the software. 
As for GCC, I don't feel like looking into too many details of Apple, GCC, and Objective-C; but this article seems to match up with my memory fairly well: http://www.informit.com/articles/article.aspx?p=1390172 It looks like Apple only reluctantly abided by GPLv2. Given the above, I would say that while Apple might be okay with open software, they have real problems with free software in the FSF sense. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Are there any no-cost vm's still out there?
On Fri, Feb 13, 2015 at 5:15 AM, Edward Ned Harvey (blu) b...@nedharvey.com wrote: From: Discuss [mailto:discuss-bounces+blu=nedharvey@blu.org] On Behalf Of Bill Horne My question: does VMWare or Virtualbox still offer no-cost software for home/personal use? I'd like to run both Linux and Windows 7 (for all the usual reasons), but I don't know if I can do it without paying for a VM. TIA. For a desktop, no-cost, you have: Virtualbox on any platform. VMWare Player on windows or linux. Since there's only one no-cost option on a mac, the question becomes - is it worthwhile to pay for Fusion or Parallels on the mac? And I say yes if you use it on time that you're paid to be working. No, if you're a student who just wants to do cool stuff for free. For a server, I would recommend nothing other than VMWare ESXi. Xen is crap, Virtualbox sucks for servers, I'll just mention MS in passing... And what else is there? Sure you could do something like KVM on a linux host, but why would you? In that case, replace the linux host with ESXi on bare metal, and make the linux host actually a guest. Re: using KVM (or Xen) You would use them because you want to be the next Amazon or Rackspace. It's clear that large organizations get real work done using these products. I'm sure they all have real problems, but they apparently have benefits as well. The question is what is your use case. (The VirtualBox one seems nasty, OTOH, I've never had that problem in my casual use of VB.) Re: question of accelerated graphics (from a separate thread) hardware accelerated graphics != direct hardware access While I can understand there might be some times that direct hardware access is needed for a VM, it seems to me that is opening up a huge potential problem from a security (and reliability of host OS) perspective. Personally, I would prefer a virtualized graphics stack as long as it fully supports current graphics programming interfaces at a reasonable performance level. 
From what I've seen, VMWare seems to be the best at doing this. For whatever reason, although stable for me, VB has never worked for me for more intensive graphics. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Finance software for Linux
On Wed, Jan 14, 2015 at 2:18 PM, Jerry Feldman g...@blu.org wrote: On 01/13/2015 10:16 AM, john saylor wrote: bonjour On 1/13/15 8:56 , Rich Braun wrote: GnuCash, I'm afraid, is even farther behind on the UI usability front. works for me, but one size does not fit all. It looks like the end of the road for desktop finance; the future is cloud services. But really, I'm a cloud-security developer: who can possibly trust the cloud with personal-finance data? the people who trust the cloud are the ones who don't really understand it ... once it's on those cloud servers, your control is gone. and all it takes is one careless temp, or disgruntled dba. Mike Rhodin, IBM Senior VP and general manager of the Watson Group, once stated that the cloud is simply the old mainframe way of doing things, but they had to use different terminology. But conceptually, you are running on some company's computers located somewhere. This isn't just mainframe computing, it is akin to commercial time sharing/service bureau computing. Admittedly for many people they were one and the same, since most businesses couldn't afford to own their own mainframe; but I would still consider them different computing styles. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Finance software for Linux
On Wed, Jan 14, 2015 at 2:26 PM, Matthew Gillen m...@mattgillen.net wrote: On 1/14/2015 8:05 AM, Jerry Feldman wrote: Unfortunately there are only a few companies in the industry who produce tax software, and they are only Windows and Mac compatible, or you can use the web interfaces. This type of software does not really lend itself to Open Source. Why do you say that? Because it requires a lot of specialized knowledge that typical CS majors don't have? There are certain projects in the open source world that have to pay attention to regulatory issues (e.g. wireless drivers), and they seem to be able to do so. I suppose the tax code is orders of magnitude more complex and intertwined. I'd be curious to explore your statement a little more though regarding what kinds of things lend themselves to open source. It isn't just the complexity. It is the constant churn. Sometimes because of last-minute changes from Congress, the IRS doesn't put out final instructions/forms until late Fall. Admittedly these are usually fairly esoteric issues, but as lots of people have something odd about their taxes (uninsured medical expenses, loss due to theft/fire, consulting income, etc., etc.); any organization putting out tax software has to be prepared to put out a new version fairly quickly. Plus the software is geographically restricted. US Federal tax software might be adaptable to state level returns; but probably won't be at all useful for UK or Canadian taxes. This just doesn't strike me as a problem domain that is very tractable to volunteer efforts. It is also (to a great extent) an all or nothing problem for most users. If tax software only handles 2 of the 3 forms that I need to fill out, it probably isn't worth my time to use it. I'm not aware of any other free software (or free culture, i.e. Wikipedia) effort which operates under these conditions. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Finance software for Linux
On Tue, Jan 13, 2015 at 2:56 PM, Rich Braun ri...@pioneer.ci.net wrote: [lots about personal finance software] First a confession, I don't use any personal finance software. I view my statements regularly, but don't do any reconciliation of accounts and I have an accountant do my taxes... While I can understand how Tax software needs constant updates, it is less clear to me why personal finance software would need this. The only changes that I can imagine would be needed on a frequent basis would be for automated inputting of statements from outside parties. It would seem to me that some kind of plugin/filter system which allowed users to write data input modules might help to spread the effort around. Perhaps something like the system used in the Zotero bibliography package, which understands many data layouts because it makes it easy for users to write and distribute import/export plugins. Perhaps I'm missing something? Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Who sells the least expensive SSL certs right now?
On Mon, Dec 22, 2014 at 11:10 PM, Edward Ned Harvey (blu) b...@nedharvey.com wrote: From: discuss-bounces+blu=nedharvey@blu.org [mailto:discuss- bounces+blu=nedharvey@blu.org] On Behalf Of Shirley Márquez Dúlcey Free certificates shouldn't be a business model. They should be something that you do to give back to the community, to help keep the internet an open place for everybody. While we're at it, let's ban commercial software, and copyright and patent and trademarks. Computers are able to copy all these things at zero cost; it should be free for everyone. Unicorns and rainbows for the win! ;-) Sorry, I know I'm being a jerk. But the argument that the *only* provider of commonly trusted free certs is extorting people by charging for revocation is foolishness. If that argument holds, then *no* certificate authority should be able to charge for issuing certs. No argument from me on this. However, I am not sure why I would ever bother to revoke a certificate for a general purpose web site. Why wouldn't I just stop using it and go get a new certificate from whatever CA I want? As for someone else spoofing my site with the stolen cert, I thought that it was still possible to get certificates signed for almost any domain from some of the CAs. So revoking a stolen certificate isn't going to help that much to protect against man in the middle attacks. I don't think it is going to stop someone who recorded the entire session from decrypting it once they get the private key either. What am I missing here? Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Please point me to the thread about open-source software project management tools
On Sun, Dec 14, 2014 at 4:34 AM, Bill Horne b...@horne.net wrote: We had a discussion on the list about open-source software project management tools, but now I can't find it in the archives for some reason. Please provide a pointer to the archive thread if you remember where it is, and TIA. Searching my personal BLU archive, the newest thing that I find is from Dec. 2008. No idea if this is what you are remembering. Here is a link to the public archive: http://lists.blu.org/pipermail/discuss/2008-December/031563.html Good Luck, Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] root CA bloat
On Sun, Nov 23, 2014 at 1:15 AM, Richard Pieri richard.pi...@gmail.com wrote: On 11/22/2014 4:15 PM, Bill Bogstad wrote: I already mentioned part of this in my first note. They would have to do it by changing the nameserver entries for the microsoft.com domain at the .com DNS servers which I'm pretty sure they don't run. MarkMonitor owns the microsoft.com and msft.net domains along with a slew of variations of those domain names. As owner of the domain, MarkMonitor could have VeriSign change the top level registration. It would not be bad data because MarkMonitor is the owner of the domain. Would it be visible? Sure. Any change in a public space is visible. Would MarkMonitor's customers care? Absolutely. MM would be doing what it is being paid to do: protect its customers' trademarks and copyrights without resorting to raids like the NoIP raid. Would the world notice? Probably not. MarkMonitor has been doing it for going on 15 years. If they did something that Microsoft hadn't requested then I'm pretty sure somebody would both notice AND care. This is all in the context of attacking the security of Internet communications via a MITM attack. If Microsoft (one of the two parties communicating in this example) authorized it, then it isn't MITM. Whether it is done via Microsoft directly disclosing my communications or via allowing some other third party agent to do so is not really relevant to me. As far as I can tell that is the risk that you are now describing. The risk is in ever talking to them at all and I don't see how technology can really solve that. Even Off the Record Messaging (http://en.wikipedia.org/wiki/Off-the-Record_Messaging) doesn't keep the other party from disclosing the contents. It just stops them from proving that I'm the person who said it. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] root CA bloat
On Sun, Nov 23, 2014 at 3:53 PM, Richard Pieri richard.pi...@gmail.com wrote: On 11/23/2014 3:26 AM, Bill Bogstad wrote: If they did something that Microsoft hadn't requested then I'm pretty sure somebody would both notice AND care. This is all in the context of attacking the security of Internet communications via a MITM attack. If Microsoft (one of the two parties communicating in this example) authorized it, then it isn't MITM. Whether it Ahh. I see what you mean, now. Your argument, that because Microsoft /did/ authorize MarkMonitor to act as an intermediary makes any interception not MITM since it's not an unauthorized party listening in, has merit. Almost... Microsoft didn't authorize MarkMonitor to monitor their communications (as far as I know). They authorized them to provide DNS related services. So if MarkMonitor did this, then I would call it a MITM attack. My point is more that if they did do it, I believe that it would be very public that something funny was happening. The cost to MarkMonitor for doing this would be so high that I don't see them doing it voluntarily. Now if this was really end of the world type stuff, someone might convince/force them to do it anyway. In that case though, I think the parties involved would just go to Microsoft and get copies from them. Much less likely for anyone to ever know. An alternative scenario where someone breaks into MM and does this is worth considering. By using MM, Microsoft is increasing the attack scope on their communications. Of course, Microsoft is also dependent on the security of all CAs, top level DNS servers, etc. The problems with CA delegation seem much more significant than this one though. Until we get a solution to that problem, I'm not going to worry about this one. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] root CA bloat
On Sat, Nov 22, 2014 at 2:30 AM, Richard Pieri richard.pi...@gmail.com wrote: On 11/21/2014 6:19 PM, Tom Metro wrote: Has anyone created an extension for Firefox that trims down the cert list to something like the top 50 cert providers? ... It gets better. Do a whois lookup on google.com. Then do one for yahoo.com. Now bing.com, microsoft.com, amazon.com, verizon.com, netflix.com, apple.com, comcast.com, att.com. Hell, any major commercial service or content provider. Chances are you'll see the same names: MarkMonitor and Corporation Service Company. These two companies are top-level CAs that control the DNS for most of the big-name players in the game. Which is to You are conflating DNS and Certificate Authorities. When I look at the certificate used for www.microsoft.com, it appears to be signed by Symantec via Verisign. In any case, controlling someone's DNS is not the same thing as being able to sign an SSL certificate that will be accepted. And as far as DNS is concerned, I don't see how you could do anything other than a worldwide MITM attack via the whois entry, because the whois database is not queried in realtime. While doable, I would expect it to be noticed. The important thing for actual DNS queries is the chain of recursive and authoritative DNS servers involved. If a DNS attacker is on your physical path to these servers, (or he manages to pollute the right DNS cache), attacks are relatively easy. If you are using DNSSEC (you probably aren't) then things get harder again. To be clear, I'm not saying that there aren't problems here. I'm just saying that whois data isn't the game over that you seem to be implying. Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] root CA bloat
On Sat, Nov 22, 2014 at 4:17 PM, Richard Pieri richard.pi...@gmail.com wrote:
> On 11/22/2014 5:33 AM, Bill Bogstad wrote:
>> You are conflating DNS and Certificate Authorities. When I look at the certificate used for www.microsoft.com, it appears to be signed by Symantec via Verisign. In any case, controlling someone's DNS is not the same thing as being able to sign an SSL certificate that will be accepted.
>
> MarkMonitor is a trusted CA. If they generate a certificate for microsoft.com then your browser will trust it. MarkMonitor is authoritative for the microsoft.com domain. They can change all microsoft.com hosts to point to their servers and you will trust them because their DNSSEC signatures are good and valid.

I already mentioned part of this in my first note. They would have to do it by changing the nameserver entries for the microsoft.com domain at the .com DNS servers, which I'm pretty sure they don't run. This would be visible to the whole world. So yes, they could do this; but I'm pretty sure it would be found out, because the bad data would be sitting in everybody's caching servers as well as the databases at the .com servers, which are run by multiple organizations. I expect they would then lose every customer they had within a few days or weeks. This is not a scenario that I'm going to lose sleep over. If you have some other scenario that doesn't involve putting MarkMonitor out of business, please provide details.

Bill
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Password app
On Fri, Oct 10, 2014 at 7:08 PM, Mike Small sma...@panix.com wrote:
> Matthew Gillen m...@mattgillen.net writes:
>> Because you don't have to keep that password database file on 5 different backup devices (and keep it updated on all your backup copies every time you add one).
>
> It's certainly not a security improvement. It's a usability improvement at the expense of security. What happens when you need to change a site's password? You use a new master pass phrase. Now you either have to go change all your passwords on each site or keep track of which were generated from the old and which the new master passphrases. Is that not how it would work?

The algorithm apparently has a per-site password counter: http://masterpasswordapp.com/algorithm.html

Of course, the current state of the counter is now something which has to be shared between devices. Not sure how that works. I suppose you could also modify the name of the site as well. Instead of comcast.com use myf***ingisp as the site name.

Bill Bogstad
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
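[Editor's note: the per-site counter idea can be sketched in a few lines. This is only an illustration of the rotate-by-counter concept, NOT the actual Master Password algorithm (which derives its key with scrypt and maps bytes through password templates); the master phrase, site name, derivation, and output length here are all made up.]

```python
import hashlib

def site_password(master: str, site: str, counter: int = 1, length: int = 12) -> str:
    """Illustrative sketch only -- not the real Master Password scheme.

    The point: the password is a pure function of (master, site, counter),
    so bumping the counter rotates one site's password without touching
    the master phrase or any other site's password.
    """
    seed = hashlib.pbkdf2_hmac(
        "sha256",
        master.encode(),
        f"{site}:{counter}".encode(),  # bump counter to get a new password
        100_000,
    )
    # Map derived bytes onto a printable alphabet.
    alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz23456789"
    return "".join(alphabet[b % len(alphabet)] for b in seed[:length])

# Same inputs always give the same password; counter=2 gives a fresh one.
print(site_password("correct horse", "comcast.com", counter=1))
print(site_password("correct horse", "comcast.com", counter=2))
```

This also makes Bill's point concrete: the counter value itself becomes state that has to be remembered or synced between devices.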
Re: [Discuss] Server/laptop full-disk encryption
On Wed, Oct 1, 2014 at 3:49 AM, Bill Horne b...@horne.net wrote:
> On 9/30/2014 9:38 AM, Edward Ned Harvey (blu) wrote:
>> In linux, I'm not aware of any product that does whole disk encryption without needing a power-on password. In windows, Bitlocker uses the TPM to ensure the OS gets booted untampered, and then your user logon password and OS security are used to prevent unauthorized access. This is truly great to protect against thugs and laptop thieves.
>
> No offense, but why would it/ how could it? A laptop thief isn't likely to be looking for /your/ info, just an appliance to sell. Thugs, OTOH, will be able to apply rubber-hose cryptography if it's /your/ data they want, and either way having an encrypted hard disk doesn't seem like a deterrent.

Yeah, I was going to post earlier that simply having Linux installed on your laptop is going to protect your data against 99% of random thieves. When they boot it up and find out that it isn't running Windows (they know it's not a Mac), they are going to either toss it in the trash or get their cousin who plays a lot of video games to just do a reinstall of Windows on it.

As for rubber hoses, it is not clear what the threat model is here. On the one hand, we seem to want complete security and on the other hand we are willing to hand out the passphrase to any machine that can retrieve the URL from the local LAN. I will agree with the original poster though that as of the last time I looked (a year or so ago), Linux documentation of whole disk encryption seemed to assume that you were trying to hide out from the mob or some 3-letter agency and had a Ph.D. in computer science. On the other hand, security is hard with lots of fiddling details. Whether better UIs to the underlying software would help is unclear to me.

Bill Bogstad
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Server/laptop full-disk encryption
On Wed, Oct 1, 2014 at 12:52 PM, Edward Ned Harvey (blu) b...@nedharvey.com wrote:
>> From: discuss-bounces+blu=nedharvey@blu.org [mailto:discuss-bounces+blu=nedharvey@blu.org] On Behalf Of Bill Horne
>> On 9/30/2014 9:38 AM, Edward Ned Harvey (blu) wrote:
>>> In linux, I'm not aware of any product that does whole disk encryption without needing a power-on password. In windows, Bitlocker uses the TPM to ensure the OS gets booted untampered, and then your user logon password and OS security are used to prevent unauthorized access. This is truly great to protect against thugs and laptop thieves.
>>
>> No offense, but why would it/ how could it? A laptop thief isn't likely to be looking for /your/ info,
>
> I didn't mean it deters thieves. I meant it protects your data against thieves. There are many situations where the data value greatly exceeds the laptop value - but even when it doesn't - even if your data is only worth $1, then it's good to limit your loss to the value of the hardware and nothing more.

Unlike on-line data thieves who can automate their data collection to attack thousands, actually retrieving data from your stolen laptop will take significant human effort on their part. If there were bootable CDs out there that they could use to scan a laptop for information to sell, that might change things, but for now I suspect that even a low bar would be sufficient to deter random thieves from bothering to retrieve your data.

There is also the value of their time. If your info is really only worth $1 to them vs. ?? amount of time, are they going to bother? Of course, even though it might only be worth $1 to them, it might be worth much more to you to reduce the aggravation. How much time it will take you and whether you should be willing to spend that time (as insurance) against future loss isn't obvious to me.

It seems like whenever people start talking about computer security, there is a tendency to shoot for the maximum theoretically possible. We don't do that when it comes to our cars or homes, but we do with computers.

Bill Bogstad
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] CipherShed: TrueCrypt fork
On Wed, Oct 1, 2014 at 4:56 PM, Richard Pieri richard.pi...@gmail.com wrote:
> Meh. Today, if someone were to ask my opinion I'd say no to any and every software-based disk encryption system. If you want to encrypt the entire disk then get a self-encrypting disk and set a BIOS/firmware password on the host.

Because you trust the firmware provided by the disk drive manufacturer? You clearly aren't wearing your tin foil hat today.

Bill Bogstad
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] CipherShed: TrueCrypt fork
On Wed, Oct 1, 2014 at 6:08 PM, Richard Pieri richard.pi...@gmail.com wrote:
> On 10/1/2014 11:19 AM, Bill Bogstad wrote:
>> Because you trust the firmware provided by the disk drive manufacturer? You clearly aren't wearing your tin foil hat today.
>
> ... There is a clever SED attack: hotplug. If you disconnect the SATA data cable without disconnecting power then you can plug the drive into a different host and the data will be readable.

Nice. I'll have to remember that one.

> This is easily foiled simply by turning off the computer when physical security is low. In short, SEDs do everything that software encryption can do, they do it faster, and they do it better.

Actually, they don't do everything that (open source) software encryption does. They don't let you (or an agent of your choice) audit the encryption algorithms/implementation to verify that everything is being done to spec. Now I admit that I've suggested in another thread that for MOST people ultimate security isn't required, but that doesn't mean that it isn't sometimes a good idea. In this case, it is more like not trusting firmware programmers not to screw things up inadvertently rather than being concerned about backdoors inserted by 3-letter agencies.

Bill Bogstad
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] SysVinit vs. systemd
On Mon, Sep 15, 2014 at 7:26 PM, Richard Pieri richard.pi...@gmail.com wrote:
> On 9/13/2014 9:28 AM, Edward Ned Harvey (blu) wrote:
>> But if you want to create something new, the ability to daemonize any-random-command is a really nice convenience factor; you just write any simple console application or shell script, and it behaves exactly the same on your command terminal as it does when you make it a service under systemd. An active system will notice mysqld died, recognize that it's not supposed to do that right now, and restart it. I know SMF will try
>
> Which is a stupid way to run in production. There's a reason why the daemon died. That reason needs to be identified so that corrective steps can be taken. Blind restarts can obfuscate this information, can cause damage to data, and can exacerbate existing damage.

I tend to think that way as well, but I have been noticing what I think is a trend away from debugging problems and towards just doing reinstalls/restarts. I think the rise of virtualization (particularly in the cloud) has driven this. As the tools make it easier and easier to spin up a new VM, why bother to figure out what caused the old one to fail? Just reinitialize a new one and keep going. I remember some talk I went to recently about a tool to spin up anonymous(?) VMs where once the VM exited all storage was lost. So if you wanted the ability to debug a VM after it was shut down, you had to be sure to log anything that might be relevant onto some kind of external storage server. And this, of course, assumes you knew ahead of time what would be relevant.

While I think VMs and configuration management systems are great, I don't think they can eliminate the need to sometimes look at the actual details of a system. Unfortunately, I think the skills to do this are no longer being developed among new people. Hopefully, I'll be wrong and it won't matter when all the old timers are gone.

Bill Bogstad
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
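[Editor's note: for what it's worth, systemd can split the difference between the two positions above. A unit can restart on failure but give up, and leave the unit in a visible failed state, if the daemon keeps dying. A minimal sketch with a made-up service name and path; StartLimitIntervalSec=/StartLimitBurst= are the option names in recent systemd (older versions spelled the interval StartLimitInterval= under [Service]).]

```ini
[Unit]
Description=Hypothetical example daemon
# Stop restarting if the service fails 3 times within 60 seconds,
# so a persistent problem surfaces instead of being masked.
StartLimitIntervalSec=60
StartLimitBurst=3

[Service]
ExecStart=/usr/local/bin/mydaemon
Restart=on-failure
RestartSec=5
```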
Re: [Discuss] selecting a subnet
On Tue, Sep 16, 2014 at 12:06 AM, Derek Martin inva...@pizzashack.org wrote:
> On Mon, Sep 15, 2014 at 09:17:24AM -0400, Bill Horne wrote:
>> On Sunday, September 14, 2014 10:57:19 PM Derek Martin wrote:
>>> On Wed, Sep 10, 2014 at 04:04:12PM -0400, Stephen Adler wrote: ...
>
> FWIW, this should work, and if it doesn't, you should quit today. I've been in this position, and I have in fact told a VP at my company that he was disrupting operations and he needed to stop. And he did. And it was a situation very much like what you described. You be polite, you be earnest, but you be sure he understands that what he's asking for is insane.

And if this doesn't work, write down exactly what you told him/her and get him to sign a copy. It's amazing how having to actually sign something tends to get a manager's attention.

Bill Bogstad
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Wireless devices, 2 Wireless Routers, local network. DD-WRT
On Wed, Aug 27, 2014 at 3:28 PM, Richard Pieri richard.pi...@gmail.com wrote:
> On 8/27/2014 3:06 PM, Bill Bogstad wrote:
>> Even better you could just connect two of the LAN ports together (either with a crossover cable or, if auto-MDIX is supported on either router, a regular cable might work as well).
>
> Um... Bill? That's what DD-WRT's Repeater Bridge does albeit over 802.11 instead of 802.3.

Admittedly, I made the assumption that a wired connection between access points did not make sense under the circumstances. If a wired connection is indicated then I would wire as you suggest to avoid double NAT.

It's hard to discuss this without diagrams. I would prefer

camp -- wireless -- G router -- wired -- N router -- wireless -- local clients

to

camp -- wireless -- G router -- wireless -- N router -- wireless -- local clients

in order to conserve local wireless bandwidth in the vicinity of the G and N routers. With three different wireless connections, you can't avoid band conflict when there are only two bands. Now you can proactively control the actual channels used for the G-to-N and N-to-client wireless networks to avoid overlap, but I still prefer wired as it requires less active management.

Now maybe you are suggesting something else. Perhaps

camp -- wireless -- G router -- wireless -- local clients

It sounded though like he wanted to use both the hi-gain antennas on his G router as well as the higher bandwidth for local connections of his N router. If that is truly the case then I stand by my suggestion to make the G-to-N connection wired (if feasible).

Bill Bogstad
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Wireless devices, 2 Wireless Routers, local network. DD-WRT
On Wed, Aug 27, 2014 at 8:38 AM, ma...@mohawksoft.com wrote:
> Here's the scenario: I like to go camping and often times they provide wireless access, but the camp site is often pretty far away from the wireless access point. I have a long distance wireless-G router with a high gain antenna. I have a second wireless-N router. Both routers are running DD-WRT. I should be able to connect to the camp ground's wireless with the high gain antenna using the Wireless-G router with a DHCP assigned IP address. I should then be able to NAT to my own local subnet and be able to connect the Wireless-N to my local subnet and provide access to phones, tablets, and laptops.

Are the wireless-G and the wireless-N routers going to be relatively close to each other? If so, you could run an Ethernet cable from a LAN port of the G router to the WAN port of the N router. Yes, you would be running a double-NATed configuration. I've been running that way for over a year now because I didn't like the signal strength of the phone/Internet router that Comcast provided, and it has worked well enough that I haven't gotten around to getting them to put their equipment in bridge rather than NAT mode. For your local wireless devices, it would be best to make sure that your N network is running on non-overlapping channels from whatever the camp's G network is using.

The above presupposes that you can get your wireless-G router to actually connect wirelessly to the camp's network. I don't use DD-WRT so I can't comment on that.

Bill Bogstad
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Wireless devices, 2 Wireless Routers, local network. DD-WRT
On Wed, Aug 27, 2014 at 10:57 AM, Richard Pieri richard.pi...@gmail.com wrote:
> On 8/27/2014 10:52 AM, ma...@mohawksoft.com wrote:
>> I don't see how to NAT from the wireless port in the G router (the one with the antenna) to either the LAN or WAN ports.
>
> You don't. You don't HAVE to.

You could do it, however, as I suggested previously, by connecting a cable between the WAN port of the N router and a LAN port of the G router. This would, of course, result in double NATing. Even better, you could just connect two of the LAN ports together (either with a crossover cable or, if auto-MDIX is supported on either router, a regular cable might work as well). Turn off the DHCP server on the N router and you have effectively turned your N router into a wired/wireless bridge. All devices will get an IP from the G router and Ethernet packets should flow appropriately.

> You create a wireless bridge between the two access points with DD-WRT's Repeater Bridge: http://www.dd-wrt.com/wiki/index.php/Repeater_Bridge That should get you started.

I assume this is for the G to camp router connection. For the G-to-N router connection, I would go wired if at all possible. No reason to waste wireless bandwidth if wired is available.

Bill Bogstad
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
[Discuss] lots of OLD Sparc UltraSparc equipment available
As far as I can recall, they all worked before I retired them to my basement. There are:

2 Ultra 5s
1 Ultra 10
1 Ultra 80
2 Sparcstation 5s
Motherboard/CPU module for what I believe was an Ultra 5 (I discarded the case to save space)
Some old Sun external SCSI hard drives (same age as the Sparcstation 5s)
Misc. Sun mice/keyboards/video cables/etc.

I would like to get rid of all of them at one time if possible. Available in Cambridge.

Bill Bogstad

P.S. They are probably best for parts rather than as whole systems. Chances are that if you have any interest in them at all, it is to keep your current collection running, so this probably won't be a problem.
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] vnc
On Mon, Aug 25, 2014 at 10:16 AM, Richard Pieri richard.pi...@gmail.com wrote:
> On 8/25/2014 8:51 AM, ma...@mohawksoft.com wrote:
>> SSH is a very BAD thing to open up to the free internet. BAD BAD BAD. Once in, you are in. Shell access is dangerous.
>
> Stop right there. We have been discussing securing VNC connections to X11 desktops running on virtual framebuffer devices. In other words: full shell access.

X11 desktop is kind of vague. X11 does not have to mean access to a shell. It could very well mean some kind of access to a single (or small set) of X11-based applications. A mythtv frontend is one possibility. I would agree, however, that when people say X11 they are usually talking about a full desktop environment with window manager and access to a shell.

Bill Bogstad
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Why the dislike of X.509?
On Mon, Aug 25, 2014 at 3:54 PM, Bill Ricker bill.n1...@gmail.com wrote:
> ... (Which doesn't change that anything that smells like escrow smells 'off' to those who care about security that really works. From what Rich has said re dates, his allergy to escrow likely stems from the same controversy as mine. http://www.cryptomuseum.com/crypto/usa/clipper.htm http://en.wikipedia.org/wiki/Clipper_chip#Backlash
>
> X509 PKI is not normally considered an escrow regime in normal usage, but Rich is quite correct that central CAs or other registries have *abilities* that are hard to distinguish from Escrow - even if they never know your private key, they can at the very least forge another one with the same apparent identity, and so spoof you to others -- or spoof someone important to you.

I think there are still some significant differences between key escrow and an X509 PKI system. Please correct any errors.

Key Escrow - Holder of the key can read all your old messages, read any new messages you create, and pretend to be you in a way that is indistinguishable(?) from your own signing.

X509 PKI - Holder of a CA can't read old messages you sent, can't read any new messages that you send, can pretend to be you (but with a key that is different from the one that you are using).

The pretending to be you is a bit like the you/evil twin thing. The recipient can't tell which one is the real you, but can tell that two different entities are trying to claim to be you based on the CA that they are using. (Okay, if they compromised the CA that you actually used, that may not be true; but let's assume they compromised Certs-R-Us instead of whatever ultra-secure CA that you used.)

Now while I see that PKI has issues, I think it is a little much to claim that it is as bad as key escrow. Or maybe I'm missing something.

Bill Bogstad
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
[Discuss] Draft document from NIST about SSH in an automated environment
Given recent discussion on this list about using SSH (for VPN access), this document from NIST may be of interest: http://csrc.nist.gov/publications/drafts/nistir-7966/nistir_7966_draft.pdf Found via: http://www.theregister.co.uk/2014/08/25/nist_to_sysadmins_clean_up_your_ssh_mess/ Bill Bogstad ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Verizon blacklisted me
On Fri, Aug 8, 2014 at 3:26 PM, Daniel Barrett dbarr...@blazemonger.com wrote:
> Chapter 2 of the saga... After Verizon insisted for the Nth time that I am not blacklisted and my email client settings must be wrong, I tried an experiment. My Verizon username and password definitely work on verizon.com and webmail.verizon.com. Could it be possible, I wondered, that they could be failing ONLY on smtp.verizon.net? The answer appears to be YES. I used telnet and stunnel4 to talk to smtp.verizon.net over port 465, using the AUTH LOGIN command to authenticate, after base-64-encoding my username and password:

I thought about suggesting this earlier, but you might try it with a well-known client program (e.g. Outlook). This Verizon web page seems official: http://www.verizon.com/Support/Residential/Internet/fiosinternet/Email/Setup+And+Use/QuestionsOne/124289.htm?CMP=DMC-CVZ_ZZ_ZZ_Z_ZZ_N_X322

Bill Bogstad
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
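[Editor's note: for anyone who wants to repeat Dan's experiment, the base-64 step of AUTH LOGIN is a couple of lines of Python. The credentials below are made up; AUTH LOGIN is a simple challenge/response where the server's prompts and the client's answers are each plain base64 text.]

```python
import base64

# Hypothetical credentials -- substitute your own.
username = "jsmith"
password = "hunter2"

# Each response to the server is sent base64-encoded on its own line.
print(base64.b64encode(username.encode("ascii")).decode("ascii"))
print(base64.b64encode(password.encode("ascii")).decode("ascii"))

# The server's prompts are base64 too; decoding one shows what it asks for.
print(base64.b64decode("VXNlcm5hbWU6").decode("ascii"))  # Username:
```

Paste the two encoded lines into the telnet/stunnel session after the server's "VXNlcm5hbWU6" and "UGFzc3dvcmQ6" prompts.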
[Discuss] contact info for OLPC/Sugarlabs related people in Boston area?
I'm trying to decruft and I have a (small) pile of XO-1 hardware that needs a new home. It seems that OLPC has moved their office to Miami, and at the moment the server for the OLPC-Boston mailing list (and all of the OLPC mailing lists) is down. Does anyone have contact info for someone in the Boston metro area who is still involved with Sugar development/OLPC (One Laptop per Child) activities? I'm hoping not to just trash the hardware.

Bill Bogstad
___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss