Re: Ruby slow to launch (was L-o-n-g delay for rc.local in systemd on Ubuntu.)

2017-08-08 Thread Joshua Judson Rosen
On 08/08/2017 02:52 PM, Ken D'Ambrosio wrote:
> On 2017-08-08 14:43, Bill Freeman wrote:
>> I don't know, but getrandom() may well be using /dev/urandom (or a
>> related facility).  And that, in turn, might be waiting to "collect
>> sufficient entropy".  So some network traffic, keystrokes, whatever,
>> need to happen between boot time and the first random emission, or
>> that first "random" number becomes predictable.  Since random numbers
>> are often used cryptographically, predictability is a bad thing.
> 
> True, but there's debate about just *how* predictable, etc. Not a 
> subject for this particular thread, but I'd be perfectly happy with udev 
> almost-as-random.
> 
>> Why ruby is designed to require a random number before being
>> asked to do something dependent on such a random number is a question
>> for the ruby developers.
> 
> Email already sent. :-)
> 
>> Re-linking /dev/urandom will probably break lots of things.  Maybe running
>> your script in a chroot jail that has a different /dev/urandom
>> would work.
> 
> Alas, no -- I'm doing various admin chores, and a chroot won't be 
> helpful.
> 
>> Is your script too complex to rewrite in bash?  Not a general
>> solution, but as a workaround it has its appeal.
> 
> *sigh* This is probably where I'm gonna wind up (or Perl, or Python).  
> Except I've now written a good handful of scripts that people are 
> waiting on, and it's gonna cause me physical pain to have to re-do them 
> at this point.
> 
> C'est la vie.  I guess that's the way the Ruby crumbles...

Instead of rewriting the whole thing, why not just seed the RNG manually?

Slightly relevant-looking discussion BTW:

https://bugs.ruby-lang.org/issues/9569#note-56

... mainly in that it points to the updated random(4) Linux man page, which 
says:

   The  /dev/random  interface  is  considered  a  legacy  interface,  and
   /dev/urandom is preferred and sufficient in all  use  cases,  with  the
   exception  of  applications  which require randomness during early boot
   time; for  these  applications,  getrandom(2)  must  be  used  instead,
   because it will block until the entropy pool is initialized.

So, there you go. "until the entropy pool is initialized" is apparently
about 3 minutes in your case ;)

You should be able to explicitly seed Ruby's internal RNG,
or explicitly seed the system RNG by writing bytes into
/dev/random or /dev/urandom.
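Something along those lines might look like the sketch below. The seed path and size are my own choices, and one caveat I'm fairly sure of: bytes written into /dev/urandom get mixed into the kernel pool but are *not credited* as entropy, so this alone may not unblock getrandom() -- crediting requires the RNDADDENTROPY ioctl:

```shell
#!/bin/sh
# Sketch only -- seed location and size are assumptions, not a standard.
# Note: write(2)s to /dev/urandom mix data into the pool but do NOT
# credit entropy, so getrandom() can still block until the kernel
# considers the pool initialized.

save_seed() {   # $1 = seed file; run this at shutdown
    dd if=/dev/urandom of="$1" bs=512 count=1 2>/dev/null &&
        chmod 600 "$1"
}

load_seed() {   # $1 = seed file; run this at boot
    [ -f "$1" ] || return 1
    cat "$1" > /dev/urandom     # mix the saved bytes back into the pool
    rm -f "$1"                  # invalidate the cache: never reuse a seed
}
```

(And Ruby's own Kernel#srand accepts an explicit integer seed, so the same trick works from inside the interpreter, too.)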

If you want `instant good entropy' at boot, you can even store
some random data into a file at shutdown and then seed from that file
at boot (be sure to invalidate that cache before seeding from it though,
to ensure that you don't use the same seed twice!). IIRC there are
some preexisting packages for this, and some distributions even do it by 
default.

If you write a systemd service, it looks like you can depend on
systemd-random-seed.service.
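A minimal sketch of such a unit (the script path and description are placeholders; the ordering lines are the part that matters):

```ini
# my-chores.service -- hypothetical example
[Unit]
Description=Admin chores that want a seeded RNG
Wants=systemd-random-seed.service
After=systemd-random-seed.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/my-chores.rb

[Install]
WantedBy=multi-user.target
```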

-- 
Connect with me on the GNU social network: 

Not on the network? Ask me for an invitation to the nhcrossing.com social hub!
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Network-controlled power switches/relays?

2017-08-22 Thread Joshua Judson Rosen
Anyone have any experience with ethernet-controlled power relays?

I have a situation with a couple of embedded Linux appliances I'm working on,
that are deployed hundreds of miles away from me, and I need the ability
to power-cycle one of them remotely. Looking for some sort of remote-controlled
AC outlet or relay (relay could be a 120V AC relay or a 12VDC relay, 
actually...).

Need one that I can control from a shell login on the other Linux machine
at the site, e.g.: a socket interface I can drive with netcat or the like,
a web interface that works with w3m or curl, or SNMP. Any of those would be 
fine.
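For the web-interface case, what I'm hoping for is something scriptable like this -- the host, path, and query parameters here are pure invention (every vendor's API differs), but it shows the shape of thing I can drive from a shell login:

```shell
#!/bin/sh
# Hypothetical relay API: the URL scheme below is made up for
# illustration; a real device's manual would give the actual one.

relay_url() {   # $1 = device host, $2 = channel, $3 = on|off
    printf 'http://%s/relay?ch=%s&state=%s' "$1" "$2" "$3"
}

power_cycle() { # $1 = device host, $2 = channel
    curl -s "$(relay_url "$1" "$2" off)" > /dev/null
    sleep 5     # leave the outlet off long enough for the PSU to drain
    curl -s "$(relay_url "$1" "$2" on)" > /dev/null
}

# e.g.:  power_cycle 192.168.1.50 1
```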

I see a lot of different devices on Amazon that look like they might require
an iPhone or Android device running some proprietary GUI app on the LAN,
but I'm having trouble telling which are worthwhile and which will be a waste 
of time.

Suggestions?



Re: [Discuss] Any miners?

2017-06-18 Thread Joshua Judson Rosen
On 06/16/2017 06:12 PM, Bill Ricker wrote:
> Bitcoin initially did not require specialized hardware, but as new golden 
> hashes get harder to find, mining costs more
> in electricity and depreciation without speciality gear (or a huge BITNET 
> running for free). If scarcity drives up BTC
> value, maybe, but odds of finding one still declining as payoff increases. I 
> haven't checked the calculation lately, it
> would be a good exercise: what would an AWS VPS mining cluster big enough to 
> average 1 BTC mined per week cost to operate?

IIRC Amazon asks the purpose of your cluster before they rent it to you;
do they accept bitcoin-mining? And do they report you to the IRS/SEC? ;)



Re: What's the strategy for bad guys guessing a few ssh passwords?

2017-06-11 Thread Joshua Judson Rosen
On 06/11/2017 10:17 AM, Ted Roche wrote:
> For 36 hours now, one of my clients' servers has been logging ssh
> login attempts from around the world, low volume, persistent, but more
> frequent than usual. sshd is listening on a non-standard port, just to
> minimize the garbage in the logs.
> 
> A couple of attempts is normal; we've seen that for years. But this is
> several each  hour, and each hour an IP from a different country:
> Belgium, Korea, Switzerland, Bangladesh, France, China, Germany,
> Dallas, Greece. Usernames vary: root, mythtv, rheal, etc.
> 
> There's several levels of defense in use: firewalls, intrusion
> detection, log monitoring, etc, so each script gets a few guesses and
> the IP is then rejected.
> 
> In theory, the defenses should be sufficient, but I have a concern
> that I'm missing their strategy here. It's not a DDOS, they are very
> low volume. It will take them several millennia to guess enough
> dictionary attack guesses to get through, so what's the point?

Maybe they already have known-good passwords to go along with the usernames,
and they're guessing at *hosts* (or networks) where those combinations work?

Just over a decade ago, a friend who was doing sysadmin at a college
got involved in chasing down someone who had been worming his way
through college/university networks using that same general class
of strategy:

1. find usernames+passwords for staff at an arbitrary university

2. assume people with a network account at one university
   probably have accounts with the same username+password
   on systems at _other_ universities
  (because academics collaborate across institutional boundaries)

3. grow the list of hosts you can log into using #2

4. assume that some of the systems you can now log into
   probably have vulnerabilities that allow you to find other
   known-good username+password pairs

5. grow your list of username+password pairs using #4

6. GOTO 1


If you already have a big network of attack-bots, then there's probably
no reason to even restrict the scope to universities.



Re: What's the strategy for bad guys guessing a few ssh passwords?

2017-06-13 Thread Joshua Judson Rosen
On 06/12/2017 01:27 PM, Dan Coutu wrote:
>> On Jun 12, 2017, at 13:15, Tom Buskey > > wrote:
>>
>> As Ted said in the 2nd sentence, it's running on a non-standard port.  Yes, 
>> it helps a lot to reduce garbage in the logs.
>
> Insisting on the use of an ssh key instead of login credentials also helps a 
> lot.

Helps with the security, anyway; and not blacklisting based on source-address
means that you'll never be locked out of your own server just because
some machine at the hotel where you're staying is (or has been) part of
the communist party^W^W^W a botnet.

*Doesn't* help cut down on logspam. ;)

But adding liberal ignore rules into logcheck (or whatever) helps a lot with 
logspam ;)

I don't care about the probes of nonexistent accounts, for example;
I just care about attempts on accounts that someone/something might actually
be able to log into if they somehow got a compromised key;
so I ignore attempts on nonexistent logins--and many usernames that do exist
but aren't able to _log in_--and I explicitly monitor for things like attempts
on my own specific username.
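Such an ignore rule is just an extended regex, one per line in a file under /etc/logcheck/ignore.d.server/ -- the pattern and filename below are my own illustration, not a rule shipped with logcheck:

```shell
#!/bin/sh
# Illustrative logcheck-style ignore rule; real shipped rules usually
# anchor on the full syslog line. Dropped into e.g.
# /etc/logcheck/ignore.d.server/sshd-local (path is an assumption).
pattern='sshd\[[0-9]+\]: Invalid user [^ ]+ from [0-9.]+'

# Lines matching the pattern get ignored; everything else still alerts.
matches() { printf '%s\n' "$1" | grep -Eq "$pattern"; }
```

So an "Invalid user" probe of a nonexistent account matches (and gets dropped), while a failed password on a real account does not (and still alerts).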

>> Maybe it's not non-standard enough?

Portscans are easy enough, especially using the `horde of slow brutes'
techniques from the 1990s. I've always been impressed with how _few_
of those I saw, and by the fact that moving services to nonstandard ports
was as effective as it was at reducing the connection-attempts to those 
services.

The whole "I have ssh on a secret port to secure it against attacks" thing
has always seemed fundamentally bogus to me: the _premise_ of ssh itself is
that you're supposed to be able to assume that the network is in fact
extremely hostile--more hostile than any network where
`hiding in a non-standard port' could ever be useful.


>> On Mon, Jun 12, 2017 at 12:42 PM, Bruce Dawson > > wrote:
>>
>> I have to second this suggestion - changing the port did wonders for our 
>> servers. Of course, as Dan says, it works
>> for script kiddies, not so much against a determined attack on your 
>> server.
>>
>> --Bruce
>>
>>
>> On 06/12/2017 09:59 AM, Dan Garthwaite wrote:
>>> If you can change the port number it does wonders against the script 
>>> kiddies.
>>>
>>> Just remember to add the new port, restart sshd, then remove the old 
>>> port.  :)
>>>
>>> On Sun, Jun 11, 2017 at 1:53 PM, Ted Roche >> > wrote:
>>>
>>> Thanks, all for the recommendations. I hadn't seen sshguard before;
>>> I'll give that a try.
>>>
>>> I do have Fail2Ban in place, and have customized a number of 
>>> scripts,
>>> mostly for Apache (trying to invoke asp scripts on my LAMP server
>>> results in instaban, for example) and it is what is reporting the 
>>> ssh
>>> login failures.
>>>
>>> I have always seen them, in the 10 years I've had this server 
>>> running,
>>> but the frequency, periodicity and international variety (usually
>>> they're all China, Russia, Romania) seemed like there might be
>>> something else going on.
>>>
>>> Be careful out there.
>>>
>>> On Sun, Jun 11, 2017 at 11:19 AM, Mark Komarinski 
>>> > wrote:
>>> > sshguard is really good since it'll drop in an iptables rule to 
>>> block an IP
>>> > address after a number of attempts (and prevent knocking on other 
>>> ports too).
>>> >
>>> > Yubikey as 2FA is pretty nice too.
>>> >
>>> >  Original message 
>>> > From: Bruce Dawson >
>>> > Date: 6/11/17 10:58 AM (GMT-05:00)
>>> > To: gnhlug-discuss@mail.gnhlug.org 
>>> 
>>> > Subject: Re: What's the strategy for bad guys guessing a few ssh 
>>> passwords?
>>> >
>>> > sshguard takes care of most of them (especially the high 
>>> bandwidth ones).
>>> >
>>> > The black hats don't care - they're looking for vulnerable 
>>> systems. If
>>> > they find one, they'll exploit it (or not).
>>> >
>>> > Note that a while ago (more than a few years), comcast used to 
>>> probe
>>> > systems to see if they're vulnerable. Either they don't do that 
>>> any
>>> more, or contract it out because I haven't seen probes from any of 
>>> their
>>> > systems in years. This probably holds true for other ISPs, and 
>>> various
>>> > intelligence agencies in the world - both private and public, not 
>>> to
>>> > mention various disreputable enterprises.
>>> >
>>> > --Bruce
>>> >
>>> >
>>> > On 06/11/2017 10:17 AM, Ted Roche wrote:
>>> >> For 36 hours now, one of my clients' servers has been logging ssh

Netiquette (was: Need to copy a 200GB directory)

2017-06-28 Thread Joshua Judson Rosen
On 06/27/2017 10:01 PM, R. Anthony Lomartire wrote:
> OK, my apologies for hijacking this thread, I haven't been on a mailing list 
> in forever but I will apply proper
> etiquette. Can I just ask what you mean by "top post" though?

Not everyone reads or even receives every message, in real time,
in the same order in which you saw or responded to them. So the custom Greg is
encouraging you to use is to provide a quotation of (the salient parts of)
whatever you're replying to *above your reply* so that the context
of your reply is obvious, even to someone who reads your message
without having seen any other messages in the thread first.

e.g.: some people will find this message as a result of a web-search,
some people just get thousands of e-mails a day and need to prioritize them
using some other system than `by the order in which they were sent';
many people will simply re-read messages at some point in the distant future,
when the memory of `the last thing that was posted before this' has
faded even _if_ it was once clear.

Also: you know how some people think Shakespeare is hard to read?
Try reading all of the dialog *back to front* ;)

"top posting" is "yoda dialog"



>
> On Tue, Jun 27, 2017 at 9:25 PM Greg Rundlett (freephile)  > wrote:
>
> Hi Anthony! Welcome!
>
> You can just reply to the list in general, but it doesn't hurt to 
> reply-all
>
> You should always start a new topic with a new thread ;-). And never top 
> post (unless you're me and using a phone)
>
> ~ Greg
>
> On Jun 27, 2017 8:00 PM, "R. Anthony Lomartire"  > wrote:
>
> Also sorry idk if there is an intro thread or anything, but I've been 
> a lurker for a while this has been my
> first actual post I think. I don't know if I should reply all or just 
> send my reply to the GNHLUG email address?
>
> Anyways just quickly, I'm Tony and I'm in ad tech. We use machine 
> learning to help advertisers optimize their
> ROI. At first I thought it would be lame, but at least it was a job, 
> but gradually I have become more and more
> interested in ad tech and it is actually kinda cool. Ok so hiii!
>
> On Tue, Jun 27, 2017 at 7:55 PM R. Anthony Lomartire 
>  > wrote:
>
> No offense or anything but I find it amusing that one of the most 
> active threads on this mailer has been
> about copying a bit of data :D



Is Amazon AWS/EBS snapshotting just LVM, or what?

2017-09-28 Thread Joshua Judson Rosen
I'm working on a project that uses Amazon AWS-provided VPS instances,
and the other guy on the project is telling me that "snapshotting hourly may 
degrade performance",
and I'm trying to determine whether that's actually true. My gut feeling is that 
it sounds kind of bogus.

From the information I've been able to find about how Amazon's stuff works
(either in terms of how it's _implemented_ [for which I'm finding basically no insight]
or how it's _characterized_ [in the engineering sense, not the literary sense]...),
it really sounds a _lot_ like Amazon is just using LVM snapshots,
e.g. from <https://aws.amazon.com/ebs/faqs/>:

"snapshots can be done in real time while the volume is attached and in 
use.
 However, snapshots only capture data that has been written to your 
Amazon EBS volume,
 which might exclude any data that has been locally cached by your 
application or OS."

"By design, an EBS Snapshot of an entire 16 TB volume should take no 
longer than the time
 it takes to snapshot an entire 1 TB volume. However, the actual time 
taken to create
 a snapshot depends on several factors including the amount of data 
that has changed
 since the last snapshot of the EBS volume."

... though I'm not entirely sure how to interpret that last bit about "time 
taken to create a snapshot
depends on... the amount of data that has changed since the last snapshot";
the _first half of that statement_ reads as "creating a snapshot is constant 
time",
which basically screams to me "copy-on-write just like LVM, and they're 
probably implemented
in terms of LVM".

Any insight here as to whether my gut is correct on this, or whether I'm 
actually likely
to notice an impact from hourly snapshots of, say, a 200-GB volume? How about a 
1-TB volume?

The only thing I'm seeing from Amazon that seems to _vaguely_ support (maybe) 
the notion
that `snapshotting too often' would be something to worry about is this bit 
from elsewhere
in that same FAQ page (under the heading of "performance", whereas the others were
under the heading of "snapshots" and a subheading of "performance consistency
of my HDD-backed volumes"):

Another factor is taking a snapshot which will decrease expected write 
performance
down to the baseline rate, until the snapshot completes.

... and, taken in the context of the previously-cited notes about snapshots 
being
`not based on volume-size but maybe influenced by changed-since-last-snapshot 
set size'
(and in the context of the explanations they give for HDD-backed vs. SSD-backed 
storage),
I'm basically reading that as:

`if you're using HDD-backed storage then it's because you care about 
*throughput*
 more than *response time* and are likely to be monitoring throughput,
 and if you're monitoring throughput you may notice a *momentary dip in 
throughput*
 as the *HDDs* need to seek around to find the volume boundaries and 
set up the COW records.'

Even if you don't have any insight into what's actually happening under the 
covers at Amazon,
does my reading of all of this sound right to you?

And, perhaps more interestingly, are these same caveats from Amazon generally 
applicable to LVM?



Re: Is Amazon AWS/EBS snapshotting just LVM, or what?

2017-09-28 Thread Joshua Judson Rosen
On 09/28/2017 01:46 PM, Tom Buskey wrote:
> I work with OpenStack.  It manages images in Glance which sit above its 
> object storage, Swift.
> 
> On the POC clouds, you can use LVM as a backend for Glance.  Snapshotting is 
> *very* slow.  30 minutes for a snap of an
> 80GB VM that's shut down.

OK..., that surprises me. A lot.

For comparison, I just made an LVM snapshot of a volume 50% larger than that, 
that's *in use*
(and mostly not in cache, if that even makes a difference, since my 
buffer+cache shows as only 17GB *total*),
and the whole operation took only a fraction of a second:

rozzin@zuul:~ $ time sudo lvcreate --name home_snap --size 128G 
--snapshot zuul-vg/home
  Using default stripesize 64.00 KiB.
  Logical volume "home_snap" created.

real0m0.349s
user0m0.028s
sys 0m0.060s


How in the world does that translate to 30 minutes (*5 thousand times* longer)
for a volume only 0.63x as big?

When you say "snapshotting on top of LVM", does that entail actually making a 
full copy
after the LVM snapshot is made--or something like that?

> You can use other storage backends in OpenStack that are faster.  A full non 
> LVM Swift.  Ceph and glusterfs are common
> choices where performance matters.  They wouldn't be using ZFS but probably 
> something using their S3 object store.
> 
> 
> 
> 
> On Thu, Sep 28, 2017 at 1:32 PM, Ken D'Ambrosio <k...@jots.org 
> <mailto:k...@jots.org>> wrote:
> 
> I would say it's unlikely to be LVM, because LVM is content-ignorant; it
> snapshots the entire volume, which is inefficient, and when you're
> Amazon, you care a LOT about being efficient.  Instead, I imagine
> they're using some content-aware CoW solution such as ZFS.  But,
> whatever mechanism, I agree with your opinion: I doubt that their
> solution -- almost certainly CoW of some sort -- stands a chance of
> being more than even slightly impactful.
> 
> $.02, YMMV and other assorted disclaimers,
> 
> -Ken
> 
> 
> On 2017-09-28 13:16, Joshua Judson Rosen wrote:
> > I'm working on a project that uses Amazon AWS-provided VPS instances,
> > and the other guy on the project is telling me that "snapshotting
> > hourly may degrade performance",
> > and I'm trying to determine where that's actually true. My gut feeling
> > is that it sounds kind of bogus.
> >
> >> From the information I've been able to find about how Amazon's stuff
> >> works (either in terms
> > of how it's _implemented_ [for which I'm finding basically no insight]
> > or how it's _characterized_
> > [in the engineering sense, not the literary sense]...), it really
> > sounds a _lot_ like Amazon
> > is just using LVM snapshots, e.g. from
> > <https://aws.amazon.com/ebs/faqs/ <https://aws.amazon.com/ebs/faqs/>>:
> >
> >       "snapshots can be done in real time while the volume is attached 
> and
> > in use.
> >        However, snapshots only capture data that has been written to 
> your
> > Amazon EBS volume,
> >        which might exclude any data that has been locally cached by your
> > application or OS."
> >
> >       "By design, an EBS Snapshot of an entire 16 TB volume should take 
> no
> > longer than the time
> >        it takes to snapshot an entire 1 TB volume. However, the actual 
> time
> > taken to create
> >        a snapshot depends on several factors including the amount of 
> data
> > that has changed
> >        since the last snapshot of the EBS volume."
> >
> > ... though I'm not entirely sure how to interpret that last bit about
> > "time taken to create a snapshot
> > depends on... the amount of data that has changed since the last
> > snapshot";
> > the _first half of that statement_ reads as "creating a snapshot is
> > constant time",
> > which basically screams to me "copy-on-write just like LVM, and
> > they're probably implemented
> > in terms of LVM".
> >
> > Any insight here as to whether my gut is correct on this, or whether
> > I'm actually likely
> > to notice an impact from hourly snapshots of, say, a 200-GB volume?
> > How about a 1-TB volume?
> >
> > The only thing I'm seeing from Amazon that seems to _vaguely_ support
> > (maybe) the notion
> > that `snapshotting too often' would be something to worry about is

Re: Is Amazon AWS/EBS snapshotting just LVM, or what?

2017-09-28 Thread Joshua Judson Rosen
On 09/28/2017 01:32 PM, Ken D'Ambrosio wrote:
> I would say it's unlikely to be LVM, because LVM is content-ignorant; it 
> snapshots the entire volume, which is
> inefficient, and when you're Amazon, you care a LOT about being efficient.  
> Instead, I imagine they're using some
> content-aware CoW solution such as ZFS.  But, whatever mechanism, I agree 
> with your opinion: I doubt that their solution
> -- almost certainly CoW of some sort -- stands a chance of being more than 
> even slightly impactful.

Oh--yeah, ZFS is another good candidate. Actually, there are a few others that 
I can think of as well.

But it is basically `screaming "COW"' at me, and my gut is telling me that this 
`fear of over-snapshotting'
is basically (generally) the same as when people talk about how they `need to 
do multithreading [for EVERYTHING]
because it's so expensive to fork a new process' (there are some corner cases 
where fork() is actually
`too expensive' in at least some sense [and I've actually run into some of 
those cases],
 but *most* of those claims always seemed to be from people who didn't even 
know that COW was a thing...).

> $.02, YMMV and other assorted disclaimers,
> 
> -Ken
> 
> 
> On 2017-09-28 13:16, Joshua Judson Rosen wrote:
>> I'm working on a project that uses Amazon AWS-provided VPS instances,
>> and the other guy on the project is telling me that "snapshotting
>> hourly may degrade performance",
>> and I'm trying to determine where that's actually true. My gut feeling
>> is that it sounds kind of bogus.
>>
>>> From the information I've been able to find about how Amazon's stuff works 
>>> (either in terms
>> of how it's _implemented_ [for which I'm finding basically no insight]
>> or how it's _characterized_
>> [in the engineering sense, not the literary sense]...), it really
>> sounds a _lot_ like Amazon
>> is just using LVM snapshots, e.g. from <https://aws.amazon.com/ebs/faqs/>:
>>
>> "snapshots can be done in real time while the volume is attached and in 
>> use.
>>  However, snapshots only capture data that has been written to your
>> Amazon EBS volume,
>>  which might exclude any data that has been locally cached by your
>> application or OS."
>>
>> "By design, an EBS Snapshot of an entire 16 TB volume should take no
>> longer than the time
>>  it takes to snapshot an entire 1 TB volume. However, the actual time
>> taken to create
>>  a snapshot depends on several factors including the amount of data
>> that has changed
>>  since the last snapshot of the EBS volume."
>>
>> ... though I'm not entirely sure how to interpret that last bit about
>> "time taken to create a snapshot
>> depends on... the amount of data that has changed since the last snapshot";
>> the _first half of that statement_ reads as "creating a snapshot is
>> constant time",
>> which basically screams to me "copy-on-write just like LVM, and
>> they're probably implemented
>> in terms of LVM".
>>
>> Any insight here as to whether my gut is correct on this, or whether
>> I'm actually likely
>> to notice an impact from hourly snapshots of, say, a 200-GB volume?
>> How about a 1-TB volume?
>>
>> The only thing I'm seeing from Amazon that seems to _vaguely_ support
>> (maybe) the notion
>> that `snapshotting too often' would be something to worry about is
>> this bit from elsewhere
>> in that same FAQ page (under the heading of "performance", whereas the
>> others were
>> under the heading of "snapshots" and a subheading of "performance
>> consistency of my HDD-backed volumes":
>>
>> Another factor is taking a snapshot which will decrease expected
>> write performance
>> down to the baseline rate, until the snapshot completes.
>>
>> ... and, taken in the context of the previously-cited notes about
>> snapshots being
>> `not base on volume-size but maybe influenced by
>> changed-since-last-snapshot set size'
>> (and in the context of the explanations they give for HDD-backed vs.
>> SSD-backed storage),
>> I'm basically reading that as:
>>
>> `if you're using HDD-backed storage then it's because you care about
>> *throughput*
>>  more than *response time* and are likely to be monitoring throughput,
>>  and if you're monitoring throughput you may notice a *momentary dip
>> in throughput*
>>  as the *HDDs* need to seek around to find the volume boundaries and
>> set up the COW records.'
>>
>> Even if you don't have any insight into what's actually happening
>> under the covers at Amazon,
>> does my reading of all of this sound right to you?
>>
>> And, perhaps more interestingly, are these same caveats from Amazon
>> generally applicable to LVM?
> 

-- 
"Don't be afraid to ask (λf.((λx.xx) (λr.f(rr))))."


Re: Is Amazon AWS/EBS snapshotting just LVM, or what?

2017-09-28 Thread Joshua Judson Rosen
On 09/28/2017 01:48 PM, Bill Ricker wrote:
> The lack of coherence due to the OS cache not being flushed should still be a 
> concern. 

In the general case, yes. In my particular case I'm specifically concerned only
with data that's stored transactionally to the extent that (and I really hope
that I'm not grossly mistaken on this...) I'd expect to survive an unexpected 
power-loss,
like PostgreSQL and git.

(and in the case of git, I'm only talking about its internal object-store,
 *not working trees*, which I know from experience *cannot* be expected to 
survive that--
 in other words, always sync between "git pull" or "git checkout" and a 
power-cut!)
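In practice that habit is as simple as the sketch below (my own convention, not an official git recommendation; the `--ff-only` is just my preference and not part of the point):

```shell
#!/bin/sh
# Sketch: make sure a freshly-updated working tree is actually on disk
# before anything (power cut, aggressive snapshot) can catch it
# half-written.
update_tree() {   # run inside a git working tree
    git pull --ff-only "$@" && sync
}
```

(sync flushes *all* dirty pages, so it's a sledgehammer -- but a cheap one.)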

> OTOH I saw a storage level replication system propagate corruption to the 
> remote site's copy of the Production DBMS ...
> So it perfectly replicated the primary's failure. Oops. Easiest recovery was 
> restoring a nightly backup to the test
> system since both Prod nodes were so hosed.
> This is one big reason I like best-effort (asynchronous) DBMS-level 
> transaction replication. The remote is 30s behind
> but is in a consistent state. Most users can handle checking if their last 
> txn before crash survived; most will even if
> not instructed to! (Even if asked not to!)

☺



Re: Is Amazon AWS/EBS snapshotting just LVM, or what?

2017-09-28 Thread Joshua Judson Rosen
On 09/28/2017 02:14 PM, mark wrote:
> AWS/EBS is not LVM under the covers, it's more like NFS; and snapshots are 
> more like VMware & how it does snapshots.

I have never used VMWare and have no idea how it does anything. Can you provide 
more insight on what that means?


> The OS cache exclusion refers to read-ahead and write caching going on in RAM.

Yes, I got that. The reason I included that in the citation was actually that I 
took it
as supporting my "this looks like atomic COW snapshotting" conclusion, because 
that's
exactly what I'm accustomed to getting through LVM (snapshotting a block device
captures all of the blocks that *have actually been written* at the time of the 
snapshot).

> On Sep 28, 2017 1:17 PM, "Joshua Judson Rosen" <roz...@hackerposse.com 
> <mailto:roz...@hackerposse.com>> wrote:
> 
> I'm working on a project that uses Amazon AWS-provided VPS instances,
> and the other guy on the project is telling me that "snapshotting hourly 
> may degrade performance",
> and I'm trying to determine where that's actually true. My gut feeling is 
> that it sounds kind of bogus.
> 
> From the information I've been able to find about how Amazon's stuff 
> works (either in terms
> of how it's _implemented_ [for which I'm finding basically no insight] or 
> how it's _characterized_
> [in the engineering sense, not the literary sense]...), it really sounds 
> a _lot_ like Amazon
> is just using LVM snapshots, e.g. from <https://aws.amazon.com/ebs/faqs/ 
> <https://aws.amazon.com/ebs/faqs/>>:
> 
>         "snapshots can be done in real time while the volume is attached 
> and in use.
>          However, snapshots only capture data that has been written to 
> your Amazon EBS volume,
>          which might exclude any data that has been locally cached by 
> your application or OS."
> 
>         "By design, an EBS Snapshot of an entire 16 TB volume should take 
> no longer than the time
>          it takes to snapshot an entire 1 TB volume. However, the actual 
> time taken to create
>          a snapshot depends on several factors including the amount of 
> data that has changed
>          since the last snapshot of the EBS volume."
> 
> ... though I'm not entirely sure how to interpret that last bit about 
> "time taken to create a snapshot
> depends on... the amount of data that has changed since the last 
> snapshot";
> the _first half of that statement_ reads as "creating a snapshot is 
> constant time",
> which basically screams to me "copy-on-write just like LVM, and they're 
> probably implemented
> in terms of LVM".
> 
> Any insight here as to whether my gut is correct on this, or whether I'm 
> actually likely
> to notice an impact from hourly snapshots of, say, a 200-GB volume? How 
> about a 1-TB volume?
> 
> The only thing I'm seeing from Amazon that seems to _vaguely_ support 
> (maybe) the notion
> that `snapshotting too often' would be something to worry about is this 
> bit from elsewhere
> in that same FAQ page (under the heading of "performance", whereas the 
> others were
> under the heading of "snapshots" and a subheading of "performance 
> consistency of my HDD-backed volumes"):
> 
>         Another factor is taking a snapshot which will decrease expected 
> write performance
>         down to the baseline rate, until the snapshot completes.
> 
> ... and, taken in the context of the previously-cited notes about 
> snapshots being
> `not based on volume-size but maybe influenced by 
> changed-since-last-snapshot set size'
> (and in the context of the explanations they give for HDD-backed vs. 
> SSD-backed storage),
> I'm basically reading that as:
> 
>         `if you're using HDD-backed storage then it's because you care 
> about *throughput*
>          more than *response time* and are likely to be monitoring 
> throughput,
>          and if you're monitoring throughput you may notice a *momentary 
> dip in throughput*
>          as the *HDDs* need to seek around to find the volume boundaries 
> and set up the COW records.'
> 
> Even if you don't have any insight into what's actually happening under 
> the covers at Amazon,
> does my reading of all of this sound right to you?
> 
> And, perhaps more interestingly, are these same caveats from Amazon 
> generally applicable to LVM?
> 
> --
> Connect with me on GNU social network: 
> <https://status.hackerposse.com/rozzin>

Re: Satellite Internet relative security

2017-08-26 Thread Joshua Judson Rosen
On 08/26/2017 09:46 PM, James A. Kuzdrall wrote:
> On Friday 25 August 2017 13:04:11 Brian St. Pierre wrote:
>> On Fri, Aug 25, 2017 at 11:56 AM, James A. Kuzdrall 
>>
>> wrote:
>>> Does Linux have any special problems interfacing with the dish
>>> equipment?
>>> Is a standard Ethernet connection enough, or must they install software
>>> on the Linux computer?
>>
>> I had service through Hughes for a couple years around 2010 or so. They
>> give you a modem, connect a cable from dish to modem, connect ethernet
>> cable from computer to modem, and it's more or less like having dsl or
>> cable with horrible latency. You don't need to install anything on linux,
>> though you may get hassled by customer support if you have to call in.
>>
>> The latency is bad. You can end up with no service or degraded performance
>> in heavy rain, snow, and/or if there's any snow/ice buildup on the dish. I
>> think they still have a daily usage cap, but I'm not sure. It's probably
>> better than dialup if you don't care about the latency, but I'd consider it
>> a last resort. If there is still a usage cap, you might look at whether a
>> mobile data plan and tethering is a viable alternative.
> 
> Latency would be a pain.  Since the military controls combat drones via 
> satellite, I assumed it would be fast.  It could be that the commercial links 
> go through more processing hubs, but how about the suggestion that it has a 
> built-in latency?


Speed ("fast" or "slow") and latency ("quick" or "lagging") are two completely 
different things.
Satellite Internet (when it's working) is *both* high-speed *and* high-latency,
and there's no contradiction there.

The latency doesn't so much have anything to do with `more processing hubs';
it's just that routing everything through a geostationary satellite means the signals
ultimately need to travel roughly *100,000 miles* (round trip) before you can see
a response from whatever website or other service you're using. Even at *light speed*,
that's more than a half-second of delay.
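Back-of-the-envelope, for anyone who wants to check the arithmetic (a sketch in Python; the geostationary altitude and the speed of light are the only inputs, and real paths are longer than this minimum, since ground stations are rarely directly under the satellite):

```python
# Minimum physics-imposed delay through a geostationary satellite.
# Real round trips are longer (slant range, processing delays), which
# is how you get to the roughly-100,000-mile figure.
GEO_ALTITUDE_MILES = 22_236   # geostationary orbit altitude above the equator
LIGHT_SPEED_MI_S = 186_282    # speed of light, miles per second

one_way = 2 * GEO_ALTITUDE_MILES    # ground -> satellite -> ground
round_trip = 2 * one_way            # request out, plus response back

print(f"{round_trip} miles, {round_trip / LIGHT_SPEED_MI_S:.2f} s minimum")
```

... and no bigger "truck" (more bandwidth) changes that floor.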

Think about it this way: if you have a bunch of stuff to move from Nashua to 
Manchester by car,
and you're going to drive it there at 60 MPH, it's going to take something like 
20 minutes, each way,
per trip. You can move your stuff *faster* by using a bigger car/truck and 
carrying more stuff
on each trip, but you can't do anything about the 20-minute *latency*. While 
you can increase
the overall `items shipped per day' rate, no specific item can make the trip in 
less
than 20 minutes--and if you need to convey items *back* to complete an exchange,
then no exchange can complete in less than 40 minutes because that's how much 
latency
is `built into the system'.

And if for some reason you go from Nashua to Manchester and back *by way of 
California*...,
again it doesn't matter how big a truck you have or how much it can carry at 
once,
you're still going to have *days* of latency if you take such a long route.

That's the thing about latency: it's pretty much always, by definition, `built 
in':
once you have latency, there's generally nothing you can do to work around it.
If you just have a slow (low-bandwidth) link, there are some things that
can be done to make more efficient use of that limited bandwidth.

Having said all of that: some people find that high-speed + high-latency
links are unusable; others probably find that they're perfectly fine.
You wouldn't want to be trying to remote-control something *in real time*
over a satellite link, but there are things like sending e-mail or downloading 
large files
(or queuing up actions on a mostly-autonomous UAV!) where even a full extra 
second of *lag*
probably shouldn't make much of a difference.

-- 
Connect with me on the GNU social network: 

Not on the network? Ask me for an invitation to the nhcrossing.com social hub!
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Kevin D. Clark, R.I.P.

2018-08-23 Thread Joshua Judson Rosen
On 08/22/2018 02:46 PM, Jim Sheldon wrote:
> I worked with Kevin for a short time about 10 years ago, this is very
> sad, he was a great person.

Seconded.

The obituary for the more general audience doesn't necessarily do him justice
for people who actually knew him in more specific capacities; I guess that's not
really what it's for, though.

I remember him as someone who was at home in Perl, who edited in emacs,
who preferred feature-tests to platform-checks, whose response to being laid off
was something to the effect of "on the up side, I'm done fighting with that svn 
merge
a lot sooner than I thought I would be"; and who preferred butterfly yo-yos 
because
they made string-tricks easier.

I remember the first conversation I ever had with him, when I first met him,
I think I said "I... kind of like Python, actually. Does that mean
that we can't be friends?". He took a moment to think that over.
(though in reality I was actually writing mostly Perl code at that point, 
myself...)

And I remember when he later introduced me to Valgrind.

But "how we _go about_ remembering people" is something that I personally feel
like I don't really have a good handle on; I think, maybe...,
if you'd like to spend some time remembering Kevin the way some of us knew him,
it might make sense to visit these:

* Kevin's github profile: https://github.com/kdc1024/
* Kevin's blog: http://kdc-blog.blogspot.com/

Alas, Kevin's tribute page for Elephant Memory Systems disappeared some time ago,
and the wayback machine doesn't even have a copy. A bit ironic that
I kept meaning to ask him whether he still had a copy of it somewhere... but kept
forgetting to actually do so.

I was trying to hire him last year. That I'll never get another chance at that
is... I don't know--"supremely frustrating".


> On Wed, Aug 22, 2018 at 8:53 AM Ted Roche  wrote:
>>
>> I'm sorry to report of the passing of Kevin D. Clark at the too-young age of 
>> 48:
>>
>> http://www.legacy.com/obituaries/fosters/obituary.aspx?pid=190018995
>>
>> Kevin was an active member of GNHLUG, several of the satellite LUGs and a 
>> regular contributor to the mailing list.
>>
>> He will be missed.
>>
>> --
>> Ted Roche
>> Ted Roche & Associates, LLC
>> http://www.tedroche.com
>> ___
>> gnhlug-discuss mailing list
>> gnhlug-discuss@mail.gnhlug.org
>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> 

-- 
"Don't be afraid to ask (λf.((λx.xx) (λr.f(rr))))"
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Kevin D. Clark, R.I.P.

2018-09-08 Thread Joshua Judson Rosen
On 09/04/2018 01:27 PM, Ben Scott wrote:
> On Wed, Aug 22, 2018 at 8:53 AM Ted Roche  wrote:
>> I'm sorry to report of the passing of Kevin D. Clark at the too-young age of 
>> 48...
> 
> This is horrible.
> 
> Oh my.
> 
> Kevin has been a member of GNHLUG since just about forever.  I
> remember his astute comments on things far and wide.  Perl certainly,
> but any number of other things -- often seeing the big picture, or the
> critical details, that others missed.  And he was a genuinely nice guy
> besides that.
> 
> We are all diminished by this.

I guess I'll see some of you guys at the memorial service tomorrow afternoon.

It feels pretty awful to realize that the last few `LUG meetings' I've been to
were memorial services. We have to start meeting under better circumstances.

In case anyone missed the info about the memorial service in the obituary, it's:

There will be a celebration of life
at the Huddleston Hall Ballroom on the UNH [Durham] campus,
on Sunday, September 9, at 2 p.m.

(it wasn't entirely clear, but Huddleston Hall is at the primary UNH campus--in 
Durham)
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Quantum Crypto redux Re: Boston Linux Meeting ... Crypto News, plus ...

2018-09-21 Thread Joshua Judson Rosen
On 09/19/2018 10:33 PM, Bill Ricker wrote:
> QuBits aren't QUITE on the Moore's Law 18-month doubling cycle yet; my 
> back-of-the-envelope shows going from 7 QuBits to 72 QuBits in 16 years is 
> doubling in 28 months.  Which is kinda close to Moore's law for RAM (24 
> months)...
> How soon the engineering will allow a growth spurt is unclear.
> 
> So setting my ED25519 key expiration at 10 years was just about right, :-) 
> that's just exactly when it should be doable commercially :-).
> A little shorter would have been more conservative!

Hmm. My understanding of key-expiries has been more that they're useful as a 
sort of
dead-man switch (since you can always publish *changes* to the expiration-dates
as long as you are still capable of accessing and making use of the private key,
and haven't published a revocation); to help balance concerns about
things like long-term management of secrecy
(however low your likelihood of compromise is over the course of a year,
 if it's non-zero then it compounds over multiple years/decades--and larger 
probabilities
 compound more quickly; this is the concern that Schneier quoted from Filippo 
Valsorda
 a couple years ago, for example);
or what happens to your key's validity after it becomes inaccessible to/by you
(for example if you become incapacitated or die unexpectedly...); or,
more generally, to establish key-migration timeframes.
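The compounding argument is easy to make concrete (a sketch; the per-year probabilities here are purely illustrative, not estimates of any real keystore):

```python
# Probability of at least one key compromise over n years, assuming an
# independent, constant per-year compromise probability p.  The numbers
# used below are purely illustrative.
def cumulative_compromise(p_per_year, years):
    return 1 - (1 - p_per_year) ** years

for p in (0.01, 0.05):
    for years in (1, 10):
        print(f"p={p}/yr over {years}y: {cumulative_compromise(p, years):.4f}")
```

Even a modest 1%-per-year risk compounds to nearly 10% over a decade, which is the shape of the argument for shorter expiries (and much shorter subkey lifetimes on easily-lost devices).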

To *those ends*, a 10-year expiry period is kind of crazy-sounding--especially 
if
you take a position like "my modern smartphone is the most easily-compromised 
keystore,
because someone could easily mug me for it, or I could fumble it into someplace 
where
I can't retrieve it before someone else has the opportunity; and my password
probably won't guard it for *that* long..., so maybe I should be giving the 
smartphone
short-lived subkeys on the order of 1 month or even less".

-- 
Connect with me on the GNU social network: 

Not on the network? Ask me for an invitation to a social hub!
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Edit over SSH.

2019-02-27 Thread Joshua Judson Rosen
You haven't lived until you've invoked emacs noninteractively from a Makefile 
to, say... render your documentation
into end user consumables.

On 2/27/19 4:02 PM, Tom Buskey wrote:
> I know the feeling.  I've gotten so used to emacs for coding (python, shell) 
> and vi for remote/quick work that I haven't been able to get into an IDE.
> 
> Mostly I'm writing code on my desktop that will run in a VM or container or 
> the code will build it one of those.  I can't/shouldn't put a whole 
> development envivironment let alone emacs on it and the VM/container is 
> ephemeral.  I'm not sure an IDE would help me much beyond what emacs already 
> has.
> 
> On Tue, Feb 26, 2019 at 8:17 AM Marc Nozell (m...@nozell.com 
> ) mailto:noz...@gmail.com>> wrote:
> 
> Like this? Been in base emacs for years.
> 
> 
> https://www.gnu.org/software/emacs/manual/html_node/emacs/Remote-Files.html
> 
> -marc
> 
> On Mon, Feb 25, 2019 at 7:00 PM Dan Garthwaite  > wrote:
> 
> Bill is correct.  Just stick to:
> vim scp://target.host.com/.bashrc 
> 
> On Mon, Feb 25, 2019 at 4:32 PM Bill Freeman  > wrote:
> 
> Resistance (like capacitance) is futile. Stay with the one true 
> editor. Whatever nifty feature you saw, there is probably an extension to do 
> it in emacs. (Or you can write one.)
> 
> On Mon, Feb 25, 2019, 2:52 PM Ken D'Ambrosio  > wrote:
> 
> Hi, all.  In Emacs, it's trivially easy to open a file on a 
> remote host:
> 
> emacs /user@host:/path/to/file
> 
> And while I *do* enjoy Emacs, I admit that some of the other 
> IDE/editors
> I've seen look kind of nifty.  But opening files via SSH is 
> really,
> really handy -- to the point where I consider it a 
> dealbreaker to not
> have it.  I found Visual Code can do SSH, but you have to (at 
> least, by
> my reading) set up per-host profiles, etc.  Bleh.  I know 
> that vim can
> do it, but I'm just not a vim guy.  I'm just not interested 
> in doing
> some out-of-the-box thing like sshmount (or whatever it is).  
> So, at the
> end of the day, anyone have an editor they enjoy where it's 
> as easy to
> open a file over SSH as it is in Emacs?
> 
> Thanks for any thoughts you might have...
> 
> -Ken
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org 
> 
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> 
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org 
> 
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> 
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org 
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> 
> 
> 
> -- 
> Marc Nozell (m...@nozell.com ) 
> http://www.nozell.com/blog
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org 
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> 
> 
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> 

-- 
Connect with me on the GNU social network: 

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


How GNU has influenced POSIX over the years

2019-08-10 Thread Joshua Judson Rosen
Found this article while doing a websearch for myself (gotta do that every once 
in a while...),
and thought some of you would enjoy it--it's an interesting read, actually:

What is POSIX? Richard Stallman explains
Discover what's behind the standards for operating system compatibility 
from a pioneer of computer freedom.

https://opensource.com/article/19/7/what-posix-richard-stallman-explains

(I got a small quote into it, from a sort of GNU maintainers' 
group-reminiscence session...)

A new way (or GNU way...) of feeling olde: "I remember when all these POSIX 
standard features were GNU-specific"

-- 
Connect with me on the GNU social network! 

Not on the network? Ask me for more info!
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Recent Laptop experiences sought

2019-09-16 Thread Joshua Judson Rosen



Usually I just buy from ZaReason, have them pre-install whatever distro I need,
with whatever options I want (they'll flip whatever switches are available via 
the installer--
want full-disk encryption? LVM? Software RAID? Some combination? Just ask them).

And these laptops are just *brilliant* when it comes to being able to 
self-service them:
remove a few phillips-head screws, pop open one big hatch, and everything is 
right there
if you want to add/swap components... or just clean your CPU fan sometime down 
the road
(and if you ask, they'll even include a nice little ZaReason-branded 
screwdriver in the box).


BUT...:

ThinkPenguin relocated to New Hampshire (Keene!) a few years ago.

And Showtime PC in Hudson also started doing custom-built laptops a while ago.

And Showtime and ThinkPenguin presently seem to be using the same ODM laptop 
kits
as ZaReason are using.

On the other hand, I hear that if you buy a Dell with the "corporate" support 
plan,
supposedly you can call Dell and have them send someone same-day to fix your 
laptop
if you break it (and I've heard that support plan is actually remarkably cheap,
though I don't remember the specifics or have a URL handy...).

ThinkPenguin is a lot more active upstream in free software and open hardware
(the ThinkPenguin USB Wi-Fi module is an Atheros chip and has a nice story behind 
it, for example;
 and LibreCMC is run by the same people who run ThinkPenguin...).

Showtime on the other hand is a brilliant I-wish-there-were-still-more-of-these 
local computer shop.


Sorry, I don't know if this is even close to helpful to Mark ("short list" 
might have meant
"really, needs to be one of these"). But I figured if the thread was already 
wandering...,
maybe someone would like to know that this list of options existed.

ZaReason: https://zareason.com/
ThinkPenguin: https://www.thinkpenguin.com/
Showtime PC: https://www.showtimepc.com/

-- 
Connect with me on the GNU social network: 

Not on the network? Ask me for an invitation to the nhcrossing.com social hub!

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


The sudden upheaval at the FSF...

2019-09-17 Thread Joshua Judson Rosen
Presumably you've all seen the news *somewhere* by now: there has been a major 
upheaval in the free-software community
over the last day or so, with RMS resigning from the FSF amidst a remarkable, 
uh... "flurry" of controversy.

Regardless of whether one cares one way or the other about RMS per se (as an 
actual person or an icon),
or whether one thinks the organization or movement needs him in (or out of) 
that position...,
the way that the events seem to have unfolded (or maybe "mushroomed" is a 
better term...)
has left many people dazed and confused, and even afraid.

I've taken today off from work to try to make sense out of a number of aspects 
of the whole episode...;
if anyone else feels like getting together for dinner or something to work it 
out together, in person,
I'm here in southern NH all day/evening. I'd really kind of like that, 
actually--it's been a while.

Some links in case you have managed to miss it so far...:

https://medium.com/@selamie/remove-richard-stallman-fec6ec210794
https://www.fsf.org/news/richard-m-stallman-resigns
https://sfconservancy.org/news/2019/sep/16/rms-does-not-speak-for-us/


-- 
Connect with me on the GNU social network! 

Not on the network? Ask me for more info!
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: A NH project...

2019-09-23 Thread Joshua Judson Rosen
On 9/23/19 12:36 PM, Bobby Casey wrote:
> I love the idea of this and would contribute if I had some time and energy.
> I've written a few Python scrapers before, but those were all years ago and 
> one-offs.

I expect that these are going to generally be one-offs as well.

> If you could share some details about how you've developed the ones you're 
> presently using

I start off by writing a script that just fetches whatever web pages look 
relevant
into a revision-controlled directory; pretty-prints the HTML with "hxnormalize 
-x"
(some of these things are inscrutable without going through that step),
runs a vc diff to stdout, and then commits any new/changed files.

Then I set up a cron job to run that script every few hours (and have it send me e-mail
whenever the "vc diff to stdout" cited above actually shows a difference).

At that point, while I'm waiting for the `raw' diff-notices, I can look through
the HTML structures and try to identify the parts that I need,
and figure out what the relevant bits of text/data are and how to summarize them
(web.archive.org can be helpful looking for past revisions of the pages,
 if it seems likely that there's something that's just not represented
 in the samples I've got so far).

So then I write code to *summarize* whatever "status items" are described in 
the source material,
into single-line versions (like "NOW PLAYING: ...", "STARTING ON SUNDAY 
(DD/MM): ..." or "ENDS TODAY: ..."),
and I just write all of the lines of "current" summary into a text-file.

And I also revision-control the text-file. Because that means that looking for
"NEWS items" is just doing a VC diff and looking for lines that start with "+" 
:)
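For a rough idea of what that summarization step looks like, here's a toy sketch in Python (my actual converters are bash, and the HTML structure and "NOW PLAYING" phrasing below are invented for illustration):

```python
# Toy sketch of the "summarize into one-line status items" step:
# collect the text of every <h2> heading and turn each into one line.
# The HTML shape and the "NOW PLAYING" prefix are invented examples.
from html.parser import HTMLParser

class ListingParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.items = []
        self._in_h2 = False

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self._in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_h2 = False

    def handle_data(self, data):
        if self._in_h2 and data.strip():
            self.items.append("NOW PLAYING: " + data.strip())

def summarize(html):
    parser = ListingParser()
    parser.feed(html)
    return parser.items

page = "<html><body><h2>Some Movie</h2><h2>Another Movie</h2></body></html>"
print("\n".join(summarize(page)))
```

Run that on each fetched page, write the lines to the revision-controlled text-file, and the diff-for-"+"-lines trick above does the rest.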

In the case of the Wilton Town Hall Theatre, the text is only *very slightly*
edited from the raw multi-line display format used on 

(_most_ of the phrasing is copied verbatim from the source material--I'm just
 adjusting punctuation, capitalization, and removing some redundancies
 and awkwardnesses that become more apparent when things are condensed
 into single-line format, e.g.: "STARTS SUNDAY (10/27) PLAYING FOR ONE DAY 
ONLY").

The news-item lines are written into my "summary listing" file without 
hashtags/bangtags or other markup--
the thing that actually *posts* them runs some sed substitutions to convert
phrases like "silent film" and "live music" to tags.

Once I'm reasonably sure that a given converter isn't going to just spew 
gibberish
or be overly aggressive about re-notifying based on overly-subtle source-changes
that a reasonable person wouldn't actually count as "news", I add a social 
account
for the bot and set it up to post, per this article:


https://www.linux.com/tutorials/weekend-project-monitor-your-server-statusnet-updates/


Most of the methodology here is coming from the experience of having 
collaborated on
a "Taco Salad Early Warning System" that a friend started when we worked 
together
about a decade ago :)

> I assume most lists could be easily developed and maintained with one of the 
> many Python web scraping libraries, although I have little to no experience 
> with them.

I did the Wilton movie-listings scraper in bash ;p

That website is kind of "simultaneously ideal and pessimal" source material:

* there's not really any "structured data" to pull out,
  mostly just lists of headings, with some details put into HTML tables
* but (going by the diffs) the HTML seems to be generated automatically
  based on titles/dates/flags set in some sort of database;
  so it seems safe to assume _some level_ of consistency
* nothing about that site (other than the specific movies/times being 
listed)
  has changed in *years*--possibly even *decades* at this point,
  and seems unlikely to change any time soon (the web-designer 
referenced
  at the bottom of the page is actually an HVAC contractor who at some 
point
  when the WWW was relatively new apparently decided to dabble in it;
  and the whole point of the theatre is "not changing with the times").

Milford Drive-In's site is apparently based on WordPress..., but with all of 
the blogging/chronology
stuff removed. Looks like there are RSS feeds, but I've yet to see them 
actually contain anything;
at least the HTML (provided by movienewsletters.com) includes some structured 
markup to make it
easier to pick stuff out.

Looks like the Drive-In's autumn update-schedule is "take the listings offline 
the day after the showing,
decide mid-week what the next set of movies will be and put them up then".

Haven't really looked at any other sites yet.

-- 
Connect with me on the GNU social network! 

Not on the network? Ask me for more info!
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org

How to deal with Amazon's VPS "support" when hosting e-mail servers on Amazon, or: How to deal with Vogons?

2019-09-30 Thread Joshua Judson Rosen
Anyone familiar with this?

I've got a work VPS hosted with Amazon right now, and am trying to get
Amazon to drop their restrictions on outbound SMTP traffic so that I can get 
logcheck
reports etc. out of the server. Ideally I'd also like to get them to fix the
PTR record in DNS so that it points back to the server's actual FQDN rather than
some goofy "ec2-xx-xx-xx-xx.compute-1.amazonaws.com" name.

Amazon considers both of those things to be the same issue for some reason,
and AFAICT has only a single form to fill out that combines them.
I had the administrator on the account fill out the form, and got back
this response 3 days later:

We've received your request to add a RDNS entry.

In order to make sure we process your request as quickly as possible,
please use the form provided below to resubmit the request using
the email address and information connected to the account in question:

https://aws.amazon.com/forms/ec2-email-limit-rdns-request


Not even any mention of whether they have removed the throttling on traffic 
outbound to SMTP ports
(through testing, I can verify that they haven't--I still can just barely 
trickle mail out).

The URL that they're directing us to is the same URL as for the form that we 
already filed,
and we've verified that the e-mail address that we gave when we originally 
filed it
was the e-mail address listed as the admin contact.

AND we know we were logged in with the relevant admin privileges when we 
filed the form...,
because filing it from outside the admin login isn't even possible--attempting 
to do so just
results in an error-message: "Root Account Required: We're sorry. This form 
requires a root account".

What am I misunderstanding about this process?

Do they really just want the request *filed twice*? Or does this indicate that 
there's
actually some mismatch somewhere that we're overlooking?

(and, yeah--I know, there are services that both cost less *and* are less of
 a hassle to deal with; for the time being I'd really like to figure out
 how to get Amazon to actually act on this...).

-- 
Connect with me on the GNU social network! 

Not on the network? Ask me for more info!
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


A NH project...

2019-09-22 Thread Joshua Judson Rosen
I just started a project to provide "guerrilla" newsfeeds for movie-theatres 
and stuff;
wondering if anyone would be interested in helping out--adding more newsfeeds 
for different things.

First is the Wilton Town Hall Theatre, which now has a GNU social stream + 
RSS/Atom feeds
via , as described here:

https://status.hackerposse.com/conversation/134528#notice-134772

... because I love seeing things there, and I've been subscribed to their 
mailing-list
for years..., but I only recently came to realize/accept that the reason I 
*haven't been*
seeing things there is that their "ALL THE THINGS!" style of listings
(both in the e-mails and on the website) is not usable for me--
and my wife (and kids) and I would probably go see more movies if it didn't
require so much work to digest the listings.

I've also started capturing the Milford Drive-In's listings into a 
revision-control system
for the last couple of weeks so that I can find trends in how they update;
thinking about adding a feed for them next--though, since their season is about 
to end,
I may postpone work on that in favor of adding a listing for something else that's
interesting and live during the winter.

Looking for suggestions--and help writing scrapers/converters. Ideally for more 
small-time
local things that are more likely to appreciate the publicity than to sue me
into oblivion "just per general corporate policy".

-- 
Connect with me on the GNU social network! 

Not on the network? Ask me for more info!
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: SSH and domain wildcards.

2019-11-06 Thread Joshua Judson Rosen
i.e.: you just got the order backward :)

FYI the ssh_config man page does say:

 Since the first obtained value for each parameter is used, more host-spe‐
 cific declarations should be given near the beginning of the file, and
 general defaults at the end.


On 11/6/19 6:01 PM, Ian Kelling wrote:
> 
> Ken D'Ambrosio  writes:
> 
>> OK.  Feeling kinda dumb.  So!
>>
>> ===
>> $ head -6 ~/.ssh/config
>> Compression yes
>> ForwardX11 yes
>> User kdambrosio
>>
>> Host *.foo.com
>>User ken
>> ===
>> So I've got kdambrosio (my work username) as my default, however, when I 
>> try to log into bar.foo.com, it's not using "ken", it's using 
>> "kdambrosio".  Can someone show me where I'm screwing up?
>>
>> Thank you kindly,
>>
>> -Ken
> 
> Ya. Do this:
> 
> Compression yes
> ForwardX11 yes
> 
> Host *.foo.com
> User ken
> 
> Host *
> User kdambrosio
> 

-- 
Connect with me on the GNU social network! 

Not on the network? Ask me for more info!
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


It's International Day Against DRM...

2019-10-12 Thread Joshua Judson Rosen
... in case anyone cares but forgot :)


And in case anyone's interested in printable materials:

* defectivebydesign.org put together a cute printable book-cover
  (since today's special feature is "DRM'd textbooks")

* I converted a couple articles into easy-printing fold-over pamphlets


Links on GNU social:

https://status.fsf.org/conversation/1143302#notice-1831023

-- 
Connect with me on the GNU social network! 

Not on the network? Ask me for more info!
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Nashua-area folks -- meet up?

2020-01-28 Thread Joshua Judson Rosen
On 1/21/20 6:21 PM, Ken D'Ambrosio wrote:
> Well, I'll take point on calling Martha's -- if, that is, enough people
> reply to warrant grabbing a bigger table.  Anybody got a preferred time?
>It's heading toward Feb, and we should probably push it out far enough
> that there's a chance those that want to come can schedule for it.
> Maybe Thursday, the 20th of Feb.?  (Safely after Valentine's...)

Heh--Thursdays just happen to be the one day each week I actually
need to plan around right now, but with that much advance notice I'll figure 
it out ;)

-- 
Connect with me on the GNU social network! 

Not on the network? Ask me for more info!
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Reminder/RSVP -- meet *this Thursday* for chat & beer.

2020-02-19 Thread Joshua Judson Rosen
On 2/19/20 3:35 PM, Tom Buskey wrote:
> Wish I could be there.  I remember driving to UNH, Martha's and a few others 
> to go to meetings.

Similar situation here--was hoping to be able to make the schedule work,
but it's not looking like it's going to work out this time.

Hoping everyone will be inspired by the experience
(either "that was great, we should do it again"
or "I wish I could have been there, let's try again")
and we can start trying to get together more often ;)

PS: if anyone's interested in a new engineering job,
I could use some people to help shoulder the sort of burdens
that are making me miss this one--send me a resume ;)



Re: Reminder/RSVP -- meet *this Thursday* for chat & beer.

2020-02-19 Thread Joshua Judson Rosen
On 2/19/20 1:53 PM, Ric Werme wrote:
 >
> Note that we'll be there at date/time
> 2020 Feb 20 @ 20:02:02
> In odometer format, that's two days before the palindrome
> 2020000202

But it's just *a few hours after* the nearest unix time palindrome:

$ date --date=@1582222851
Thu 20 Feb 2020 01:20:51 PM EST

And, actually... the nearest hexadecimal time_t palindrome is even closer:

$ date --date=@$((0x5e4ee4e5))
Thu 20 Feb 2020 02:58:29 PM EST


The nearest binary and octal palindromes are a bit more distant :\
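For anyone who wants to double-check these claims, a quick shell sketch (using the full decimal palindrome 1582222851, which `date` renders as the time shown above; `rev` here is the one from util-linux):

```shell
# Check whether a timestamp's digit string reads the same backwards
# (pass the digits yourself in whatever base you care about).
is_palindrome() {
    [ "$1" = "$(printf '%s' "$1" | rev)" ]
}

is_palindrome 1582222851                         && echo "decimal palindrome"
is_palindrome "$(printf '%x' "$((0x5e4ee4e5))")" && echo "hex palindrome"
```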



Japan-certified USB Wi-Fi adapters?

2020-01-16 Thread Joshua Judson Rosen
Looking for USB Wi-Fi adapters that have been certified for sale/use in Japan.

Any recommendations?



Re: inotify (was: systemd and search domains.)

2020-01-09 Thread Joshua Judson Rosen
On 1/9/20 12:56 PM, Ed Robbins wrote:
> 
> There are some things in Linux that I absolutely gush over because of how 
> handy they are,
> inotify is just such a creature.  I use it in some of the most unlikely places
> to solve some of my most baffling problems.

Any thoughts on using inotify directly vs. using FAM/gamin/fswatch/libev/...?

PS: I think I'm going to take that "in some of the most unlikely places,
solving the most baffling problems" for my bio ;p



Re: systemd and search domains.

2020-01-08 Thread Joshua Judson Rosen
What sort of VPN is it? e.g.: OpenVPN, Wireguard, IPSec...?

And have you installed either resolvconf (which is Suggested by the openvpn 
package, but not required)
or openresolv (which is supposed to be a better, generally compatible, 
replacement for resolvconf)?

On 1/8/20 2:37 PM, Ken D'Ambrosio wrote:
> Hey, all.  When I fire up my VPN, it re-writes my /etc/resolv.conf.
> Shocker.  But I *want* it to, because then all my DNS stuff is good for
> my company.  But it's NOT good for my personal domain.  I'd like to have
> that added to the search domains.  I'm in Ubuntu; not sure if that
> matters.  From my reading:
> * I can set the search domains on a per-interface basis, but that seems
> hokey, and subject to issues if I use something (e.g., Bluetooth) to be
> my conduit to the 'Net.
> * /etc/resolv.conf shouldn't be manually modified as it'll just get
> overwritten (and I don't want to make it immutable because I want it to
> change depending on whether I'm using VPN or no)
> * /etc/dhclient/dhclient.conf (apparently) doesn't matter any more if
> you're running NetworkManager
> 
> So, my question: is there an elegant, global way to set/append to my DNS
> domain search list?  Or am I just gonna wind up writing a daemon to wham
> a resolv.conf in-place depending on the current network config?
> 
> Thanks,
> 
> -Ken



Re: systemd and search domains.

2020-01-08 Thread Joshua Judson Rosen
So, I don't know anything about GlobalProtect per se (this is the first I've 
even heard of it...);
but...:

On 1/8/20 5:24 PM, Ken D'Ambrosio wrote:
> 
> * I used to do the dnsmasq thing, and it works really well, but it's kind of 
> a pain to set up all the DNS servers and stuff for internal use, and you 
> occasionally get stuff wrong.

FYI managing dnsmasq's back-end DNS server config is actually something that 
openresolv/resolvconf handle :)

Though I'm not sure adding dnsmasq really addresses the "search domain" issue.

> * I tried Joshua's suggestion of openresolv, and it's got exactly what I 
> want, and happily prepends the domain to resolv.conf... until the VPN 
> (GlobalProtect) steps on it.

Well, crap. So my next question was going to be, "are you actually using 
GlobalProtect per se
or are you (can you?) use OpenConnect? Because I bet we could just hook 
OpenConnect into resolvconf
if it doesn't already have a hook..."; but then you went ahead and answered 
that...:

> I *think* I'd be able to make it work through OpenConnect, except that it 
> seems OpenConnect
> isn't doing MFA (at least, with the GlobalProtect?)  Nutshell: clearly, it's 
> time for
> a self-written inotify daemon and call it a day.
> Because it's stupid easy to prepend a line with my domain name every time the 
> file changes,
> whereas I'm gettin' old trying to figure this out through a more elegant 
> mechanism.

Ha! An inotify monitor actually seems like a pretty elegant solution to me!
(though maybe I should point out that I got some of my aesthetic sense
  from growing up watching The Red Green Show...).

The only other option that comes to mind for me is "figure out how to just 
block writes from the GlobalProtect process"
(I'm guessing GlobalProtect is running as root, but you could use an SELinux or 
AppArmor policy
  or something to deal with that?).

Watch out for the `inotify-handler writes and re-triggers itself, resulting in
an infinitely-long "search" line' problem, obviously? :)
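In case it helps, here's roughly the shape I'd give that handler (everything here -- the domain, the file, the inotifywait events -- is illustrative, and not tested against GlobalProtect): the trick is making the rewrite idempotent, so your own write doesn't wake you up again and grow the line.

```shell
#!/bin/sh
# Idempotently prepend domain $1 to the "search" line of resolv.conf-style
# file $2. The "already present" check is what breaks the
# write -> inotify event -> write feedback loop.
add_search_domain() {
    domain=$1 file=$2
    if grep -q "^search.*\b$domain\b" "$file"; then
        return 0    # our domain is already there; don't touch (or re-trigger)
    elif grep -q '^search' "$file"; then
        sed -i "s/^search /search $domain /" "$file"
    else
        printf 'search %s\n' "$domain" >> "$file"
    fi
}

# Driving loop (needs inotify-tools). Watch /etc rather than the file itself,
# because resolv.conf is usually replaced by rename() instead of rewritten:
#
#   while inotifywait -qq -e close_write -e moved_to /etc; do
#       add_search_domain home.example /etc/resolv.conf
#   done
```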



Re: Runaway log...

2020-01-06 Thread Joshua Judson Rosen
On 1/6/20 8:45 PM, Ken D'Ambrosio wrote:
> OK, guys.  CentOS 7.1.  I've got an OpenStack process that wigged out
> and was logging like crazy to /var/log/messages.  So I killed it.  FORTY
> FIVE MINUTES AGO.  And still, log lines that must've been buffered...
> somewhere, are flying into the messages file.  Gigabytes of them, e.g.,
> 
> Jan  6 20:42:56 sca1-drstack01 neutron-server[27127]: Exception
> RuntimeError: 'maxiException mum RuntimeErrorr: e'cmuaxrismuim roencu
> rdsieonp tdehp the xecxcddede wdhi lew cahlillien gc aal lPiyntgh
> oan  Poybtjheocnt 'o in bject' > ignored
> 
> Now, 27127 is dead, gone, not in the process table.  Not a zombie, not
> nothing.  I restarted the syslog... and the logging stopped for a few
> seconds, and then restarted.  How in blazes do I find what's buffering
> the logs, and how do I flush it?!

Buffered in journald, maybe?


Re: COBOL on HPUX

2020-01-07 Thread Joshua Judson Rosen
On 1/7/20 11:36 AM, Jerry Feldman wrote:
 > My first few programming jobs was as a COBOL programmer on both Burroughs 
 > and IBM  mainframes in the 1970s.
 > I even was able to have lunch with Grace Hopper. In college I learned 
 > Fortran and BASIC. And pdp 8 assembler.
 > I got a copy of the original K&R and learned C to wean me from COBOL.
 > As a contractor I did have 1 COBOL assignment on HP-UX.

FYI since apparently there are several of you guys here...,
I think I heard something not long ago about the GNUCobol project looking to
build up a catalog of properly free COBOL programming reference material,
"to show people good code samples" etc.--since I guess the only such
archive that's known so far is the SimoTime listing, and much of _that_ is
"All Rights Reserved, look and learn but don't copy".
Any interest? And _thoughts_ about such an undertaking?



Re: Privacy Respecting Replacement for facebook groups

2020-09-30 Thread Joshua Judson Rosen
On 9/29/20 12:11 PM, Lori Nagel wrote:
> Hello everyone, I'm trying to figure out a privacy-respecting replacement 
> for facebook groups.  I want something that is easy to join, (so no 
> requirement that you learn email encryption, system administration or 
> anything "hard"  but also something that even Richard Stallman wouldn't 
> object to (not that I'm trying to recruit him to join it, just some people 
> are really zealots about stuff, if it doesn't have javascript that is also a 
> bonus.)
> I've also considered things like email lists, Mattermost, IRC and forums, and 
> I've dismissed them for the following reasons.
> 1. Lots of people just ignore email these days, plus it isn't really very 
> real time.
> 2. IRC is just a chat channel, too many bots, and while it is real time, it 
> doesn't really have any persistence of topics.
> 3. Forums tend to be too public with just anyone can join it, and while you 
> can have private forums or private sections of forums, you need to be an 
> administrator to set that all up.  Plus forums tend to have things like 
> sprawling topics, and things that either get out of date, or else there is 
> no conversation about the subject (thread necromancy vs an empty forum.)  I 
> want to create a small group that is highly engaged with the subject, 
> chatting everyday etc.
> 4. Email messages from lists you need to get info from can end up in spam if 
> you don't set up email right.  It is too easy to miss important messages 
> cause you get consumed with marketing or things you inadvertently signed up 
> for and should not have.
> 5. Mattermost is like Discord, but then I would have to set it up, and I'm not 
> a professional system admin. If i spend all my time learning professional 
> system admin skills, then I won't get to do what I want, which is interacting 
> with people.
 >
 > Just on a whim I also checked into Diaspora and groups.io; Mastodon doesn't 
 > really have groups yet, and I don't think all the source code for groups.io 
 > is included.

Somehow there's a lot of confusion about "groups"; people doing a survey of 
social services,
particularly people who don't use Facebook, often mistake "groups" as meaning 
"what other systems call `lists'",
which really isn't the case; e.g. Twitter has "lists", which are just 
collections of 1:1 feeds, not
conversation-topic groups or anything like the "mailing lists" that we're 
relaying these e-mail messages through
right now. A lot of other systems are based on `the Twitter model', are built 
by/for people who have never used
an e-mail list, and just don't even understand groups; IIRC there was an 
interesting article I read about this
sort of mismatch...; I'll see if I can find it.

GNU social actually has *groups*; and if you don't want to invite the rest of 
the world to your groups,
there are facilities for private groups--both in the "messages posted to the 
group are hidden to outsiders" sense
and in the "joining the group requires approval" sense (IIRC those are two 
distinct flags that can be used
together or separately). And it's actually pretty straightforward to just set 
up an entirely private/closed site
with nothing visible to anyone without a login, if that's what you want (it's 
not clear from your description
whether you're interested in federation / involving people from elsewhere on 
the internet at all,
or if you just want a `closed community').
And GNU social even meets your `even Richard Stallman wouldn't object to it' 
and `doesn't need JavaScript' desires
(there are some fancy javascript-based front ends _available_ for people who 
want them, like the one that we have
  enabled by default on , and that I have 
enabled-but-not-by-default on ,
  but "must work fine without javascript" is one of the project's requirements 
for the default UI).


XMPP has MUCs; if you're tempted to think of this as "like IRC, just a chat 
channel"..., don't:
XMPP and its MUCs don't have any of the IRC issues.


There are hosting options for both GNU social and XMPP, so that you can have 
them without being "a professional system admin":

* If you just want XMPP on your own domain, administered by someone who 
really knows what they're doing,
  the people behind the Conversations and Quicksy apps for Android, the 
XMPP standards-compliance tester,
  the OMEMO privacy extensions for XMPP, etc..., also provide 
professional XMPP hosting services
  (and the prices are pretty good):

https://account.conversations.im/domain/

* GNU social can generally be set up on any shared-hosting service that 
lets you run PHP,
  and there are also a bunch of services that actually 
specialize in hosting GNU social specifically,
  e.g. (I can't vouch for any of these personally, I've just pulled 
this list from a quick websearch):


Is your kids' school forcing Zoom on them too?

2020-08-07 Thread Joshua Judson Rosen
So..., pandemic. That's still a thing, and school is about to start up.

I hear a lot of schools have decided to make everyone use Zoom,
whether they're at school or remote. That's apparently what's happening at my 
kid's school.

If you haven't heard..., Zoom has turned out to be a complete privacy- and 
security-nightmare
(the set of links out from the Wikipedia article is not even exhaustive, but 
holy crap).
Though I suspect that most of the people on this list know all about it.

How are you dealing with it?

We've been trying to talk to our school's administration ever since they sent 
out an e-mail
telling everyone to `expect to use a video-conferencing tool like Google Meet 
or Zoom',
and finally managed to get a meeting with... the Assistant Principal (who 
honestly is great, but powerless),
and at this point have basically got a response of "wish you'd raised the issue 
earlier, but we already bought Zoom"
(which might not be _as_ frustrating if we hadn't actually first raised this 
issue back in _March_...).

NH does make it fairly straightforward to just give up and homeschool if it 
comes to that...,
but must it really come to that?




Re: Is your kids' school forcing Zoom on them too?

2020-08-07 Thread Joshua Judson Rosen
So apparently it's _not just me_ having trouble maintaining a useful attitude 
during "COVID life"? ;p

On 8/7/20 6:18 PM, Ben Scott wrote:
> On Fri, Aug 7, 2020 at 5:52 PM Joshua Judson Rosen
>  wrote:
>> If you haven't heard..., Zoom has turned out to be a complete privacy- and 
>> security-nightmare
> 
> So has everything else created in the past several years.
> 
> To paraphrase Larry Niven, it appears that the concept of "privacy"
> was something of a passing fad.


Re: Is your kids' school forcing Zoom on them too?

2020-08-09 Thread Joshua Judson Rosen
On 8/9/20 9:33 AM, Lloyd Kvam wrote:
> On Sun, 2020-08-09 at 08:07 -0400, dmich...@amergin.org wrote:
>> In a pinch you can run Zoom wholly in a browser or other semi-sandboxed
>> environment such as mobile phone or tablet, without using the desktop app.
> 
> I have attempted to join some meetings and discovered that the app was 
> *required*.

One of the first promises that they make in their `how we plan to someday 
actually provide
some of the end-to-end encryption that we lied about already having had for 
ages until we got caught
and exposed for actually not having any security underneath at all' whitepaper
is to stop supporting web browsers.


>> I ended up repurposing a neglected chromebook as a zoom meeting appliance
>> and that's been fantastic for many activities.
>>
>>
>>> On Fri, 2020-08-07 at 19:14 -0400, Matt Minuti wrote:
 There's been no remote execution exploits (AFAIK), so that's a
 non-issue.
>>>
>>> There have been remote exploits.
>>> https://www.cvedetails.com/vulnerability-list/vendor_id-2159/Zoom.html
>>> Ignore #5 which is a different Zoom. Apple was forced to create a special
>>> update to clean up
>>> the mess after zoom was "uninstalled".
>>>
>>> That said, it's quite possible that today's Zoom is OK on that score. The
>>> Sans News Bites folks
>>> do not seem to be concerned.
>>>
>>> I installed Zoom on my wife's iPad. I won't install it on my gear. Being
>>> pushed into acquiring
>>> a somewhat isolated device to run Zoom would be annoying.


Re: Is your kids' school forcing Zoom on them too?

2020-08-10 Thread Joshua Judson Rosen
On 8/10/20 10:23 AM, r...@mrt4.com wrote:
> I don't have any kids, but my school district and other governments who claim 
> jurisdiction over me also require it.
> 
> Since Zoom had said before that it was secure and it turned out that it 
> wasn't, it certainly doesn't make sense to trust them now. The way we do it 
> in the open source community is to have lots of eyeballs looking over the 
> code so we can verify it ourselves. So, where can I get copies of the sources 
> for Zoom's app and server?
> 
> Also, although the CEO of Zoom has become a U.S. citizen and they are 
> headquartered in the U.S., Zoom is essentially a Chinese-owned company. They 
> do whatever the Chinese government tells them to do including shutting down 
> the accounts of U.S. and Chinese human rights activists.

Yeah. Though the `developed in China' aspect isn't necessarily a `red flag' 
by itself
(there are good people everywhere, and there have been some really great 
projects
that came out of China specifically--OpenMoko and Qi Hardware
come to mind for me--and IIRC I heard somewhere that Ubuntu/Canonical got
a lot of funding from China; and there's probably a lot of good things that I'm 
forgetting)...,
when taken together with the other symptoms _and_ the general patterns of 
`privacy tonedeafness'
and `what could they possibly have been thinking when they decided doing that 
was a good idea',
it doesn't really help to shift the scales back in their favor.

> There are at least a dozen alternatives:
> 
> https://en.wikipedia.org/wiki/List_of_video_telecommunication_services_and_product_brands#Browser_based_-_does_not_require_software_downloads
> 
> https://techwiser.com/open-source-zoom-alternative/

Well..., the NH State Board of Education signed a deal a couple weeks ago to 
provide
BigBlueButton and a bunch of other hosted open-source services to every school 
in the state
for grades K-12 (the NH universities had already standardized on the same 
things a while ago...):


https://www.ilearnnh.org/sites/default/files/media/2020-07/doe-press-release-july-28-2020.pdf

So hopefully the schools will at least start taking up what the state is 
offering
instead of going it alone (even if the security- and privacy- [which I guess I 
have
to remind people are *not* the same thing...] arguments fall on deaf ears,
there's a pretty obvious *economic* "why are you using your funding for this 
instead of spending
everything you can on our kids and teachers" argument).

(and if schools are using Zoom but *not* paying for a contract to ensure 
[supposed] FERPA compliance...,
  ummm...)



Re: $5/mo to sponsor linux multitouch touchpad support

2020-06-26 Thread Joshua Judson Rosen
On 6/23/20 2:58 PM, Dan Garthwaite wrote:
> I just saw on hacker news that this project has some legs and I signed up as 
> a $5 sponsor.
> 
> https://github.com/sponsors/gitclear
> 
> It's just one of those things standing in the way of widespread linux 
> adoption.  I'm a lifelong vim user, don't even own a mouse, and for me the 
> lack of multitouch gestures are galling.

What specifically does "multitouch" or "multitouch gestures" mean in this 
context?

I've only skimmed through that series of articles/comments, but it looks like 
one of those terms that people
*use* repeatedly but never bother to define or explain.

Does it mean something like... "pinch-zooming + 3-finger-slide to go 
back/forward in web-browsers"?
Or is it something like "2-finger tap for right-click"? (because apparently the 
author thinks
that functionality doesn't exist with synaptics, referring to some
"need to click in the bottom right corner to effect a right click"?)
Some combination of all of those things?

(I'm not asking from the perspective of ignorance/oblivion; I've been using a 
TouchStream for almost 20 years,
  and watching as various touch technologies, techniques, UIs, and featuresets 
have gone in and out of fashion
  in that time..., so when I say "what do you mean by `multitouch'?", it's 
because I just can't guess anymore at
  which particular tiny subset of multitouch functionality people think they're 
clever or exalted for knowing about...).

Also..., I really don't get this "if only we could make it as good as Mac OS!" 
angle. At all--really;
someone explain it to me? This is going to sound like indignation..., and maybe 
it is?
I think it's actually just remembering frustration, and getting cynical after 
too many years of "splaining"...:

It's been maybe 5 years since I actually tried to use Mac OS..., but I remember 
basically
being blown over by how so much in Mac OS just felt like "cheap and chintzy"--
in a lot of ways, but specifically *including the touchpad input*.

This was relative to the synaptics Xorg driver at the time--if libinput even 
existed, I didn't have it yet..., though now that
everything seems to be moving to libinput and dropping support for synaptics, 
touchpad support on Linux systems does seem
to finally be getting bad enough to be comparable with what I remember from Mac 
OS

IIRC Mac OS had no support for *any* of...:

* circular scrolling?
* locked drags?
* edge motion?
* pressure-sensitive pointer speed?

Does it have any of those today? A quick websearch turns up stuff like this:


https://apple.stackexchange.com/questions/85882/how-to-right-drag-using-trackpad-only-for-windows-on-macbook

... which seems to indicate that there's actually not even a way to 
secondary-click-and-drag in Mac OS?

Whaaat? And I'm guessing the whole idea of "middle-drag" is just complete 
crazytalk to Mac users?

Though..., I don't know--most of the reason I care about secondary and tertiary 
drag
is for window-management, which was another whole chapter of the
"you have got to be kidding me where are my affordances" saga...; if I could 
accept the
overall lack of window-management affordances, I guess maybe I wouldn't notice
the little thing being missing?



Re: SMART data & Self tests, not sure if my SSD is on it's last gasp

2021-01-06 Thread Joshua Judson Rosen
On 1/6/21 3:07 PM, Bruce Labitt wrote:
> Checked the media, both are readable using the RPI4.  Seems like the power 
> supply is failing.  It's cycling on and off even with no media, dvd, or 
> drives.  I think this is a dead parrot.
> 
> Well, that was fun.  Uh, not really.
> 
> Guess I need to go computer shopping.  It was an i7, 32GB RAM, 17" screen.  
> It had a nvidia GPU so I could play with CUDA.  What's out there that's at 
> least as good performance wise and not a PIA to convert to linux.  It was a 
> Bonobo Extreme 6.  At the time it was pretty high 
> end.  My BonX6 was a boat anchor, but since it hardly moved, it wasn't a 
> problem.  Of course, light and performance is good too.  Any good laptops out 
> there?  Been out of the loop a while.

Showtime Computer  in Hudson now does custom-built 
laptops,
as of some time in the last few years IIRC. They look like they're based on the 
same ODM kits
as the other Linux boutiques I've shopped, and should be solid.

ThinkPenguin  is also based in NH again (Keene, 
last I heard);
looks like they may have stopped doing laptops for the time being, though
(I don't see any in the listing on their website, just accessories; they have 
_desktops_...).

I was buying all of my computers from ZaReason, but they just went out of 
business
("Unfortunately, the pandemic has been the final KO blow. It has hit our little 
town hard
   and we have not been able to recover from it.
   As of Tuesday, 11/24/20 17:00 EST ZaReason is no longer in business.").



Re: NH Linux laptop builders (was: SMART data & Self tests, not sure if my SSD is on it's last gasp)

2021-01-06 Thread Joshua Judson Rosen
On 1/6/21 4:16 PM, Bill Ricker wrote:
> 
> 
On Wed, Jan 6, 2021 at 3:45 PM Joshua Judson Rosen <roz...@hackerposse.com> wrote:
> 
> Showtime Computer <http://www.showtimepc.com/> in Hudson now does 
> custom-built laptops,
> as of some time in the last few years IIRC. They look like they're based 
> on the same ODM kits
> as the other Linux boutiques I've shopped, and should be solid.
> 
> 
> ?? I do NOT see Linux listed on their Operating Systems page (except for a 
> WSL mention on WinSvr page).

Sorry, I didn't mean to imply that Showtime was itself a `Linux boutique',
just to say that they are apparently capable of assembling the same laptop 
hardware as ZaReason and ThinkPenguin
were selling when last I'd looked; I'm not sure what the Showtime staff's level 
of Linux savvy is,
or if you'd need to bring your own savvy about chipsets and/or 
software-installation/-setup when
picking components for them to put together into a laptop.

It might be worth asking them.

While the experiences I've had with Showtime have been great,
the scope so far has been limited to buying hardware accessories and 
occasionally
having them help me figure out which accessory I need.

I'm not sure where I'll get *my* next laptop--they're on my list of places to 
look,
but my next laptop purchase is probably at least another year away.

> ThinkPenguin <https://www.thinkpenguin.com> is also based in NH again 
> (Keene, last I heard);
> 
> Interesting
> 
> looks like they've may have stopped doing laptops for the time being, 
> though
> (I don't see any in the listing on their website, just accessories; they 
> have _desktops_...).
> 
> Too bad
> 
> I was buying all of my computers from ZaReason, but they just went out of 
> business
> ("Unfortunately, the pandemic has been the final KO blow. It has hit our 
> little town hard
>     and we have not been able to recover from it.
>     As of Tuesday, 11/24/20 17:00 EST ZaReason is no longer in 
> business.").
> 
> 
> Sad.

Indeed! I bought my first ZaReason machine in ~2011, and bought several in the 
intervening years--
including laptops, desktops, small servers, large servers..., and they were 
always a joy:
even fairly exotic requests like "could you install Debian on it with RAID-5 
across 4 disks and LVM on top of that,
and with an initial user login named suchandsuch" were things that they'd 
happily do as part of their setup.

They'd send photos of new laptops when I asked what the access-panels on the 
bottom looked like before ordering.

And I even have a nice little ZaReason-branded Phillips-head screwdriver that 
they sent to make DIY hardware-upgrades extra easy.


-- 
Connect with me on the GNU social network! 
<https://status.hackerposse.com/rozzin>
Not on the network? Ask me for more info!


Re: SMART data & Self tests, not sure if my SSD is on it's last gasp

2020-12-30 Thread Joshua Judson Rosen
Storage still scares me, just as a general principle...,
so I'm basically never going to say "you really have nothing to worry about"...,
but I think I _might_ be able to settle your nerves a little:

On 12/30/20 2:04 PM, Bruce Labitt wrote:
> I think I have a SSD on the way out.  Last reboot took a REALLY long
> time.  Like 30 minutes.
Are you sure your computer wasn't just running an extensive fsck during that 
boot?

Assuming you're running one of the "ext" filesystem variants (ext4, ext3...),
you can try running dumpe2fs on each of your filesystems and looking at the 
"Last checked" field.
If that's the same as the last time you booted..., there you go.

IIRC ext3 used to force a periodic full fsck by default; I'm not sure what the 
intervals were,
what the current defaults are, or when they might have changed. A lot of people 
liked to
disable them, though, because otherwise the lengthy fsck always seemed to come 
at the
most unexpected and inopportune times (especially on laptops that might be 
running battery-only).
The relevant fields in the dumpe2fs output here are "Maximum mount count" and 
"Check interval".

Your smartctl output actually doesn't sound any alarms for me:

> I ran the smart data and self test and the SSD
> passes.  Overall assessment is disk is ok.  I really don't know how to
> interpret what the results are.
> 
> I think the disk is in pre-fail based on the smartctl output below

I think you're misreading the `attribute TYPE' column as an `attribute value 
summary interpretation'.

"Pre-Fail" doesn't mean "this drive *is* about to fail according to current 
value of this attribute",
it just means "this drive *would be* about to fail if the current value were
past the value in the THRESHOLD column".

The relevant paragraph from the smartctl manual:

 The Attribute table printed  out  by  smartctl  also  shows  the
 "TYPE"  of  the  Attribute.   Attributes are one of two possible
 types: Pre-failure or Old age.  Pre-failure Attributes are  ones
 which, if less than or equal to their threshold values, indicate
 pending disk failure.  Old age, or usage  Attributes,  are  ones
 which  indicate end-of-product life from old-age or normal aging
 and wearout, if the Attribute value is less than or equal to the
 threshold.   Please  note: the fact that an Attribute is of type
 'Pre-fail' does not mean that your disk is about  to  fail!   It
 only  has  this  meaning  if  the Attribute's current Normalized
 value is less than or equal to the threshold value.
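That check is easy enough to script; here's a rough awk filter over `smartctl -A` output that only complains in the situation the manual describes (normalized VALUE at or below THRESH) -- the column positions are assumed from the standard attribute table layout shown below:

```shell
# Print only SMART attributes whose normalized VALUE (column 4) has
# fallen to or below THRESH (column 6) -- i.e. the only case where
# "Pre-fail"/"Old_age" actually signals trouble.
smart_worries() {
    awk '$1 ~ /^[0-9]+$/ && ($4 + 0) <= ($6 + 0) {
        print "attribute " $2 " at/below threshold: " $4 " <= " $6
    }'
}

# e.g.:  sudo smartctl -A /dev/sda | smart_worries
```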


Just going by your smartctl report, this drive looks `practically new' to me...:
the current and `worst ever seen' values are all at 100 and the closest pre-fail
indicator is `not until it gets down to 50' (and the others are
either `not until it gets down to 10' or `not until it gets down to 1').

The Power_On_Hours and Power_Cycle_Count figures show that the drive has 
probably been
in use in a laptop (with typical sleep/wake/powercycle frequency) for a couple 
of years,
but that's all I see.

If you haven't taken a backup recently..., you should do _that_... just 
because... backups.

It's been a while since I researched `SSD failure modes', but my recollection 
was
that `suddenly, completely, and without a lot of warning' was pretty typical--
as opposed to the old spinning-platter disc drives for which `first they get 
hot and noisy'
and `you lose a few sectors first and then recover the rest' were more normal
(someone who's more up-to-date on this than me, please jump in!). So.., 
yeah--backups.

And if it's a couple years old, it might be out of its warranty period--
so consider whether that bothers you, I guess?


> 
> /snip
> 
> === START OF INFORMATION SECTION ===
> Model Family: Crucial/Micron RealSSD m4/C400/P400
> Device Model: M4-CT256M4SSD2
> Serial Number:    1247091DC2FF
> LU WWN Device Id: 5 00a075 1091dc2ff
> Firmware Version: 040H
> User Capacity:    256,060,514,304 bytes [256 GB]
> Sector Size:  512 bytes logical/physical
> Rotation Rate:    Solid State Device
> Form Factor:  2.5 inches
> Device is:    In smartctl database [for details use: -P show]
> ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 6
> SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
> Local Time is:    Wed Dec 30 13:49:17 2020 EST
> SMART support is: Available - device has SMART capability.
> SMART support is: Enabled
> 
> === START OF READ SMART DATA SECTION ===
> SMART overall-health self-assessment test result: PASSED
> 
> /snip
> 
> ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
>   1 Raw_Read_Error_Rate     0x002f   100   100   050    Pre-fail  Always   -           0
>   5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always   -           0
>   9 Power_On_Hours          0x0032   100   100   001    Old_age   Always   -           7294
>  12 

Re: Strategy for moving off big tech

2021-01-25 Thread Joshua Judson Rosen
On 1/21/21 4:58 PM, Ray Cote wrote:
> Since we're talking migrations, where did people go after Oracle purchased 
> Dyn? 
> I'm looking to migrate off Oracle DNS this year.
> --Ray

What specific service have you been getting from Dyn/Oracle?

Are we talking about large-scale DNS DDoS protection,
ye olde dynamic record-updates for the home-hosted servers with dynamic IP 
addresses,
general DNS-hosting for your domains,
or...?

-- 
Connect with me on the GNU social network! 

Not on the network? Ask me for more info!
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Is there a "better NoScript" that makes more sense?

2021-01-22 Thread Joshua Judson Rosen
I've been trying out NoScript in Firefox on one of my computers after having 
seen people recommend it for years,
and I'm finding that NoScript's whole permissions model just seems..., how do I 
put this nicely...:
stupid. Or maybe just `stupidly antiquated'?

Is there something better? More sensible? Let me explain my frustration with 
NoScript, first...:

While it does an OK-ish job of preventing the "some piece of javascript has 
decided to peg my CPU"
problem (but only OK-ish, because that problem really seems to be more due to 
bugs than malice in the JS code),
it seems to be largely useless as far as a `privacy tool'--which is weird, 
because most of the
NoScript advocacy seems to have come from self-styled privacy wonks.

To start with, it's whitelisting-only--all I can do is deny JS and some other 
permissions
*to everything by default*, and then whitelist some domains to let everything 
from them in.

Once something (JS loaded from a given site) is enabled, it's *enabled 
globally*--
there's no way of saying "I'm actually _generally_ OK with javascript but 
specifically want
to block this site because it's pegging my CPU [or whatever reasons]". The user 
has to just
accept the much more arduous path of specifically whitelisting `the whole world 
minus this one thing'.

There's no way to just "disable JavaScript [or whatever] in this container", 
or "disable it in this tab",
or "disable it for this site".

That last one sounds like an oxymoron--like, "what do you mean, once you've 
whitelisted a specific site
there's no way to de-whitelist that site?"..., but actually this takes us to 
the next issue:
that the "per-site whitelisting" is whitelisting of the sites
*from which separate/auxiliary (often third-party) resources are loaded*, not 
whitelisting of
the sites *that load those resources*.

Not only is it "whitelisting-only", the whitelist isn't even governing the 
right things.

So for example, if I ever want to use one of Google's websites, for example 
Google Meet
in my `Work' Firefox container, then I have to whitelist "google.com" as a 
source
of auxiliary JavaScript resources--and I have to do that *globally*, which means
that now every website out there trying to load a fragment from google.com
as part of a Google advertising-and-tracking campaign will now be allowed to do 
that.

There's no way to say "allow loading google.com trackers and scripts when I'm 
specifically
using a google.com website, but elsewise refuse cross-site loads of google.com 
resources
when I'm just using some random non-Google website that has no business making 
me send info to Google".

If I ever want to use any Google site, I'm stuck having to do the "disable 
NoScript entirely for this tab"
every time, or loading it in an Incognito tab with the NoScript extension 
itself set to
`do not run in Incognito tabs at all' (which is a situation that has a bunch of 
other caveats itself).

And Google's just an example of when that situation is even easy to recognize;
a lot of sites load resources from something like 
"fjr88fghdjt92838ngjfhgg82hgjfdskg2388gg22sg.cloudfront.net"--
good luck figuring out what that even is or how many other sites might also be 
calling out to it.

There's a "Temp. TRUSTED" option, but that's `temporary' meaning `until the 
browser exits'
and is still completely global for the duration of the session AFAICT (it 
doesn't appear
to be per-tab, or per-container, or per-site, or per *anything* that I can 
identify).

In a WWW where practically every resource is loaded cross-site, and where both 
security
and privacy issues (and even `stability' and `usability' issues too!) can have 
as much to do
with the relationships and access-patterns *between* those sites and the user as
with the origin from which any *particular* resource is served..., this just
isn't making a lot of sense to me.

Is there a well-regarded Firefox extension out there that actually does
anything like I would have expected? Or is there something that's actually
already *in NoScript* that I'm somehow overlooking?

Or have I just gone completely mad?

-- 
Connect with me on the GNU social network: 

Not on the network? Ask me for an invitation to a social hub!


Re: Is there a "better NoScript" that makes more sense?

2021-01-22 Thread Joshua Judson Rosen
On 1/22/21 12:26 PM, Derek Atkins wrote:
> Hi,
> 
> Yes, it is default-deny and you must enable what you want/need.
> You can certainly say "enable this JS source for this website only".  So
> you don't need to enable it globally.

How? As I was hoping to thoroughly convey in my previous message,
I really don't see any way to do that.


> On Fri, January 22, 2021 12:18 pm, Joshua Judson Rosen wrote:
>> I've been trying out NoScript in Firefox on one of my computers after
>> having seen people recommend it for years,
>> and I'm finding that NoScript's whole permissions model just seems..., how
>> do I put this nicely...:
>> stupid. Or maybe just `stupidly antiquated'?
>>
>> Is there something better? More sensible? Let me explain my frustration
>> with NoScript, first...:
>>
>> While it does an OK-ish job of preventing the "some piece of javascript
>> has decided to peg my CPU"
>> problem (but only OK-ish, because that problem really seems to be more due
>> to bugs than malice in the JS code),
>> it seems to be largely useless as far as a `privacy tool'--which is weird,
>> because most of the
>> NoScript advocacy seems to have come from self-styled privacy wonks.
>>
>> To start with, it's whitelisting-only--so while I can deny JS and some
>> other permissions
>> *to everything by default*, and then whitelist some domains to let
>> everything from them in.
>>
>> Once something (JS loaded from a given site) is enabled, it's *enabled
>> globally*--
>> there's no way of saying "I'm actually _generally_ OK with javascript but
>> specifically want
>> to block this site because it's pegging my CPU [or whatever reasons]". The
>> user has to just
>> accept the much more arduous path of specifically whitelisting `the whole
>> world minus this one thing'.
>>
>> There's no way to just "disable JavaScript [or whatever] it in this
>> container", or "disable it in this tab",
>> or "disable it for this site".
>>
>> That last one sounds like an oxymoron--like, "what do you mean, once
>> you've whitelisted a specific site
>> there's no way to de-whitelist that site?"..., but actually this takes us
>> to the next issue:
>> that the "per-site whitelisting" is whitelisting of the sites
>> *from which separate/auxiliary (often third-party) resources are loaded*,
>> not whitelisting of
>> sites that *that load those resources*.
>>
>> Not only is it "whitelisting-only", the whitelist isn't even governing the
>> right things.
>>
>> So for example, if I ever want to use one of Google's websites, for
>> example Google Meet
>> in my `Work' Firefox container, then I have to whitelist "google.com" as a
>> source
>> of auxiliary JavaScript resources--and I have to do that *globally*, which
>> means
>> that now every site website out there trying to load a fragment from
>> google.com
>> as part of a Google advertising-and-tracking campaign will now be allowed
>> to do that.
>>
>> There's no way to say "allow loading google.com trackers and scripts when
>> I'm specifically
>> using a google.com website, but elsewise refuse cross-site loads of
>> google.com resources
>> when I'm just using some random non-Google website that has no business
>> making me send info to Google".
>>
>> If I ever want to use any Google site, I'm stuck having to do the "disable
>> NoScript entirely for this tab"
>> every time, or loading it in an Incognito tab with the NoScript extension
>> itself set to
>> `do not run in Incognito tabs at all' (which is a situation that a bunch
>> of other caveats itself).
>>
>> And Google's just an example of when that situation is even easy to
>> recognize;
>> a lot of sites load resources from something like
>> "fjr88fghdjt92838ngjfhgg82hgjfdskg2388gg22sg.cloudfront.net"--
>> good luck figuring out what that even is or how many other sites might
>> also be calling out to it.
>>
>> There's a "Temp. TRUSTED" option, but that's `temporary' meaning `until
>> the browser exits'
>> and is still completely global for the duration of the session AFAICT (it
>> doesn't appear
>> to be not per-tab, or per-container, or per-site, or per *anything* that I
>> can identify).
>>
>> In a WWW where practically every resource is loaded cross-site, and where
>> both security
>> and privacy issues (and even `stability' and `usability' issues too!) can
>> have 

Re: Python re to separate some data values

2021-04-28 Thread Joshua Judson Rosen
On 4/28/21 5:57 PM, Bruce Labitt wrote:
> If someone could suggest how to do this, I'd appreciate it.  I've 
> scraped a table of fine thread metric screw parameters from a website.  
> I'm having some trouble with regex (re) separating the numbers.  Have 
> everything working save for this last bit.
> 
> Here is a sample string:
> 
> r1[1] = ' 17.98017.87417.65517.59917.43917.291'
> 
> I'm trying to separate the numbers.  It should read like this:
> 
> 17.980, 17.874, 17.655, 17.599, 17.439, 17.291
> 
> There's more than 200 lines of this, so it would be great to automate 
> it!  Each number has 3 digits of precision, so I want to add a comma and 
> a space after the third digit.
> 
> re.search('(\.)\d{3,3}', r1[1]) returns
>  so it found the first instance.
> 
> But, re.sub('(\.)\d{3,3}', '(\.)\d{3,3}, ', r1[1]) yields a KeyError: 
> '\\d' (Python3.8).  Get bad escape \d at position 4.
The second argument [the replacement string] to re.sub(pattern, repl, string) 
is not supposed to
just be a variation of the pattern-matching string that you passed as the first 
argument.

I think the best illustration that I can give here is to just fix this up for 
you:

re.sub(r'(\.)(\d{3,3})', r'\1\2, ', r1[1])

-- 
Connect with me on the GNU social network! 

Not on the network? Ask me for more info!


Re: Python re to separate some data values

2021-04-28 Thread Joshua Judson Rosen

On 4/28/21 7:01 PM, Bruce Labitt wrote:
> On 4/28/21 6:28 PM, Joshua Judson Rosen wrote:
>>> re.search('(\.)\d{3,3}', r1[1]) returns
>>>  so it found the first instance.
>>>
>>> But, re.sub('(\.)\d{3,3}', '(\.)\d{3,3}, ', r1[1]) yields a KeyError:
>>> '\\d' (Python3.8).  Get bad escape \d at position 4.
>> The second argument [the replacement string] to re.sub(pattern, repl, 
>> string) is not supposed to
>> just be a variation of the pattern-matching string that you passed as the 
>> first argument.
>>
>> I think the best illustration that I can give here is to just fix this up 
>> for you:
>>
>>  re.sub(r'(\.)(\d{3,3})', r'\1\2, ', r1[1])
>>
> Thanks for the embarrassingly concise answer.  It is greatly 
> appreciated.  Can you explain the syntax of the 2nd argument?  I haven't 
> seen that before.  Where can I find further examples?
> 
> What astounds me is re.search allowed my 1st argument, but re.sub barfed 
> all over the same 1st argument.

Actually re.search also accepted your first argument just fine.
It was your _second_ argument that it barfed all over,
because your match didn't produce a "matched character group #d",
it only produced a "matched character group #1"
(IIRC Python's RE documentation generally just calls them "groups").

Note that I added a second set of parentheses to your _pattern_
so that you now have also a group #2.

I was trying to make the smallest change possible to your pattern,
but this also would work fine:

re.sub(r'(\.\d{3,3})', r'\1, ', r1[1])


The "\1" (and "\2", in the previous example) are "references",
and are actually explained in an OK-ish way in the online Python library 
manual's
section for re:

https://docs.python.org/3/library/re.html

(there are also a few other backreference syntaxes that you can use in Python,
 so that you can give non-numeric names to them or just avoid ambiguities like
 whether "\20" means `group #2 and then a literal "0"' or `group #20'...).
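Putting the pieces of this thread together into one runnable snippet (note the trailing ", " that the last match leaves behind, which you may want to trim):

```python
import re

r1_1 = ' 17.98017.87417.65517.59917.43917.291'

# "\1\2" in the replacement refers back to the two parenthesized groups
# in the pattern: the "." and the three digits that follow it.
print(re.sub(r'(\.)(\d{3,3})', r'\1\2, ', r1_1))
# -> ' 17.980, 17.874, 17.655, 17.599, 17.439, 17.291, '

# The same substitution with a named group, trimming the trailing ", ":
print(re.sub(r'(?P<num>\.\d{3})', r'\g<num>, ', r1_1).rstrip(', '))
# -> ' 17.980, 17.874, 17.655, 17.599, 17.439, 17.291'
```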

-- 
Connect with me on the GNU social network! 
<https://status.hackerposse.com/rozzin>
Not on the network? Ask me for more info!


Re: Is there a "better NoScript" that makes more sense?

2021-01-22 Thread Joshua Judson Rosen
On 1/22/21 1:24 PM, Derek Atkins wrote:
> 
> On Fri, January 22, 2021 1:08 pm, Joshua Judson Rosen wrote:
>> On 1/22/21 12:26 PM, Derek Atkins wrote:
>>> Hi,
>>>
>>> Yes, it is default-deny and you must enable what you want/need.
>>> You can certainly say "enable this JS source for this website only".  So
>>> you don't need to enable it globally.
>>
>> How? As I was hoping to thoroughly convey in my previous message,
>> I really don't see any way to do that.
> 
> https://www.ghacks.net/2015/04/21/how-to-add-custom-site-exclusions-to-noscript/

Oh. That would be great--but AFAICT ABE no longer exists in 2021 :(

It appears to have been dropped in 2017 with the 10.1.1 release,
when NoScript was rewritten as a WebExtension to work with newer
releases of Firefox.

A new ABE was supposed to be `coming soon' at the time, but...

Remark from the maintainer just 6 months ago
<https://forums.informaction.com/viewtopic.php?p=102191#p102201>:

Most users didn't use ABE, but power users (and myself) did,
and I've got some plans for that too, but unfortunately I
first need to ensure it's not all lost work with Manifest V3.
Contextual permissions are likely to come earlier,
probably as a further CUSTOM option (e.g. an "on this site only" 
checkbox).


So I'm either too late or too early to be able to use ABE.

Other options?

-- 
Connect with me on the GNU social network: 
<https://status.hackerposse.com/rozzin>
Not on the network? Ask me for an invitation to a social hub!


Re: dd cloning a Win10 HDD to SSD

2021-03-23 Thread Joshua Judson Rosen
On 3/23/21 9:07 AM, Bruce Labitt wrote:
> On Tue, Mar 23, 2021 at 8:52 AM Dan Jenkins  > wrote:
> 
> In my experience dd works. Make sure the destination disk is larger than
> the source. I've had problems sometimes when they were the exact same
> size. Any other issue was due to issues on the source disk, in which
> case ddrescue, has worked.
 >
> In my case the disks report to be the same size in lsblk.
> fdisk -l reports the hdd is 1000204886016 bytes, and the sdd is 1000204886016 
> bytes or exactly the same size.
> Guess I will try dd.  Fingers crossed...
If the destination disk does end up not being big enough,
you can probably shrink the data on the source disk a little
(and then fix up the GPT on the destination disk, if you're using GPT--
  because GPT wants to keep a `backup table' at the _physical tail end_
  of the disk and some implementations of GPT will refuse to read
  the partition table if the disk doesn't pass that sniff-test).
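(And for double-checking the `destination at least as big as the source' precondition before pulling the trigger on dd, something like this works--the helper below is my own sketch, not part of any standard tool:)

```python
import os

def device_size(path):
    """Byte size of a block device or regular file, found by seeking to the
    end -- os.stat() reports st_size == 0 for block devices, so a plain
    stat isn't enough here."""
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.lseek(fd, 0, os.SEEK_END)
    finally:
        os.close(fd)

# e.g. (run as root, with your actual device names):
#     assert device_size('/dev/sdb') >= device_size('/dev/sda')
```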

Just be sure that you don't have the source filesystem mounted writeably when
you're trying to copy it like that...: it's pretty important
that nothing be actively using a filesystem and causing the data/metadata
stored within it to change as you try to dd a copy of its underlying storage.

And, since I don't think anyone mentioned this: be sure to use a big enough 
blocksize
with dd, because the default 512-byte bs will be incredibly slow
(and I guess *could* in theory cause a lot of extra wear on the SSD
  due to write-amplification, though I guess the Linux block layer
  should protect against that?).

For the rest of my response, I'm going to mostly ignore the "Windows 10"
part of the question and instead provide guidance for people looking through the list 
archive who are doing the same sort of data-migration but for Linux disks
(though I suspect that the underlying rationales port to other operating 
systems)...:

There are theoretically reasons to make a fresh filesystem / LVM PV / RAID / 
whatever
structure, with its configuration tuned to the native block-size of the 
underlying
flash controller..., but in practice it never seems to matter enough to make it 
worthwhile
to even bother figuring out what any of those numbers are.

There are however real + very practical reasons to initialize storage on 
different physical disks
with different *IDs* if you want to be able to use them together at the same 
time, e.g.:
different ext filesystem UUIDs, or different PV IDs if you want to be able
to use them both with LVM. You can also change those IDs after the fact.

If I understand your use case, you really just have a partition table with
one or two filesystems + maybe swap space, and you're going to discard the data on
the old disk (and maybe even discard the old disk) once the transfer is done.
Taking both disks out of use and just dd-ing from one to the other should be
fine for that--though, when doing such a migration for Linux systems, you might 
want
to consider setting the new disk up with LVM (and then dd'ing from the 
_partitions_
on the source disk into the _Logical Volumes_ on the destination disk),
so that things like this are easier next time.

(one of the many nice things about LVM, even if you're only going to have one 
disk `right now',
  is that it makes disk-migrations like this *really easy*: you just plug the 
new
  disk in, ensure that it has been initialized as a PV, run "pvmove", and then 
pull
  the old disk out when it's done--and you can keep using the filesystem while
  the migration is in progress--and LVM makes it pretty straightforward to even
  do things like convert a single disk into a RAID setup at some future point,
  like when you're about to throw the old disk out and decide that it'd be more 
useful
  as a hot spare than as landfill).

-- 
Connect with me on the GNU social network: 

Not on the network? Ask me for an invitation to a social hub!


Re: Have suggestions for a "roll your own file server"?

2021-03-10 Thread Joshua Judson Rosen
I'm not sure about the Raspberry Pi 4, but up thru the raspi 3+ there are... 
problems, e.g.:

Beware of USB on the raspi: there are some bugs in the silicon that pretty 
severely
cripple performance when multiple `bulk' devices are used simultaneously,
sometimes to the point of making it unusable (e.g. if you want to use a better 
Wi-Fi
adapter/antenna than the one built onto the board, and connect an LTE modem so 
that
your raspi can roam onto that if Wi-Fi becomes unavailable, throughput on whichever 
of those
interfaces you're actually using can become abysmal). IIRC the issue is 
basically
that the number of USB endpoints that can be assigned interrupts by the raspi 
controller
is _incredibly small_; and it's common for high-throughput devices to have 
multiple endpoints per device--
sometimes even one USB device will have more endpoints than the raspi USB 
controller can handle.

Also, `network fileserver with USB-attached hard drives' is kind of the `peak 
unfitness'
for the raspberry pi. Specifically if you've got it attached to ethernet,
the ethernet is attached through the same slow-ish USB bus as your HDDs.

(the onboard Wi-Fi BTW is SDIO; so if you avoid using the onboard Wi-Fi, I 
guess you might also
 be able to make your µSD card faster...)

ALSO: you'll really want to use an externally-powered USB hub for USB devices
that are not totally trivial, because the raspi's µUSB power supply is already
strained... (and if you're trying to power your raspi from some random USB 
power supply,
don't. Ideally you power it through the 5V pins on the expansion header...).


Though there is a lot of neat stuff that can be done with a Raspberry Pi,
it's really easy to overestimate it.

But on the other hand: YMMV, and there are scenarios where the issues don't 
matter,
and might not even be noticeable. e.g., if you're dumping periodic backups to 
your
raspi asynchronously instead of (say) NFS mounting it and trying to use it 
interactively,
you might not even notice the weird bottlenecks because you're never looking at 
them.
And if you have enough of them as spares running simultaneously, you may not 
care
that every once in a while your filesystems get corrupted or your USB ports stop 
working
or whatever.


On 3/8/21 9:56 PM, jonhal...@comcast.net wrote:
> I will suggest something and let people rip it apart:
> 
> Get two RPis that have at least USB 2.0  Attach two large capacity disks to 
> each one in a RAID-1 configuration (also known as "mirroring") to keep it 
> simple.  If one disk fails the other will still keep working (but you should 
> replace it as soon as possible).
> 
> Put all of your data on both systems.
> 
> Take one of your systems to a friends or relatives house who you trust that 
> has relatively good WiFi.  Make sure the friend is relatively close, but is 
> not in the same flood plain or fire area you are.
> 
> Do an rsync every night to keep them in sync.
> 
> Help your friend/relative do the same thing, keeping a copy of their data in 
> your house.   If your disks are big enough you could share systems and disks.
> 
> Use encryption as you wish.
> 
> Disk failure?   Replace the disk and the data will be replicated.
> Fire, theft, earthquake?   Take the replaced system over to your 
> friends/relatives and copy the data at high speed, then take the copied 
> system back to your house and start using it again.
> 
> You would need three disks to fail at relatively the same time to lose your 
> data.   Or an asteroid crashing that wipes out all life on the planet.  
> Unlikely.
> 
> Realize that nothing is forever.
> 
> md
>> On 03/08/2021 7:33 PM Bruce Labitt  wrote:
>>  
>>  
>> For the second time in 3 months I have had a computer failure.  Oddly, it 
>> was a PS on the motherboard both times.  (Two different MB's.)  Fortunately 
>> the disks were ok.  I'm living on borrowed time.  Next time, I may not be 
>> that lucky.  
>>  
>> Need a file server system with some sort of RAID redundancy.  I want to 
>> backup 2 main computers, plus photos.  Maybe this RPI4 too, since that's 
>> what I'm running on, due to the second failure.  If this SSD goes, I'm gonna 
>> be a sad puppy.  This is for home use, so we are not talking Exabytes.  I'm 
>> thinking about 2-4TB of RAID.  Unless of course, RAID is obsolete these 
>> days.  Honestly, I find some of the levels of RAID confusing.  I want 
>> something that will survive a disk
>> failure (or two) out of the array.  Have any ideas, or can you point me to 
>> some place that discusses this somewhat intelligently?
>>  
>> Are there reasonable systems that one can put together oneself these days?  
>> Can I repurpose an older PC for this purpose?  Or an RPI4?  What are the 
>> gotchas of going this way?
>>  
>> I want to be able to set up a daily rsync or equivalent so we will lose as 
>> little as possible.  At the moment, I'm not thinking about surviving fire or 
>> disaster.  Maybe I should, but I suspect the costs balloon considerably.  

Re: Kind of puzzled about timestamps

2021-03-04 Thread Joshua Judson Rosen
On 3/4/21 7:13 PM, Bruce Labitt wrote:
> Good point.  I'll check that.  Logging machine was set to local time EST.  
> But it does have a wireless link, maybe it set itself internally to UT.  
> Thanks for the hint.

You have your code explicitly calling a function named `UTC from timestamp'.

If you want localtime and not UTC, call the function that doesn't start with 
"utc".

And if you want to assume some particular timezone other than your system's 
default,
you can pass that as an optional argument.
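For example, using the st_mtime value from the os.stat_result quoted earlier in the thread:

```python
from datetime import datetime, timezone

ts = 1614322176  # st_mtime from the os.stat_result above

print(datetime.utcfromtimestamp(ts))   # 2021-02-26 06:49:36 -- naive, but in UTC
print(datetime.fromtimestamp(ts))      # same instant in the machine's local zone (what ls shows)
print(datetime.fromtimestamp(ts, tz=timezone.utc))  # 2021-02-26 06:49:36+00:00 -- timezone-aware
```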

BTW, FYI "UT" is *not* the same thing as "UTC". Timezones are confusing enough,
it's worth spending the extra character to avoid creating even more confusion
(or just call it "Z" and save yourself even more characters).

And as a general word of advice from someone who's been burnt way too many 
times:
if you're going to put timestamps in your filenames, either just use UTC
or explicitly indicate which timezone the timestamps are assuming.

"the local non-UTC timezone" *changes*. Frequently. Like, twice every year if 
you're lucky--
and more frequently than that if you're unlucky. And if you are, for example, 
generating those
files/filenames between 1:00 AM and 2:00 AM when you go from EDT to EST in 
November
(and that "1:00-2:00 localtime" interval *repeats*)..., you'll be sorry.
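(For what it's worth, here's the kind of thing I mean--the "backup-" filename is just a made-up example:)

```python
from datetime import datetime, timezone

# Sortable, unambiguous, and immune to DST transitions;
# the trailing 'Z' marks the stamp as UTC:
stamp = datetime.now(timezone.utc).strftime('%Y%m%dT%H%M%SZ')
print(f'backup-{stamp}.tar.gz')  # e.g. backup-20210304T235726Z.tar.gz
```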

-- 
Connect with me on the GNU social network! 

Not on the network? Ask me for more info!


> On Thu, Mar 4, 2021, 7:05 PM Dana Nowell  > wrote:
> 
> If I'm reading it correctly, it's a 5 hr difference?  Local vs gmt?
> 
> 
> On Thu, Mar 4, 2021, 6:43 PM Bruce Labitt  > wrote:
> 
> This is an odd question.  It involves both python and linux.
> 
> Have a bunch of files in a directory that I'd like like to sort by 
> similar names and in time order.  This isn't particularly difficult in 
> python.  What is puzzling me is the modified timestamp returned by python 
> doesn't match whats reported by the file manager nautilus or even ls.  (ls 
> and nautilus are consistent)
> 
> $ lsb_release -d Ubuntu 20.04.2 LTS
> $ nautilus --version  GNOME nautilus 3.36.3
> 
> $ python3 --version  Python 3.8.5
> 
> $ ls -lght
> 
> total 4.7M
> -rw-r--r-- 1 bruce 209K Feb 26 01:49 20210226_022134_PLD.edf
> -rw-r--r-- 1 bruce  65K Feb 26 01:49 20210226_022134_SAD.edf
> -rw-r--r-- 1 bruce 2.4M Feb 26 01:49 20210226_022133_BRP.edf
> -rw-r--r-- 1 bruce 1.1K Feb 26 00:58 20210225_224134_EVE.edf
> -rw-r--r-- 1 bruce 1.9M Feb 25 21:18 20210225_224141_BRP.edf
> -rw-r--r-- 1 bruce 169K Feb 25 21:17 20210225_224142_PLD.edf
> -rw-r--r-- 1 bruce  53K Feb 25 21:17 20210225_224142_SAD.edf
> 
> Python3 script
> 
> #!/usr/bin/env python3
> import os
> from datetime import datetime
> 
> def convert_date(timestamp):
>   d = datetime.utcfromtimestamp(timestamp)
>   formatted_date = d.strftime('%d %b %Y  %H:%M:%S')
>   return formatted_date
> 
> with os.scandir('feb262021') as entries:
>   for entry in entries:
>     if entry.is_file():
>   info = entry.stat()
>   print(f'{entry.name }\t Last Modified: 
> {convert_date(info.st_mtime) }' )  # last modification
> 
> info /(after exit) contains/: os.stat_result(st_mode=33188, 
> st_ino=34477637, st_dev=66306, st_nlink=1, st_uid=1000, st_gid=1000, 
> st_size=213416, st_atime=1614379184, st_mtime=1614322176, st_ctime=1614379184)
> 
> Running the script results in:
> 
> 20210226_022133_BRP.edf     Last Modified: 26 Feb 2021  06:49:34
> 20210225_224141_BRP.edf     Last Modified: 26 Feb 2021  02:18:42
> 20210225_224142_PLD.edf     Last Modified: 26 Feb 2021  02:17:44
> 20210225_224142_SAD.edf     Last Modified: 26 Feb 2021  02:17:44
> 20210225_224134_EVE.edf     Last Modified: 26 Feb 2021  05:58:26
> 20210226_022134_SAD.edf     Last Modified: 26 Feb 2021  06:49:36
> 20210226_022134_PLD.edf     Last Modified: 26 Feb 2021  06:49:36
> 
> Actually, what is returned by my script is at least sensible, given 
> that 20210225_224141_BRP.edf started on Feb 25th and ended recording at 
> 2:17am on Feb 26th.  I know this because I can see the data on a separate 
> program.  20210226_022133_BRP.edf started on Feb 26th at around 2:21am and 
> terminated at 6:49am.  BRP files are written to continuously at a 25 Hz rate 
> all evening.  What makes no sense whatsoever is what *ls* is reporting.
> 
> Do *ls* and python3 use different definitions of "last modified"?
> 
> Guess I can keep going, but I really was surprised at the difference 
> between methods.  Default for ls is "last modified", at least as reported by 
> man.  ls's last modified just isn't correct, at least on Ubuntu 

Re: Kind of puzzled about timestamps

2021-03-04 Thread Joshua Judson Rosen
See also: "The Problem with Time and Timezones" 
<https://www.youtube.com/watch?v=-5wpm-gesOY>



On 3/4/21 10:32 PM, Bruce Labitt wrote:
> On 3/4/21 9:56 PM, Joshua Judson Rosen wrote:
>> On 3/4/21 7:13 PM, Bruce Labitt wrote:
>>> Good point.  I'll check that.  Logging machine was set to local time EST.  
>>> But it does have a wireless link, maybe it set itself internally to UT.  
>>> Thanks for the hint.
>> You have your code explicitly calling a function named `UTC from timestamp'.
>>
>> If you want localtime and not UTC, call the function that doesn't start with 
>> "utc".
>>
>> And if you want to assume some particular timezone other than your system's 
>> default,
>> you can pass that as an optional argument.
>>
>> BTW, FYI "UT" is *not* the same thing as "UTC". Timezones are confusing 
>> enough,
>> it's worth spending the extra character to avoid creating even more confusion
>> (or just call it "Z" and save yourself even more characters).
>>
>> And as a general word of advice from someone whose been burnt way too many 
>> times:
>> if you're going to put timestamps in your filenames, either just use UTC
>> or explicitly indicate which timezone the timestamps are assuming.
>>
>> "the local non-UTC timezone" *changes*. Frequently. Like, twice every year 
>> if you're lucky--
>> and more frequently than that if you're unlucky. And if you are, for 
>> example, generating those
>> files/filenames between 1:00 AM and 2:00 AM when you go from EDT to EST in 
>> November
>> (and that "1:00-2:00 localtime" interval *repeats*)..., you'll be sorry.
>>
> These files are written by commercial closed box machines (medical 
> equipment).  There is no choice for the users.  That being said, these 
> machines are designed to basically have the time set once.  (Drift, ntp? 
> what's that?)  If one plays with resetting the time, one can be rewarded 
> by having all your data wiped.
> 
> "UT" was me being lazy.  (Too lazy to type the extra character...)  I 
> don't have any code with explicit timezone stuff in it.  Have to agree 
> it's a good idea to keep time in UTC, to avoid 'many' of the headaches.  
> Nonetheless, it's easy to get confused about all this, especially if 
> external devices don't do time the same way.  (Not all devices handle 
> time correctly.) Then, as you say, you'll be sorry.
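A minimal Python sketch of the distinction above (reading the same epoch
timestamp as aware-UTC vs. as localtime, with an explicit tz argument; the
timestamp value here is arbitrary):

```python
from datetime import datetime, timezone

ts = 1614900000  # an arbitrary epoch timestamp, just for illustration

# Interpret the timestamp in UTC (an aware datetime):
utc = datetime.fromtimestamp(ts, tz=timezone.utc)

# Interpret the same timestamp in the system's local timezone:
local = datetime.fromtimestamp(ts).astimezone()

# Same instant on the timeline, possibly different wall-clock labels:
print(utc.isoformat())
print(local.isoformat())
```

Both values name the same instant (their `.timestamp()` results are equal);
only the wall-clock rendering differs.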


-- 
Connect with me on the GNU social network! 
<https://status.hackerposse.com/rozzin>
Not on the network? Ask me for more info!
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Kind of puzzled about timestamps

2021-03-08 Thread Joshua Judson Rosen
On 3/6/21 7:46 PM, Ben Scott wrote:
> On Thu, Mar 4, 2021 at 9:57 PM Joshua Judson Rosen wrote:
>> And as a general word of advice from someone who's been burnt way too many 
>> times:
>> if you're going to put timestamps in your filenames, either just use UTC
>> or explicitly indicate which timezone the timestamps are assuming.
> 
> Even that's not enough, because the stupid humans keep changing what
> the time zones mean.  Say you find a file that has a stored time of
> 2007 MAR 31 17:00 UTC.  If that file was written before 2005, then the
> offset to US Eastern is 5 hours.  If that file was written after 2005,
> the offset is 4 hours.  Which did the human mean when they instructed
> the computer to write the file?  No way of knowing, in the general
> case.

Well..., I _did_ say "either... or...". The general idea is just `don't assume
that the reader will just know what scale/units you're using without it being 
declared'.

But some things that I really neglected to mention were:

1) that "indicate which timezone" is itself actually multiple different approaches:
   hours offset from UTC, or the _symbolic_ timezone that automatically adjusts
   to changing politics.
2) if you want to use those stamps to actually _convey information_, then
   which one of _those_ you need depends on specifically what you're doing:
   sometimes you want to indicate an actual point on the general timeline,
   sometimes you want to indicate how something fits into the local schedule
   or relates to `solar time' (e.g.: as a _nerd_, I thought it'd be a great idea
   to set my digital cameras' clocks to UTC and just never deal with DST or
   any other timezone issues when traveling..., and then as a _photographer_
   I realized what a lousy idea that could end up being...).
3) sometimes you need to indicate _both_
4) you might even need to give your symbolic timezone, your timezone offset,
   _and_ UTC..., _and_

... BUT: even if you only do any arbitrary one of those things, at least you
won't end up mistakenly overwriting your records because multiple distinct
points in time end up generating the same filename.

(the inverse issue, of whether you end up mistakenly _failing to generate
collisions when you want to_, can also be a concern of course; but I'd rather
leave that as an exercise to the reader..., or to Ben ;p)
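For the filename case, a sketch of the two `safe' options (plain UTC, or an
explicit numeric offset); the "log-" prefix and the exact pattern are just
arbitrary examples:

```python
from datetime import datetime, timezone

now = datetime.now(timezone.utc)

# UTC with an explicit "Z"-style suffix: unambiguous, collision-safe,
# and sorts cleanly:
name_utc = now.strftime("log-%Y%m%dT%H%M%SZ.txt")

# Localtime with a numeric UTC offset: still unambiguous, but sorts
# oddly whenever the offset changes (DST, travel, politics...):
name_local = now.astimezone().strftime("log-%Y%m%dT%H%M%S%z.txt")

print(name_utc)
print(name_local)
```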

I *also* didn't even touch on how much all of this will annoy people who like 
nicely-sorting filenames... ;p

Every once in a while, I go back to try to find a solution to all of the other 
problems that also
fits in with _that_, and just fail. Basically..., whenever anyone asks me "what 
time is it?".
And I think I've been at that for 10 years now
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Kind of puzzled about timestamps

2021-03-08 Thread Joshua Judson Rosen
On 3/6/21 9:17 PM, Curt Howland wrote:
> On Saturday 06 March 2021, Ben Scott was heard to say:
>> Even that's not enough, because the stupid humans keep changing
>> what the time zones mean.
> 
> With GMT as the standard time stamp, one can at least know relative
> times of files, even if one does not know such real-world details.
> 
> I've gotten into a couple of heated arguments for/against the idea of
> using GMT for everything, and then adapting what time things happen
> to local conditions. The reactions to that idea have been
> fascinating. Much like the arguments about Daylight Saving.
> 
> I mean, how silly can one be to object to it being dark when you wake
> up, and then demanding that everyone else change the time on their
> clocks so it's light at 7am the way you want it to be?

Isn't that pretty much exactly the _opposite_ of what DST does?

Every Spring, I get delighted that the sun is finally rising early enough
to make it easy to get up at what's supposed to be `a reasonable time' in the 
morning,
and then DST comes in and makes the sun not rise for another hour.
Ha-ha! Early April Fools'! Another month of dark mornings!

And then all of the news channels are flooded with people opining about
how much `better' it would be `if we just had DST all year' because
all they care about is having daylight-fun hours at the _tail end of the day_,
because..., I don't know--they're all unemployed? Or by the end of the day
they've forgotten how hard it was to get out of the bed and be useful
that morning? And I guess by the time *spring* rolls in they've forgot
that `winter' even was ever a thing

-- 
Connect with me on the GNU social network: 

Not on the network? Ask me for an invitation to a social hub!
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Kind of puzzled about timestamps

2021-03-08 Thread Joshua Judson Rosen
On 3/8/21 7:28 AM, John Abreau wrote:
> 
> 
> On Sat, Mar 6, 2021 at 7:48 PM Ben Scott wrote:
> 
> On Thu, Mar 4, 2021 at 9:57 PM Joshua Judson Rosen wrote:
>  > And as a general word of advice from someone whose been burnt way too 
> many times:
>  > if you're going to put timestamps in your filenames, either just use 
> UTC
>  > or explicitly indicate which timezone the timestamps are assuming.
> 
> Even that's not enough, because the stupid humans keep changing what
> the time zones mean.  Say you find a file that has a stored time of
> 2007 MAR 31 17:00 UTC.  If that file was written before 2005, then the
> offset to US Eastern is 5 hours.  If that file was written after 2005,
> the offset is 4 hours.  Which did the human mean when they instructed
> the computer to write the file?  No way of knowing, in the general
> case.
> 
> 
> I'd argue that this case does not matter, because the human is making a 
> reference to an event in the future, and it is impossible in principle to 
> anticipate unexpected future changes in such definitions.
> 
> You could plan a vacation in Switzerland in 2030, but if an asteroid 
> obliterates Switzerland in 2028, your vacation plans become null and void. 
> It's not a contingency you need to plan for when making your vacation plans.
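For what it's worth: with today's tzdata (which bakes in the post-2005 US DST
rules), Ben's example converts to UTC-4; a sketch using Python's zoneinfo,
which will apply whatever rules its timezone database currently believes were
in effect on that date:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Ben's example: a stored time of 2007 MAR 31 17:00 UTC
stamp = datetime(2007, 3, 31, 17, 0, tzinfo=timezone.utc)
eastern = stamp.astimezone(ZoneInfo("America/New_York"))
print(eastern.isoformat())  # 2007-03-31T13:00:00-04:00
```

Under the pre-2005 rules (DST starting the first Sunday in April), the same
instant would have been labeled 12:00 EST; the tz database can only tell you
what the rules *became*, not which rules the human had in mind.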

29 Date input formats
*********************

First, a quote:

  Our units of temporal measurement, from seconds on up to months,
  are so complicated, asymmetrical and disjunctive so as to make
  coherent mental reckoning in time all but impossible.  Indeed, had
  some tyrannical god contrived to enslave our minds to time, to make
  it all but impossible for us to escape subjection to sodden
  routines and unpleasant surprises, he could hardly have done better
  than handing down our present system.  It is like a set of
  trapezoidal building blocks, with no vertical or horizontal
  surfaces, like a language in which the simplest thought demands
  ornate constructions, useless particles and lengthy
  circumlocutions.  Unlike the more successful patterns of language
  and science, which enable us to face experience boldly or at least
  level-headedly, our system of temporal calculation silently and
  persistently encourages our terror of time.

  ... It is as though architects had to measure length in feet, width
  in meters and height in ells; as though basic instruction manuals
  demanded a knowledge of five different languages.  It is no wonder
  then that we often look into our own immediate past or future, last
  Tuesday or a week from Sunday, with feelings of helpless confusion.
  ...

  —Robert Grudin, ‘Time and the Art of Living’.

(From the GNU coreutils manual, Info node `(coreutils)Date input formats'.)


-- 
Connect with me on the GNU social network: 
<https://status.hackerposse.com/rozzin>
Not on the network? Ask me for an invitation to a social hub!


Re: Kind of puzzled about timestamps

2021-03-08 Thread Joshua Judson Rosen
On 3/8/21 2:16 PM, Jerry Feldman wrote:
> I love this discussion. I've been involved with computer time since the early 
> 1970s. While at Burger King I wrote a standardized set of time utilities in 
> COBOL. Later at Digital I was responsible for the utmp libraries, and the 
> standard test failed. The issue was that the standard test used a future time 
> beyond 2035. Back then time_t was a signed 32-bit integer

I bought a house with a 30-year mortgage in late 2008. My first house, actually.

All of the things that people talk about being afraid of with being a new 
home-buyer...,
well..., none of them compared to the sense of dread that I felt when I looked 
at
the end-date on the mortgage and asked myself:

What's the likelihood that this date is going to pass through a computer
where time_t is not wider than 32 bits before then?

So I pay a little extra each month.
Hopefully I can have the account closed and expunged before that point ;p
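(The dread is easy to quantify: a date 30 years out from late 2008 is past the
signed-32-bit time_t limit of 2038-01-19. A sketch, using an assumed payoff
date rather than any actual account details:)

```python
import struct
from datetime import datetime, timezone

# An assumed payoff date ~30 years after a late-2008 closing:
payoff = int(datetime(2038, 11, 1, tzinfo=timezone.utc).timestamp())

try:
    struct.pack("<i", payoff)  # does it fit in a signed 32-bit time_t?
    print("fits")
except struct.error:
    print("overflows a 32-bit time_t")
```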

-- 
Connect with me on the GNU social network: 

Not on the network? Ask me for an invitation to a social hub!


Re: Kind of puzzled about timestamps

2021-03-05 Thread Joshua Judson Rosen
On 3/5/21 2:15 PM, Bruce Labitt wrote:
>
> On 3/4/21 10:51 PM, Joshua Judson Rosen wrote:
> >
> > See also: "The Problem with Time and Timezones" 
> > <https://www.youtube.com/watch?v=-5wpm-gesOY>
> >
> > 
>
> That was somewhat comical.  Yeah, been trying to keep everything with 
> respect to UTC.  It can be a little difficult at times, as it's easy to 
> goof up and fall in to quite a few time trap holes.

See also:

http://falsehoodsabouttime.com/


http://www.creativedeletion.com/2015/01/28/falsehoods-programmers-date-time-zones.html


> One of the more difficult things has been indexing into the time array.  
> I've been using numpy's timedate64 and timedelta64 but occasionally 
> still get tripped up. Handling time is complicated.  Fortunately, all 
> that I care about for this project is relative time.  Start time, end 
> time and time is "linear" in between.  According to the the youtuber, 
> even that's not guaranteed if one spans the new year and we need a leap 
> second!

Indeed! Though I fear that the reality is actually worse than the impression 
you got

A lot of the `if this happens and also' conditions are actually `if _either_ 
this _or_ that'.

e.g.: most days have 86400 seconds, but...:

* some have 86401 (+ leap seconds)
* some have 86399 (- leap seconds--significantly rarer: hasn't happened 
_yet_, but...)
* some have 82800 (i.e. "some days only have 23 hours", normal 
spring-forward DST shift)
* some have 90000 (i.e. "some days have 25 hours", normal fall-backward 
DST shift)
* conceivably some may even have 90001 or 82799

(Really! RE: negative leap seconds, `there is a first time for everything':
 <https://www.livescience.com/earth-spinning-faster-negative-leap-second.html>)

And yeah..., even if you're using unix time (seconds since the epoch)..., unix 
time
specifically does _not_ count leap seconds..., which is both wonderful and 
terrible

Quoting the time(2) man page I have here:

This value is not the same as the actual number of seconds between the 
time and the Epoch,
because of leap seconds and because system clocks are not required to 
be synchronized
to a standard reference.  The intention is that the interpretation of 
seconds since the Epoch
values be consistent; see POSIX.1-2008 Rationale A.4.15 for further 
rationale.

Wikipedia has some text on this, as well 
<https://en.wikipedia.org/wiki/Unix_time#Leap_seconds>:

When a leap second occurs, the UTC day is not exactly 86400 seconds 
long and the Unix time number
(which always increases by exactly 86400 each day) experiences a 
discontinuity.
Leap seconds may be positive or negative. No negative leap second has 
ever been declared,
but if one were to be, then at the end of a day with a negative leap 
second,
the Unix time number would jump up by 1 to the start of the next day.
During a positive leap second at the end of a day, which occurs about 
every year and a half on average,
the Unix time number increases continuously into the next day during 
the leap second and then at the end
of the leap second jumps back by 1 (returning to the start of the next 
day).
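The `wonderful and terrible' part is easy to demonstrate: the end of 2016 had
a positive leap second (2016-12-31T23:59:60Z), so that UTC day was 86401 SI
seconds long, yet Unix time counts it as exactly 86400:

```python
from datetime import datetime, timezone

a = datetime(2016, 12, 31, tzinfo=timezone.utc).timestamp()
b = datetime(2017, 1, 1, tzinfo=timezone.utc).timestamp()
print(b - a)  # 86400.0 -- the leap second is invisible to Unix time
```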


"all I have to care about is relative time" _should_ make your life easier..., 
in theory...,
_assuming_ that the timestamps that you get and need to diff _really are_ on a 
linear timescale.

Good luck. I actually would love to hear about whatever linear timescale you 
end up settling on.

This is why astronomers are using `Julian years'


Oh! ALSO: I think you may have mentioned previously that you're also reading 
these files
from a FAT-formatted SD card or something..., which is, itself, multiple 
additional sources of confusion:

* FAT can only store timestamps down to *2-second* resolution, which 
means that
  all file-timestamps get rounded to the nearest *even second*.

* FAT doesn't store timestamps on an _absolute_ timescale, it only 
stores them
  (in `broken-down time') _relative to a given instantaneous timezone_.

* FAT doesn't actually give the timezone.

So..., when, for example, you load a FAT-formatted SD card into a Linux 
computer,
and the vfat driver in Linux needs to convert those `broken-down timestamps 
relative
to an unspecified instantaneous timezone' into `absolute seconds since the 
epoch',
IIRC it basically assumes that _Linux's current instantaneous system timezone 
offset_
is appropriate for interpreting the broken-down time stamps in the FAT 
filesystem.
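The 2-second-resolution effect can be sketched like this (a toy model, not the
actual vfat driver: FAT's on-disk format stores the seconds field divided by
2, so this sketch truncates; whether a given driver rounds or truncates when
writing may vary):

```python
def fat_mtime_seconds(sec: int) -> int:
    """Model FAT's 2-second mtime granularity: the on-disk field holds
    (seconds // 2), so odd seconds can't be represented."""
    return (sec // 2) * 2

for sec in (10, 11, 12, 13):
    print(sec, "->", fat_mtime_seconds(sec))
```

So two files written one second apart can come back with identical timestamps,
and a round-trip through a FAT card can silently shift an mtime by a second.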

If you are on the opposite side of a DST transition from when the files were 
stamped
(even if it's only 1 second of difference... or, uh..., would that be 2 
seconds?);
_or_ if you're actually in a different timezone (e.g. because you're

Anyone use System 76's Pop!_OS? Opinions?

2021-08-10 Thread Joshua Judson Rosen
Are there any current or past users of System 76's Pop!_OS here?

Just looking for some opinions rooted in actual use about what it's like
and what sort of quality-of-life improvements exist for people using it vs.
just installing upstream Ubuntu and managing that config yourself.

If you tried it and kept it, why--what did you like about it?
If you tried it and then ditched it for something else, why?

-- 
Connect with me on the GNU social network: 

Not on the network? Ask me for an invitation to a social hub!


New NH "SOFTWARE Act" legislation, RE: FOSS

2022-01-09 Thread Joshua Judson Rosen
Hopefully everyone here has seen this by now, but maybe not since I didn't see 
any messages here about it yet:


https://www.fsf.org/blogs/community/new-hampshire-residents-make-your-voice-heard-on-january-11th


https://gencourt.state.nh.us/bill_status/legacy/bs2016/billText.aspx?sy=2022=1363=html


Summary:


> HB 1273  - AS INTRODUCED
> 
> 2022 SESSION
> 
> 22-2270
> 05/04
> 
> HOUSE BILL 1273
> 
> AN ACT relative to the use of free and open source software.
> 
> SPONSORS: Rep. Gallager, Merr. 15; Rep. Marsh, Carr. 8
> 
> COMMITTEE: Executive Departments and Administration
> 
> -
> 
> ANALYSIS
> 
> This bill:
> 
> I.  Prohibits certain non-compete clauses and non-disclosure agreements 
> regarding free software projects and the sharing of open source software.
> 
> II.  Prohibits, with limited exception, state agencies from using proprietary 
> software in interactions with the public.
> 
> III.  Recognizes the value of data portability and directs the department of 
> information technology to adopt a policy protecting data portability.
> 
> IV.  Prohibits state and local law enforcement from participating in the 
> enforcement of copyright claims against free and open source software 
> projects.
> 
> V.  Establishes a commission to study the use of free software by state 
> agencies.
> 
> VI.  Establishes a software purchasing policy that permits the purchase of 
> proprietary software and hardware only when free software alternatives are 
> not available.
> 
> VII.  Allows the defendant to examine the source code of proprietary software 
> used to generate evidence against the defendant in a criminal proceeding.


Re: New NH "SOFTWARE Act" legislation, RE: FOSS

2022-01-11 Thread Joshua Judson Rosen
FYI it looks like the NH legislative "remote sign-in" is open until late tonight,
so there's still time to register your opinion; link from the twitter/nitter thread:

http://www.gencourt.state.nh.us/house/committees/remotetestimony/default.aspx

    Update: while remote testimony might not be back this year,
    remote sign-in *is* back this year!

    http://www.gencourt.state.nh.us/house/committees/remotetestimony/default.aspx

    Note that you have to choose the date for the hearing from the calendar first
    before any of the other menu fields will become interactive.


There are some parts of the bill as-written that I think really need to be 
supported;
there are also some "... AND PONIES!" items that I guess are there because, 
well...,
that seems to be how this works.


On 1/9/22 5:22 PM, Joshua Judson Rosen wrote:
> Hopefully everyone here has seen this by now, but maybe not since I didn't 
> see any messages here about it yet:
> 
>   
> https://www.fsf.org/blogs/community/new-hampshire-residents-make-your-voice-heard-on-january-11th
> 
>   
> https://gencourt.state.nh.us/bill_status/legacy/bs2016/billText.aspx?sy=2022=1363=html
> 
> 
> Summary:
> 
> 
>> HB 1273  - AS INTRODUCED
>>
>> 2022 SESSION
>>
>> 22-2270
>> 05/04
>>
>> HOUSE BILL 1273
>>
>> AN ACT relative to the use of free and open source software.
>>
>> SPONSORS: Rep. Gallager, Merr. 15; Rep. Marsh, Carr. 8
>>
>> COMMITTEE: Executive Departments and Administration
>>
>> -
>>
>> ANALYSIS
>>
>> This bill:
>>
>> I.  Prohibits certain non-compete clauses and non-disclosure agreements 
>> regarding free software projects and the sharing of open source software.
>>
>> II.  Prohibits, with limited exception, state agencies from using 
>> proprietary software in interactions with the public.
>>
>> III.  Recognizes the value of data portability and directs the department of 
>> information technology to adopt a policy protecting data portability.
>>
>> IV.  Prohibits state and local law enforcement from participating in the 
>> enforcement of copyright claims against free and open source software 
>> projects.
>>
>> V.  Establishes a commission to study the use of free software by state 
>> agencies.
>>
>> VI.  Establishes a software purchasing policy that permits the purchase of 
>> proprietary software and hardware only when free software alternatives are 
>> not available.
>>
>> VII.  Allows the defendant to examine the source code of proprietary 
>> software used to generate evidence against the defendant in a criminal 
>> proceeding.

-- 
Connect with me on the GNU social network: 
<https://status.hackerposse.com/rozzin>
Not on the network? Ask me for an invitation to a social hub!


Re: Email & Spam

2023-03-12 Thread Joshua Judson Rosen
 > On 3/10/23 12:43, Bruce Labitt wrote:
 >> In email headers, are there any fields which are not spoof-able?  Or is 
 >> email simply a morass that is totally unsolvable and broken?  Simply 
 >> impossible to filter spam?  Now I am getting spam that is passing all the 
 >> dmarc, spf, and dkim checks.  Volume is relatively low at the
 >> moment, 6 in 12 hours, but I am sure the bad guys are working on increasing 
 >> the volume.
 >>
 >> In particular, is
 >>
 >> X-Origin-Country reliable?  Or is this data field unsuitable for filtering 
 >> as well?
 >>
 >> Are there any mail client pre-filtering packages that can be added?  Or is 
 >> this a game best left to?

On 3/10/23 17:02, Bruce Dawson wrote:
> Essentially, no - all email headers are spoofable except the ones put on by 
> your server. Your server should insert a Received-by header that indicates 
> who sent that message to you.

Though in the case of the headers providing DKIM signatures, those are 
"unspoofable" to the extent that they're used, since that's a cryptographic 
signature that you can verify.

There are caveats there, basically that the DKIM signatures are only for select 
_parts_ of the message...,
but _generally_ if you have a valid DKIM signature then you at least know where 
the message
actually came from.

And if you've got "spam that is passing all the dmarc, spf, and dkim checks", 
then
you know even more assuredly who's sending you spam.

So, at least in theory, that gets you past the `detecting spoofs' point,
so now you just have to worry about the spam coming in from new
domains that you haven't blocked yet
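A sketch of pulling the signing domain (the d= tag) out of a DKIM-Signature
header with nothing but the stdlib, e.g. as input to that kind of
domain-blocking filter. The message, selector, and signature here are made up,
and this does no cryptographic verification at all; for actual verification
you'd want a real DKIM library:

```python
import email

# A hypothetical raw message with a (truncated) DKIM-Signature header:
raw = (b"DKIM-Signature: v=1; a=rsa-sha256; d=example.com; s=sel1; b=...\r\n"
       b"From: someone@example.com\r\n"
       b"Subject: hello\r\n"
       b"\r\n"
       b"body\r\n")

msg = email.message_from_bytes(raw)

# DKIM-Signature is a list of tag=value pairs separated by semicolons:
tags = dict(
    t.strip().split("=", 1)
    for t in msg["DKIM-Signature"].split(";")
    if "=" in t
)
print(tags["d"])  # the signing domain: example.com
```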

-- 
Connect with me on the GNU social network: 

Not on the network? Ask me for an invitation to a social hub!


Let's try this again: 16 February, support software freedom bill HB-617-FN in Concord

2023-02-15 Thread Joshua Judson Rosen
There will be a hearing mid-day tomorrow (Thursday) in Concord
regarding House Bill 617-FN, "AN ACT prohibiting, with limited exceptions,
state agencies from requiring use of proprietary software in interactions with 
the public."

Unfortunately I have no transportation tomorrow, but maybe you'd be able to go 
and show your support?

More details at 


If you were on this list around this time last year, you may remember that 
there was a similar bill put forward then,
which failed to become law.
This version has apparently been trimmed to hopefully give it a better chance 
at doing so.

-- 
Connect with me on the GNU social network: 

Not on the network? Ask me for an invitation to a social hub!

