Re: Frames and Motorola's New Router.

2023-03-12 Thread Bruce Labitt
On 3/12/23 9:29 PM, Benjamin Scott wrote:
> At 2023 Mar 12 Sun 10:42 PM +, Lori Nagel  wrote:
>> Why don't linux machines let me use the Wi-Fi when the router is set to 
>> frames.
> I'm not sure what you mean here.  A "frame" is the unit of network data 
> transmission at the data link level (Ethernet or Wifi).  Literally all 
> routers use frames.  Everything you send or receive gets encapsulated into 
> frames, whether you're using Linux or Windows.
>
> What model router do you have?
>
> -- Ben
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
Perhaps the router was set to use jumbo frames?
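
If jumbo frames are enabled, an MTU mismatch between the router and the 
Linux Wi-Fi interface could explain odd behavior.  A rough way to check and 
reset the MTU from a shell (the interface name wlan0 is an assumption; use 
whatever name ip link reports):

$ ip link show wlan0                      # look at the reported mtu value
$ sudo ip link set dev wlan0 mtu 1500     # temporarily reset to the standard 1500-byte MTU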

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Email & Spam

2023-03-10 Thread Bruce Labitt

Thought that might be the case.

Is dnschecker.org at least slightly accurate?  My last spam sender 
address seemed to originate from or around the Kremlin Palace Complex.


Output is in my email of 4:34pm.

Honestly thinking about at least asking one of our Senators whether this is 
something I should be concerned about.  I had a high-level clearance at 
one time.


Or maybe it's ordinary spam that just happens to be spewing from 
55.7522,37.6155, IP address 194.87.244.234, registered to JSC Mediasoft 
Ekspert. https://dnschecker.org/ip-location.php?ip=194.87.244.234  It does 
spook me a bit, to be honest.
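
A quick way to cross-check what dnschecker reports, straight from a shell 
(the grep pattern is just one reasonable choice):

$ whois 194.87.244.234 | grep -iE 'abuse|country|netname|org-name'

That pulls the country, network name, and abuse contact out of the same 
RIPE record that dnschecker is summarizing.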




On 3/10/23 5:02 PM, Bruce Dawson wrote:


Essentially, no - all email headers are spoofable except the ones put 
on by your server. Your server should insert a Received-by header that 
indicates who sent that message to you.


You can "generally" trust headers put on by the likes of Google 
(because your server can get the IP address of the server that 
connected to you) and Google IP addresses are moderately static.  
However, this is not always the case.
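
A minimal way to eyeball that chain yourself, assuming the message has been 
saved out of the mail client as a plain file (spam.eml is a hypothetical 
name; in Thunderbird, View -> Message Source or Save As will produce one):

$ grep -i -A1 '^Received:' spam.eml

Received headers read newest-first, so only the topmost one (added by your 
own provider's server) is trustworthy; everything below it can be forged.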


--Bruce

On 3/10/23 12:43, Bruce Labitt wrote:
In email headers, are there any fields which are not spoofable?  Or 
is email simply a morass that is totally unsolvable and broken?  
Simply impossible to filter spam?  Now I am getting spam that is 
passing all the DMARC, SPF, and DKIM checks.  Volume is relatively 
low at the moment, 6 in 12 hours, but I am sure the bad guys are 
working on increasing the volume.


In particular, is X-Origin-Country reliable?  Or is this data field 
unsuitable for filtering as well?


Are there any mail client pre-filtering packages that can be added?  
Or is this a game best left to?





On 3/9/23 2:44 PM, Bruce Labitt wrote:
Spoke too soon.  I am far from understanding this all, but why would 
my ISP send me mail that failed the following tests: 
DMARC, SPF, or DKIM?  The latest spam I received failed _all_ three 
tests.


It appears not everyone is consistent with using this stuff; I found 
an email from Southwest Airlines that apparently doesn't use DMARC, 
but at least it passed SPF and DKIM.  What a mess.


I tried to send this email and it was blocked when I included the 
dmarc text.


On 3/9/23 11:49 AM, Bruce Labitt wrote:

Crossing fingers, my spam storm has paused.  No spam since 3:27 EST
yesterday.

Cleaned out tons of old spam off my phone, which was tedious.  Found
some misclassified spam that were actually legitimate emails, like ones from
attorneys and banks, that I never received.  Loads of stock tips, scams,
assorted pharmaceuticals, and of course, invitations to honeypots of the
female persuasion.  Some were quite amusing.

Need to get back to the email spam storm on my wife's account now.
Not sure if one of the groups she belongs to was compromised and her email
account sold to spammers or not.  Seems like it.

My kids, both on different ISPs, had no increase in spam in the past
week.  I asked them last night, trying to figure out if this was a local
thing or more widespread.  Guess it was local, or their ISPs were
more on the ball.



On 3/8/23 5:59 PM, Bruce Labitt wrote:

I think that something has been going on for a bit now.

However, I did go through some ancient spam emails (don't ask me why
they were still around, I plumb forgot they were accumulating) and found
quite a few of them posing as family members and people I knew, but were
not legitimate.  Examining the headers showed they were trying to fool
me.  All of them wanted me to click on some link - hoping to do some
nefarious thing or another to me.  Many were from RU.

Oh, I have been using the filters!  I have filtered every domain ending
in .xyz, .store, and a few others.  It's not as easy to filter against
yourself...

Is it better to have these messages go to junk, or direct to trash?
Using Thunderbird if that matters.


On 3/8/23 5:22 PM, Ronald Smith wrote:

Hi all,

There is a coordinated attack happening right now on many forms of 
communication; email, social media, everything -- someone doesn't want people 
communicating right now. The increase in spam is just part of it.

Emails that I've sent to gmail have been bounced, maybe because gmail has 
tightened their filters, maybe it's a false flag. I'm not sure and I'm not 
going to waste my time tracking it down right now. If someone wants to reach me, 
they can just call me on the phone.

To the guy who said you should block all the IP's in the header -- that's 
ABSOLUTELY WRONG! Whoever has launched this attack wants folks to do that -- 
they want folks to block stuff to further limit communication. Don't do that!

You can only trust the top "Received" notice in your email header. SMTP servers are 
supposed to tack on their info to the top of the message and send it along to the next server, but 
spammers or provocateurs will often falsify the tracking info below the most recent 
"Received" line, so you should just ignore that.

Re: Email & Spam

2023-03-10 Thread Bruce Labitt
Found dnschecker.org.  As suspected, most of these stupid spams are 
coming from Moscow.  Today's stupid pillow spam ad, analyzed:


Email Source Ip Info
Source IP Address     194.87.244.234
Source IP Hostname     194.87.244.234
Country     Russia
State     Moscow
City     Moscow (Vostochnyy administrativnyy okrug)
Zip Code     null
Latitude     55.8106
Longitude     37.8166
ISP     JSC "RetnNet"
Organization     JSC "RetnNet"
Threat Level     low

% This is the RIPE Database query service.
% The objects are in RPSL format.
%
% The RIPE Database is subject to Terms and Conditions.
% See http://www.ripe.net/db/support/db-terms-conditions.pdf

% Note: this output has been filtered.
%   To receive output for a database update, use the "-B" flag.

% Information related to '194.87.244.0 - 194.87.244.255'

% Abuse contact for '194.87.244.0 - 194.87.244.255' is 'ab...@mtw.ru'

inetnum:    194.87.244.0 - 194.87.244.255
netname:    RUCLOUD
descr:  Startup maintainer
country:    RU
org:    ORG-JME1-RIPE
admin-c:    AK14258-RIPE
tech-c: AK14258-RIPE
mnt-routes: MNT-RETN
mnt-domains:    MNT-RETN
status: ASSIGNED PA
mnt-by: interlir-mnt
created:    2022-11-15T17:11:09Z
last-modified:  2022-12-20T16:11:23Z
source: RIPE

organisation:   ORG-JME1-RIPE
org-name:   JSC Mediasoft ekspert
country:    RU
org-type:   LIR
address:    2a Schelkovskoe sh.
address:    105122
address:    Moscow
address:    RUSSIAN FEDERATION
phone:  +74957295734
fax-no: +74957295734
admin-c:    FVV36-RIPE
admin-c:    PSK26-RIPE
admin-c:    EE761-RIPE
abuse-c:    MN3617-RIPE
mnt-ref:    RIPE-NCC-HM-MNT
mnt-ref:    MTW-MNT
mnt-ref:    AS2118-MNT
mnt-by: RIPE-NCC-HM-MNT
mnt-by: MTW-MNT
created:    2008-02-11T11:21:07Z
last-modified:  2020-12-16T13:05:31Z
source: RIPE # Filtered

person: Alexey Khoroshilov
address:    117403, Moscow, MKAD, 32nd km, 7A
phone:  +7 (495) 134-01-12
nic-hdl:    AK14258-RIPE
mnt-by: MT-TECHNOLOGY-NET
created:    2015-06-24T12:10:58Z
last-modified:  2015-06-24T12:10:58Z
source: RIPE # Filtered

% Information related to '194.87.244.0/24AS9002'

route:  194.87.244.0/24
origin: AS9002
mnt-by: interlir-mnt
created:    2022-11-15T17:11:52Z
last-modified:  2022-11-15T17:11:52Z
source: RIPE

% This query was served by the RIPE Database Query Service version 1.106 
(SHETLAND)


Tracing the coordinates (probably not accurate) puts the location right 
next to the "State Kremlin Palace".

55.752199,37.6155

Yeah, that sounds benign...  So is this normal, or should I contact the FBI?
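
One more data point before escalating: the public DNS blocklists will tell 
you whether the address is already a known spam source.  A sketch of a 
Spamhaus lookup (the IP's octets are reversed for the query; note that 
Spamhaus may refuse queries coming through some large public resolvers):

$ dig +short 234.244.87.194.zen.spamhaus.org

Any answer in 127.0.0.x means the address is listed, i.e. ordinary, 
well-known spam infrastructure rather than something worth a call to the FBI.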







On 3/10/23 12:43 PM, Bruce Labitt wrote:
In email headers, are there any fields which are not spoofable?  Or 
is email simply a morass that is totally unsolvable and broken?  
Simply impossible to filter spam?  Now I am getting spam that is 
passing all the DMARC, SPF, and DKIM checks.  Volume is relatively low 
at the moment, 6 in 12 hours, but I am sure the bad guys are working 
on increasing the volume.


In particular, is X-Origin-Country reliable?  Or is this data field 
unsuitable for filtering as well?


Are there any mail client pre-filtering packages that can be added?  
Or is this a game best left to?





On 3/9/23 2:44 PM, Bruce Labitt wrote:
Spoke too soon.  I am far from understanding this all, but why would 
my ISP send me mail that failed the following tests: 
DMARC, SPF, or DKIM?  The latest spam I received failed _all_ three 
tests.


It appears not everyone is consistent with using this stuff; I found 
an email from Southwest Airlines that apparently doesn't use DMARC, 
but at least it passed SPF and DKIM.  What a mess.


I tried to send this email and it was blocked when I included the 
dmarc text.


On 3/9/23 11:49 AM, Bruce Labitt wrote:

Crossing fingers, my spam storm has paused.  No spam since 3:27 EST
yesterday.

Cleaned out tons of old spam off my phone, which was tedious.  Found
some misclassified spam that were actually legitimate emails, like ones from
attorneys and banks, that I never received.  Loads of stock tips, scams,
assorted pharmaceuticals, and of course, invitations to honeypots of the
female persuasion.  Some were quite amusing.

Need to get back to the email spam storm on my wife's account now.
Not sure if one of the groups she belongs to was compromised and her email
account sold to spammers or not.  Seems like it.

My kids, both on different ISPs, had no increase in spam in the past
week.  I asked them last night, trying to figure out if this was a local
thing or more widespread.  Guess it was local, or their ISPs were
more on the ball.



On 3/8/23 5:59 PM, Bruce Labitt wrote:

I think that something has been going on for a bit now.

However, I did go through

Re: Email & Spam

2023-03-10 Thread Bruce Labitt
In email headers, are there any fields which are not spoofable?  Or is 
email simply a morass that is totally unsolvable and broken?  Simply 
impossible to filter spam?  Now I am getting spam that is passing all the 
DMARC, SPF, and DKIM checks.  Volume is relatively low at the moment, 6 
in 12 hours, but I am sure the bad guys are working on increasing the 
volume.


In particular, is X-Origin-Country reliable?  Or is this data field 
unsuitable for filtering as well?


Are there any mail client pre-filtering packages that can be added?  Or 
is this a game best left to?





On 3/9/23 2:44 PM, Bruce Labitt wrote:
Spoke too soon.  I am far from understanding this all, but why would 
my ISP send me mail that failed the following tests:

DMARC, SPF, or DKIM?  The latest spam I received failed _all_ three tests.

It appears not everyone is consistent with using this stuff; I found 
an email from Southwest Airlines that apparently doesn't use DMARC, 
but at least it passed SPF and DKIM.  What a mess.


I tried to send this email and it was blocked when I included the 
dmarc text.


On 3/9/23 11:49 AM, Bruce Labitt wrote:

Crossing fingers, my spam storm has paused.  No spam since 3:27 EST
yesterday.

Cleaned out tons of old spam off my phone, which was tedious.  Found
some misclassified spam that were actually legitimate emails, like ones from
attorneys and banks, that I never received.  Loads of stock tips, scams,
assorted pharmaceuticals, and of course, invitations to honeypots of the
female persuasion.  Some were quite amusing.

Need to get back to the email spam storm on my wife's account now.
Not sure if one of the groups she belongs to was compromised and her email
account sold to spammers or not.  Seems like it.

My kids, both on different ISPs, had no increase in spam in the past
week.  I asked them last night, trying to figure out if this was a local
thing or more widespread.  Guess it was local, or their ISPs were
more on the ball.



On 3/8/23 5:59 PM, Bruce Labitt wrote:

I think that something has been going on for a bit now.

However, I did go through some ancient spam emails (don't ask me why
they were still around, I plumb forgot they were accumulating) and found
quite a few of them posing as family members and people I knew, but were
not legitimate.  Examining the headers showed they were trying to fool
me.  All of them wanted me to click on some link - hoping to do some
nefarious thing or another to me.  Many were from RU.

Oh, I have been using the filters!  I have filtered every domain ending
in .xyz, .store, and a few others.  It's not as easy to filter against
yourself...

Is it better to have these messages go to junk, or direct to trash?
Using Thunderbird if that matters.


On 3/8/23 5:22 PM, Ronald Smith wrote:

Hi all,

There is a coordinated attack happening right now on many forms of 
communication; email, social media, everything -- someone doesn't want people 
communicating right now. The increase in spam is just part of it.

Emails that I've sent to gmail have been bounced, maybe because gmail has 
tightened their filters, maybe it's a false flag. I'm not sure and I'm not 
going to waste my time tracking it down right now. If someone wants to reach me, 
they can just call me on the phone.

To the guy who said you should block all the IP's in the header -- that's 
ABSOLUTELY WRONG! Whoever has launched this attack wants folks to do that -- 
they want folks to block stuff to further limit communication. Don't do that!

You can only trust the top "Received" notice in your email header. SMTP servers are 
supposed to tack on their info to the top of the message and send it along to the next server, but 
spammers or provocateurs will often falsify the tracking info below the most recent 
"Received" line, so you should just ignore that.

Just put up with the spam for now; don't over-react. Your email providers will 
know how to handle this if they have enough experience. Use the filters in your 
client if you need to.

Have fun...

Ronald Smith
r...@mrt4.com
603-360-1000

- - - -

On Wed, 8 Mar 2023 13:31:56 -0500
Bruce Labitt  wrote:


Seems to be an uptick in spam received lately.  Doesn't seem that my ISP
is on top of it.  In the past 48 hours have received at least three
dozen spams from similar parties.  Many seem to be coming from *.store
domains.  I haven't knowingly ever visited one of these domains.

I don't think I want to run my own email server - mostly because 1) I
really don't know how to set one up, and 2) it sounds like a bit of work
to maintain.  Of course, I could be wrong, which is why I am asking.

I did a whois, and due to privacy cr*p, there's no longer a way to get
to the registrants.  I can see why this might be, but it does make it
harder to report people.  I did report a couple of domains as spammers
to godaddy, since I *think* they were the registrar.  Thi

Re: Email & Spam

2023-03-09 Thread Bruce Labitt
Spoke too soon.  I am far from understanding this all, but why would my 
ISP send me mail that failed the following tests:

DMARC, SPF, or DKIM?  The latest spam I received failed _all_ three tests.

It appears not everyone is consistent with using this stuff; I found an 
email from Southwest Airlines that apparently doesn't use DMARC, but at 
least it passed SPF and DKIM.  What a mess.
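
For what it's worth, you can see what a sending domain actually publishes 
with a couple of DNS queries (substitute the domain from the spam's From: 
header for example.com):

$ dig +short TXT example.com | grep -i spf     # SPF record, if any
$ dig +short TXT _dmarc.example.com            # DMARC policy, if any

If nothing comes back, the domain never set the records up, and a "fail" or 
"none" result from your ISP's checks is about all you can expect.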


I tried to send this email and it was blocked when I included the dmarc 
text.


On 3/9/23 11:49 AM, Bruce Labitt wrote:

Crossing fingers, my spam storm has paused.  No spam since 3:27 EST
yesterday.

Cleaned out tons of old spam off my phone, which was tedious.  Found
some misclassified spam that were actually legitimate emails, like ones from
attorneys and banks, that I never received.  Loads of stock tips, scams,
assorted pharmaceuticals, and of course, invitations to honeypots of the
female persuasion.  Some were quite amusing.

Need to get back to the email spam storm on my wife's account now.
Not sure if one of the groups she belongs to was compromised and her email
account sold to spammers or not.  Seems like it.

My kids, both on different ISPs, had no increase in spam in the past
week.  I asked them last night, trying to figure out if this was a local
thing or more widespread.  Guess it was local, or their ISPs were
more on the ball.



On 3/8/23 5:59 PM, Bruce Labitt wrote:

I think that something has been going on for a bit now.

However, I did go through some ancient spam emails (don't ask me why
they were still around, I plumb forgot they were accumulating) and found
quite a few of them posing as family members and people I knew, but were
not legitimate.  Examining the headers showed they were trying to fool
me.  All of them wanted me to click on some link - hoping to do some
nefarious thing or another to me.  Many were from RU.

Oh, I have been using the filters!  I have filtered every domain ending
in .xyz, .store, and a few others.  It's not as easy to filter against
yourself...

Is it better to have these messages go to junk, or direct to trash?
Using Thunderbird if that matters.


On 3/8/23 5:22 PM, Ronald Smith wrote:

Hi all,

There is a coordinated attack happening right now on many forms of 
communication; email, social media, everything -- someone doesn't want people 
communicating right now. The increase in spam is just part of it.

Emails that I've sent to gmail have been bounced, maybe because gmail has 
tightened their filters, maybe it's a false flag. I'm not sure and I'm not 
going to waste my time tracking it down right now. If someone wants to reach me, 
they can just call me on the phone.

To the guy who said you should block all the IP's in the header -- that's 
ABSOLUTELY WRONG! Whoever has launched this attack wants folks to do that -- 
they want folks to block stuff to further limit communication. Don't do that!

You can only trust the top "Received" notice in your email header. SMTP servers are 
supposed to tack on their info to the top of the message and send it along to the next server, but 
spammers or provocateurs will often falsify the tracking info below the most recent 
"Received" line, so you should just ignore that.

Just put up with the spam for now; don't over-react. Your email providers will 
know how to handle this if they have enough experience. Use the filters in your 
client if you need to.

Have fun...

Ronald Smith
r...@mrt4.com
603-360-1000

- - - -

On Wed, 8 Mar 2023 13:31:56 -0500
Bruce Labitt  wrote:


Seems to be an uptick in spam received lately.  Doesn't seem that my ISP
is on top of it.  In the past 48 hours have received at least three
dozen spams from similar parties.  Many seem to be coming from *.store
domains.  I haven't knowingly ever visited one of these domains.

I don't think I want to run my own email server - mostly because 1) I
really don't know how to set one up, and 2) it sounds like a bit of work
to maintain.  Of course, I could be wrong, which is why I am asking.

I did a whois, and due to privacy cr*p, there's no longer a way to get
to the registrants.  I can see why this might be, but it does make it
harder to report people.  I did report a couple of domains as spammers
to godaddy, since I *think* they were the registrar.  This really
doesn't seem kosher to me, since godaddy gets revenue from the
spammers.  I also reported a domain or two to my ISP.  Things have
slightly slowed down, but I am not holding my breath.

In my wife's case, one or more of her acquaintances (with Windows
computers?) have had their accounts compromised or information stolen,
and she has been super-subscribed to what seems like dozens and dozens
of spamming lists.  Her spam folder on her phone receives many hundreds
of emails a day - it's really out of control.  How can we get out of
this mess?

Anyways, are there any practical ways to get a better h

Re: Email & Spam

2023-03-09 Thread Bruce Labitt
Crossing fingers, my spam storm has paused.  No spam since 3:27 EST 
yesterday.

Cleaned out tons of old spam off my phone, which was tedious.  Found 
some misclassified spam that were actually legitimate emails, like ones from 
attorneys and banks, that I never received.  Loads of stock tips, scams, 
assorted pharmaceuticals, and of course, invitations to honeypots of the 
female persuasion.  Some were quite amusing.

Need to get back to the email spam storm on my wife's account now.
Not sure if one of the groups she belongs to was compromised and her email 
account sold to spammers or not.  Seems like it.

My kids, both on different ISPs, had no increase in spam in the past 
week.  I asked them last night, trying to figure out if this was a local 
thing or more widespread.  Guess it was local, or their ISPs were 
more on the ball.



On 3/8/23 5:59 PM, Bruce Labitt wrote:
> I think that something has been going on for a bit now.
>
> However, I did go through some ancient spam emails (don't ask me why
> they were still around, I plumb forgot they were accumulating) and found
> quite a few of them posing as family members and people I knew, but were
> not legitimate.  Examining the headers showed they were trying to fool
> me.  All of them wanted me to click on some link - hoping to do some
> nefarious thing or another to me.  Many were from RU.
>
> Oh, I have been using the filters!  I have filtered every domain ending
> in xyz, .store and a few others.  It's not as easy to filter against
> yourself...
>
> Is it better to have these messages go to junk, or direct to trash?
> Using Thunderbird if that matters.
>
>
> On 3/8/23 5:22 PM, Ronald Smith wrote:
>> Hi all,
>>
>> There is a coordinated attack happening right now on many forms of 
>> communication; email, social media, everything -- someone doesn't want 
>> people communicating right now. The increase in spam is just part of it.
>>
>> Emails that I've sent to gmail have been bounced, maybe because gmail has 
>> tightened their filters, maybe it's a false flag. I'm not sure and I'm not 
>> going waste my time tracking it down right now. If someone wants to reach 
>> me, they can just call me on the phone.
>>
>> To the guy who said you should block all the IP's in the header -- that's 
>> ABSOLUTELY WRONG! Whoever has launched this attack wants folks to do that -- 
>> they want folks to block stuff to further limit communication. Don't do that!
>>
>> You can only trust the top "Received" notice in your email header. SMTP 
>> servers are supposed to tack on their info to the top of the message and 
>> send it along to the next server, but spammers or provocateurs will often 
>> falsify the tracking info below the most recent "Received" line, so you 
>> should just ignore that.
>>
>> Just put up with the spam for now; don't over-react. Your email providers 
>> will know how to handle this if they have enough experience. Use the filters 
>> in your client if you need to.
>>
>> Have fun...
>>
>> Ronald Smith
>> r...@mrt4.com
>> 603-360-1000
>>
>> - - - -
>>
>> On Wed, 8 Mar 2023 13:31:56 -0500
>> Bruce Labitt  wrote:
>>
>>> Seems to be an uptick in spam received lately.  Doesn't seem that my ISP
>>> is on top of it.  In the past 48 hours have received at least three
>>> dozen spams from similar parties.  Many seem to be coming from *.store
>>> domains.  I haven't knowingly ever visited one of these domains.
>>>
>>> I don't think I want to run my own email server - mostly because 1) I
>>> really don't know how to set one up, and 2) it sounds like a bit of work
>>> to maintain.  Of course, I could be wrong, which is why I am asking.
>>>
>>> I did a whois, and due to privacy cr*p, there's no longer a way to get
>>> to the registrants.  I can see why this might be, but it does make it
>>> harder to report people.  I did report a couple of domains as spammers
>>> to godaddy, since I *think* they were the registrar.  This really
>>> doesn't seem kosher to me, since godaddy gets revenue from the
>>> spammers.  I also reported a domain or two to my ISP.  Things have
>>> slightly slowed down, but I am not holding my breath.
>>>
>>> In my wife's case, one or more of her acquaintances (with Windows
>>> computers?) have had their accounts compromised or information stolen,
>>> and she has been super subscribed to what seems like dozens and dozens
>>> of spamming lists.  Her spam folder

Re: Email & Spam

2023-03-08 Thread Bruce Labitt
I think that something has been going on for a bit now.

However, I did go through some ancient spam emails (don't ask me why 
they were still around, I plumb forgot they were accumulating) and found 
quite a few of them posing as family members and people I knew, but were 
not legitimate.  Examining the headers showed they were trying to fool 
me.  All of them wanted me to click on some link - hoping to do some 
nefarious thing or another to me.  Many were from RU.

Oh, I have been using the filters!  I have filtered every domain ending 
in .xyz, .store, and a few others.  It's not as easy to filter against 
yourself...

Is it better to have these messages go to junk, or direct to trash?  
Using Thunderbird if that matters.


On 3/8/23 5:22 PM, Ronald Smith wrote:
> Hi all,
>
> There is a coordinated attack happening right now on many forms of 
> communication; email, social media, everything -- someone doesn't want people 
> communicating right now. The increase in spam is just part of it.
>
> Emails that I've sent to gmail have been bounced, maybe because gmail has 
> tightened their filters, maybe it's a false flag. I'm not sure and I'm not 
> going waste my time tracking it down right now. If someone wants to reach me, 
> they can just call me on the phone.
>
> To the guy who said you should block all the IP's in the header -- that's 
> ABSOLUTELY WRONG! Whoever has launched this attack wants folks to do that -- 
> they want folks to block stuff to further limit communication. Don't do that!
>
> You can only trust the top "Received" notice in your email header. SMTP 
> servers are supposed to tack on their info to the top of the message and send 
> it along to the next server, but spammers or provocateurs will often falsify 
> the tracking info below the most recent "Received" line, so you should just 
> ignore that.
>
> Just put up with the spam for now; don't over-react. Your email providers 
> will know how to handle this if they have enough experience. Use the filters 
> in your client if you need to.
>
> Have fun...
>
> Ronald Smith
> r...@mrt4.com
> 603-360-1000
>
> - - - -
>
> On Wed, 8 Mar 2023 13:31:56 -0500
> Bruce Labitt  wrote:
>
>> Seems to be an uptick in spam received lately.  Doesn't seem that my ISP
>> is on top of it.  In the past 48 hours have received at least three
>> dozen spams from similar parties.  Many seem to be coming from *.store
>> domains.  I haven't knowingly ever visited one of these domains.
>>
>> I don't think I want to run my own email server - mostly because 1) I
>> really don't know how to set one up, and 2) it sounds like a bit of work
>> to maintain.  Of course, I could be wrong, which is why I am asking.
>>
>> I did a whois, and due to privacy cr*p, there's no longer a way to get
>> to the registrants.  I can see why this might be, but it does make it
>> harder to report people.  I did report a couple of domains as spammers
>> to godaddy, since I *think* they were the registrar.  This really
>> doesn't seem kosher to me, since godaddy gets revenue from the
>> spammers.  I also reported a domain or two to my ISP.  Things have
>> slightly slowed down, but I am not holding my breath.
>>
>> In my wife's case, one or more of her acquaintances (with Windows
>> computers?) have had their accounts compromised or information stolen,
>> and she has been super subscribed to what seems like dozens and dozens
>> of spamming lists.  Her spam folder on her phone receives may hundreds
>> of emails a day - it's really out of control.  How can we get out of
>> this mess?
>>
>> Anyways, are there any practical ways to get a better handle on this?
>> Looking for some ideas.  Thanks for any and all suggestions.  I hope
>> this would be a topic of interest to others on this list.  If for no
>> other reason to share what worked and what didn't.
>>
>> ___
>> gnhlug-discuss mailing list
>> gnhlug-discuss@mail.gnhlug.org
>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Email & Spam

2023-03-08 Thread Bruce Labitt

Radix?  That name did not show in my $ whois output.

$ whois --version
Version 5.5.13.

Well, I did send all the message source info to GoDaddy when I filed the 
abuse complaint.  I also sent the same info to my ISP.  There's been 
practically no change so far; a new "sender" has just arrived to 
take the previous one's place.


Apparently this is an ongoing battle.  Are there any better ISPs that 
take this spam issue more seriously?  Are there any anti-spam laws in 
NH or the US?  There is the CAN-SPAM Act, according to the FTC, but 
these spammers put in enough info to technically be close to compliance.


The fact that many of these spams are

X-Origin-Country: RU

Gives me pause.


On 3/8/23 4:24 PM, Bryan Borsa wrote:

The registry is Radix
The registrar is GoDaddy

My command-line whois outputs more info than what is below (the 
registry info, for example), but the Registrar info is the same.


Domains By Proxy is also GoDaddy (well, owned by the same guy who 
founded it anyway), so they're connected.  It is almost certain that this 
domain name was purchased from them.


To know where a spam email originated from, though, you would have to 
parse the email headers, which list the IP address of every mail 
server it went through.  Reporting those IPs is generally more 
effective at stopping spam than reporting domain names.


There are likely automated ways of doing that, but I am not familiar 
with them.  I do know that mail server reputation is something that 
mail providers / businesses care about (to some extent anyway, and 
some more than others), because they get shut off if it gets too low 
(other people won't take their mail).




 - Bryan








On Mar 8, 2023, at 2:06 PM, Bruce Labitt 
 wrote:


Perhaps I am misunderstanding how to interpret the output.  This is 
one of the outputs of whois


$ whois aagyemang.store
Domain Name: AAGYEMANG.STORE
Registry Domain ID: D345146502-CNIC
Registrar WHOIS Server: whois.godaddy.com
Registrar URL: https://www.godaddy.com/
Updated Date: 2023-02-23T09:25:07.0Z
Creation Date: 2023-01-23T21:28:02.0Z
Registry Expiry Date: 2024-01-23T23:59:59.0Z
Registrar: Go Daddy, LLC
Registrar IANA ID: 146
Domain Status: serverTransferProhibited 
https://icann.org/epp#serverTransferProhibited
Domain Status: clientRenewProhibited 
https://icann.org/epp#clientRenewProhibited
Domain Status: clientTransferProhibited 
https://icann.org/epp#clientTransferProhibited
Domain Status: clientUpdateProhibited 
https://icann.org/epp#clientUpdateProhibited
Domain Status: clientDeleteProhibited 
https://icann.org/epp#clientDeleteProhibited

Registrant Organization: Domains By Proxy, LLC
Registrant State/Province: Arizona
Registrant Country: US
Registrant Email: Please query the RDDS service of the Registrar of 
Record identified in this output for information on how to contact 
the Registrant, Admin, or Tech contact of the queried domain name.
Admin Email: Please query the RDDS service of the Registrar of Record 
identified in this output for information on how to contact the 
Registrant, Admin, or Tech contact of the queried domain name.
Tech Email: Please query the RDDS service of the Registrar of Record 
identified in this output for information on how to contact the 
Registrant, Admin, or Tech contact of the queried domain name.

Name Server: NS37.DOMAINCONTROL.COM
Name Server: NS38.DOMAINCONTROL.COM
DNSSEC: unsigned
Billing Email: Please query the RDDS service of the Registrar of 
Record identified in this output for information on how to contact 
the Registrant, Admin, or Tech contact of the queried domain name.

Registrar Abuse Contact Email: ab...@godaddy.com
Registrar Abuse Contact Phone: +1.4805058800
URL of the ICANN Whois Inaccuracy Complaint Form: 
https://www.icann.org/wicf/

>>> Last update of WHOIS database: 2023-03-08T18:40:36.0Z <<<

For more information on Whois status codes, please visit 
https://icann.org/epp


>>> IMPORTANT INFORMATION ABOUT THE DEPLOYMENT OF RDAP: please visit
https://www.centralnic.com/support/rdap <<<

The Whois and RDAP services are provided by CentralNic, and contain
information pertaining to Internet domain names registered by our
our customers. By using this service you are agreeing (1) not to use any
information presented here for any purpose other than determining
ownership of domain names, (2) not to store or reproduce this data in
any way, (3) not to use any high-volume, automated, electronic processes
to obtain data from this service. Abuse of this service is monitored and
actions in contravention of these terms will result in being permanently
blacklisted. All data is (c) CentralNic Ltd (https://www.centralnic.com)

Access to the Whois and RDAP services is rate limited. For more
information, visit 
https://registrar-console.centralnic.com/pub/whois_guidance.



Registrar is godaddy.  I did contact ab...@godaddy.com.  Is there a

Re: Email & Spam

2023-03-08 Thread Bruce Labitt
Perhaps I am misunderstanding how to interpret the output.  This is one 
of the outputs of whois


$ whois aagyemang.store
Domain Name: AAGYEMANG.STORE
Registry Domain ID: D345146502-CNIC
Registrar WHOIS Server: whois.godaddy.com
Registrar URL: https://www.godaddy.com/
Updated Date: 2023-02-23T09:25:07.0Z
Creation Date: 2023-01-23T21:28:02.0Z
Registry Expiry Date: 2024-01-23T23:59:59.0Z
Registrar: Go Daddy, LLC
Registrar IANA ID: 146
Domain Status: serverTransferProhibited 
https://icann.org/epp#serverTransferProhibited
Domain Status: clientRenewProhibited 
https://icann.org/epp#clientRenewProhibited
Domain Status: clientTransferProhibited 
https://icann.org/epp#clientTransferProhibited
Domain Status: clientUpdateProhibited 
https://icann.org/epp#clientUpdateProhibited
Domain Status: clientDeleteProhibited 
https://icann.org/epp#clientDeleteProhibited

Registrant Organization: Domains By Proxy, LLC
Registrant State/Province: Arizona
Registrant Country: US
Registrant Email: Please query the RDDS service of the Registrar of 
Record identified in this output for information on how to contact the 
Registrant, Admin, or Tech contact of the queried domain name.
Admin Email: Please query the RDDS service of the Registrar of Record 
identified in this output for information on how to contact the 
Registrant, Admin, or Tech contact of the queried domain name.
Tech Email: Please query the RDDS service of the Registrar of Record 
identified in this output for information on how to contact the 
Registrant, Admin, or Tech contact of the queried domain name.

Name Server: NS37.DOMAINCONTROL.COM
Name Server: NS38.DOMAINCONTROL.COM
DNSSEC: unsigned
Billing Email: Please query the RDDS service of the Registrar of Record 
identified in this output for information on how to contact the 
Registrant, Admin, or Tech contact of the queried domain name.

Registrar Abuse Contact Email: ab...@godaddy.com
Registrar Abuse Contact Phone: +1.4805058800
URL of the ICANN Whois Inaccuracy Complaint Form: 
https://www.icann.org/wicf/

>>> Last update of WHOIS database: 2023-03-08T18:40:36.0Z <<<

For more information on Whois status codes, please visit 
https://icann.org/epp


>>> IMPORTANT INFORMATION ABOUT THE DEPLOYMENT OF RDAP: please visit
https://www.centralnic.com/support/rdap <<<

The Whois and RDAP services are provided by CentralNic, and contain
information pertaining to Internet domain names registered by our
our customers. By using this service you are agreeing (1) not to use any
information presented here for any purpose other than determining
ownership of domain names, (2) not to store or reproduce this data in
any way, (3) not to use any high-volume, automated, electronic processes
to obtain data from this service. Abuse of this service is monitored and
actions in contravention of these terms will result in being permanently
blacklisted. All data is (c) CentralNic Ltd (https://www.centralnic.com)

Access to the Whois and RDAP services is rate limited. For more
information, visit 
https://registrar-console.centralnic.com/pub/whois_guidance.



Registrar is GoDaddy.  I did contact ab...@godaddy.com.  Is there a more 
automated (scripted?) way of getting this done, so it doesn't take so 
much of my time?  It feels like tilting at windmills, but it would be 
good to fight back a little.  Domains By Proxy is the intermediary - a 
corporation set up to "manage unsolicited contacts from third parties 
and keeping the domains owners' personal information secret". 
https://en.wikipedia.org/wiki/Domains_by_Proxy


Is ab...@godaddy.com the only (legitimate) mechanism available to me?

What does the domain status above mean?  That the status is unavailable 
to me?  Or something else?
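
A very small sketch of scripting it, assuming the offending domains are 
collected one per line in a file (spam-domains.txt is a made-up name); it 
just pulls the registrar and abuse address out of each whois record:

$ while read -r d; do
>   echo "== $d"
>   whois "$d" | grep -iE 'Registrar:|Registrar Abuse Contact Email'
> done < spam-domains.txt

That at least gets the lookup part down to seconds; the actual abuse report 
still has to be written and sent by hand (or via your provider's reporting 
address).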





On 3/8/23 1:36 PM, Bryan Borsa wrote:
The Registry and Registrar should still be visible regardless of 
domain registrant privacy settings.




On Mar 8, 2023, at 1:31 PM, Bruce Labitt 
 wrote:


I did a whois, and due to privacy cr*p, there's no longer a way to get
to the registrants.  I can see why this might be, but it does make it
harder to report people


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Email & Spam

2023-03-08 Thread Bruce Labitt
Seems to be an uptick in spam received lately.  Doesn't seem that my ISP 
is on top of it.  In the past 48 hours I have received at least three 
dozen spams from similar parties.  Many seem to be coming from *.store 
domains.  I haven't knowingly ever visited one of these domains.

I don't think I want to run my own email server - mostly because 1) I 
really don't know how to set one up, and 2) it sounds like a bit of work 
to maintain.  Of course, I could be wrong, which is why I am asking.

I did a whois, and due to privacy cr*p, there's no longer a way to get 
to the registrants.  I can see why this might be, but it does make it 
harder to report people.  I did report a couple of domains as spammers 
to godaddy, since I *think* they were the registrar.  This really 
doesn't seem kosher to me, since godaddy gets revenue from the 
spammers.  I also reported a domain or two to my ISP.  Things have 
slightly slowed down, but I am not holding my breath.

In my wife's case, one or more of her acquaintances (with Windows 
computers?) have had their accounts compromised or information stolen, 
and she has been super-subscribed to what seems like dozens and dozens 
of spamming lists.  Her spam folder on her phone receives many hundreds 
of emails a day - it's really out of control.  How can we get out of 
this mess?

Anyways, are there any practical ways to get a better handle on this? 
Looking for some ideas.  Thanks for any and all suggestions.  I hope 
this would be a topic of interest to others on this list.  If for no 
other reason than to share what worked and what didn't.

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Book or online source on modern Linux system files and organization

2022-12-21 Thread Bruce Labitt
Feeling like a bit of a fossil, not knowing what files do what or 
where things are located.  I need to fix an obnoxious problem with a 
keyboard, and I realize I just don't even know how to investigate this 
anymore.  What are the recommended sources for a modern overview of system 
files, their purposes, and organization?

Think my laptop now believes my Logitech keyboard is the default 
keyboard.  This is bad, because it has a different number of keys and 
the mapping is different.  This is a royal PIA.  I am typing with a mouse.  
Even the space bar doesn't work.  Practically, it turns the laptop into a 
desktop system.  I don't even know where to start, since a lot of the 
Linux cheese has moved in the past 10 years.

System76 Oryx 6 Pro laptop.  Pop!_OS 22.04.
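
A few starting points for the investigation, from a terminal (standard 
tools on an Ubuntu-based system; xinput only applies if the session is X11 
rather than Wayland):

$ localectl status              # the keymap/layout systemd thinks is active
$ sudo libinput list-devices    # every input device the system sees
$ xinput list                   # X11's view of attached keyboards/pointers

Comparing the device list with and without the Logitech keyboard plugged in 
usually shows whether the laptop's internal keyboard is still being detected 
at all.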

Any tips or pointers to well written overviews on the modern 
organization would be appreciated.  Perhaps I could learn enough to at 
least know the correct search terms.

TIA, Bruce

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: setting up my own git server

2022-10-09 Thread Bruce Labitt
On 10/9/22 3:00 PM, Bruce Labitt wrote:
> On 10/9/22 2:41 PM, Šarūnas wrote:
>> On 09/10/2022 14.32, Bruce Labitt wrote:
>>> I am trying to accomplish this and am running into a couple of
>>> problems.  Following the instructions at:
>>> https://www.linuxfoundation.org/blog/blog/classic-sysadmin-how-to-run-your-own-git-server
>>>
>>>   Specifically, the command $
>>>
>>> cat ~/.ssh/id_rsa.pub | ssh git@remote-server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
>>> fails  with the message bash: /home/git/.ssh/authorized_keys:
>>> Permission denied
>> I don't know about git server (probably just a host with SSH access?),
>> but copying your public key to remote host can be done with:
>>
>> ssh-copy-id -i ~/.ssh/id_rsa.pub git@remote-server
>>
>> It always worked.
>>
>> Good luck,
>>
>> ___
>> gnhlug-discuss mailing list
>> gnhlug-discuss@mail.gnhlug.org
>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> I am getting permission denied.  Here's the question.  I am logged in as
> bruce on the remote, and pushing my id_rsa.pub to user git on the
> remote-server.  Do I need to instead be user git on the remote, create
> ssh keys, and then push them to remote-server?  When I created the git
> user on both computers, they have no sudoer privileges.  When I did
> adduser, I forgot to use the -m flag, so there's no home/git on the
> remote.  But there is a /home/git on the server.
>
> $ ssh-copy-id -i ~/.ssh/id_rsa.pub git@rpi4.local
> /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed:
> "/home/bruce/.ssh/id_rsa.pub"
> /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to
> filter out any that are already installed
> /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you
> are prompted now it is to install the new keys
> git@rpi4.local's password:
> sh: 1: cannot create .ssh/authorized_keys: Permission denied
>
> This stuff always puzzles me.  There's an obvious explanation for the
> failure, but beats me what it is.
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
Strangely, it worked after a second time - after I deleted the empty 
authorized_keys.  Was able to ssh in to the git@remote-server using the 
passphrase, and checked there was an entry in the authorized keys 
corresponding to the remote computer.

Back to the git issue.  At least there's some progress :)

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: setting up my own git server

2022-10-09 Thread Bruce Labitt
On 10/9/22 2:41 PM, Šarūnas wrote:
> On 09/10/2022 14.32, Bruce Labitt wrote:
>> I am trying to accomplish this and am running into a couple of 
>> problems.  Following the instructions at: 
>> https://www.linuxfoundation.org/blog/blog/classic-sysadmin-how-to-run-your-own-git-server
>>
>>  Specifically, the command $
>>
>> cat ~/.ssh/id_rsa.pub | ssh git@remote-server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
>>
>> fails  with the message bash: /home/git/.ssh/authorized_keys:
>> Permission denied
>
> I don't know about git server (probably just a host with SSH access?), 
> but copying your public key to remote host can be done with:
>
> ssh-copy-id -i ~/.ssh/id_rsa.pub git@remote-server
>
> It always worked.
>
> Good luck,
>
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/

I am getting permission denied.  Here's the question.  I am logged in as 
bruce on the remote, and pushing my id_rsa.pub to user git on the 
remote-server.  Do I need to instead be user git on the remote, create 
ssh keys, and then push them to remote-server?  When I created the git 
user on both computers, they have no sudoer privileges.  When I did 
adduser, I forgot to use the -m flag, so there's no home/git on the 
remote.  But there is a /home/git on the server.

$ ssh-copy-id -i ~/.ssh/id_rsa.pub git@rpi4.local
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: 
"/home/bruce/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to 
filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you 
are prompted now it is to install the new keys
git@rpi4.local's password:
sh: 1: cannot create .ssh/authorized_keys: Permission denied

This stuff always puzzles me.  There's an obvious explanation for the 
failure, but beats me what it is.
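
The usual cause of that second error is that .ssh or authorized_keys on the 
server ended up owned by root (e.g. from an earlier sudo touch), so the git 
user can't write to it.  A sketch of the fix, run on the server, assuming 
the paths above:

$ sudo ls -la /home/git/.ssh                  # check the owner of .ssh and authorized_keys
$ sudo chown -R git:git /home/git/.ssh
$ sudo chmod 700 /home/git/.ssh
$ sudo chmod 600 /home/git/.ssh/authorized_keys
$ ssh-copy-id -i ~/.ssh/id_rsa.pub git@rpi4.local   # then retry from the client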
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


setting up my own git server

2022-10-09 Thread Bruce Labitt
I am trying to accomplish this and am running into a couple of 
problems.  Following the instructions at: 
https://www.linuxfoundation.org/blog/blog/classic-sysadmin-how-to-run-your-own-git-server


Specifically, the command

$ cat ~/.ssh/id_rsa.pub | ssh git@remote-server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

fails with the message: bash: /home/git/.ssh/authorized_keys: Permission denied

Is this because the user git is not a sudoer? Strangely, the directory 
.ssh was created, but nothing else. I tried this twice. The first 
failure message was


bash: /home/git/.ssh/authorized_keys: No such file or directory

Then I sudo touched the file into existence.  The second error was

bash: /home/git/.ssh/authorized_keys: Permission denied

The article leaves out some details, assuming one is a master of the 
art.  It does not tell you what user you are when you are ssh'd in.  Are 
you logged in as user git in both cases?  If so, why is he changing to 
/home/swapnil to create his directory?  Am I to create 
/home/bruce/project-1.git?


Is there a better article on setting up passwordless SSH login?  And maybe 
on setting up a master git repo?  I have a local git repo, but really want to 
have a dedicated server just host the repo.  Any hints or guidance would 
be helpful.  TIA.  I have multiple satellite computers that need common 
git-managed code.
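
For the repository side, a minimal sketch of the rest of the setup (paths 
and the branch name are assumptions; project-1.git follows the article's 
example):

# on the server, as the git user:
$ sudo -u git git init --bare /home/git/project-1.git

# on each client machine, inside an existing working copy:
$ git remote add origin git@remote-server:/home/git/project-1.git
$ git push -u origin master

After that, the satellite machines can clone with 
git clone git@remote-server:/home/git/project-1.git.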
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Is there a decent file attribute (date) conserving way to download your photos from Google?

2022-06-22 Thread Bruce Labitt

Thank you.  The links are very helpful.  I will check them all out.
Think I have enough info to make a valiant attempt at sorting this all out.

Bruce

On 6/22/22 12:26 PM, Dan Jenkins wrote:
Summary: The JSON files contain ALL the metadata from EXIF info for 
each photo. You need to merge the JSON info back into the JPG files. 
There is a (purportedly) very good tool for doing that. I have not 
used the tool myself.


Hope this helps.

Here are supporting links:

  * Article on /How to Export Your Images From Google Photos Using
Takeout/
https://metadatafixer.com/learn/how-to-export-images-google-photos-takeout
  * The tool (EXIFTool) itself: https://exiftool.org/
  * Apple forum on the topic, with instructions:
https://discussions.apple.com/thread/253234040
  * EXIFTool forum with instructions for Google Takeout json files:
https://exiftool.org/forum/index.php?topic=11064.0
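
If the goal is just to get sensible file dates back, exiftool can also do it 
in one pass from the EXIF that is already inside each photo, without touching 
the JSON at all (a sketch; the Takeout path is an assumption, and exiftool 
must be installed, e.g. the libimage-exiftool-perl package):

$ exiftool -r '-FileModifyDate<DateTimeOriginal' ~/Takeout/Google\ Photos

The JSON sidecars matter mainly for photos whose EXIF was stripped, or for 
extras like Google's own timestamps; the exiftool forum thread linked above 
walks through that mapping.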


On 2022-06-21 13:41, Mark Komarinski wrote:

There should be EXIF metadata in each photo which should include the date taken.

Should.

-Mark

On Jun 21, 2022 1:27 PM, Bruce Labitt  wrote:

 Recently got a message (well really quite a few) warning me that my
 "free storage" on google is running out.  This, of course, is yet a new
 way for Google to monetize all the free stuff that they had been
 providing for a while.  I do have strong opinions on re-negging on
 promises, but lets not go there.

 Google apparently provides a way to extract your data, more or less.
 You can export your data using "Google Takeout".  So I wanted to takeout
 my photos, since it seemed they were the dominant storage hog.  I
 exported my photos, and got 8 2GB zip files.  Google touched the files
 and they all have today's date. This stinks because I usually sort on
 date.  For some of the photos, the date is embedded in the file name.
 For the earlier ones, the camera manufacturer didn't do that.  (Takeout
 only exports the data, it does not delete it.)  In the export, it seems
 there are json files for every jpg downloaded.  Seems like a lot of
 clutter, what use are these json files?  Apparently they had some value
 to Google, because they made them.

 Is there some way to extract the photos from google with the dates intact?

 If not, can the files be parsed for their date taken and the attribute
 date reset to the taken date?  Say one were to do this in python, it
 seems one could do this with PIL, and os.walk through the directories.
 Not quite as sure about resetting the date attribute, but pretty sure it
 can be done.  Seems like it could be an interesting exercise.  (Suppose
 one could also extract the GPS info if available and further categorize
 the photos.)

 Are there any pitfalls to the paragraph above?  Can any of you
 suggest a better way to do this?

 ___
 gnhlug-discuss mailing list
 gnhlug-discuss@mail.gnhlug.org
 http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Is there a decent file attribute (date) conserving way to download your photos from Google?

2022-06-21 Thread Bruce Labitt
Recently got a message (well, really quite a few) warning me that my 
"free storage" on Google is running out.  This, of course, is yet another 
way for Google to monetize all the free stuff they had been 
providing for a while.  I do have strong opinions on reneging on 
promises, but let's not go there.

Google apparently provides a way to extract your data, more or less.  
You can export your data using "Google Takeout".  So I wanted to take out 
my photos, since it seemed they were the dominant storage hog.  I 
exported my photos and got eight 2 GB zip files.  Google touched the files, 
and they all have today's date.  This stinks, because I usually sort on 
date.  For some of the photos, the date is embedded in the file name.  
For the earlier ones, the camera manufacturer didn't do that.  (Takeout 
only exports the data; it does not delete it.)  In the export, it seems 
there are JSON files for every JPG downloaded.  Seems like a lot of 
clutter; what use are these JSON files?  Apparently they had some value 
to Google, because they made them.

Is there some way to extract the photos from google with the dates intact?

If not, can the files be parsed for their date taken, and the attribute 
date reset to the taken date?  Say one were to do this in Python; it 
seems one could do it with PIL and an os.walk through the directories.  
Not quite as sure about resetting the date attribute, but pretty sure it 
can be done.  Seems like it could be an interesting exercise.  (Suppose 
one could also extract the GPS info, if available, and further categorize 
the photos.)
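
As a sanity check on that plan, the embedded date can be read back and the 
file date reset for a single photo from the shell first (the file name and 
timestamp here are made up):

$ exiftool -DateTimeOriginal IMG_1234.JPG        # print the date the photo was taken
$ touch -d '2015-07-04 14:22:31' IMG_1234.JPG    # reset the filesystem date to match

exiftool can also do the whole tree in one pass (see the note in the reply 
with the exiftool links), so the hand-rolled Python/PIL walk may not even be 
needed.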

Are there any pitfalls to the paragraph above?  Can any of you 
suggest a better way to do this?


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Debugging linux crashes

2022-06-14 Thread Bruce Labitt
The USB mode changes during the flashing; I wonder whether that is what is 
confusing the kernel.  During the "wake up the Teensy bootloader" step, 
I think the USB is talking to /dev/hidraw4; once the device is 
programmed, the Teensy (M7) appears as /dev/ttyACM0.  Maybe the kernel 
can't handle a lot of these transitions and falsely thinks there's an 
error, or there's a goofy hard limit programmed into the kernel...


Can anyone think of why USB transactions or USB mode switches might trip 
the kernel into la-la land?
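
One way to narrow that down is to watch the kernel log live while TyCommander 
does a flash, and snapshot the USB device list before and after (standard 
tools; nothing Teensy-specific):

$ journalctl -k -f        # follow kernel messages during the flash
$ lsusb                   # run before and after to see the hidraw <-> ttyACM switch

Whatever the kernel prints in the seconds before the hang or crash is also 
exactly what a kernel or Pop!_OS bug report would want.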



On 6/13/22 10:20 PM, Bruce Labitt wrote:
FWIW, this total system crash has been isolated to the kernel.  Kernel 
5.17.5-76051705 crashes, which was pushed out to apt on, guess what, 
May 26, 2022, the date my computer went to hades.


Kernel 5.15.0-37-generic does not crash.  Kernel 5.18.2 also crashes, 
only if I use TyCommander, but not necessarily during a USB flashing.  
I had 5.18.2 crash while I was using Firefox, but while Arduino IDE 
and TyCommander were active.


Now running on 5.15 and things are stable.

I know nothing about kernels and stuff like this.  Been forced into 
it.  An average Joe like me shouldn't have to deal with this kind of 
thing.


TyCommander works fine on an RPI4 running Raspberry Pi OS 64 bit, 
which as I understand it is a Debian derivative.  The kernel is 
5.15.30, according to Wikipedia.  Not looking forward to an update of 
that kernel.


I have no idea how to make a minimal dying example for any developers...



On 6/6/22 4:05 PM, Bruce Labitt wrote:

Followup with SW related items.

$ cat /etc/os-release
NAME="Pop!_OS"
VERSION="22.04 LTS"
ID=pop
ID_LIKE="ubuntu debian"
PRETTY_NAME="Pop!_OS 22.04 LTS"
VERSION_ID="22.04"
HOME_URL="https://pop.system76.com"
SUPPORT_URL="https://support.system76.com"
BUG_REPORT_URL="https://github.com/pop-os/pop/issues"
PRIVACY_POLICY_URL="https://system76.com/privacy"
VERSION_CODENAME=jammy
UBUNTU_CODENAME=jammy
LOGO=distributor-logo-pop-os

$ uname -a
Linux pop-os 5.17.5-76051705-generic 
#202204271406~1653440576~22.04~6277a18 SMP PREEMPT Wed May 25 01 
x86_64 x86_64 x86_64 GNU/Linux


Tytools from https://github.com/Koromix/tytools
Teensyduino from: https://www.pjrc.com/teensy/td_download.html
Arduino download from: https://www.arduino.cc/en/software V1.8.19.
Data on Teensy 4.1 microcontroller (Arm M7, NXP IMXRT1060) 
https://www.pjrc.com/store/teensy41.html
IMXRT1060 Processor Reference Manual 
https://www.pjrc.com/teensy/IMXRT1060RM_rev3.pdf


Me, I am writing code to make an electronic lead screw for my lathe. 
Motor control works with NEMA-24 stepper motor and rotary encoder.  
Working on the UI on a touch panel tft display. Or, I was, until my 
laptop crashed...



On 6/6/22 3:47 PM, Bruce Labitt wrote:


Will try my best.  It's tough to keep your cool when your life, i.e. 
your own computer, is crapping out.  Much easier when it is someone 
else's.  Pity the machine is not up at the moment.  Been busy 
transferring my life to an RPI4, which hasn't been as easy as it 
seems like it should be.  Writing this on my RPI4-8GB with RaspiOS 64-bit.


Laptop in question, with the problem: System76 Oryx6. 32GB RAM, 1TB 
SSD Samsung 970 EVO Plus


HW Details:

=

Intel-10875H CPU, Intel HM470 chipset, MX25L12872F flash chip 
running System76 Open Firmware BIOS,
ITE IT5570E running System76 EC <https://github.com/system76/ec>, 
NVIDIA GeForce RTX 2060, 15.6" 1920x1080@144Hz LCD, LCD panel: Panda 
LM156LF1F (or equivalent)
External video outputs: 1x HDMI, 1x Mini DisplayPort 1.4, 1x 
DisplayPort over USB-C

Memory: Up to 64GB (2x32GB) dual-channel DDR4 SO-DIMMs @ 3200 MHz -- 32 GB

Networking: Gigabit Ethernet, M.2 PCIe/CNVi WiFi/Bluetooth, Intel Wi-Fi 
6 AX200/AX201

Power: 180W (19.5V, 9.23A) DC-in port, Barrel size: 5.5mm (outer), 
2.5mm (inner), Included AC adapter: Chicony A17-180P4A, AC power cord 
type: IEC C5, 73Wh 3-cell battery

Sound: Internal speakers & microphone, Combined headphone & microphone 
3.5mm jack, Combined microphone & S/PDIF (optical) 3.5mm jack, HDMI, 
Mini DisplayPort, USB-C DisplayPort audio

Storage: 1x M.2 (PCIe NVMe or SATA) - NVMe 1 TB installed, 1x M.2 
(PCIe NVMe only) - empty, MicroSD card reader

USB: 3x USB 3.2 Gen 1 Type-A, 1x USB Type-C with Thunderbolt 3

Dimensions: 15": 35.75cm x 23.8cm x 1.98cm, 1.99kg

=== End HW details 
==


Pop-OS-64 bit.  22.04.  Fresh install over existing Ubuntu 20.04 LTS.

I need to reboot the computer to get the kernel stuff.  Will 
followup with uname -a.


Problem occurs when using USB to program Teensy 4.1 
microcontroller.  Active programs at time of crash = Arduino IDE V 
1.8.19

Re: Debugging linux crashes

2022-06-13 Thread Bruce Labitt
FWIW, this total system crash has been isolated to the kernel.  Kernel 
5.17.5-76051705 crashes.  Which was pushed out to apt on, guess what, 
May 26, 2022, the date my computer went to hades.


Kernel 5.15.0-37-generic does not crash.  Kernel 5.18.2 also crashes, but 
only if I use TyCommander, and not necessarily during a USB flash.  I 
had 5.18.2 crash while I was using Firefox, though Arduino IDE and 
TyCommander were active at the time.


Now running on 5.15 and things are stable.

I know nothing about kernels and stuff like this.  Been forced into it.  
An average Joe like me shouldn't have to deal with this kind of thing.


TyCommander works fine on an RPI4 running Raspberry Pi OS 64 bit, which 
as I understand it is a Debian derivative.  The kernel is 5.15.30, 
according to Wikipedia.  Not looking forward to an update of that kernel.


I have no idea how to make a minimal dying example for any developers...



On 6/6/22 4:05 PM, Bruce Labitt wrote:

Followup with SW related items.

$ cat /etc/os-release
NAME="Pop!_OS"
VERSION="22.04 LTS"
ID=pop
ID_LIKE="ubuntu debian"
PRETTY_NAME="Pop!_OS 22.04 LTS"
VERSION_ID="22.04"
HOME_URL="https://pop.system76.com";
SUPPORT_URL="https://support.system76.com";
BUG_REPORT_URL="https://github.com/pop-os/pop/issues";
PRIVACY_POLICY_URL="https://system76.com/privacy";
VERSION_CODENAME=jammy
UBUNTU_CODENAME=jammy
LOGO=distributor-logo-pop-os

$ uname -a
Linux pop-os 5.17.5-76051705-generic 
#202204271406~1653440576~22.04~6277a18 SMP PREEMPT Wed May 25 01 
x86_64 x86_64 x86_64 GNU/Linux


Tytools from https://github.com/Koromix/tytools
Teensyduino from: https://www.pjrc.com/teensy/td_download.html
Arduino download from: https://www.arduino.cc/en/software V1.8.19.
Data on Teensy 4.1 microcontroller (Arm M7, NXP IMXRT1060) 
https://www.pjrc.com/store/teensy41.html
IMXRT1060 Processor Reference Manual 
https://www.pjrc.com/teensy/IMXRT1060RM_rev3.pdf


Me, I am writing code to make an electronic lead screw for my lathe. 
Motor control works with NEMA-24 stepper motor and rotary encoder.  
Working on the UI on a touch panel tft display.  Or, I was, until my 
laptop crashed...



On 6/6/22 3:47 PM, Bruce Labitt wrote:


Will try my best.  It's tough to keep your cool when your life, i.e. 
your own computer, is crapping out.  Much easier when it is someone 
else's.  Pity the machine is not up at the moment.  Been busy 
transferring my life to an RPI4, which hasn't been as easy as it 
seems like it should.  Writing this on my RPI4-8GB with RaspiOS-64bit.


Laptop in question, with the problem: System76 Oryx6. 32GB RAM, 1TB 
SSD Samsung 970 EVO Plus


HW Details:

=

Intel-10875H CPU, Intel HM470 chipset, MX25L12872F flash chip running 
System76 Open Firmware BIOS,
ITE IT5570E running System76 EC <https://github.com/system76/ec>, 
NVIDIA GeForce RTX 2060, 15.6" 1920x1080@144Hz LCD, LCD panel: Panda 
LM156LF1F (or equivalent)
External video outputs: 1x HDMI, 1x Mini DisplayPort 1.4, 1x 
DisplayPort over USB-C

Memory: Up to 64GB (2x32GB) dual-channel DDR4 SO-DIMMs @ 3200 MHz -- 32 GB

Networking: Gigabit Ethernet, M.2 PCIe/CNVi WiFi/Bluetooth, Intel Wi-Fi 
6 AX200/AX201


Power: 180W (19.5V, 9.23A) DC-in port, Barrel size: 5.5mm (outer), 
2.5mm (inner), Included AC adapter: Chicony A17-180P4A, AC power cord 
type: IEC C5, 73Wh 3-cell battery


Sound: Internal speakers & microphone, Combined headphone & microphone 
3.5mm jack, Combined microphone & S/PDIF (optical) 3.5mm jack, HDMI, 
Mini DisplayPort, USB-C DisplayPort audio


Storage: 1x M.2 (PCIe NVMe or SATA) - NVME 1 TB installed, 1x M.2 
(PCIe NVMe only) - empty, MicroSD card reader


USB: 3x USB 3.2 Gen 1 Type-A, 1x USB Type-C with Thunderbolt 3

Dimensions: 15": 35.75cm x 23.8cm x 1.98cm, 1.99kg

=== End HW details 
==


Pop-OS-64 bit.  22.04.  Fresh install over existing Ubuntu 20.04 LTS.

I need to reboot the computer to get the kernel stuff.  Will followup 
with uname -a.


Problem occurs when using USB to program Teensy 4.1 microcontroller.  
Active programs at time of crash = Arduino IDE V 1.8.19, Teensyduino 
1.56 (required to allow Arduino to recognize and program Teensy 
microcontrollers), and Tytools, 0.9.7, which is a tool to program and 
manage Teensy processors.  Prior to 26 May 2022, this all worked 
flawlessly.


And, the above SW does work flawlessly on the RPI4B, running 
RaspberryPiOS-64bit, but not on my laptop.  On my laptop I get system 
crashes.


Only clues I have found are in syslog, and dmesg, but they only show 
some normal USB transactions, then the computer powering up again.


Thanks Ben, for at least answering (humoring?) me.  Been an awful 
week wit

Re: Debugging linux crashes

2022-06-06 Thread Bruce Labitt

Followup with SW related items.

$ cat /etc/os-release
NAME="Pop!_OS"
VERSION="22.04 LTS"
ID=pop
ID_LIKE="ubuntu debian"
PRETTY_NAME="Pop!_OS 22.04 LTS"
VERSION_ID="22.04"
HOME_URL="https://pop.system76.com";
SUPPORT_URL="https://support.system76.com";
BUG_REPORT_URL="https://github.com/pop-os/pop/issues";
PRIVACY_POLICY_URL="https://system76.com/privacy";
VERSION_CODENAME=jammy
UBUNTU_CODENAME=jammy
LOGO=distributor-logo-pop-os

$ uname -a
Linux pop-os 5.17.5-76051705-generic 
#202204271406~1653440576~22.04~6277a18 SMP PREEMPT Wed May 25 01 x86_64 
x86_64 x86_64 GNU/Linux


Tytools from https://github.com/Koromix/tytools
Teensyduino from: https://www.pjrc.com/teensy/td_download.html
Arduino download from: https://www.arduino.cc/en/software  V1.8.19.
Data on Teensy 4.1 microcontroller (Arm M7, NXP IMXRT1060) 
https://www.pjrc.com/store/teensy41.html
IMXRT1060 Processor Reference Manual 
https://www.pjrc.com/teensy/IMXRT1060RM_rev3.pdf


Me, I am writing code to make an electronic lead screw for my lathe.  
Motor control works with NEMA-24 stepper motor and rotary encoder. 
Working on the UI on a touch panel tft display.  Or, I was, until my 
laptop crashed...



On 6/6/22 3:47 PM, Bruce Labitt wrote:


Will try my best.  It's tough to keep your cool when your life, i.e. 
your own computer, is crapping out.  Much easier when it is someone 
else's.  Pity the machine is not up at the moment.  Been busy 
transferring my life to an RPI4, which hasn't been as easy as it seems 
like it should.  Writing this on my RPI4-8GB with RaspiOS-64bit.


Laptop in question, with the problem: System76 Oryx6. 32GB RAM, 1TB 
SSD Samsung 970 EVO Plus


HW Details:

=

Intel-10875H CPU, Intel HM470 chipset, MX25L12872F flash chip running 
System76 Open Firmware BIOS,
ITE IT5570E running System76 EC <https://github.com/system76/ec>, 
NVIDIA GeForce RTX 2060, 15.6" 1920x1080@144Hz LCD, LCD panel: Panda 
LM156LF1F (or equivalent)
External video outputs: 1x HDMI, 1x Mini DisplayPort 1.4, 1x 
DisplayPort over USB-C

Memory: Up to 64GB (2x32GB) dual-channel DDR4 SO-DIMMs @ 3200 MHz -- 32 GB

Networking: Gigabit Ethernet, M.2 PCIe/CNVi WiFi/Bluetooth, Intel Wi-Fi 6 
AX200/AX201


Power: 180W (19.5V, 9.23A) DC-in port, Barrel size: 5.5mm (outer), 
2.5mm (inner), Included AC adapter: Chicony A17-180P4A, AC power cord 
type: IEC C5, 73Wh 3-cell battery


Sound: Internal speakers & microphone, Combined headphone & microphone 
3.5mm jack, Combined microphone & S/PDIF (optical) 3.5mm jack, HDMI, 
Mini DisplayPort, USB-C DisplayPort audio


Storage: 1x M.2 (PCIe NVMe or SATA) - NVME 1 TB installed, 1x M.2 (PCIe 
NVMe only) - empty, MicroSD card reader


USB: 3x USB 3.2 Gen 1 Type-A, 1x USB Type-C with Thunderbolt 3

Dimensions: 15": 35.75cm x 23.8cm x 1.98cm, 1.99kg

=== End HW details 
==


Pop-OS-64 bit.  22.04.  Fresh install over existing Ubuntu 20.04 LTS.

I need to reboot the computer to get the kernel stuff.  Will followup 
with uname -a.


Problem occurs when using USB to program Teensy 4.1 microcontroller.  
Active programs at time of crash = Arduino IDE V 1.8.19, Teensyduino 
1.56 (required to allow Arduino to recognize and program Teensy 
microcontrollers), and Tytools, 0.9.7, which is a tool to program and 
manage Teensy processors.  Prior to 26 May 2022, this all worked 
flawlessly.


And, the above SW does work flawlessly on the RPI4B, running 
RaspberryPiOS-64bit, but not on my laptop.  On my laptop I get system 
crashes.


Only clues I have found are in syslog, and dmesg, but they only show 
some normal USB transactions, then the computer powering up again.


Thanks Ben, for at least answering (humoring?) me.  Been an awful week 
with this crash.  These crashes are so bad, that there's practically 
nothing in the logs.  Last entry is using the USB port.  And the power 
turns off.  This is a stab at it.  Let me know if there's anything 
else I need to add.  Beats me what the crucial details are, if I knew 
them, it would have been fixed by now.


The title of the thread was really about how to go about doing the 
debugging.  The methodology.  It's improbable that anyone else would 
have experienced this particular crash type.



On 6/6/22 14:09, Ben Scott wrote:

On Sun, Jun 5, 2022 at 12:09 PM Bruce Labitt
  wrote:

I am experiencing severe Linux crashes ...

Long meandering messages with critical details hidden throughout and
others omitted entirely will reduce the likelihood that others will
give you help for free.  (Or even when paid.)

In particular, specify what hardware you have, and the software you're
running, in one place.  If it's a scavenger hunt 

Re: Debugging linux crashes

2022-06-06 Thread Bruce Labitt
Will try my best.  It's tough to keep your cool when your life, i.e. your 
own computer, is crapping out.  Much easier when it is someone else's.  
Pity the machine is not up at the moment.  Been busy transferring my 
life to an RPI4, which hasn't been as easy as it seems like it should.  
Writing this on my RPI4-8GB with RaspiOS-64bit.


Laptop in question, with the problem: System76 Oryx6. 32GB RAM, 1TB SSD 
Samsung 970 EVO Plus


HW Details:

=

Intel-10875H CPU, Intel HM470 chipset, MX25L12872F flash chip running 
System76 Open Firmware BIOS,
ITE IT5570E running System76 EC <https://github.com/system76/ec>, NVIDIA 
GeForce RTX 2060, 15.6" 1920x1080@144Hz LCD, LCD panel: Panda LM156LF1F 
(or equivalent)
External video outputs: 1x HDMI, 1x Mini DisplayPort 1.4, 1x DisplayPort 
over USB-C

Memory: Up to 64GB (2x32GB) dual-channel DDR4 SO-DIMMs @ 3200 MHz -- 32 GB

Networking: Gigabit Ethernet, M.2 PCIe/CNVi WiFi/Bluetooth, Intel Wi-Fi 6 
AX200/AX201


Power: 180W (19.5V, 9.23A) DC-in port, Barrel size: 5.5mm (outer), 2.5mm 
(inner), Included AC adapter: Chicony A17-180P4A, AC power cord type: IEC 
C5, 73Wh 3-cell battery


Sound: Internal speakers & microphone, Combined headphone & microphone 
3.5mm jack, Combined microphone & S/PDIF (optical) 3.5mm jack, HDMI, Mini 
DisplayPort, USB-C DisplayPort audio


Storage: 1x M.2 (PCIe NVMe or SATA) - NVME 1 TB installed, 1x M.2 (PCIe 
NVMe only) - empty, MicroSD card reader


USB: 3x USB 3.2 Gen 1 Type-A, 1x USB Type-C with Thunderbolt 3

Dimensions: 15": 35.75cm x 23.8cm x 1.98cm, 1.99kg

=== End HW details 
==


Pop-OS-64 bit.  22.04.  Fresh install over existing Ubuntu 20.04 LTS.

I need to reboot the computer to get the kernel stuff.  Will followup 
with uname -a.


Problem occurs when using USB to program Teensy 4.1 microcontroller.  
Active programs at time of crash = Arduino IDE V 1.8.19, Teensyduino 
1.56 (required to allow Arduino to recognize and program Teensy 
microcontrollers), and Tytools, 0.9.7, which is a tool to program and 
manage Teensy processors.  Prior to 26 May 2022, this all worked flawlessly.


And, the above SW does work flawlessly on the RPI4B, running 
RaspberryPiOS-64bit, but not on my laptop. On my laptop I get system 
crashes.


Only clues I have found are in syslog, and dmesg, but they only show 
some normal USB transactions, then the computer powering up again.


Thanks Ben, for at least answering (humoring?) me.  Been an awful week 
with this crash.  These crashes are so bad, that there's practically 
nothing in the logs.  Last entry is using the USB port.  And the power 
turns off.  This is a stab at it.  Let me know if there's anything else 
I need to add.  Beats me what the crucial details are, if I knew them, 
it would have been fixed by now.


The title of the thread was really about how to go about doing the 
debugging.  The methodology.  It's improbable that anyone else would 
have experienced this particular crash type.



On 6/6/22 14:09, Ben Scott wrote:

On Sun, Jun 5, 2022 at 12:09 PM Bruce Labitt
  wrote:

I am experiencing severe Linux crashes ...

Long meandering messages with critical details hidden throughout and
others omitted entirely will reduce the likelihood that others will
give you help for free.  (Or even when paid.)

In particular, specify what hardware you have, and the software you're
running, in one place.  If it's a scavenger hunt just to find that
information you'll get a poor response.  I didn't see any mention of
the model of machine, for example.  List major components with model
or type (CPU model and speed, RAM size, type and size of storage,
model/type video controller, etc.).  You mention distribution and
version, which is good, but also please provide kernel version.  Also
include steps to reproduce (when it happens, when it doesn't),
commands you've tried, places you've looked for files, error messages
received, etc., etc.

I know you've been around long enough that you've seen plenty of bug
reports and knowledge base articles and the like.  Follow their
example.

http://www.catb.org/~esr/faqs/smart-questions.html

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Debugging linux crashes

2022-06-05 Thread Bruce Labitt
, date = 2021-02-07

The last error I get is related to USB, then the system hard collapses.  
Nothing else happens.  syslog got a string of \00\00... written to it.
dmesg shows the same time for the reboot, the same USB message and 
nothing else.

This has been pretty frustrating.



On 6/5/22 12:08 PM, Bruce Labitt wrote:
> I am experiencing severe Linux crashes, due to some unknown causes.
> They appear to be related to the use of Ty Commander and programming a
> Teensy 4.1 in the Arduino IDE environment, but I am not positive.  All I
> know at the moment, is that the screen freezes and is unresponsive to
> keyboard or mouse inputs.  The freeze lasts about 10 seconds and the
> laptop computer simply powers off.  The error seems to be quite
> repeatable.  It may or may not have started after a Nvidia graphics
> update.  I do know after a normal system update in Ubuntu 20.04 on May
> 26, my system has been subject to this problem.
>
> I have purged nvidia drivers and reinstalled them.  Didn't matter.
> Syslog and journalctl really didn't seem to show anything interesting,
> at least to my unsophisticated gaze.  Have been at wits' end.  In the
> interim, I moved a lot of my "life" off the laptop onto an RPI4B-8GB
> running Raspberry Pi OS 64 bits.  Been a difficult transition,
> recovering my capabilities, computing-wise.
>
> I backed up my whole laptop, and decided, perhaps insanely, to do a
> fresh install of Pop-OS 22.04.  With the base install, I ran into the
> same darn wifi issue that initially plagued me with this laptop.  After
> 2 days of searching, I found the answer to the slow and constantly
> rebooting iwlwifi card was to set powersave =2, which for some darned
> reason means powersave is off!  I am amazed that this issue still exists
> in this day and age; I found mentions of it across all Linux distros.  The
> issue is the wrong microcode is being sent to the wifi adapter, and the
> adapter kernel panics when it receives illegal commands from the Linux
> OS.  You can see it in the syslog very clearly.  I changed the powersave
> to 2, and the wifi adapter seemingly works fine now.  This is the Intel
> WiFi 6 AX200/AX201.
>
> Anyways, even with this brand new installation, and fixed wifi, my
> computer crashes with the combination of Arduino IDE 1.8.19, Teensyduino
> 1.56, and Ty Commander 0.9.7.
>
> However, I found that if I use Arduino IDE, and Teensyduino 1.56 only, I
> have been able to program my Teensy 4.1 without crashing. Unfortunately,
> I'd like to use Ty Commander to enable multiple Teensy's to be debugged.
>
> I'm looking for some suggestions on how to proceed.
>
> Is there a way to get or retain more information on system crashes?  I
> don't even know the true cause of the crash yet.  For some reason, I am
> having difficulty opening the kernel log.  Is it a text file?  Or do I
> need a special viewer?  I can open the syslog without issue.  In Pop OS
> I don't see multiple or older versions of logs.  I don't know why.
> Often older logs have a .0 or .1 extension.  The log I want to see is
> not the one created after the boot following the crash, it is the one
> before!
>
> I'm not sure I really have the skills to deal with linux-crashdump.  I
> haven't seen a step by step procedure that I feel comfortable enough to
> proceed with.  The references use a lot of words, re: fiddling with
> grub, not so many pictures, and I don't want to really go there without
> a really complete script. Is this the path I need to take?
>
> I could compile Ty Commander with debug binaries, I just need to type
> out:  cmake -DCMAKE_BUILD_TYPE=Debug ../..
>
> Can I run Ty Commander under valgrind, or something like it, that might
> avoid yet another total crash but perhaps capture something useful?
>
>
>
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Debugging linux crashes

2022-06-05 Thread Bruce Labitt
I am experiencing severe Linux crashes, due to some unknown causes.  
They appear to be related to the use of Ty Commander and programming a 
Teensy 4.1 in the Arduino IDE environment, but I am not positive.  All I 
know at the moment, is that the screen freezes and is unresponsive to 
keyboard or mouse inputs.  The freeze lasts about 10 seconds and the 
laptop computer simply powers off.  The error seems to be quite 
repeatable.  It may or may not have started after a Nvidia graphics 
update.  I do know after a normal system update in Ubuntu 20.04 on May 
26, my system has been subject to this problem.

I have purged nvidia drivers and reinstalled them.  Didn't matter.  
Syslog and journalctl really didn't seem to show anything interesting, 
at least to my unsophisticated gaze.  Have been at wits' end.  In the 
interim, I moved a lot of my "life" off the laptop onto an RPI4B-8GB 
running Raspberry Pi OS 64 bits.  Been a difficult transition, 
recovering my capabilities, computing-wise.

I backed up my whole laptop, and decided, perhaps insanely, to do a 
fresh install of Pop-OS 22.04.  With the base install, I ran into the 
same darn wifi issue that initially plagued me with this laptop.  After 
2 days of searching, I found the answer to the slow and constantly 
rebooting iwlwifi card was to set powersave =2, which for some darned 
reason means powersave is off!  I am amazed that this issue still exists 
in this day and age; I found mentions of it across all Linux distros.  The 
issue is the wrong microcode is being sent to the wifi adapter, and the 
adapter kernel panics when it receives illegal commands from the Linux 
OS.  You can see it in the syslog very clearly.  I changed the powersave 
to 2, and the wifi adapter seemingly works fine now.  This is the Intel 
WiFi 6 AX200/AX201.
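
For reference, a sketch of where that setting usually lives when the change 
goes through NetworkManager (which Pop!_OS uses); the drop-in file name below 
is the one Ubuntu-family installs typically ship, so treat it as an 
assumption (2 = disable Wi-Fi powersave, 3 = enable):

# /etc/NetworkManager/conf.d/default-wifi-powersave-on.conf
[connection]
wifi.powersave = 2

$ sudo systemctl restart NetworkManager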

Anyways, even with this brand new installation, and fixed wifi, my 
computer crashes with the combination of Arduino IDE 1.8.19, Teensyduino 
1.56, and Ty Commander 0.9.7.

However, I found that if I use Arduino IDE, and Teensyduino 1.56 only, I 
have been able to program my Teensy 4.1 without crashing. Unfortunately, 
I'd like to use Ty Commander to enable multiple Teensy's to be debugged.

I'm looking for some suggestions on how to proceed.

Is there a way to get or retain more information on system crashes?  I 
don't even know the true cause of the crash yet.  For some reason, I am 
having difficulty opening the kernel log.  Is it a text file?  Or do I 
need a special viewer?  I can open the syslog without issue.  In Pop OS 
I don't see multiple or older versions of logs.  I don't know why.  
Often older logs have a .0 or .1 extension.  The log I want to see is 
not the one created after the boot following the crash, it is the one 
before!
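
One low-effort way to keep the pre-crash log around is to make the journal 
persistent, assuming systemd-journald is only keeping it in RAM by default 
(a sketch, not verified on Pop!_OS specifically):

$ sudo mkdir -p /var/log/journal
$ sudo systemctl restart systemd-journald

# after the next crash, list boots and read the previous one
$ journalctl --list-boots
$ journalctl -b -1 -e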

I'm not sure I really have the skills to deal with linux-crashdump.  I 
haven't seen a step by step procedure that I feel comfortable enough to 
proceed with.  The references use a lot of words, re: fiddling with 
grub, not so many pictures, and I don't want to really go there without 
a really complete script. Is this the path I need to take?

I could compile Ty Commander with debug binaries, I just need to type 
out:  cmake -DCMAKE_BUILD_TYPE=Debug ../..

Can I run Ty Commander under valgrind, or something like it, that might 
avoid yet another total crash but perhaps capture something useful?



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Permanently changing nameserver

2021-08-02 Thread Bruce Labitt
Think it is being overridden in /etc/network/interfaces as well.  Tried
enabling prepend domain-name-servers in dhclient.conf, but that didn't work.
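
For the record, a sketch of the config knobs usually pointed to for pinning 
DNS on that box; which one applies depends on whether dhclient or dhcpcd is 
actually managing the interface (dhcpcd is the Raspbian default, so this is 
an assumption), and 192.168.1.1 is the gateway mentioned above:

# /etc/dhcp/dhclient.conf (if dhclient is in use)
supersede domain-name-servers 192.168.1.1;

# /etc/dhcpcd.conf (if dhcpcd is in use)
static domain_name_servers=192.168.1.1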

On Mon, Aug 2, 2021 at 1:37 PM Bruce Dawson  wrote:

> This is being set by dhclient when it gets the DHCP info.
>
> I believe you can "fix" this by removing the 'domain-name-servers' from
> the 'request' stanza in /etc/dhcp/dhclient.conf. You probably want to do
> this ONLY on the machines that you don't want to get the DNS servers from
> DHCP.
>
> --Bruce
> On 8/2/21 11:45 AM, Bruce Labitt wrote:
>
> Due to some ISP snafus, and network reconfiguration, some of my RPI's are
> pointing to the wrong nameserver.  I use a pihole for DNS.  Anyways, I have
> a single RPI2 as a print server and it is stubbornly pointing to the wrong
> IP address.  The RPI2 is running on Raspberry Pi Debian Stretch.  I only
> use this node as the cups printer.
>
> If I do cat /etc/resolv.conf I get, # generated by resolvconf
> nameserver  192.168.1.xxx
> I want to have it permanently point to 192.168.1.1, ie the gateway so that
> I let the router redirect DNS traffic to pihole.  Looking at resolvconf
> info, it says NOT to use it directly.  So how does one do this?
>
>
> ___
> gnhlug-discuss mailing 
> listgnhlug-discuss@mail.gnhlug.orghttp://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Raid1 Issue on RPI4

2021-05-29 Thread Bruce Labitt
Seems to be an NVMe-case-related thing.  Perhaps my clamping arrangement 
shorted out a connection.

I have the disks up.  However, the nvme disk is reporting as /dev/sdd 
now, not /dev/sdb.
This is dumb.  mdadm.conf has configured the array as:

ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=rpi4:0 
UUID=82415afc:85be4701:d47937be:cdb8b4e8
    devices=/dev/sdb1,/dev/sdc1

Is there a way to specify the devices by UUID so this always works?  If so, 
how do I get the UUID of the individual disks?
Right now, due to the RAID1, sdc and sdd have identical UUIDs.
But,

$ sudo mdadm --detail /dev/md0
/dev/md0:
    Version : 1.2
  Creation Time : Wed May 26 09:47:08 2021
     Raid Level : raid1
     Array Size : 976628736 (931.39 GiB 1000.07 GB)
  Used Dev Size : 976628736 (931.39 GiB 1000.07 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat May 29 19:47:24 2021
  State : clean, degraded
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 0
  Spare Devices : 0

Consistency Policy : bitmap

     Number   Major   Minor   RaidDevice State
    -   0    0    0  removed
    1   8   17    1  active sync

Is there a way to get back to a working 2 device RAID1 array?
There's nothing of any value on the array at the moment.  But I'm really 
not too happy about a disk disappearing - this doesn't seem robust.
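
For what it's worth, a sketch of the usual way around the device-name 
shuffle: mdadm assembles by the array UUID stored in the member superblocks, 
so the devices= line can simply be dropped from mdadm.conf, and the missing 
member can be re-added by whatever name it shows up as today (assuming the 
wandering partition is now /dev/sdd1):

# per-partition identifiers and array membership
$ sudo blkid /dev/sdc1 /dev/sdd1
$ sudo mdadm --examine /dev/sdd1

# regenerate an ARRAY line without hard-coded device names; output looks
# roughly like:  ARRAY /dev/md0 metadata=1.2 name=rpi4:0 UUID=82415afc:...
$ sudo mdadm --detail --scan

# put the missing member back (resync should be quick if the bitmap is intact)
$ sudo mdadm --manage /dev/md0 --add /dev/sdd1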

On 5/29/21 4:57 PM, Bruce Labitt wrote:
> So I got the raid1 array running on the RPI4.  Today, I tried to add a
> sub directory to the array.  Apparently something bad happened and one
> of the disks disappeared.  When this happened there was some kind of
> major upset, as the OS stopped functioning, like it no longer knew where
> commands were located.  sudo, stopped working.  I could type in commands
> via ssh and see the characters, but the command interpreter didn't
> function correctly.  I got a bash message saying it couldn't find the
> command.  Could not establish another ssh session with the RPI.  At that
> point, I pulled the power.
>
> Upon a normal reboot, I find sdb is missing.  md0 is still intact with
> just sdc1.  I'm not sure how mkdir would cause this, but...
>
> $ sudo mdadm --detail /dev/md0 states there are 2 RAID devices, but 1
> total devices.  1 Active device, 1 working device, 0 failed devices, 0
> spare devices.  Disk0 state is removed, and Disk1 state is active sync
> /dev/sdc1
>
> Assuming I can get sdb back, how do I get it back in the array? Just $
> sudo mdadm --manage /dev/md0 --add /dev/sdb1 ?  Is there a way with UUID's?
>
> Weird that the NVME disk sdb has just disappeared.  Even after a reboot
> it isn't present, even though the idiot light is on.  fdisk doesn't show
> it.  parted allows me to select it but print shows info on sda?  parted
> will select and print data on sdc.
>
> Is there anything that can be done to 'rescue' the nvme disk sdb?
>
>
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Raid1 Issue on RPI4

2021-05-29 Thread Bruce Labitt
So I got the raid1 array running on the RPI4.  Today, I tried to add a 
sub directory to the array.  Apparently something bad happened and one 
of the disks disappeared.  When this happened there was some kind of 
major upset, as the OS stopped functioning, like it no longer knew where 
commands were located.  sudo stopped working.  I could type in commands 
via ssh and see the characters, but the command interpreter didn't 
function correctly.  I got a bash message saying it couldn't find the 
command.  Could not establish another ssh session with the RPI.  At that 
point, I pulled the power.

Upon a normal reboot, I find sdb is missing.  md0 is still intact with 
just sdc1.  I'm not sure how mkdir would cause this, but...

$ sudo mdadm --detail /dev/md0 states there are 2 RAID devices, but 1 
total devices.  1 Active device, 1 working device, 0 failed devices, 0 
spare devices.  Disk0 state is removed, and Disk1 state is active sync 
/dev/sdc1

Assuming I can get sdb back, how do I get it back in the array? Just $ 
sudo mdadm --manage /dev/md0 --add /dev/sdb1 ?  Is there a way with UUID's?

Weird that the NVME disk sdb has just disappeared.  Even after a reboot 
it isn't present, even though the idiot light is on.  fdisk doesn't show 
it.  parted allows me to select it but print shows info on sda?  parted 
will select and print data on sdc.

Is there anything that can be done to 'rescue' the nvme disk sdb?


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Have suggestions for a "roll your own file server"?

2021-05-28 Thread Bruce Labitt
fstab updated.  Just needed to use UUID=, not PARTUUID.  I do notice 
significantly longer boot times now; I guess it takes a while to get the 
RAID1 up.  Used to take 5 seconds to be available to ssh into, now it takes 
about 45 seconds to a minute.
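
For anyone following along, a sketch of the sort of line that works (the 
UUID here is a placeholder; pull the real one off the array first):

$ sudo blkid /dev/md0
/dev/md0: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext4"

# /etc/fstab -- spaces or tabs both work as separators
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/raid1  ext4  defaults,nofail  0 0

(nofail is optional; it just keeps boot from hanging if the array comes up 
late or degraded, which may also be related to the longer boot times.)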

I think I'd like the server to initiate the backup.  So, first, I think 
I should set up ssh keys.  So the client/server (who's on first!) 
confusion rears its ugly head.  For the server (the storage center) to 
contact the client (one of several laptops), I need to use the client's 
public key?  So I need to create the client's key pair?  Once I have the 
client's public key I use the ssh-copy-id command from the client to the 
server?

Now if I want no passwords sent what do I do?  (Just keys).  I didn't 
quite understand the example on 
https://www.howtogeek.com/424510/how-to-create-and-install-ssh-keys-from-the-linux-shell/
 
It seems one can have the passphrase last a session; how do I get it to 
remember to use the clients' keys in "perpetuity"?  If there's a 
problem, how do I revoke the keys?
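
A minimal sketch of the usual arrangement when the server initiates: the key 
pair lives on the machine that runs ssh/rsync (the server), and its public 
half goes into authorized_keys on each laptop (user and host names here are 
made up):

# on the server
$ ssh-keygen -t ed25519 -f ~/.ssh/backup_ed25519 -N ""   # empty passphrase = no prompt
$ ssh-copy-id -i ~/.ssh/backup_ed25519.pub bruce@laptop.local

# test it
$ ssh -i ~/.ssh/backup_ed25519 bruce@laptop.local hostname

Revoking access is just deleting that line from ~/.ssh/authorized_keys on 
the laptop; if a passphrase is wanted after all, ssh-agent can cache it for 
a session.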

Have some questions on the rsync --backup option: is there a way to have 
something like a rotating backup, say a week's worth?  I don't understand 
the difference between --backup and just using -avr; what is the advantage 
of --backup?  The rsync man page was not clear to me (today).

Thanks

Sorry, had to truncate the message - gnhlug server complained about it 
being over 50KB




___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Have suggestions for a "roll your own file server"?

2021-05-26 Thread Bruce Labitt
Thanks for the compliment.  Put a bit of work into it.  Self taught hobby
machinist.  Self taught Linux as well.  Only have had machines for 18
months.  Embarrassed to say how long I've been using Linux, as I keep on
asking basic questions.

On Wed, May 26, 2021, 12:14 PM Tom Buskey  wrote:

> My Fedora /etc/fstab has spaces
> UUID=54103729-6e0a-4345-a2b8-8b8cded29ee1  /boot  ext4  defaults  1 2
>
> I've had clients initiate rsync for security.  I think the client
> initiation would offload the rsync compute from the server.
> For a home server, it's nice to just monitor the server instead of
> multiple clients.
>
> Nice buiild
>
> On Wed, May 26, 2021 at 11:00 AM Bruce Labitt <
> bruce.lab...@myfairpoint.net> wrote:
>
>> Finally back to this.  Built a stack of metal plates that house my RPI4,
>> a boot SSD, a 1TB RAID1 array, and both active and passive USB3 hubs.
>> Machined parts so everything is bolted and clamped down.  Have a PWM fan
>> that cools the RPI4 proportional to load that runs under systemd.  System
>> boots from SSD.  (No SD card.)  It's kind of a brick sh!thouse, but it's
>> sturdy.  Have created the RAID1 device - or it will be finished in 45
>> minutes.  It is still syncing.
>>
>> Now I'd like to add the md0 device to /etc/fstab.  The example I see is
>> with the device name.  From
>> https://www.tecmint.com/create-raid1-in-linux/
>> /dev/md0    /mnt/raid1    ext4    defaults    0 0
>>
>> I've read it is better to use the UUID.  Is the following the correct
>> syntax?
>>
>> PARTUUID=my_complete_md0_UUID  /mnt/raid1    ext4    defaults    0 0
>>
>> where my_complete_md0_UUID comes from
>> $ lsblk -o UUID /dev/md0
>>
>> Does one need to use tabs in fstab, or are spaces ok?
>>
>> Once I figure this out - I have to figure out some rsync magic.  Is it
>> better for the server to initiate the rsync, or the remote devices?
>>
>> After all this I have to make another one.  That shouldn't take as long
>> as the first time!  For some pictures of the hardware build see
>> https://www.hobby-machinist.com/threads/an-rpi4-based-file-server.92273/#post-846939
>>
>>
>>
>> On 3/10/21 8:49 PM, Bruce Labitt wrote:
>>
>> I'll take a look at that.  Thanks for the link.
>>
>> On Wed, Mar 10, 2021 at 8:15 PM Marc Nozell (m...@nozell.com) <
>> noz...@gmail.com> wrote:
>>
>>> Just to put a plug in for a colleague's work:
>>> https://perfectmediaserver.com/  It covers everything from disk
>>> purchasing strategies, burn-in, filesystems (ZFS, SnapRAID, etc).
>>>
>>> He also hosts a podcast that folks here may find interesting:
>>> https://selfhosted.show/
>>>
>>> -marc
>>>
>>> On Wed, Mar 10, 2021 at 8:08 PM  wrote:
>>>
>>>> OK:
>>>>
>>>> s/RPi4/<some-other-cheap-computer-with-USB-3.x>/g
>>>>
>>>> Unless you build multiple Ethernet or WiFi or LTE modem connections
>>>> your networking will still be the slowest thing.
>>>>
>>>> You do not need huge amounts of CPU power, or huge amounts of RAM.
>>>>
>>>> My basic point is that if you stick with simple RAID (like mirroring)
>>>> but also set up a unit that is remote from your own home you could protect
>>>> your own data from fire, flood and theft to a reasonable level and even
>>>> protect your friend's data by backing up their data to your device.
>>>>
>>>> Add snapshots as suggested by Tom Buskey, perhaps encryption of file
>>>> systems and data-streams and you can have a rather simple, server where you
>>>> learn a lot by planning it out and setting it up rather than buying an "off
>>>> the shelf" solution or simply using a "web backup".
>>>>
>>>> And good catch on the USB power supply.
>>>>
>>>> md
>>>> > On 03/10/2021 6:53 PM Joshua Judson Rosen 
>>>> wrote:
>>>> >
>>>> >
>>>> > I'm not sure about the Raspberry Pi 4, but up thru the raspi 3+ there
>>>> are... problems, e.g.:
>>>> >
>>>> > Beware of USB on the raspi: there are some bugs in the silicon that
>>>> pretty severely
>>>> > cripple performance when multiple `bulk' devices are used at
>>>> simultaneously,
>>>> > sometimes to the point of making it unusable (e.g. if you want to use

Re: Have suggestions for a "roll your own file server"?

2021-05-26 Thread Bruce Labitt
Finally back to this.  Built a stack of metal plates that house my RPI4, 
a boot SSD, a 1TB RAID1 array, and both active and passive USB3 hubs.  
Machined parts so everything is bolted and clamped down.  Have a PWM fan 
that cools the RPI4 proportional to load that runs under systemd.  
System boots from SSD.  (No SD card.)  It's kind of a brick sh!thouse, 
but it's sturdy.  Have created the RAID1 device - or it will be finished 
in 45 minutes.  It is still syncing.


Now I'd like to add the md0 device to /etc/fstab.  The example I see is 
with the device name.  From https://www.tecmint.com/create-raid1-in-linux/

/dev/md0    /mnt/raid1    ext4 defaults    0 0

I've read it is better to use the UUID.  Is the following the correct 
syntax?


PARTUUID=my_complete_md0_UUID /mnt/raid1    ext4    defaults  0 0

where my_complete_md0_UUID comes from
$ lsblk -o UUID /dev/md0

Does one need to use tabs in fstab, or are spaces ok?

Once I figure this out - I have to figure out some rsync magic.  Is it 
better for the server to initiate the rsync, or the remote devices?


After all this I have to make another one.  That shouldn't take as long 
as the first time!  For some pictures of the hardware build see 
https://www.hobby-machinist.com/threads/an-rpi4-based-file-server.92273/#post-846939




On 3/10/21 8:49 PM, Bruce Labitt wrote:

I'll take a look at that.  Thanks for the link.

On Wed, Mar 10, 2021 at 8:15 PM Marc Nozell (m...@nozell.com) 
<noz...@gmail.com> wrote:


Just to put a plug in for a colleague's work:
https://perfectmediaserver.com/
  It covers everything from disk purchasing strategies, burn-in,
filesystems (ZFS, SnapRAID, etc).

He also hosts a podcast that folks here may find interesting:
https://selfhosted.show/

-marc

On Wed, Mar 10, 2021 at 8:08 PM <jonhal...@comcast.net> wrote:

OK:

s/RPi4/<some-other-cheap-computer-with-USB-3.x>/g

Unless you build multiple Ethernet or WiFi or LTE modem
connections your networking will still be the slowest thing.

You do not need huge amounts of CPU power, or huge amounts of RAM.

My basic point is that if you stick with simple RAID (like
mirroring) but also set up a unit that is remote from your own
home you could protect your own data from fire, flood and
theft to a reasonable level and even protect your friend's
data by backing up their data to your device.

Add snapshots as suggested by Tom Buskey, perhaps encryption of
file systems and data-streams and you can have a rather
simple, server where you learn a lot by planning it out and
setting it up rather than buying an "off the shelf" solution
or simply using a "web backup".

And good catch on the USB power supply.

md
> On 03/10/2021 6:53 PM Joshua Judson Rosen <roz...@hackerposse.com> wrote:
>
>
> I'm not sure about the Raspberry Pi 4, but up thru the raspi
3+ there are... problems, e.g.:
>
> Beware of USB on the raspi: there are some bugs in the
silicon that pretty severely
> cripple performance when multiple `bulk' devices are used at
simultaneously,
> sometimes to the point of making it unusable (e.g. if you
want to use a better Wi-Fi
> adapter/antenna than the one built onto the board, and
connect an LTE modem so that
> your raspi roam onto that if Wi-Fi becomes unavailable,
throughput on whichever of those
> interfaces you're actually using can become abysmal). IIRC
the issue is basically
> that the number of USB endpoints that can be assigned
interrupts by the raspi controller
> is _incredibly small_; and it's common for high-throughput
devices to have multiple endpoints per device--
> sometimes even one USB device will have more endpoints that
the raspi USB controller can handle.
>
> Also, `network fileserver with USB-attached hard drives' is
kind of the `peak unfitness'
> for the raspberry pi. Specifically if you've got it attached
to ethernet,
> the ethernet is attached through the same slow-ish USB bus
as your HDDs.
>
> (the onboard Wi-Fi BTW is SDIO; so if you avoid using the
onboard Wi-Fi, I guess you might also
>  be able to make your µSD card faster...)
>
> ALSO: you'll really want to use an externally-powered USB
hub for USB devices
> that are not totally trivial, because the raspi

Re: Open source IP to Fax software?

2021-05-03 Thread Bruce Labitt
On 5/3/21 1:16 PM, Derek Atkins wrote:
> Hi,
>
> On Mon, May 3, 2021 1:01 pm, Bruce Labitt wrote:
>> Not sure if this is a unicorn or it is common.  A simple search gave
>> ambiguous results.  I'm looking for some program that will send a pdf
>> file to a remote FAX machine.  I don't have a fax machine anymore.  I
>> don't have a fax modem.  It's hard to remember how long ago that was
>> even a thing...
> Do you have a VOIP line?  If so, you can set up Asterisk to send a Fax
> over SIP.  You do need to (manually) convert the PDF to a proper TIFF file
> and then *that* gets sent.  I can give you more details if you have the
> VOIP support, otherwise there's not much point in explaining the configs
> and scripts.
>
>> I can solve this problem in a different way, but wondering if there was
>> such a beast.  Thanks.
> -derek
>
Well technically, I'd think it was VOIP.  All my service comes over 
fiber.  There's no POTS line to the house, just a single non-conducting 
fiber.

At the moment, this appears to be a bigger PIA than necessary.  I'll just 
toss the thing in the mail.  Yes, snail mail...  Oh well.

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Open source IP to Fax software?

2021-05-03 Thread Bruce Labitt
Not sure if this is a unicorn or it is common.  A simple search gave 
ambiguous results.  I'm looking for some program that will send a pdf 
file to a remote FAX machine.  I don't have a fax machine anymore.  I 
don't have a fax modem.  It's hard to remember how long ago that was 
even a thing...

I can solve this problem in a different way, but wondering if there was 
such a beast.  Thanks.

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Python re to separate some data values

2021-04-28 Thread Bruce Labitt
On 4/28/21 6:35 PM, Henry Gessau wrote:
> On 4/28/2021 17:57, Bruce Labitt wrote:
>> I've looked in https://www.w3schools.com/python/python_regex.asp,
>> https://docs.python.org/3/library/re.html,
>> https://docs.python.org/3.8/howto/regex.html,
>> https://www.guru99.com/python-regular-expressions-complete-tutorial.html#2,
>> https://www.makeuseof.com/regular-expressions-python/, and
>> https://www.dataquest.io/blog/regular-expressions-data-scientists/ and
>> https://realpython.com/regex-python/
> You've missed the best site of all for regexes: https://regex101.com
> Indispensable for developing and debugging regexes.
>
> Here is my 2-minute attempt: https://regex101.com/r/jGu82j/1
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
How does that site work?  I went there, but there weren't any apparent 
instructions or guidelines on what to do.  It's not clear to me that what's 
on that site directly maps to re in Python.  Still, it is an interesting 
concept.

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Python re to separate some data values

2021-04-28 Thread Bruce Labitt
On 4/28/21 6:28 PM, Joshua Judson Rosen wrote:
> On 4/28/21 5:57 PM, Bruce Labitt wrote:
>> If someone could suggest how to do this, I'd appreciate it.  I've
>> scraped a table of fine thread metric screw parameters from a website.
>> I'm having some trouble with regex (re) separating the numbers.  Have
>> everything working save for this last bit.
>>
>> Here is a sample string:
>>
>> r1[1] = ' 17.98017.87417.65517.59917.43917.291'
>>
>> I'm trying to separate the numbers.  It should read like this:
>>
>> 17.980, 17.874, 17.655, 17.599, 17.439, 17.291
>>
>> There's more than 200 lines of this, so it would be great to automate
>> it!  Each number has 3 digits of precision, so I want to add a comma and
>> a space after the third digit.
>>
>> re.search('(\.)\d{3,3}', r1[1]) returns
>>  so it found the first instance.
>>
>> But, re.sub('(\.)\d{3,3}', '(\.)\d{3,3}, ', r1[1]) yields a KeyError:
>> '\\d' (Python3.8).  Get bad escape \d at position 4.
> The second argument [the replacement string] to re.sub(pattern, repl, string) 
> is not supposed to
> just be a variation of the pattern-matching string that you passed as the 
> first argument.
>
> I think the best illustration that I can give here is to just fix this up for 
> you:
>
>   re.sub(r'(\.)(\d{3,3})', r'\1\2, ', r1[1])
>
Thanks for the embarrassingly concise answer.  It is greatly 
appreciated.  Can you explain the syntax of the 2nd argument?  I haven't 
seen that before.  Where can I find further examples?

What astounds me is re.search allowed my 1st argument, but re.sub barfed 
all over the same 1st argument.
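
For the archive: in the replacement string, \1 and \2 are backreferences to 
the two capture groups (the dot and the three digits), so each match gets 
written back out with ", " appended.  re.search only parses the pattern, but 
re.sub also parses the replacement string, where \d is not a legal escape -- 
hence the KeyError / bad escape.  A quick check from the shell (sample value 
trimmed):

$ python3 -c 'import re; print(re.sub(r"(\.)(\d{3,3})", r"\1\2, ", " 17.98017.87417.655"))'
 17.980, 17.874, 17.655, 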

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Python re to separate some data values

2021-04-28 Thread Bruce Labitt
If someone could suggest how to do this, I'd appreciate it.  I've 
scraped a table of fine thread metric screw parameters from a website.  
I'm having some trouble with regex (re) separating the numbers.  Have 
everything working save for this last bit.

Here is a sample string:

r1[1] = ' 17.98017.87417.65517.59917.43917.291'

I'm trying to separate the numbers.  It should read like this:

17.980, 17.874, 17.655, 17.599, 17.439, 17.291

There's more than 200 lines of this, so it would be great to automate 
it!  Each number has 3 digits of precision, so I want to add a comma and 
a space after the third digit.

re.search('(\.)\d{3,3}', r1[1]) returns
 so it found the first instance.

But, re.sub('(\.)\d{3,3}', '(\.)\d{3,3}, ', r1[1]) yields a KeyError: 
'\\d' (Python3.8).  Get bad escape \d at position 4.

And, if one adds enough escapes to avoid a KeyError, the function 
actually does nothing, since Out[117] is the same as r1[1]

In [117]: re.sub('(\.)\\\d{3,3}', '(\.)\\\d{3,3}, ', r1[1])
Out[117]: ' 17.98017.87417.65517.59917.43917.291'

I've looked in https://www.w3schools.com/python/python_regex.asp, 
https://docs.python.org/3/library/re.html, 
https://docs.python.org/3.8/howto/regex.html, 
https://www.guru99.com/python-regular-expressions-complete-tutorial.html#2, 
https://www.makeuseof.com/regular-expressions-python/, and 
https://www.dataquest.io/blog/regular-expressions-data-scientists/ and 
https://realpython.com/regex-python/

Is there a way to do this with re?  re.finditer seems to work ok, it 
finds all the indices correctly.

In [121]: it = re.finditer('(\.)\d{3,3}', r1[1])
In [122]: next(it)
Out[122]: 
In [123]: next(it)
Out[123]: 
In [124]: next(it)
Out[124]: 
In [125]: next(it)
Out[125]: 
In [126]: next(it)
Out[126]: 
In [127]: next(it)
Out[127]: 

Suppose I could brute force it at this point, but one would think 
re.sub  should work, if the magic flooby dust was appropriately 
sprinkled about.  I'm clearly missing something important.  Anyone got a 
hint?





___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Trying to figure out if I have a bad NVME device

2021-03-23 Thread Bruce Labitt
It would seem there was an issue with the socket on the nvme enclosure -
actually the WD NVME connector is slightly differently dimensioned than the
Samsung.  I swapped housings and I got the WD NVME to be recognized.  Now
it can be partitioned.
I'm in the midst of partitioning with gparted and selected gpt over MBR.
Now to create a new partition.  I'd like to use the whole 1TB as a
partition.  It's defaulting to free space preceding = 1MiB, free space
following = 0MiB.

What does one use as a Partition name?  What is a standard convention?
What about label?  I am selecting EXT4 as the file system.  Any advice?
Thanks.
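
On names: the GPT partition name is just a free-form label (it can be left 
blank), and the filesystem label is a separate thing that blkid and file 
managers show.  A sketch with made-up names, assuming the disk still shows 
up as /dev/sdb:

# GPT partition name (optional)
$ sudo sgdisk -c 1:"wd-backup" /dev/sdb

# ext4 filesystem label, at mkfs time or after the fact
$ sudo mkfs.ext4 -L backup1 /dev/sdb1
$ sudo e2label /dev/sdb1 backup1
$ sudo blkid /dev/sdb1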

On Sun, Mar 21, 2021 at 11:05 AM Bruce Labitt  wrote:

> I have a USB3.2-PCIE NVME adapter (actually have 3 of them) using the
> JM583 chip.  I also have a Samsung 970 EVO Pro 1TB NVME and recently bought
> 2 WD 1TB NVME cards.
>
> I cannot get a WD NVME to be recognized by the OS.  I haven't tried the
> second WD NVME as I am leaving the package unopened.
>
> lsusb reveals the JMicron adapter so I have the ID
> fdisk -l and lsblk do not show the WD device, nor the JM adapter
> The WD disk has never been partitioned or formatted.
>
> sudo dmesg | tail -n 50 shows some sort of issue.
> [ 4015.843758] usb 2-2.1.2: USB disconnect, device number 10
> [ 7180.940213] usb 2-2.1.4: USB disconnect, device number 5
> [ 7186.023443] usb 2-2.1.2: new SuperSpeed Gen 1 USB device number 11
> using xhci_hcd
> [ 7186.054656] usb 2-2.1.2: New USB device found, idVendor=152d,
> idProduct=0583, bcdDevice= 2.09
> [ 7186.054671] usb 2-2.1.2: New USB device strings: Mfr=1, Product=2,
> SerialNumber=3
> [ 7186.054684] usb 2-2.1.2: Product: USB to PCIE Bridge
> [ 7186.054695] usb 2-2.1.2: Manufacturer: JMicron
> [ 7186.054706] usb 2-2.1.2: SerialNumber: 0123456789ABCDEF
> [ 7186.069481] scsi host1: uas
> [ 7186.070909] scsi 1:0:0:0: Direct-Access JMicron  Generic
>  0209 PQ: 0 ANSI: 6
> [ 7186.072283] sd 1:0:0:0: Attached scsi generic sg1 type 0
> [ 7194.231709] sd 1:0:0:0: [sdb] Unit Not Ready
> [ 7194.231728] sd 1:0:0:0: [sdb] Sense Key : 0x4 [current]
> [ 7194.231744] sd 1:0:0:0: [sdb] ASC=0x44 <>ASCQ=0x81
> [ 7194.233639] sd 1:0:0:0: [sdb] Read Capacity(16) failed: Result:
> hostbyte=0x00 driverbyte=0x08
> [ 7194.233657] sd 1:0:0:0: [sdb] Sense Key : 0x4 [current]
> [ 7194.233673] sd 1:0:0:0: [sdb] ASC=0x44 <>ASCQ=0x81
> [ 7194.235281] sd 1:0:0:0: [sdb] Read Capacity(10) failed: Result:
> hostbyte=0x00 driverbyte=0x08
> [ 7194.235297] sd 1:0:0:0: [sdb] Sense Key : 0x4 [current]
> [ 7194.235313] sd 1:0:0:0: [sdb] ASC=0x44 <>ASCQ=0x81
> [ 7194.236113] sd 1:0:0:0: [sdb] 0 512-byte logical blocks: (0 B/0 B)
> [ 7194.236129] sd 1:0:0:0: [sdb] 0-byte physical blocks
> [ 7194.237556] sd 1:0:0:0: [sdb] Test WP failed, assume Write Enabled
> [ 7194.238118] sd 1:0:0:0: [sdb] Asking for cache data failed
> [ 7194.238133] sd 1:0:0:0: [sdb] Assuming drive cache: write through
> [ 7194.239724] sd 1:0:0:0: [sdb] Optimal transfer size 33553920 bytes not
> a multiple of physical block size (0 bytes)
> [ 7194.284844] sd 1:0:0:0: [sdb] Unit Not Ready
> [ 7194.284866] sd 1:0:0:0: [sdb] Sense Key : 0x4 [current]
> [ 7194.284881] sd 1:0:0:0: [sdb] ASC=0x44 <>ASCQ=0x81
> [ 7194.286398] sd 1:0:0:0: [sdb] Read Capacity(16) failed: Result:
> hostbyte=0x00 driverbyte=0x08
> [ 7194.286415] sd 1:0:0:0: [sdb] Sense Key : 0x4 [current]
> [ 7194.286430] sd 1:0:0:0: [sdb] ASC=0x44 <>ASCQ=0x81
> [ 7194.288145] sd 1:0:0:0: [sdb] Read Capacity(10) failed: Result:
> hostbyte=0x00 driverbyte=0x08
> [ 7194.288166] sd 1:0:0:0: [sdb] Sense Key : 0x4 [current]
> [ 7194.288183] sd 1:0:0:0: [sdb] ASC=0x44 <>ASCQ=0x81
> [ 7194.293600] sd 1:0:0:0: [sdb] Attached SCSI disk
>
> However, the same adapter works fine using the 970 Pro NVME device.  On
> the 970 I have the complete backup to my laptop.  The 970 auto mounts.
> Is it possible that UAS doesn't work for the WD NVME?  Or is the WD NVME
> defective?  Or the WD NVME hasn't been prepared properly?   My google-fu
> seems to be weak - it's been difficult to find appropriate information.
> Hope it's just that I missed a step...
> I have heard of blacklisting the UAS driver, but it seems odd the adapter
> works reliably for the 970 but not the WD.
>
> I'd greatly appreciate a hint.  Currently running on an RPI4B-4GB
> Raspberry Pi OS 32bit since my new laptop bit the dust.  The laptop has
> been sent back for repair.
> Thanks for your patience...
>
>
>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: dd cloning a Win10 HDD to SSD

2021-03-23 Thread Bruce Labitt
Worked on the first boot.  Booted in maybe 10 seconds!  Ran CrystalDiskMark
on the before and after.  It's the random 4k stuff that really makes a
difference.
The Samsung EVO860 SSD is a minimum of 77x faster on random write and 100x
faster on random read as compared to the WD HDD (RND4K, Q1T1).  It's like
a totally different machine.  Hope Windows won't complain later.

So... for the record, dd did work to clone a Win10 disk to an SSD.  It
copied over all 5 partitions, including the MS secret partition and EFI,
plus whatever nonsense partitions that Dell put on this disk.  Will check
to see if all this survives a reboot...  Yes!
Well, hot diggity dog!  At least for this disk pair, dd worked great!  No
BS, no FUD, it just worked. :)
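
For anyone repeating this, a typical invocation looks something like the 
following (device names are examples only -- double-check with lsblk, and 
make sure neither disk is mounted):

$ lsblk
$ sudo dd if=/dev/sdX of=/dev/sdY bs=4M conv=fsync status=progress
$ sync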

On Tue, Mar 23, 2021 at 1:00 PM Bruce Labitt  wrote:

> I was 20 minutes into dd when I noticed that the disks were mounted.
> Aborted dd.  Unmounted both disks and restarted dd.  I'm at the tail end of
> the run, and hoping it completed without errors.  If it barfs I may try
> ddrescue, the gnu version.
>
> The old disk has really bad random access rates.  It's like less than 0.5
> MB/sec for random 4k access.  On Win10 it's incredibly slow.
>
> FYI, the only reason I even turned on this sorry excuse of a laptop was
> the NH VINI website would not render the Covid-19 vaccine registration site
> correctly in Chromium, Chrome or Firefox in Linux yesterday.  It was
> impossible to fill out the form.  After 3 hours of really slow windows
> updating, I could install the latest version of Chrome on Windows 10.  Then
> and only then could I register.  It took 6 hours to update the laptop to be
> current.  So despite the fact that I don't use Win10 regularly, it was
> useful yesterday.  The laptop experience was so painful that I decided to
> replace the HDD with a spare SSD.
>
> The State of NH made the registration forms only work with the latest
> version of Edge, Chrome and Firefox.  Unfortunately, yesterday, in Linux,
> the versions were slightly older.  For Firefox the Windows version was
> 86.0.1, and in Linux it was 86.0.  Today I noticed that 86.0.1 is available
> for Linux.
>
> Anyways, that's why I'm attempting this.  If for some odd reason I have
> to use the Win10 laptop again maybe it won't be so painful.  20 minute boot
> times are just awful.  An SSD should bring it to 20 seconds at least
> according to a disk benchmark tool I ran.
>
> Personally, I hope my real laptop gets repaired.  It croaked after a
> couple of months.  Brand new.  In the interim, I am dd'ing with a RPI4, as
> that's all I have.  The RPI boots direct from SSD and is overclocked to
> 2Ghz, so it's tolerable, but not speedy.
>
> On Tue, Mar 23, 2021, 12:19 PM Joshua Judson Rosen 
> wrote:
>
>> On 3/23/21 9:07 AM, Bruce Labitt wrote:
>> > On Tue, Mar 23, 2021 at 8:52 AM Dan Jenkins > d...@rastech.com>> wrote:
>> >
>> > In my experience dd works. Make sure the destination disk is larger
>> than
>> > the source. I've had problems sometimes when they were the exact
>> same
>> > size. Any other issue was due to issues on the source disk, in which
>> > case ddrescue, has worked.
>>  >
>> > In my case the disks report to be the same size in lsblk.
>> > fdisk -l reports the hdd is 1000204886016 bytes, and the sdd
>> is 1000204886016 bytes or exactly the same size.
>> > Guess I will try dd.  Fingers crossed...
>> If the destination disk does end up not being big enough,
>> you can probably shrink the data on the source disk a little
>> (and then fix up the GPT on the destination disk, if you're using GPT--
>>   because GPT wants to keep a `backup table' at the _physical tail end_
>>   of the disk and some implementations of GPT will refuse to read
>>   the partition table if the disk doesn't pass that sniff-test).
>>
>> Just be sure that you don't have the source filesystem mounted writeably
>> when
>> you're trying to copy it like that...: it's pretty important
>> that nothing be actively using a filesystem and causing the data/metadata
>> stored within it to change as you try to dd a copy of its underlying
>> storage.
>>
>> And, since I don't think anyone mentioned this: be sure to use a big
>> enough blocksize
>> with dd, because the default 512-byte bs will be incredibly slow
>> (and I guess *could* in theory cause a lot of extra wear on the SSD
>>   due to write-amplification, though I guess the Linux block layer
>>   should protect against that?).
>>
>> For the rest of my response, I'm going to mostly ignore the "

Re: dd cloning a Win10 HDD to SSD

2021-03-23 Thread Bruce Labitt
I was 20 minutes into dd when I noticed that the disks were mounted.
Aborted dd.  Unmounted both disks and restarted dd.  I'm at the tail end of
the run, and hoping it completed without errors.  If it barfs I may try
ddrescue, the gnu version.

The old disk has really bad random access rates.  It's like less than 0.5
MB/sec for random 4k access.  On Win10 it's incredibly slow.

FYI, the only reason I even turned on this sorry excuse of a laptop was the
NH VINI website would not render the Covid-19 vaccine registration site
correctly in Chromium, Chrome or Firefox in Linux yesterday.  It was
impossible to fill out the form.  After 3 hours of really slow windows
updating, I could install the latest version of Chrome on Windows 10.  Then
and only then could I register.  It took 6 hours to update the laptop to be
current.  So despite the fact that I don't use Win10 regularly, it was
useful yesterday.  The laptop experience was so painful that I decided to
replace the HDD with a spare SSD.

The State of NH made the registration forms only work with the latest
version of Edge, Chrome and Firefox.  Unfortunately, yesterday, in Linux,
the versions were slightly older.  For Firefox the Windows version was
86.0.1, and in Linux it was 86.0.  Today I noticed that 86.0.1 is available
for Linux.

Anyways, that's why I'm attempting this.  If for some odd reason I have to
use the Win10 laptop again maybe it won't be so painful.  20 minute boot
times are just awful.  An SSD should bring it to 20 seconds at least
according to a disk benchmark tool I ran.

Personally, I hope my real laptop gets repaired.  It croaked after a couple
of months.  Brand new.  In the interim, I am dd'ing with a RPI4, as that's
all I have.  The RPI boots direct from SSD and is overclocked to 2Ghz, so
it's tolerable, but not speedy.

On Tue, Mar 23, 2021, 12:19 PM Joshua Judson Rosen 
wrote:

> On 3/23/21 9:07 AM, Bruce Labitt wrote:
> > On Tue, Mar 23, 2021 at 8:52 AM Dan Jenkins  d...@rastech.com>> wrote:
> >
> > In my experience dd works. Make sure the destination disk is larger
> than
> > the source. I've had problems sometimes when they were the exact same
> > size. Any other issue was due to issues on the source disk, in which
> > case ddrescue has worked.
>  >
> > In my case the disks report to be the same size in lsblk.
> > fdisk -l reports the hdd is 1000204886016 bytes, and the ssd
> is 1000204886016 bytes or exactly the same size.
> > Guess I will try dd.  Fingers crossed...
> If the destination disk does end up not being big enough,
> you can probably shrink the data on the source disk a little
> (and then fix up the GPT on the destination disk, if you're using GPT--
>   because GPT wants to keep a `backup table' at the _physical tail end_
>   of the disk and some implementations of GPT will refuse to read
>   the partition table if the disk doesn't pass that sniff-test).
>
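For the record, if it does come to shrinking and fixing up the GPT, sgdisk can
rebuild the backup table at the new tail end of the destination - a rough
sketch, assuming GPT partitioning and that /dev/sdf is still the destination:

$ sudo sgdisk -e /dev/sdf   # move the backup GPT structures to the end of the disk
$ sudo sgdisk -v /dev/sdf   # then verify the table

Hopefully moot here, since both disks report identical sizes.
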
> Just be sure that you don't have the source filesystem mounted writeably
> when
> you're trying to copy it like that...: it's pretty important
> that nothing be actively using a filesystem and causing the data/metadata
> stored within it to change as you try to dd a copy of its underlying
> storage.
>
> And, since I don't think anyone mentioned this: be sure to use a big enough
> blocksize
> with dd, because the default 512-byte bs will be incredibly slow
> (and I guess *could* in theory cause a lot of extra wear on the SSD
>   due to write-amplification, though I guess the Linux block layer
>   should protect against that?).
>
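Noted on the block size.  For anyone digging this out of the archive later, a
fuller invocation would look something like the line below - a sketch, not a
recipe; conv=fsync just makes dd flush everything to the SSD before it exits:

$ sudo dd if=/dev/sdb of=/dev/sdf bs=4M conv=fsync status=progress

Anything from 1M up should give roughly the same throughput; it's the default
512-byte blocks that really hurt.
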
> For the rest of my response, I'm going to mostly ignore the "Windows 10"
> part of the question and provide guidance to people looking through the
> list archive
> for guidance doing the same sort of data-migration but for Linux disks
> (though I suspect that the underlying rationales port to other operating
> systems)...:
>
> There are theoretically reasons to make a fresh filesystem / LVM PV / RAID
> / whatever
> structure, with its configuration tuned to the native block-size of the
> underlying
> flash controller..., but in practice it never seems to matter enough to
> make it worthwhile
> to even bother figuring out what any of those numbers are.
>
> There are however real + very practical reasons to initialize storage on
> different physical disks
> with different *IDs* if you want to be able to use them together at the
> same time, e.g.:
> different ext filesystem UUIDs, or different PV IDs if you want to be able
> to use them both with LVM. You can also change those IDs after the fact.
>
> If I understand your use case, you really just have a partition table with
> one or two filesystems + maybe swap space, 

Re: dd cloning a Win10 HDD to SSD

2021-03-23 Thread Bruce Labitt
Thanks for the update.  The HDD is a WD.  If dd fails, I may try the
Acronis.

On Tue, Mar 23, 2021, 11:33 AM Greg Kettmann  wrote:

> I used Acronis True Image. It's free but only works if one of the drives
> is Western Digital.  It worked great for my brother as well.
>
> I believe I've also used EaseUS, further in the past. Even  further back I
> think I used Partition Magic (to clone a boot drive) but I don't think
> that's free anymore.
>
> I've cloned boot drives (mostly Windows, I usually rebuild Linux machines)
> quite a few times and have had excellent luck with the utilities.  They're
> flexible with (larger) drive or partition sizes. I've not had multiple
> other partitions to contend with but copying them shouldn't be difficult.
> It's the MBR that is the challenge: getting all the boot pointers
> pointing to the right locations.
>
> Good luck.  It's well worth it.  On an old machine the boot time was
> reduced by a factor of ten. Your mileage may vary but I've done this at
> least 6 times and never been disappointed.  Now I always make my boot drive
> is an SSD.
>
> Greg
>
> Get TypeApp for Android <http://www.typeapp.com/r?b=16417>
> On Mar 23, 2021, at 7:33 AM, Bruce Labitt  wrote:
>>
>> I'd be grateful to learn what worked.  No need to waste my time more than
>> necessary.
>>
>> On Tue, Mar 23, 2021, 8:30 AM Greg Kettmann < g...@kettmann.com> wrote:
>>
>>> I don't know if dd works. I've done this several times using freely
>>> available utilities.  In one case I tried and it failed.  I simply used a
>>> different utility and it worked.  I was impressed with the results,
>>> particularly with dramatically improved boot times.
>>>
>>> Sorry to be vague. You were asking about dd.  If you're interested in
>>> which utility(s) I used just let me know.  I should have records.  The last
>>> time was a year ago.
>>>
>>> Greg
>>>
>>> Get TypeApp for Android <http://www.typeapp.com/r?b=16417>
>>> On Mar 22, 2021, at 10:19 PM, Bruce Labitt < bdlab...@gmail.com> wrote:
>>>>
>>>> Have this excruciatingly slow Win10 HDD I'd like to clone to SSD.
>>>> Reading about how to do this leads me to dd as a way to clone the disk.
>>>> The disks are close in size.  According to lsblk, the HDD sdb is 931.5GB,
>>>> and the SSD sdf is 931.5GB.
>>>>
>>>> sdb has 5 partitions on it.
>>>> 1) EFI  500MiB
>>>> 2) MS reserved partition  128MiB
>>>> 3) OS "basic data" partition 918.07GiB
>>>> 4) WINRETOOLS 852MiB
>>>> 5) Image   11.56GiB
>>>>
>>>> sdf has stuff on it, which I presume will be wiped out by dd.
>>>>
>>>> Since the sizes are "equal", can I just # dd if=/dev/sdb of=/dev/sdf
>>>> bs=1M status=progress and be done with it?  Is there anything else that I'd
>>>> need to do to get it to boot?
>>>>
>>>>   --
>>>>
>>>> gnhlug-discuss mailing list
>>>> gnhlug-discuss@mail.gnhlug.org
>>>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>>>>
>>>>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: dd cloning a Win10 HDD to SSD

2021-03-23 Thread Bruce Labitt
It would seem Clonezilla requires x86* Linux to run.  Unfortunately I'm on
an RPI4, as my "real Linux laptop" is being repaired.  Seems I'm stuck with
dd or gddrescue.
At the moment it has taken 6571s to copy 677GB...  hope this works.  Less
than an hour to go, I hope.
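If this pass fails and I do fall back to gddrescue, the invocation should be
roughly the following - a sketch only, with the map file name being arbitrary;
the map is what lets it resume and retry just the unfinished spots:

$ sudo ddrescue -f -n /dev/sdb /dev/sdf sdb_to_sdf.map

-f is required because the output is a whole device, and -n skips the slow
scraping phase on the first pass; rerunning later without -n (or with -r3)
goes back over whatever the map file still marks as unfinished.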

On Tue, Mar 23, 2021 at 11:13 AM Tom Buskey  wrote:

> I've used https://clonezilla.org/ in the past with great success.
> Windows 7 & Server 2008 were the last windows systems I've used it on.
>
> I always connected via ssh to a machine with storage to place the images.
> You can restore to the same size drive or larger.
> The image is created with partclone so it's smaller, but it will fall back
> to dd if needed.
>
>
>
> On Tue, Mar 23, 2021 at 9:09 AM Bruce Labitt  wrote:
>
>> In my case the disks report to be the same size in lsblk.
>> fdisk -l reports the hdd is 1000204886016 bytes, and the ssd
>> is 1000204886016 bytes or exactly the same size.
>> Guess I will try dd.  Fingers crossed...
>>
>>
>>
>>
>> On Tue, Mar 23, 2021 at 8:52 AM Dan Jenkins  wrote:
>>
>>> In my experience dd works. Make sure the destination disk is larger than
>>> the source. I've had problems sometimes when they were the exact same
>>> size. Any other issue was due to issues on the source disk, in which
>>> case ddrescue has worked.
>>>
>>> On 2021-03-23 08:33, Bruce Labitt wrote:
>>> > I'd be grateful to learn what worked.  No need to waste my time more
>>> than
>>> > necessary.
>>> >
>>> > On Tue, Mar 23, 2021, 8:30 AM Greg Kettmann  wrote:
>>> >
>>> >> I don't know if dd works. I've done this several times using freely
>>> >> available utilities.  In one case I tried and it failed.  I simply
>>> used a
>>> >> different utility and it worked.  I was impressed with the results,
>>> >> particularly with dramatically improved boot times.
>>> >>
>>> >> Sorry to be vague. You were asking about dd.  If you're interested in
>>> >> which utility(s) I used just let me know.  I should have records.
>>> The last
>>> >> time was a year ago.
>>> >>
>>> >> Greg
>>> >>
>>> >> Get TypeApp for Android <http://www.typeapp.com/r?b=16417>
>>> >> On Mar 22, 2021, at 10:19 PM, Bruce Labitt 
>>> wrote:
>>> >>> Have this excruciatingly slow Win10 HDD I'd like to clone to SSD.
>>> >>> Reading about how to do this leads me to dd as a way to clone the
>>> disk.
>>> >>> The disks are close in size.  According to lsblk, the HDD sdb is
>>> 931.5GB,
>>> >>> and the SSD sdf is 931.5GB.
>>> >>>
>>> >>> sdb has 5 partitions on it.
>>> >>> 1) EFI  500MiB
>>> >>> 2) MS reserved partition  128MiB
>>> >>> 3) OS "basic data" partition 918.07GiB
>>> >>> 4) WINRETOOLS 852MiB
>>> >>> 5) Image   11.56GiB
>>> >>>
>>> >>> sdf has stuff on it, which I presume will be wiped out by dd.
>>> >>>
>>> >>> Since the sizes are "equal", can I just # dd if=/dev/sdb of=/dev/sdf
>>> >>> bs=1M status=progress and be done with it?  Is there anything else
>>> that I'd
>>> >>> need to do to get it to boot?
>>> >>>
>>> >>> --
>>> >>>
>>> >>> gnhlug-discuss mailing list
>>> >>> gnhlug-discuss@mail.gnhlug.org
>>> >>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>>> >>>
>>> >>>
>>> >
>>> > ___
>>> > gnhlug-discuss mailing list
>>> > gnhlug-discuss@mail.gnhlug.org
>>> > http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>>>
>>> ___
>>> gnhlug-discuss mailing list
>>> gnhlug-discuss@mail.gnhlug.org
>>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>>>
>> ___
>> gnhlug-discuss mailing list
>> gnhlug-discuss@mail.gnhlug.org
>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>>
>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: dd cloning a Win10 HDD to SSD

2021-03-23 Thread Bruce Labitt
In my case the disks report to be the same size in lsblk.
fdisk -l reports the hdd is 1000204886016 bytes, and the ssd
is 1000204886016 bytes or exactly the same size.
Guess I will try dd.  Fingers crossed...




On Tue, Mar 23, 2021 at 8:52 AM Dan Jenkins  wrote:

> In my experience dd works. Make sure the destination disk is larger than
> the source. I've had problems sometimes when they were the exact same
> size. Any other issue was due to issues on the source disk, in which
> case ddrescue has worked.
>
> On 2021-03-23 08:33, Bruce Labitt wrote:
> > I'd be grateful to learn what worked.  No need to waste my time more than
> > necessary.
> >
> > On Tue, Mar 23, 2021, 8:30 AM Greg Kettmann  wrote:
> >
> >> I don't know if dd works. I've done this several times using freely
> >> available utilities.  In one case I tried and it failed.  I simply used
> a
> >> different utility and it worked.  I was impressed with the results,
> >> particularly with dramatically improved boot times.
> >>
> >> Sorry to be vague. You were asking about dd.  If you're interested in
> >> which utility(s) I used just let me know.  I should have records.  The
> last
> >> time was a year ago.
> >>
> >> Greg
> >>
> >> Get TypeApp for Android <http://www.typeapp.com/r?b=16417>
> >> On Mar 22, 2021, at 10:19 PM, Bruce Labitt  wrote:
> >>> Have this excruciatingly slow Win10 HDD I'd like to clone to SSD.
> >>> Reading about how to do this leads me to dd as a way to clone the disk.
> >>> The disks are close in size.  According to lsblk, the HDD sdb is
> 931.5GB,
> >>> and the SSD sdf is 931.5GB.
> >>>
> >>> sdb has 5 partitions on it.
> >>> 1) EFI  500MiB
> >>> 2) MS reserved partition  128MiB
> >>> 3) OS "basic data" partition 918.07GiB
> >>> 4) WINRETOOLS 852MiB
> >>> 5) Image   11.56GiB
> >>>
> >>> sdf has stuff on it, which I presume will be wiped out by dd.
> >>>
> >>> Since the sizes are "equal", can I just # dd if=/dev/sdb of=/dev/sdf
> >>> bs=1M status=progress and be done with it?  Is there anything else
> that I'd
> >>> need to do to get it to boot?
> >>>
> >>> --
> >>>
> >>> gnhlug-discuss mailing list
> >>> gnhlug-discuss@mail.gnhlug.org
> >>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> >>>
> >>>
> >
> > ___
> > gnhlug-discuss mailing list
> > gnhlug-discuss@mail.gnhlug.org
> > http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: dd cloning a Win10 HDD to SSD

2021-03-23 Thread Bruce Labitt
I'd be grateful to learn what worked.  No need to waste my time more than
necessary.

On Tue, Mar 23, 2021, 8:30 AM Greg Kettmann  wrote:

> I don't know if dd works. I've done this several times using freely
> available utilities.  In one case I tried and it failed.  I simply used a
> different utility and it worked.  I was impressed with the results,
> particularly with dramatically improved boot times.
>
> Sorry to be vague. You were asking about dd.  If you're interested in
> which utility(s) I used just let me know.  I should have records.  The last
> time was a year ago.
>
> Greg
>
> Get TypeApp for Android <http://www.typeapp.com/r?b=16417>
> On Mar 22, 2021, at 10:19 PM, Bruce Labitt  wrote:
>>
>> Have this excruciatingly slow Win10 HDD I'd like to clone to SSD.
>> Reading about how to do this leads me to dd as a way to clone the disk.
>> The disks are close in size.  According to lsblk, the HDD sdb is 931.5GB,
>> and the SSD sdf is 931.5GB.
>>
>> sdb has 5 partitions on it.
>> 1) EFI  500MiB
>> 2) MS reserved partition  128MiB
>> 3) OS "basic data" partition 918.07GiB
>> 4) WINRETOOLS 852MiB
>> 5) Image   11.56GiB
>>
>> sdf has stuff on it, which I presume will be wiped out by dd.
>>
>> Since the sizes are "equal", can I just # dd if=/dev/sdb of=/dev/sdf
>> bs=1M status=progress and be done with it?  Is there anything else that I'd
>> need to do to get it to boot?
>>
>> --
>>
>> gnhlug-discuss mailing list
>> gnhlug-discuss@mail.gnhlug.org
>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>>
>>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: dd cloning a Win10 HDD to SSD

2021-03-23 Thread Bruce Labitt
Is there a compelling reason to use ddrescue over dd for this?  Is gnu
ddrescue the preferred choice of the ddrescues?

Or is this a fool's errand?

On Mon, Mar 22, 2021, 11:14 PM Curt Howland  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> On Monday 22 March 2021, Bruce Labitt was heard to say:
> > Since the sizes are "equal", can I just # dd if=/dev/sdb
> > of=/dev/sdf bs=1M status=progress and be done with it?  Is there
> > anything else that I'd need to do to get it to boot?
>
> Since using dd to copy boot .iso images works, I would assume the boot
> record of a hdd would be copied over as well.
>
> Just an assumption, since that's not something I've ever tried.
>
> So long as your sdf has nothing on it of consequence, the worst thing
> that could happen is failure and you go back to your existing sdb.
>
> Do say what happens, curiosity abounds.
>
> Curt-
>
> - --
> You may my glories and my state dispose,
> But not my griefs; still am I king of those.
>  --- William Shakespeare, "Richard II"
>
> -BEGIN PGP SIGNATURE-
>
> iHUEAREIAB0WIQTaYVhJsIalt8scIDa2T1fo1pHhqQUCYFlc9gAKCRC2T1fo1pHh
> qZKzAP4xMjt0ZsdN1EMdxijarCGTIVLLe1R1b19pifL+lrf51AEA45dOzCreHmw1
> 80x1evx+s4p6piUHjlONFkgcoDjKgXE=
> =I2Q8
> -END PGP SIGNATURE-
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


dd cloning a Win10 HDD to SSD

2021-03-22 Thread Bruce Labitt
Have this excruciatingly slow Win10 HDD I'd like to clone to SSD.  Reading
about how to do this leads me to dd as a way to clone the disk.  The disks
are close in size.  According to lsblk, the HDD sdb is 931.5GB, and the SSD
sdf is 931.5GB.

sdb has 5 partitions on it.
1) EFI  500MiB
2) MS reserved partition  128MiB
3) OS "basic data" partition 918.07GiB
4) WINRETOOLS 852MiB
5) Image   11.56GiB

sdf has stuff on it, which I presume will be wiped out by dd.

Since the sizes are "equal", can I just # dd if=/dev/sdb of=/dev/sdf bs=1M
status=progress and be done with it?  Is there anything else that I'd need
to do to get it to boot?
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Trying to figure out if I have a bad NVME device

2021-03-21 Thread Bruce Labitt
I have a USB3.2-PCIE NVME adapter (actually have 3 of them) using the JM583
chip.  I also have a Samsung 970 EVO Pro 1TB NVME and recently bought 2 WD
1TB NVME cards.

I cannot get a WD NVME to be recognized by the OS.  I haven't tried the
second WD NVME as I am leaving the package unopened.

lsusb reveals the JMicron adapter so I have the ID
fdisk -l and lsblk do not show the WD device, nor the JM adapter
The WD disk has never been partitioned or formatted.

sudo dmesg | tail -n 50 shows some sort of issue.
[ 4015.843758] usb 2-2.1.2: USB disconnect, device number 10
[ 7180.940213] usb 2-2.1.4: USB disconnect, device number 5
[ 7186.023443] usb 2-2.1.2: new SuperSpeed Gen 1 USB device number 11 using
xhci_hcd
[ 7186.054656] usb 2-2.1.2: New USB device found, idVendor=152d,
idProduct=0583, bcdDevice= 2.09
[ 7186.054671] usb 2-2.1.2: New USB device strings: Mfr=1, Product=2,
SerialNumber=3
[ 7186.054684] usb 2-2.1.2: Product: USB to PCIE Bridge
[ 7186.054695] usb 2-2.1.2: Manufacturer: JMicron
[ 7186.054706] usb 2-2.1.2: SerialNumber: 0123456789ABCDEF
[ 7186.069481] scsi host1: uas
[ 7186.070909] scsi 1:0:0:0: Direct-Access JMicron  Generic
 0209 PQ: 0 ANSI: 6
[ 7186.072283] sd 1:0:0:0: Attached scsi generic sg1 type 0
[ 7194.231709] sd 1:0:0:0: [sdb] Unit Not Ready
[ 7194.231728] sd 1:0:0:0: [sdb] Sense Key : 0x4 [current]
[ 7194.231744] sd 1:0:0:0: [sdb] ASC=0x44 <>ASCQ=0x81
[ 7194.233639] sd 1:0:0:0: [sdb] Read Capacity(16) failed: Result:
hostbyte=0x00 driverbyte=0x08
[ 7194.233657] sd 1:0:0:0: [sdb] Sense Key : 0x4 [current]
[ 7194.233673] sd 1:0:0:0: [sdb] ASC=0x44 <>ASCQ=0x81
[ 7194.235281] sd 1:0:0:0: [sdb] Read Capacity(10) failed: Result:
hostbyte=0x00 driverbyte=0x08
[ 7194.235297] sd 1:0:0:0: [sdb] Sense Key : 0x4 [current]
[ 7194.235313] sd 1:0:0:0: [sdb] ASC=0x44 <>ASCQ=0x81
[ 7194.236113] sd 1:0:0:0: [sdb] 0 512-byte logical blocks: (0 B/0 B)
[ 7194.236129] sd 1:0:0:0: [sdb] 0-byte physical blocks
[ 7194.237556] sd 1:0:0:0: [sdb] Test WP failed, assume Write Enabled
[ 7194.238118] sd 1:0:0:0: [sdb] Asking for cache data failed
[ 7194.238133] sd 1:0:0:0: [sdb] Assuming drive cache: write through
[ 7194.239724] sd 1:0:0:0: [sdb] Optimal transfer size 33553920 bytes not a
multiple of physical block size (0 bytes)
[ 7194.284844] sd 1:0:0:0: [sdb] Unit Not Ready
[ 7194.284866] sd 1:0:0:0: [sdb] Sense Key : 0x4 [current]
[ 7194.284881] sd 1:0:0:0: [sdb] ASC=0x44 <>ASCQ=0x81
[ 7194.286398] sd 1:0:0:0: [sdb] Read Capacity(16) failed: Result:
hostbyte=0x00 driverbyte=0x08
[ 7194.286415] sd 1:0:0:0: [sdb] Sense Key : 0x4 [current]
[ 7194.286430] sd 1:0:0:0: [sdb] ASC=0x44 <>ASCQ=0x81
[ 7194.288145] sd 1:0:0:0: [sdb] Read Capacity(10) failed: Result:
hostbyte=0x00 driverbyte=0x08
[ 7194.288166] sd 1:0:0:0: [sdb] Sense Key : 0x4 [current]
[ 7194.288183] sd 1:0:0:0: [sdb] ASC=0x44 <>ASCQ=0x81
[ 7194.293600] sd 1:0:0:0: [sdb] Attached SCSI disk

However, the same adapter works fine using the 970 Pro NVME device.  On the
970 I have the complete backup to my laptop.  The 970 auto mounts.
Is it possible that UAS doesn't work for the WD NVME?  Or is the WD NVME
defective?  Or the WD NVME hasn't been prepared properly?   My google-fu
seems to be weak - it's been difficult to find appropriate information.
Hope it's just that I missed a step...
I have heard of blacklisting the UAS driver, but it seems odd the adapter
works reliably for the 970 but not the WD.
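For completeness, the per-device version of that is a usb-storage quirk keyed
on the bridge's VID:PID (152d:0583, from the dmesg above) rather than
blacklisting the uas module outright.  A sketch, which I have not tried yet:

# append to the single line in /boot/cmdline.txt on Raspberry Pi OS, then reboot:
usb-storage.quirks=152d:0583:u

The trailing u makes the kernel skip UAS for that bridge and fall back to plain
usb-storage.  It would apply to the 970 too whenever it's behind the same
bridge, so it's more of a diagnostic step than a fix.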

I'd greatly appreciate a hint.  Currently running on an RPI4B-4GB Raspberry
Pi OS 32bit since my new laptop bit the dust.  The laptop has been sent
back for repair.
Thanks for your patience...
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: RPI HW PWM fan control

2021-03-20 Thread Bruce Labitt
Apparently one cannot control both pwm outputs simultaneously.  When one of
the pwm outputs was disabled, the service does work correctly after a
reboot.  The bcm2835 driver which is used by this project apparently
silently fails!  It sure would be nice if the applet logged this
somewhere.  This was a devil to find...  There was an oblique reference to
silent failing in the bcm2835 library documentation, which gave me a clue.

On Fri, Mar 19, 2021 at 5:05 PM Bruce Labitt  wrote:

> I found a project on
> https://gist.github.com/alwynallan/1c13096c4cd675f38405702e89e0c536 for a
> hardware controlled pwm fan for the RPI4.  This should have low resource
> usage, as compared to software controlled pwm.  I don't have a Noctua type
> pwm fan, but I made a simple circuit with a resistor, a transistor and a
> flyback diode to drive a brushless DC motor from the GPIO pin.  The
> circuit, cobbled together on some perf board, works great.
>
> This project works, but doesn't.  By that I mean, if one makes and
> installs the program along with a systemd service it works.  However, the
> fan control does not survive a reboot.  More accurately, the service is
> relaunched after boot - I can see it with # systemctl | grep pi_fan_hwpwm,
> but the actual pin is not being controlled.  The SW thinks it is
> controlling the pwm pin output, but there is NO physical control of the
> pin.  As the core temp gets hotter and hotter the PWM duty factor
> increases, as designed, but no signal is at the GPIO#18 pin.
>
> The pi_fan_hwpwm.service is
> [Unit]
> Description=Hardware PWM control for Raspberry Pi 4 Case Fan
> After=syslog.target
>
> [Service]
> Type=simple
> User=root
> WorkingDirectory=/run
> PIDFile=/run/pi_fan_hwpwm.pid
> ExecStart=/usr/local/sbin/pi_fan_hwpwm
> Restart=on-failure
>
> [Install]
> WantedBy=multi-user.target
>
> Multi-user.target.wants has pi_fan_hwpwm.service in its directory.  The
> service does start.  However the actual control of the pin is somehow being
> prevented.  Even if I stop and start the service, the pin is not controlled.
>
> I've turned off i2c, spi and audio in /boot/config.txt  There's some
> "mumbling" about doing things like this, both in the source of
> pi_fan_hwpwm.c and in the header of bcm2835.h.
>
> This project includes bcm2835.h to control the pwm.  If I use the example
> file found in bcm2835-1.68/examples/pwm/pwm.c and compile it, it controls
> the fan.  If I then stop and start the fan service (after running ./pwm
> once) the fan pwm service now controls the fan physically.  Comparing the
> code in pwm.c and pi_fan_hwpwm.c reveals similar structure and
> initialization.  Have to say, this has been perplexing.
>
> Besides giving up, which I have seriously considered, anyone got any
> suggestions to make this work?  Was hoping to use the HW controlled fan on
> my RPI4 NAS.
>
>
>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


RPI HW PWM fan control

2021-03-19 Thread Bruce Labitt
I found a project on
https://gist.github.com/alwynallan/1c13096c4cd675f38405702e89e0c536 for a
hardware controlled pwm fan for the RPI4.  This should have low resource
usage, as compared to software controlled pwm.  I don't have a Noctua type
pwm fan, but I made a simple circuit with a resistor, a transistor and a
flyback diode to drive a brushless DC motor from the GPIO pin.  The
circuit, cobbled together on some perf board, works great.

This project works, but doesn't.  By that I mean, if one makes and installs
the program along with a systemd service it works.  However, the fan
control does not survive a reboot.  More accurately, the service is
relaunched after boot - I can see it with # systemctl | grep pi_fan_hwpwm,
but the actual pin is not being controlled.  The SW thinks it is
controlling the pwm pin output, but there is NO physical control of the
pin.  As the core temp gets hotter and hotter the PWM duty factor
increases, as designed, but no signal is at the GPIO#18 pin.

The pi_fan_hwpwm.service is
[Unit]
Description=Hardware PWM control for Raspberry Pi 4 Case Fan
After=syslog.target

[Service]
Type=simple
User=root
WorkingDirectory=/run
PIDFile=/run/pi_fan_hwpwm.pid
ExecStart=/usr/local/sbin/pi_fan_hwpwm
Restart=on-failure

[Install]
WantedBy=multi-user.target

Multi-user.target.wants has pi_fan_hwpwm.service in its directory.  The
service does start.  However, the actual control of the pin is somehow being
prevented.  Even if I stop and start the service, the pin is not controlled.
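
The checks I've been leaning on to convince myself of that (nothing exotic;
raspi-gpio ships with Raspberry Pi OS, or is an apt install away):

$ systemctl status pi_fan_hwpwm              # unit state and PID
$ journalctl -u pi_fan_hwpwm -b --no-pager   # anything it printed since this boot
$ raspi-gpio get 18                          # which function GPIO18 is actually in

If GPIO18 never shows up in its PWM alt function (ALT5 = PWM0 on this pin),
that would point at the bcm2835 initialization rather than at systemd.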

I've turned off i2c, spi and audio in /boot/config.txt.  There's some
"mumbling" about doing things like this, both in the source of
pi_fan_hwpwm.c and in the header of bcm2835.h.

This project includes bcm2835.h to control the pwm.  If I use the example
file found in bcm2835-1.68/examples/pwm/pwm.c and compile it, it controls
the fan.  If I then stop and start the fan service (after running ./pwm
once) the fan pwm service now controls the fan physically.  Comparing the
code in pwm.c and pi_fan_hwpwm.c reveals similar structure and
initialization.  Have to say, this has been perplexing.

Besides giving up, which I have seriously considered, anyone got any
suggestions to make this work?  Was hoping to use the HW controlled fan on
my RPI4 NAS.
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Have suggestions for a "roll your own file server"?

2021-03-10 Thread Bruce Labitt
-1 configuration (also known as "mirroring") to
>> keep it simple.  If one disk fails the other will still keep working (but
>> you should replace it as soon as possible).
>> > >
>> > > Put all of your data on both systems.
>> > >
>> > > Take one of your systems to a friends or relatives house who you
>> trust that has relatively good WiFi.  Make sure the friend is relatively
>> close, but is not in the same flood plain or fire area you are.
>> > >
>> > > Do an rsync every night to keep them in sync.
>> > >
>> > > Help your friend/relative do the same thing, keeping a copy of their
>> data in your house.   If your disks are big enough you could share systems
>> and disks.
>> > >
>> > > Use encryption as you wish.
>> > >
>> > > Disk failure?   Replace the disk and the data will be replicated.
>> > > Fire, theft, earthquake?   Take the replaced system over to your
>> friends/relatives and copy the data at high speed, then take the copied
>> system back to your house and start using it again.
>> > >
>> > > You would need three disks to fail at relatively the same time to
>> lose your data.   Or an asteroid crashing that wipes out all life on the
>> planet.  Unlikely.
>> > >
>> > > Realize that nothing is forever.
>> > >
>> > > md
>> > >> On 03/08/2021 7:33 PM Bruce Labitt  wrote:
>> > >>
>> > >>
>> > >> For the second time in 3 months I have had a computer failure.
>> Oddly, it was a PS on the motherboard both times.  (Two different MB's.)
>> Fortunately the disks were ok.  I'm living on borrowed time.  Next time, I
>> may not be that lucky.
>> > >>
>> > >> Need a file server system with some sort of RAID redundancy.  I want
>> to backup 2 main computers, plus photos.  Maybe this RPI4 too, since that's
>> what I'm running on, due to the second failure.  If this SSD goes, I'm
>> gonna be a sad puppy.  This is for home use, so we are not talking
>> Exabytes.  I'm thinking about 2-4TB of RAID.  Unless of course, RAID is
>> obsolete these days.  Honestly, I find some of the levels of RAID
>> confusing.  I want something that will survive a disk
>> > >> failure (or two) out of the array.  Have any ideas, or can you point
>> me to some place that discusses this somewhat intelligently?
>> > >>
>> > >> Are there reasonable systems that one can put together oneself these
>> days?  Can I repurpose an older PC for this purpose?  Or an RPI4?  What are
>> the gotchas of going this way?
>> > >>
>> > >> I want to be able to set up a daily rsync or equivalent so we will
>> lose as little as possible.  At the moment, I'm not thinking about
>> surviving fire or disaster.  Maybe I should, but I suspect the costs
>> balloon considerably.  I do not want to backup to the cloud because, plain
>> and simple, I don't trust it to be fully secure.
>> >
>> > --
>> > Connect with me on the GNU social network! <
>> https://status.hackerposse.com/rozzin>
>> > Not on the network? Ask me for more info!
>> > ___
>> > gnhlug-discuss mailing list
>> > gnhlug-discuss@mail.gnhlug.org
>> > http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>>
>> ___
>> gnhlug-discuss mailing list
>> gnhlug-discuss@mail.gnhlug.org
>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>>
>
>
> --
> Marc Nozell (m...@nozell.com) http://www.nozell.com/blog
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Have suggestions for a "roll your own file server"?

2021-03-10 Thread Bruce Labitt
iFi.  Make sure the friend is relatively close,
> but is not in the same flood plain or fire area you are.
> >
> > Do an rsync every night to keep them in sync.
> >
> > Help your friend/relative do the same thing, keeping a copy of their
> data in your house.   If your disks are big enough you could share systems
> and disks.
> >
> > Use encryption as you wish.
> >
> > Disk failure?   Replace the disk and the data will be replicated.
> > Fire, theft, earthquake?   Take the replaced system over to your
> friends/relatives and copy the data at high speed, then take the copied
> system back to your house and start using it again.
> >
> > You would need three disks to fail at relatively the same time to lose
> your data.   Or an asteroid crashing that wipes out all life on the
> planet.  Unlikely.
> >
> > Realize that nothing is forever.
> >
> > md
> >> On 03/08/2021 7:33 PM Bruce Labitt  wrote:
> >>
> >>
> >> For the second time in 3 months I have had a computer failure.  Oddly,
> it was a PS on the motherboard both times.  (Two different MB's.)
> Fortunately the disks were ok.  I'm living on borrowed time.  Next time, I
> may not be that lucky.
> >>
> >> Need a file server system with some sort of RAID redundancy.  I want to
> backup 2 main computers, plus photos.  Maybe this RPI4 too, since that's
> what I'm running on, due to the second failure.  If this SSD goes, I'm
> gonna be a sad puppy.  This is for home use, so we are not talking
> Exabytes.  I'm thinking about 2-4TB of RAID.  Unless of course, RAID is
> obsolete these days.  Honestly, I find some of the levels of RAID
> confusing.  I want something that will survive a disk
> >> failure (or two) out of the array.  Have any ideas, or can you point me
> to some place that discusses this somewhat intelligently?
> >>
> >> Are there reasonable systems that one can put together oneself these
> days?  Can I repurpose an older PC for this purpose?  Or an RPI4?  What are
> the gotchas of going this way?
> >>
> >> I want to be able to set up a daily rsync or equivalent so we will lose
> as little as possible.  At the moment, I'm not thinking about surviving
> fire or disaster.  Maybe I should, but I suspect the costs balloon
> considerably.  I do not want to backup to the cloud because, plain and
> simple, I don't trust it to be fully secure.
>
> --
> Connect with me on the GNU social network! <
> https://status.hackerposse.com/rozzin>
> Not on the network? Ask me for more info!
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


rsync question

2021-03-09 Thread Bruce Labitt
A maybe not so smart rsync question...

If one uses rsync -avz src/bar /disk2, will that copy over everything
from src/bar and create a directory bar on disk2?  What if src/bar contains
files owned by other users or root?  In other words, does the -a mean that it
will preserve ownership and links when copying to /disk2?  Just don't know if
I need sudo or not.
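
To make the question concrete, what I think I actually want is something along
these lines (untested; every flag here is stock rsync, but treat it as a sketch):

$ sudo rsync -aHAX --info=progress2 src/bar /disk2/

As I understand it, -a keeps permissions, timestamps, symlinks, and ownership,
but the ownership part only sticks when the receiving side runs as root -
hence the sudo.  -H, -A and -X add hard links, ACLs and extended attributes,
which plain -a skips.  No trailing slash on src/bar, so rsync creates bar
itself under /disk2 rather than dumping its contents there, and I dropped the
z since compression only buys anything over a network link.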

I dumbly did a copy.  Well, that didn't preserve permissions or
attributes.  So deleting that...  Since I'm trying to back up 100's of GB,
thought I'd ask.  This is taking a long time, even with USB3 drives and
nvme.
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Kind of puzzled about timestamps

2021-03-08 Thread Bruce Labitt
Yet you might think that since many, many people pay off mortgages or pay
off mortgages on sales of property every single year that they might have
it down to a science by now.   It is not like I am the first mortgage that
has been paid off in the past 30 years..

Maddog, I totally agree with you.  I was astonished and more than a little
annoyed that it took so long.  You'd think it was the first time (every
time!).  Apparently the banks aren't too innovative in this respect...
-Bruce

On Mon, Mar 8, 2021 at 7:45 PM  wrote:

> Here's my story about time...
>
> I had an old computer I was using as an email server and I just configured
> the time to sync once a day, which seemed often enough for email. The clock
> started to go bad, drifting several minutes a day (I don't remember now if
> it was forward or backward because I'm getting pretty old myself), and when
> it resynced each day, well, I couldn't understand why my logs kept
> indicating that the system was violating causality...
>
> > You could plan a vacation in Switzerland in 2030, but if an asteroid
> > obliterates Switzerland in 2028, your vacation plans become null and
> void.
> > It's not a contingency you need to plan for when making your vacation
> > plans.
> >
>
> Depends on the size of the asteroid. (apocalypse humor)
>
> Ronald
> r...@mrt4.com
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Have suggestions for a "roll your own file server"?

2021-03-08 Thread Bruce Labitt
For the second time in 3 months I have had a computer failure.  Oddly, it
was a PS on the motherboard both times.  (Two different MB's.)  Fortunately
the disks were ok.  I'm living on borrowed time.  Next time, I may not be
that lucky.

Need a file server system with some sort of RAID redundancy.  I want to
backup 2 main computers, plus photos.  Maybe this RPI4 too, since that's
what I'm running on, due to the second failure.  If this SSD goes, I'm
gonna be a sad puppy.  This is for home use, so we are not talking
Exabytes.  I'm thinking about 2-4TB of RAID.  Unless of course, RAID is
obsolete these days.  Honestly, I find some of the levels of RAID
confusing.  I want something that will survive a disk failure (or two) out
of the array.  Have any ideas, or can you point me to some place that
discusses this somewhat intelligently?
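
From the reading so far, the simplest thing that matches what I described seems
to be plain Linux md RAID1 (mirroring) across two disks.  A sketch I have not
run, with sdb/sdc standing in for whatever two data disks end up in the box:

$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
$ sudo mkfs.ext4 /dev/md0
$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # so it assembles at boot (Debian/Ubuntu path)

As I understand it, mirroring only covers a dead disk, not fat-fingered
deletes, so the nightly rsync would still matter.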

Are there reasonable systems that one can put together oneself these days?
Can I repurpose an older PC for this purpose?  Or an RPI4?  What are the
gotchas of going this way?

I want to be able to set up a daily rsync or equivalent so we will lose as
little as possible.  At the moment, I'm not thinking about surviving fire
or disaster.  Maybe I should, but I suspect the costs balloon
considerably.  I do not want to backup to the cloud because, plain and
simple, I don't trust it to be fully secure.

Thanks for any and all suggestions.
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Kind of puzzled about timestamps

2021-03-08 Thread Bruce Labitt
If my experience is a guide, you have a few more months to go.  Once we
paid off our mortgage it took almost 6 months to clear everything up.
(That was in late 2018)  What you probably aren't realizing is that they
have to hunt down your paperwork.  Your note was probably transferred to
dozens of investors who bought and sold paper.  They are tracking it down
by going through the chain of possession.  The days of your bank holding
your mortgage papers in the local vault are gone forever.

But yeah, late a day, they'd call you...

On Mon, Mar 8, 2021 at 6:21 PM  wrote:

> I paid off my 30 year mortgage on November 29th, 2020 (two years early)
> thinking that it would be better not to carry any of it over to the next
> year.
>
> Then I waited for all of the associated paperwork (escrow payment refunds
> for property tax, deed, etc.)  At the end of January I called the bank.
>
> "Oh yes, it appears you have paid it off.   Well, it takes a little time."
>
> Then the end of February I called again.
>
> "Oh, yes, we can see the zero principal back in Decemberyes, you are
> right...any day now"
>
> Here it is, March 8th, 2021.
>
> If I was two days late on a payment, they hounded me.
>
> They are not using a computer.   They are using quill pens and parchment.
>
> md
> > On 03/08/2021 4:08 PM Joshua Judson Rosen 
> wrote:
> >
> >
> > On 3/8/21 2:16 PM, Jerry Feldman wrote:
> > > I love this discussion. I've been involved with computer time since
> the early 1970s. While at Burger King I wrote a standardized set of time
> utilities in COBOL. Later at Digital I was responsible for the utmp
> libraries, and the standard test failed. The issue was that the
> standard test used a future time beyond 2035. Back then time_t was a
> signed 32 bit integer
> >
> > I bought a house with a 30-year mortgage in late 2008. My first house,
> actually.
> >
> > All of the things that people talk about being afraid of with being a
> new home-buyer...,
> > well..., none of them compared to the sense of dread that I felt when I
> looked at
> > the end-date on the mortgage and asked myself:
> >
> >   What's the likelihood that this date is going to pass through a
> computer
> >   where time_t is not wider than 32 bits before then?
> >
> > So I pay a little extra each month.
> > Hopefully I can have the account closed and expunged before that point ;p
> >
> > --
> > Connect with me on the GNU social network: <
> https://status.hackerposse.com/rozzin>
> > Not on the network? Ask me for an invitation to a social hub!
> > ___
> > gnhlug-discuss mailing list
> > gnhlug-discuss@mail.gnhlug.org
> > http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Kind of puzzled about timestamps

2021-03-05 Thread Bruce Labitt
It would seem you have learned some of this the hard way.  And you are
indeed scaring me a bit.  Not sure what to do differently, however.  What
I'm hearing is properly dealing with time is hard and ugly.  I've
experienced some of this already.

I have been warned.  But I still have to get on with things...  Can't
change the whole world; for that matter, I can't change how the cards are
written either.  So I'll have to put in (more than a few) tests to ensure
that I don't jam things up.

Thanks for passing on some of these pearls of wisdom.  I mean it.  When I
mess up, I'll remember, oh yeah, I was warned about that.  Next thought
will be, "Gosh, how am I going to fix this?"

On Fri, Mar 5, 2021, 8:54 PM Joshua Judson Rosen 
wrote:

> On 3/5/21 2:15 PM, Bruce Labitt wrote:
> >
> > On 3/4/21 10:51 PM, Joshua Judson Rosen wrote:
> > >
> > > See also: "The Problem with Time and Timezones" <
> https://www.youtube.com/watch?v=-5wpm-gesOY>
> > >
> > > 😣
> >
> > That was somewhat comical.  Yeah, been trying to keep everything with
> > respect to UTC.  It can be a little difficult at times, as it's easy to
> > goof up and fall in to quite a few time trap holes.
>
> See also:
>
> http://falsehoodsabouttime.com/
>
>
> http://www.creativedeletion.com/2015/01/28/falsehoods-programmers-date-time-zones.html
>
>
> > One of the more difficult things has been indexing into the time array.
> > I've been using numpy's datetime64 and timedelta64 but occasionally
> > still get tripped up. Handling time is complicated.  Fortunately, all
> > that I care about for this project is relative time.  Start time, end
> > time and time is "linear" in between.  According to the the youtuber,
> > even that's not guaranteed if one spans the new year and we need a leap
> > second!
>
> Indeed! Though I fear that the reality is actually worse than the
> impression you got
>
> A lot of the `if this happens and also' conditions are actually `if
> _either_ this _or_ that'.
>
> e.g.: most days have 86400 seconds, but...:
>
> * some have 86401 (+ leap seconds)
> * some have 86399 (- leap seconds--significantly rarer: hasn't
> happened _yet_, but...)
> * some have 82800 (i.e. "some days only have 23 hours", normal
> spring-forward DST shift)
> * some have 90000 (i.e. "some days have 25 hours", normal
> fall-backward DST shift)
> * conceivably some may even have 90001 or 8
>
> (Really! RE: negative leap seconds, `there is a first time for everything':
>  <
> https://www.livescience.com/earth-spinning-faster-negative-leap-second.html
> >)
>
> And yeah..., even if you're using unix time (seconds since the epoch)...,
> unix time
> specifically does _not_ count leap seconds..., which is both wonderful and
> terrible
>
> Quoting the time(2) man page I have here:
>
> This value is not the same as the actual number of seconds between
> the time and the Epoch,
> because of leap seconds and because system clocks are not required
> to be synchronized
> to a standard reference.  The intention is that the interpretation
> of seconds since the Epoch
> values be consistent; see POSIX.1-2008 Rationale A.4.15 for
> further rationale.
>
> Wikipedia has some text on this, as well <
> https://en.wikipedia.org/wiki/Unix_time#Leap_seconds>:
>
> When a leap second occurs, the UTC day is not exactly 86400
> seconds long and the Unix time number
> (which always increases by exactly 86400 each day) experiences a
> discontinuity.
> Leap seconds may be positive or negative. No negative leap second
> has ever been declared,
> but if one were to be, then at the end of a day with a negative
> leap second,
> the Unix time number would jump up by 1 to the start of the next
> day.
> During a positive leap second at the end of a day, which occurs
> about every year and a half on average,
> the Unix time number increases continuously into the next day
> during the leap second and then at the end
> of the leap second jumps back by 1 (returning to the start of the
> next day).
>
>
> "all I have to care about is relative time" _should_ make your life
> easier..., in theory...,
> _assuming_ that the timestamps that you get and need to diff _really are_
> on a linear timescale.
>
> Good luck. I actually would love to hear about whatever linear timescale
> you end up settling on.
>
> This is why astronomers are using `Julian 

Are there any "relatively" local PySIG's any more?

2021-03-05 Thread Bruce Labitt
Been reminiscing a bit lately and thought about the PySIG group we had 
in the area.  Stumbled across the Boston Python Users Group.  Is that 
the closest one?  Are there any others not on the Python Software 
Foundation Meetup Pro Network?

Jeesh, don't even know why I'm asking about locality these days - most 
meetings are virtual.

Been missing my python meetings...  Learned a lot from them.

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Kind of puzzled about timestamps

2021-03-05 Thread Bruce Labitt
That was somewhat comical.  Yeah, been trying to keep everything with 
respect to UTC.  It can be a little difficult at times, as it's easy to 
goof up and fall into quite a few time trap holes.

One of the more difficult things has been indexing into the time array.  
I've been using numpy's datetime64 and timedelta64 but occasionally 
still get tripped up. Handling time is complicated.  Fortunately, all 
that I care about for this project is relative time.  Start time, end 
time and time is "linear" in between.  According to the youtuber, 
even that's not guaranteed if one spans the new year and we need a leap 
second!


On 3/4/21 10:51 PM, Joshua Judson Rosen wrote:
> See also: "The Problem with Time and Timezones" 
> <https://www.youtube.com/watch?v=-5wpm-gesOY>
>
> 😣
>
> On 3/4/21 10:32 PM, Bruce Labitt wrote:
>> On 3/4/21 9:56 PM, Joshua Judson Rosen wrote:
>>> On 3/4/21 7:13 PM, Bruce Labitt wrote:
>>>> Good point.  I'll check that.  Logging machine was set to local time EST.  
>>>> But it does have a wireless link, maybe it set itself internally to UT.  
>>>> Thanks for the hint.
>>> You have your code explicitly calling a function named `UTC from timestamp'.
>>>
>>> If you want localtime and not UTC, call the function that doesn't start 
>>> with "utc".
>>>
>>> And if you want to assume some particular timezone other than your system's 
>>> default,
>>> you can pass that as an optional argument.
>>>
>>> BTW, FYI "UT" is *not* the same thing as "UTC". Timezones are confusing 
>>> enough,
>>> it's worth spending the extra character to avoid creating even more 
>>> confusion
>>> (or just call it "Z" and save yourself even more characters).
>>>
>>> And as a general word of advice from someone who's been burnt way too many 
>>> times:
>>> if you're going to put timestamps in your filenames, either just use UTC
>>> or explicitly indicate which timezone the timestamps are assuming.
>>>
>>> "the local non-UTC timezone" *changes*. Frequently. Like, twice every year 
>>> if you're lucky--
>>> and more frequently than that if you're unlucky. And if you are, for 
>>> example, generating those
>>> files/filenames between 1:00 AM and 2:00 AM when you go from EDT to EST in 
>>> November
>>> (and that "1:00-2:00 localtime" interval *repeats*)..., you'll be sorry.
>>>
>> These files are written by commercial closed box machines (medical
>> equipment).  There is no choice for the users.  That being said, these
>> machines are designed to basically have the time set once.  (Drift, ntp?
>> what's that?)  If one plays with resetting the time, one can be rewarded
>> by having all your data wiped.
>>
>> "UT" was me being lazy.  (Too lazy to type the extra character...)  I
>> don't have any code with explicit timezone stuff in it.  Have to agree
>> it's a good idea to keep time in UTC, to avoid 'many' of the headaches.
>> Nonetheless, it's easy to get confused about all this, especially if
>> external devices don't do time the same way.  (Not all devices handle
>> time correctly.) Then, as you say, you'll be sorry.
>

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Kind of puzzled about timestamps

2021-03-04 Thread Bruce Labitt
On 3/4/21 9:56 PM, Joshua Judson Rosen wrote:
> On 3/4/21 7:13 PM, Bruce Labitt wrote:
>> Good point.  I'll check that.  Logging machine was set to local time EST.  
>> But it does have a wireless link, maybe it set itself internally to UT.  
>> Thanks for the hint.
> You have your code explicitly calling a function named `UTC from timestamp'.
>
> If you want localtime and not UTC, call the function that doesn't start with 
> "utc".
>
> And if you want to assume some particular timezone other than your system's 
> default,
> you can pass that as an optional argument.
>
> BTW, FYI "UT" is *not* the same thing as "UTC". Timezones are confusing 
> enough,
> it's worth spending the extra character to avoid creating even more confusion
> (or just call it "Z" and save yourself even more characters).
>
> And as a general word of advice from someone who's been burnt way too many 
> times:
> if you're going to put timestamps in your filenames, either just use UTC
> or explicitly indicate which timezone the timestamps are assuming.
>
> "the local non-UTC timezone" *changes*. Frequently. Like, twice every year if 
> you're lucky--
> and more frequently than that if you're unlucky. And if you are, for example, 
> generating those
> files/filenames between 1:00 AM and 2:00 AM when you go from EDT to EST in 
> November
> (and that "1:00-2:00 localtime" interval *repeats*)..., you'll be sorry.
>
These files are written by commercial closed box machines (medical 
equipment).  There is no choice for the users.  That being said, these 
machines are designed to basically have the time set once.  (Drift, ntp? 
what's that?)  If one plays with resetting the time, one can be rewarded 
by having all your data wiped.

"UT" was me being lazy.  (Too lazy to type the extra character...)  I 
don't have any code with explicit timezone stuff in it.  Have to agree 
it's a good idea to keep time in UTC, to avoid 'many' of the headaches.  
Nonetheless, it's easy to get confused about all this, especially if 
external devices don't do time the same way.  (Not all devices handle 
time correctly.) Then, as you say, you'll be sorry.
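
For the filename case specifically, the habit worth building is probably to let
date stamp things in UTC with an explicit Z on the end - a sketch:

$ date -u +%Y%m%dT%H%M%SZ                      # e.g. 20210305T021536Z
$ cp data.edf "backup_$(date -u +%Y%m%dT%H%M%SZ).edf"

No DST ambiguity, and the names still sort in time order.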

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Kind of puzzled about timestamps

2021-03-04 Thread Bruce Labitt
Weird, it is just the 5 hours between UT and EST.  The files are 
generated on a non-linux embedded machine.
If I create a file on my pc, then the TZ information is present and the 
time is set.  ls reads it correctly.


This time stuff can get confusing.  As you were.
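
A quick way to see the same thing from the shell, using the st_mtime value from
the stat() output quoted below (1614322176) - GNU date should print roughly the
following, give or take locale formatting:

$ date -d @1614322176      # local: Fri Feb 26 01:49:36 EST 2021, matches ls
$ date -u -d @1614322176   # UTC:   Fri Feb 26 06:49:36 UTC 2021, matches utcfromtimestamp()

Same number on disk either way; only the conversion differs.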

On 3/4/21 7:13 PM, Bruce Labitt wrote:
Good point.  I'll check that.  Logging machine was set to local time 
EST.  But it does have a wireless link, maybe it set itself internally 
to UT.  Thanks for the hint.




On Thu, Mar 4, 2021, 7:05 PM Dana Nowell 
<mailto:dananow...@cornerstonesoftware.com>> wrote:


If I'm reading it correctly, it's a 5 hr difference?  Local vs gmt?


On Thu, Mar 4, 2021, 6:43 PM Bruce Labitt
mailto:bruce.lab...@myfairpoint.net>> wrote:

This is an odd question.  It involves both python and linux.

Have a bunch of files in a directory that I'd like like to
sort by similar names and in time order.  This isn't
particularly difficult in python.  What is puzzling me is the
modified timestamp returned by python doesn't match whats
reported by the file manager nautilus or even ls.  (ls and
nautilus are consistent)

$ lsb_release -d Ubuntu 20.04.2 LTS
$ nautilus --version  GNOME nautilus 3.36.3

$ python3 --version  Python 3.8.5

$ ls -lght

total 4.7M
-rw-r--r-- 1 bruce 209K Feb 26 01:49 20210226_022134_PLD.edf
-rw-r--r-- 1 bruce  65K Feb 26 01:49 20210226_022134_SAD.edf
-rw-r--r-- 1 bruce 2.4M Feb 26 01:49 20210226_022133_BRP.edf
-rw-r--r-- 1 bruce 1.1K Feb 26 00:58 20210225_224134_EVE.edf
-rw-r--r-- 1 bruce 1.9M Feb 25 21:18 20210225_224141_BRP.edf
-rw-r--r-- 1 bruce 169K Feb 25 21:17 20210225_224142_PLD.edf
-rw-r--r-- 1 bruce  53K Feb 25 21:17 20210225_224142_SAD.edf

Python3 script

#!/usr/bin/env python3
import os
from datetime import datetime

def convert_date(timestamp):
  d = datetime.utcfromtimestamp(timestamp)
  formatted_date = d.strftime('%d %b %Y  %H:%M:%S')
  return formatted_date

with os.scandir('feb262021') as entries:
  for entry in entries:
    if entry.is_file():
  info = entry.stat()
  print(f'{entry.name}\t Last
Modified: {convert_date(info.st_mtime) }' )  # last modification

info /(after exit) contains/: os.stat_result(st_mode=33188,
st_ino=34477637, st_dev=66306, st_nlink=1, st_uid=1000,
st_gid=1000, st_size=213416, st_atime=1614379184,
st_mtime=1614322176, st_ctime=1614379184)

Running the script results in:

20210226_022133_BRP.edf  Last Modified: 26 Feb 2021  06:49:34
20210225_224141_BRP.edf     Last Modified: 26 Feb 2021  02:18:42
20210225_224142_PLD.edf     Last Modified: 26 Feb 2021  02:17:44
20210225_224142_SAD.edf     Last Modified: 26 Feb 2021  02:17:44
20210225_224134_EVE.edf     Last Modified: 26 Feb 2021  05:58:26
20210226_022134_SAD.edf     Last Modified: 26 Feb 2021  06:49:36
20210226_022134_PLD.edf     Last Modified: 26 Feb 2021  06:49:36

Actually, what is returned by my script is at least sensible,
given that 20210225_224141_BRP.edf started on Feb 25th and
ended recording at 2:17am on Feb 26th.  I know this because I
can see the data on a separate program.
20210226_022133_BRP.edf started on Feb 26th at around 2:21am
and terminated at 6:49am.  BRP files are written to
continuously at a 25 Hz rate all evening.  What makes no sense
whatsoever is what *ls* is reporting.

Do *ls* and python3 use different definitions of "last modified"?

Guess I can keep going, but I really was surprised at the
difference between methods.  Default for ls is "last
modified", at least as reported by man.  ls's last modified
just isn't correct, at least on Ubuntu 20.04.2

Is this a quirk?  Am I doing something wrong?  Some kind of
voodoo definition of "last modified"?  What does Linux say
"last modified" really means?

FWIW, I am coming up to speed on processing these edf files to
help out on an open source project.  Been working on some data
analysis tools.  As an aside, biological data is very messy. 
It's been a treat to work on this as it's forced me to dust
off the mental cobwebs and work on a problem that can help a
lot of people.


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
<mailto:gnhlug-discuss@mail.gnhlug.org>
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
<ht

Re: Kind of puzzled about timestamps

2021-03-04 Thread Bruce Labitt
Good point.  I'll check that.  Logging machine was set to local time EST.
But it does have a wireless link, maybe it set itself internally to UT.
Thanks for the hint.



On Thu, Mar 4, 2021, 7:05 PM Dana Nowell 
wrote:

> If I'm reading it correctly, it's a 5 hr difference?  Local vs gmt?
>
>
> On Thu, Mar 4, 2021, 6:43 PM Bruce Labitt 
> wrote:
>
>> This is an odd question.  It involves both python and linux.
>>
>> Have a bunch of files in a directory that I'd like like to sort by
>> similar names and in time order.  This isn't particularly difficult in
>> python.  What is puzzling me is the modified timestamp returned by python
>> doesn't match whats reported by the file manager nautilus or even ls.  (ls
>> and nautilus are consistent)
>> $ lsb_release -d Ubuntu 20.04.2 LTS
>> $ nautilus --version  GNOME nautilus 3.36.3
>>
>> $ python3 --version  Python 3.8.5
>>
>> $ ls -lght
>> total 4.7M
>> -rw-r--r-- 1 bruce 209K Feb 26 01:49 20210226_022134_PLD.edf
>> -rw-r--r-- 1 bruce  65K Feb 26 01:49 20210226_022134_SAD.edf
>> -rw-r--r-- 1 bruce 2.4M Feb 26 01:49 20210226_022133_BRP.edf
>> -rw-r--r-- 1 bruce 1.1K Feb 26 00:58 20210225_224134_EVE.edf
>> -rw-r--r-- 1 bruce 1.9M Feb 25 21:18 20210225_224141_BRP.edf
>> -rw-r--r-- 1 bruce 169K Feb 25 21:17 20210225_224142_PLD.edf
>> -rw-r--r-- 1 bruce  53K Feb 25 21:17 20210225_224142_SAD.edf
>>
>> Python3 script
>>
>> #!/usr/bin/env python3
>> import os
>> from datetime import datetime
>>
>> def convert_date(timestamp):
>>   d = datetime.utcfromtimestamp(timestamp)
>>   formatted_date = d.strftime('%d %b %Y  %H:%M:%S')
>>   return formatted_date
>>
>> with os.scandir('feb262021') as entries:
>>   for entry in entries:
>> if entry.is_file():
>>   info = entry.stat()
>>   print(f'{entry.name}\t Last Modified: {convert_date(info.st_mtime)
>> }' )  # last modification
>>
>> info *(after exit) contains*: os.stat_result(st_mode=33188,
>> st_ino=34477637, st_dev=66306, st_nlink=1, st_uid=1000, st_gid=1000,
>> st_size=213416, st_atime=1614379184, st_mtime=1614322176,
>> st_ctime=1614379184)
>>
>> Running the script results in:
>>
>> 20210226_022133_BRP.edf Last Modified: 26 Feb 2021  06:49:34
>> 20210225_224141_BRP.edf Last Modified: 26 Feb 2021  02:18:42
>> 20210225_224142_PLD.edf Last Modified: 26 Feb 2021  02:17:44
>> 20210225_224142_SAD.edf Last Modified: 26 Feb 2021  02:17:44
>> 20210225_224134_EVE.edf Last Modified: 26 Feb 2021  05:58:26
>> 20210226_022134_SAD.edf Last Modified: 26 Feb 2021  06:49:36
>> 20210226_022134_PLD.edf Last Modified: 26 Feb 2021  06:49:36
>>
>> Actually, what is returned by my script is at least sensible, given that
>> 20210225_224141_BRP.edf started on Feb 25th and ended recording at
>> 2:17am on Feb 26th.  I know this because I can see the data on a separate
>> program.  20210226_022133_BRP.edf started on Feb 26th at around 2:21am
>> and terminated at 6:49am.  BRP files are written to continuously at a 25 Hz
>> rate all evening.  What makes no sense whatsoever is what *ls* is
>> reporting.
>>
>> Do *ls* and python3 use different definitions of "last modified"?
>>
>> Guess I can keep going, but I really was surprised at the difference
>> between methods.  Default for ls is "last modified", at least as reported
>> by man.  ls's last modified just isn't correct, at least on Ubuntu 20.04.2
>>
>> Is this a quirk?  Am I doing something wrong?  Some kind of voodoo
>> definition of "last modified"?  What does Linux say "last modified" really
>> means?
>>
>> FWIW, I am coming up to speed on processing these edf files to help out
>> on an open source project.  Been working on some data analysis tools.  As
>> an aside, biological data is very messy.  It's been a treat to work on this
>> as it's forced me to dust off the mental cobwebs and work on a problem that
>> can help a lot of people.
>>
>>
>> ___
>> gnhlug-discuss mailing list
>> gnhlug-discuss@mail.gnhlug.org
>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>>
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Kind of puzzled about timestamps

2021-03-04 Thread Bruce Labitt

This is an odd question.  It involves both python and linux.

Have a bunch of files in a directory that I'd like to sort by 
similar names and in time order.  This isn't particularly difficult in 
python.  What is puzzling me is the modified timestamp returned by 
python doesn't match what's reported by the file manager nautilus or even 
ls.  (ls and nautilus are consistent)


$ lsb_release -d Ubuntu 20.04.2 LTS
$ nautilus --version  GNOME nautilus 3.36.3

$ python3 --version Python 3.8.5

$ ls -lght

total 4.7M
-rw-r--r-- 1 bruce 209K Feb 26 01:49 20210226_022134_PLD.edf
-rw-r--r-- 1 bruce  65K Feb 26 01:49 20210226_022134_SAD.edf
-rw-r--r-- 1 bruce 2.4M Feb 26 01:49 20210226_022133_BRP.edf
-rw-r--r-- 1 bruce 1.1K Feb 26 00:58 20210225_224134_EVE.edf
-rw-r--r-- 1 bruce 1.9M Feb 25 21:18 20210225_224141_BRP.edf
-rw-r--r-- 1 bruce 169K Feb 25 21:17 20210225_224142_PLD.edf
-rw-r--r-- 1 bruce  53K Feb 25 21:17 20210225_224142_SAD.edf

Python3 script

#!/usr/bin/env python3
import os
from datetime import datetime

def convert_date(timestamp):
  d = datetime.utcfromtimestamp(timestamp)
  formatted_date = d.strftime('%d %b %Y  %H:%M:%S')
  return formatted_date

with os.scandir('feb262021') as entries:
  for entry in entries:
    if entry.is_file():
      info = entry.stat()
      print(f'{entry.name}\t Last Modified: {convert_date(info.st_mtime)}' )  # last modification


info /(after exit) contains/: os.stat_result(st_mode=33188, 
st_ino=34477637, st_dev=66306, st_nlink=1, st_uid=1000, st_gid=1000, 
st_size=213416, st_atime=1614379184, st_mtime=1614322176, 
st_ctime=1614379184)


Running the script results in:

20210226_022133_BRP.edf     Last Modified: 26 Feb 2021  06:49:34
20210225_224141_BRP.edf     Last Modified: 26 Feb 2021  02:18:42
20210225_224142_PLD.edf     Last Modified: 26 Feb 2021  02:17:44
20210225_224142_SAD.edf     Last Modified: 26 Feb 2021  02:17:44
20210225_224134_EVE.edf     Last Modified: 26 Feb 2021  05:58:26
20210226_022134_SAD.edf     Last Modified: 26 Feb 2021  06:49:36
20210226_022134_PLD.edf     Last Modified: 26 Feb 2021  06:49:36

Actually, what is returned by my script is at least sensible, given that 
20210225_224141_BRP.edf started on Feb 25th and ended recording at 
2:17am on Feb 26th.  I know this because I can see the data on a 
separate program. 20210226_022133_BRP.edf started on Feb 26th at around 
2:21am and terminated at 6:49am.  BRP files are written to continuously 
at a 25 Hz rate all evening.  What makes no sense whatsoever is what 
*ls* is reporting.


Do *ls* and python3 use different definitions of "last modified"?

Guess I can keep going, but I really was surprised at the difference 
between methods.  Default for ls is "last modified", at least as 
reported by man.  ls's last modified just isn't correct, at least on 
Ubuntu 20.04.2


Is this a quirk?  Am I doing something wrong?  Some kind of voodoo 
definition of "last modified"?  What does Linux say "last modified" 
really means?
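
A minimal sketch of the likely explanation (assuming the PC is on EST, per 
the reply above): utcfromtimestamp() renders st_mtime in UTC, while ls 
renders the same mtime in local time, hence the 5-hour offset.

#!/usr/bin/env python3
# Same st_mtime rendered two ways; the mtime value is taken from the stat
# output shown above.
from datetime import datetime

mtime = 1614322176
print(datetime.utcfromtimestamp(mtime).strftime('%d %b %Y  %H:%M:%S'))
# -> 26 Feb 2021  06:49:36  (UTC, which is what the script prints)
print(datetime.fromtimestamp(mtime).strftime('%d %b %Y  %H:%M:%S'))
# -> 26 Feb 2021  01:49:36  on an EST machine (which is what ls prints)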


FWIW, I am coming up to speed on processing these edf files to help out 
on an open source project.  Been working on some data analysis tools.  
As an aside, biological data is very messy.  It's been a treat to work 
on this as it's forced me to dust off the mental cobwebs and work on a 
problem that can help a lot of people.



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Question about ssh key generation

2021-02-16 Thread Bruce Labitt
Gitlab is asking for ssh keys now.  Is there a recommended type of key 
these days?

man ssh-key gives me the following choices:  dsa | ecdsa | ecdsa-sk | 
ed25519 | ed25519-sk | rsa

Which should I choose?  Which ones offer the longer/longest key length 
(best security?)

Sorry for the simplistic question, not done this before.  Any insight 
would be helpful.
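
For what it's worth, a minimal sketch of generating one (the comment string 
is just a placeholder): ed25519 is the commonly recommended modern choice, 
rsa with a long modulus is the widely-compatible fallback, dsa is deprecated, 
and the -sk variants expect a FIDO hardware token.

$ ssh-keygen -t ed25519 -C "gitlab key"
$ ssh-keygen -t rsa -b 4096 -C "gitlab key"     # rsa alternative; -b sets the key length

The public half lands in ~/.ssh/id_ed25519.pub (or id_rsa.pub) by default, 
and that is the part GitLab asks you to paste in.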

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Help me fix this annoying RPI4 Buster printing problem

2021-01-08 Thread Bruce Labitt
At the moment I'm on an RPi4 running Raspberry Pi OS.  My laptop died 
recently.  I'm trying to figure out how this OS works.  Have to say, it's 
been painful in many ways.  One is that the RPI is slow...  It will be a 
little while before I get a replacement laptop, so I have to deal with this.

Although I have configured a network printer (running on a CUPS server on 
an RPi2), some apps on my RPI4, like GIMP, gpaint, and LibreOffice, seem 
to retain former, incorrect printers.

My GoogleFu seems to be terrible.  Can't seem to find a way to purge 
these old settings from showing up.  There has to be a way to do this!

I've found this experience to be pretty darned exasperating.  I find 
lots of apps, including gpaint, don't just work on RPI4 Buster.  Doing a 
print, or a print preview from gpaint should NOT cause the app to fail 
and terminate.  Especially if the network printer actually is configured 
correctly.  I can print a test page just fine.

Where are system wide print settings managed?  File names and 
directories?  Does gpaint have a shadow set of printer settings? Where 
are they stored?
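
For what it's worth, the CUPS side of this can be inspected and pruned from 
the command line; a sketch, with placeholder printer names:

$ lpstat -p -d                   # list the queues CUPS knows about, plus the current default
$ lpoptions -d CorrectPrinter    # set the per-user default (typically saved in ~/.cups/lpoptions)
$ sudo lpadmin -x StalePrinter   # delete an old queue from CUPS entirely

Apps such as GIMP and LibreOffice also remember their own last-used printer 
independently of CUPS, so a stale choice there may persist until a new 
printer is picked once in that app's print dialog.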

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: SMART data & Self tests, not sure if my SSD is on it's last gasp

2021-01-06 Thread Bruce Labitt
To put some bounds on this, I'm currently looking at an Oryx Pro.  I have
it tricked out to $2100.  i7-10875H, 32GB 3200MHz RAM, RTX2060 6GB, 1TB
NVME seq. RD 3500MB/s, seq WR 3300 MB/s, 15.6" display 1080p.  Seems to be
the best combination of stuff I can find for the price.  Can a standard-brand
laptop beat that and be relatively painless to run Linux on?

On Wed, Jan 6, 2021 at 4:17 PM Bill Ricker  wrote:

>
>
> On Wed, Jan 6, 2021 at 3:45 PM Joshua Judson Rosen 
> wrote:
>
>> Showtime Computer  in Hudson now does
>> custom-built laptops,
>> as of some time in the last few years IIRC. They look like they're based
>> on the same ODM kits
>> as the other Linux boutiques I've shopped, and should be solid.
>>
>
> ?? I do NOT see Linux listed on their Operating Systems page (except for a
> WSL mention on WinSvr page).
>
> ThinkPenguin  is also based in NH again
>> (Keene, last I heard);
>>
> Interesting
>
>> looks like they've may have stopped doing laptops for the time being,
>> though
>> (I don't see any in the listing on their website, just accessories; they
>> have _desktops_...).
>>
> Too bad
>
>
>> I was buying all of my computers from ZaReason, but they just went out of
>> business
>> ("Unfortunately, the pandemic has been the final KO blow. It has hit our
>> little town hard
>>and we have not been able to recover from it.
>>As of Tuesday, 11/24/20 17:00 EST ZaReason is no longer in business.").
>>
>
> Sad.
>
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: SMART data & Self tests, not sure if my SSD is on it's last gasp

2021-01-06 Thread Bruce Labitt
It would be a lot easier if this was a desktop.  I had a PC supply
capacitor explode once.  That was exciting, as I was in the room at the
time.  Boom!  Lots of smoke.  One of the high voltage electrolytic caps
popped.  Replaced the power supply and was good to go again.

Laptops, especially 7-year-old ones, probably won't be able to be
repaired.  Well, this laptop went out with a whimper...  Wouldn't surprise
me if it was a bad electrolytic cap.  Wasn't there a bad run of them a
decade or so ago?

On Wed, Jan 6, 2021 at 3:11 PM Jerry Feldman  wrote:

> Power supply failures can cause lots of issues. I've changed a few. For me
> a quick trip to micro center allowed me to get stuff up and running.
>
> --
> Jerry Feldman 
> Boston Linux and Unix http://www.blu.org
> PGP key id: 6F6BB6E7
> PGP Key fingerprint: 0EDC 2FF5 53A6 8EED 84D1  3050 5715 B88D 6F6
> B B6E7
>
> On Wed, Jan 6, 2021, 3:07 PM Bruce Labitt  wrote:
>
>> Checked the media, both are readable using the RPI4.  Seems like the
>> power supply is failing.  It's cycling on and off even with no media, dvd,
>> or drives.  I think this is a dead parrot.
>>
>> Well, that was fun.  Uh, not really.
>>
>> Guess I need to go computer shopping.  It was an i7, 32GB RAM, 17"
>> screen.  It had a nvidia GPU so I could play with CUDA.  What's out there
>> that's at least as good performance wise and not a PIA to convert to
>> linux.  It was a Bonobo Extreme 6.  At the time it was pretty high end.  My
>> BonX6 was a boat anchor, but since it hardly moved, it wasn't a problem.
>> Of course, light and performance is good too.  Any good laptops out there?
>> Been out of the loop a while.
>>
>> On Wed, Jan 6, 2021 at 1:33 PM Bruce Labitt  wrote:
>>
>>> One more oddity, when I turned it off by pressing the power off button,
>>> the laptop went off, then started again.  Is this a clue?
>>>
>>> On Wed, Jan 6, 2021 at 1:31 PM Bruce Labitt  wrote:
>>>
>>>> I yanked the battery, and all the disks.  Tried booting with AC power.
>>>> And no usb stick.  I get the same behavior.  Does not respond to F2, F7, or
>>>> Func-F2 or Func-F7.  :(  No fan comes on.  If I try the USB stick and power
>>>> up, same behavior, except the fan has some activity.  Not looking good...
>>>> Guess I could go deeper into disassembly, maybe finding a weird crimped or
>>>> mangled cable, or dust filled something or another, but not looking good at
>>>> all...  Anything else it could be?  Don't know if this is a clue at all.
>>>> Next to last boot (with original disk) was 8 minutes.  Last boot (with
>>>> original disk) was 28 minutes .  Is this a sagging or failing power
>>>> supply?  What else electrical could it be?
>>>>
>>>> On Wed, Jan 6, 2021 at 12:49 PM  wrote:
>>>>
>>>>> Yank the SSD and USB and see if it boots.  That will at least isolate
>>>>> if either of those are involved.
>>>>>
>>>>> On Jan 6, 2021 12:10 PM, Bruce Labitt  wrote:
>>>>>
>>>>> Sorry to bother you, that is, if I haven 't been put on a giant ignore
>>>>> list.  Replaced disk with new bigger SSD.  Unfortunately, the laptop is 
>>>>> not
>>>>> booting to the USB stick.  I haven't even gotten to any video console yet,
>>>>> grub, bios, nada.  I get occasional flashes of the disk activity light and
>>>>> nothing else.  Posting from an RPI4 now.  Tried various combinations of 
>>>>> F2,
>>>>> F7, and no screen activity.  :(  Basically in the place I didn't want to 
>>>>> be
>>>>> with my primary computer.
>>>>>
>>>>> On Wed, Jan 6, 2021 at 10:27 AM Bruce Labitt 
>>>>> wrote:
>>>>>
>>>>> Found out how to check the whole usb disk.  $ sudo sha256sum -b
>>>>> /dev/sdx  Sudo was required.  Hope to be back and running soon...  Sorry
>>>>> for all the noise.
>>>>>
>>>>> On Wed, Jan 6, 2021 at 10:03 AM Bruce Labitt 
>>>>> wrote:
>>>>>
>>>>> System76 thinks it's the ssd.  Machine strangely got locked up while
>>>>> trying to start the arduino IDE, forcing me to power off the laptop.  Took
>>>>> 28 minutes to boot!  And 12 seconds after handing off to the OS.
>>>>> So it's time to do this.  I just backed up /home, /opt and /etc.
>>>>> Anything else I should do before replacing the 

Re: SMART data & Self tests, not sure if my SSD is on it's last gasp

2021-01-06 Thread Bruce Labitt
Checked the media, both are readable using the RPI4.  Seems like the power
supply is failing.  It's cycling on and off even with no media, dvd, or
drives.  I think this is a dead parrot.

Well, that was fun.  Uh, not really.

Guess I need to go computer shopping.  It was an i7, 32GB RAM, 17" screen.
It had an nvidia GPU so I could play with CUDA.  What's out there that's at
least as good performance-wise and not a PIA to convert to linux?  It was a
Bonobo Extreme 6.  At the time it was pretty high end.  My BonX6 was a boat
anchor, but since it hardly moved, it wasn't a problem.  Of course, light
and performance is good too.  Any good laptops out there?  Been out of the
loop a while.

On Wed, Jan 6, 2021 at 1:33 PM Bruce Labitt  wrote:

> One more oddity, when I turned it off by pressing the power off button,
> the laptop went off, then started again.  Is this a clue?
>
> On Wed, Jan 6, 2021 at 1:31 PM Bruce Labitt  wrote:
>
>> I yanked the battery, and all the disks.  Tried booting with AC power.
>> And no usb stick.  I get the same behavior.  Does not respond to F2, F7, or
>> Func-F2 or Func-F7.  :(  No fan comes on.  If I try the USB stick and power
>> up, same behavior, except the fan has some activity.  Not looking good...
>> Guess I could go deeper into disassembly, maybe finding a weird crimped or
>> mangled cable, or dust filled something or another, but not looking good at
>> all...  Anything else it could be?  Don't know if this is a clue at all.
>> Next to last boot (with original disk) was 8 minutes.  Last boot (with
>> original disk) was 28 minutes .  Is this a sagging or failing power
>> supply?  What else electrical could it be?
>>
>> On Wed, Jan 6, 2021 at 12:49 PM  wrote:
>>
>>> Yank the SSD and USB and see if it boots.  That will at least isolate if
>>> either of those are involved.
>>>
>>> On Jan 6, 2021 12:10 PM, Bruce Labitt  wrote:
>>>
>>> Sorry to bother you, that is, if I haven 't been put on a giant ignore
>>> list.  Replaced disk with new bigger SSD.  Unfortunately, the laptop is not
>>> booting to the USB stick.  I haven't even gotten to any video console yet,
>>> grub, bios, nada.  I get occasional flashes of the disk activity light and
>>> nothing else.  Posting from an RPI4 now.  Tried various combinations of F2,
>>> F7, and no screen activity.  :(  Basically in the place I didn't want to be
>>> with my primary computer.
>>>
>>> On Wed, Jan 6, 2021 at 10:27 AM Bruce Labitt  wrote:
>>>
>>> Found out how to check the whole usb disk.  $ sudo sha256sum -b
>>> /dev/sdx  Sudo was required.  Hope to be back and running soon...  Sorry
>>> for all the noise.
>>>
>>> On Wed, Jan 6, 2021 at 10:03 AM Bruce Labitt  wrote:
>>>
>>> System76 thinks it's the ssd.  Machine strangely got locked up while
>>> trying to start the arduino IDE, forcing me to power off the laptop.  Took
>>> 28 minutes to boot!  And 12 seconds after handing off to the OS.
>>> So it's time to do this.  I just backed up /home, /opt and /etc.
>>> Anything else I should do before replacing the disk?  Just checked the
>>> sha256sum on the iso.  How do I check if the USB stick I burned is ok?
>>>
>>>
>>> On Sat, Jan 2, 2021 at 10:14 PM Bruce Labitt <
>>> bruce.lab...@myfairpoint.net> wrote:
>>>
>>> Think it's a driver issue.  Looked in journalctl and there's some errors
>>> indicated.  One is a video issue, another is some sort of permissions
>>> issue for user who isn't me.  The permissions issue is with
>>> tracker-miner, which I find to be highly annoying.  Not quite sure how
>>> to disable it cleanly with low system impact.
>>>
>>> Last fsck was 3 months ago.  Next one is due in 3 months.  So it wasn't
>>> an overdue fsck...  So I'm not so sure it's disk related at all.
>>>
>>> Have contacted system76 and sent them logs.  If I recall correctly, the
>>> issue seems to be closely related to a driver change (issued by
>>> system76).  Of course, they are still on break...
>>>
>>> Nonetheless, waiting 8-10 minutes for boot is awful.  I don't even think
>>> my first IBM PC was that slow, even with a boot from floppy disk.
>>>
>>>
>>> On 1/2/21 9:15 PM, r...@mrt4.com wrote:
>>> > Examine the time stamps on the syslog and compare them to previous
>>> nominal boots. That should indicate where the issue is. If all log entries
>>> indicate long delays, then it is something s

Re: SMART data & Self tests, not sure if my SSD is on it's last gasp

2021-01-06 Thread Bruce Labitt
One more oddity, when I turned it off by pressing the power off button, the
laptop went off, then started again.  Is this a clue?

On Wed, Jan 6, 2021 at 1:31 PM Bruce Labitt  wrote:

> I yanked the battery, and all the disks.  Tried booting with AC power.
> And no usb stick.  I get the same behavior.  Does not respond to F2, F7, or
> Func-F2 or Func-F7.  :(  No fan comes on.  If I try the USB stick and power
> up, same behavior, except the fan has some activity.  Not looking good...
> Guess I could go deeper into disassembly, maybe finding a weird crimped or
> mangled cable, or dust filled something or another, but not looking good at
> all...  Anything else it could be?  Don't know if this is a clue at all.
> Next to last boot (with original disk) was 8 minutes.  Last boot (with
> original disk) was 28 minutes .  Is this a sagging or failing power
> supply?  What else electrical could it be?
>
> On Wed, Jan 6, 2021 at 12:49 PM  wrote:
>
>> Yank the SSD and USB and see if it boots.  That will at least isolate if
>> either of those are involved.
>>
>> On Jan 6, 2021 12:10 PM, Bruce Labitt  wrote:
>>
>> Sorry to bother you, that is, if I haven 't been put on a giant ignore
>> list.  Replaced disk with new bigger SSD.  Unfortunately, the laptop is not
>> booting to the USB stick.  I haven't even gotten to any video console yet,
>> grub, bios, nada.  I get occasional flashes of the disk activity light and
>> nothing else.  Posting from an RPI4 now.  Tried various combinations of F2,
>> F7, and no screen activity.  :(  Basically in the place I didn't want to be
>> with my primary computer.
>>
>> On Wed, Jan 6, 2021 at 10:27 AM Bruce Labitt  wrote:
>>
>> Found out how to check the whole usb disk.  $ sudo sha256sum -b /dev/sdx
>> Sudo was required.  Hope to be back and running soon...  Sorry for all the
>> noise.
>>
>> On Wed, Jan 6, 2021 at 10:03 AM Bruce Labitt  wrote:
>>
>> System76 thinks it's the ssd.  Machine strangely got locked up while
>> trying to start the arduino IDE, forcing me to power off the laptop.  Took
>> 28 minutes to boot!  And 12 seconds after handing off to the OS.
>> So it's time to do this.  I just backed up /home, /opt and /etc.
>> Anything else I should do before replacing the disk?  Just checked the
>> sha256sum on the iso.  How do I check if the USB stick I burned is ok?
>>
>>
>> On Sat, Jan 2, 2021 at 10:14 PM Bruce Labitt <
>> bruce.lab...@myfairpoint.net> wrote:
>>
>> Think it's a driver issue.  Looked in journalctl and there's some errors
>> indicated.  One is a video issue, another is some sort of permissions
>> issue for user who isn't me.  The permissions issue is with
>> tracker-miner, which I find to be highly annoying.  Not quite sure how
>> to disable it cleanly with low system impact.
>>
>> Last fsck was 3 months ago.  Next one is due in 3 months.  So it wasn't
>> an overdue fsck...  So I'm not so sure it's disk related at all.
>>
>> Have contacted system76 and sent them logs.  If I recall correctly, the
>> issue seems to be closely related to a driver change (issued by
>> system76).  Of course, they are still on break...
>>
>> Nonetheless, waiting 8-10 minutes for boot is awful.  I don't even think
>> my first IBM PC was that slow, even with a boot from floppy disk.
>>
>>
>> On 1/2/21 9:15 PM, r...@mrt4.com wrote:
>> > Examine the time stamps on the syslog and compare them to previous
>> nominal boots. That should indicate where the issue is. If all log entries
>> indicate long delays, then it is something systemic like memory, storage,
>> CPU, a thermal issue, etc. (Note: A systemic issue is not necessarily a
>> hardware fault because a HW device can be incorrectly configured when it is
>> initialized.)
>> >
>> > If it was a one-time occurrence then it was most likely an overdue
>> fsck, but syslog will indicate that if that's the case.
>> >
>> > Ronald Smith
>> >
>> > --
>> >
>> > On Wed, 30 Dec 2020 14:04:43 -0500
>> > Bruce Labitt  wrote:
>> >
>> >> I think I have a SSD on the way out.  Last reboot took a REALLY long
>> >> time.  Like 30 minutes.  I ran the smart data and self test and the SSD
>> >> passes.  Overall assessment is disk is ok.  I really don't know how to
>> >> interpret what the results are.
>> >>
>> >> I think the disk is in pre-fail based on the smartctl output below
>> >>
>> >>

Re: SMART data & Self tests, not sure if my SSD is on it's last gasp

2021-01-06 Thread Bruce Labitt
I yanked the battery, and all the disks.  Tried booting with AC power.  And
no usb stick.  I get the same behavior.  Does not respond to F2, F7, or
Func-F2 or Func-F7.  :(  No fan comes on.  If I try the USB stick and power
up, same behavior, except the fan has some activity.  Not looking good...
Guess I could go deeper into disassembly, maybe finding a weird crimped or
mangled cable, or dust filled something or another, but not looking good at
all...  Anything else it could be?  Don't know if this is a clue at all.
Next to last boot (with original disk) was 8 minutes.  Last boot (with
original disk) was 28 minutes .  Is this a sagging or failing power
supply?  What else electrical could it be?

On Wed, Jan 6, 2021 at 12:49 PM  wrote:

> Yank the SSD and USB and see if it boots.  That will at least isolate if
> either of those are involved.
>
> On Jan 6, 2021 12:10 PM, Bruce Labitt  wrote:
>
> Sorry to bother you, that is, if I haven 't been put on a giant ignore
> list.  Replaced disk with new bigger SSD.  Unfortunately, the laptop is not
> booting to the USB stick.  I haven't even gotten to any video console yet,
> grub, bios, nada.  I get occasional flashes of the disk activity light and
> nothing else.  Posting from an RPI4 now.  Tried various combinations of F2,
> F7, and no screen activity.  :(  Basically in the place I didn't want to be
> with my primary computer.
>
> On Wed, Jan 6, 2021 at 10:27 AM Bruce Labitt  wrote:
>
> Found out how to check the whole usb disk.  $ sudo sha256sum -b /dev/sdx
> Sudo was required.  Hope to be back and running soon...  Sorry for all the
> noise.
>
> On Wed, Jan 6, 2021 at 10:03 AM Bruce Labitt  wrote:
>
> System76 thinks it's the ssd.  Machine strangely got locked up while
> trying to start the arduino IDE, forcing me to power off the laptop.  Took
> 28 minutes to boot!  And 12 seconds after handing off to the OS.
> So it's time to do this.  I just backed up /home, /opt and /etc.  Anything
> else I should do before replacing the disk?  Just checked the sha256sum on
> the iso.  How do I check if the USB stick I burned is ok?
>
>
> On Sat, Jan 2, 2021 at 10:14 PM Bruce Labitt 
> wrote:
>
> Think it's a driver issue.  Looked in journalctl and there's some errors
> indicated.  One is a video issue, another is some sort of permissions
> issue for user who isn't me.  The permissions issue is with
> tracker-miner, which I find to be highly annoying.  Not quite sure how
> to disable it cleanly with low system impact.
>
> Last fsck was 3 months ago.  Next one is due in 3 months.  So it wasn't
> an overdue fsck...  So I'm not so sure it's disk related at all.
>
> Have contacted system76 and sent them logs.  If I recall correctly, the
> issue seems to be closely related to a driver change (issued by
> system76).  Of course, they are still on break...
>
> Nonetheless, waiting 8-10 minutes for boot is awful.  I don't even think
> my first IBM PC was that slow, even with a boot from floppy disk.
>
>
> On 1/2/21 9:15 PM, r...@mrt4.com wrote:
> > Examine the time stamps on the syslog and compare them to previous
> nominal boots. That should indicate where the issue is. If all log entries
> indicate long delays, then it is something systemic like memory, storage,
> CPU, a thermal issue, etc. (Note: A systemic issue is not necessarily a
> hardware fault because a HW device can be incorrectly configured when it is
> initialized.)
> >
> > If it was a one-time occurrence then it was most likely an overdue fsck,
> but syslog will indicate that if that's the case.
> >
> > Ronald Smith
> >
> > --
> >
> > On Wed, 30 Dec 2020 14:04:43 -0500
> > Bruce Labitt  wrote:
> >
> >> I think I have a SSD on the way out.  Last reboot took a REALLY long
> >> time.  Like 30 minutes.  I ran the smart data and self test and the SSD
> >> passes.  Overall assessment is disk is ok.  I really don't know how to
> >> interpret what the results are.
> >>
> >> I think the disk is in pre-fail based on the smartctl output below
> >>
> >> /snip
> >>
> >> === START OF INFORMATION SECTION ===
> >> Model Family: Crucial/Micron RealSSD m4/C400/P400
> >> Device Model: M4-CT256M4SSD2
> >> Serial Number:1247091DC2FF
> >> LU WWN Device Id: 5 00a075 1091dc2ff
> >> Firmware Version: 040H
> >> User Capacity:256,060,514,304 bytes [256 GB]
> >> Sector Size:  512 bytes logical/physical
> >> Rotation Rate:Solid State Device
> >> Form Factor:  2.5 inches
> >> Device is:In smartct

Re: SMART data & Self tests, not sure if my SSD is on it's last gasp

2021-01-06 Thread Bruce Labitt
Sorry to bother you, that is, if I haven't been put on a giant ignore
list.  Replaced disk with new bigger SSD.  Unfortunately, the laptop is not
booting to the USB stick.  I haven't even gotten to any video console yet,
grub, bios, nada.  I get occasional flashes of the disk activity light and
nothing else.  Posting from an RPI4 now.  Tried various combinations of F2,
F7, and no screen activity.  :(  Basically in the place I didn't want to be
with my primary computer.

On Wed, Jan 6, 2021 at 10:27 AM Bruce Labitt  wrote:

> Found out how to check the whole usb disk.  $ sudo sha256sum -b /dev/sdx
> Sudo was required.  Hope to be back and running soon...  Sorry for all the
> noise.
>
> On Wed, Jan 6, 2021 at 10:03 AM Bruce Labitt  wrote:
>
>> System76 thinks it's the ssd.  Machine strangely got locked up while
>> trying to start the arduino IDE, forcing me to power off the laptop.  Took
>> 28 minutes to boot!  And 12 seconds after handing off to the OS.
>> So it's time to do this.  I just backed up /home, /opt and /etc.
>> Anything else I should do before replacing the disk?  Just checked the
>> sha256sum on the iso.  How do I check if the USB stick I burned is ok?
>>
>>
>> On Sat, Jan 2, 2021 at 10:14 PM Bruce Labitt <
>> bruce.lab...@myfairpoint.net> wrote:
>>
>>> Think it's a driver issue.  Looked in journalctl and there's some errors
>>> indicated.  One is a video issue, another is some sort of permissions
>>> issue for user who isn't me.  The permissions issue is with
>>> tracker-miner, which I find to be highly annoying.  Not quite sure how
>>> to disable it cleanly with low system impact.
>>>
>>> Last fsck was 3 months ago.  Next one is due in 3 months.  So it wasn't
>>> an overdue fsck...  So I'm not so sure it's disk related at all.
>>>
>>> Have contacted system76 and sent them logs.  If I recall correctly, the
>>> issue seems to be closely related to a driver change (issued by
>>> system76).  Of course, they are still on break...
>>>
>>> Nonetheless, waiting 8-10 minutes for boot is awful.  I don't even think
>>> my first IBM PC was that slow, even with a boot from floppy disk.
>>>
>>>
>>> On 1/2/21 9:15 PM, r...@mrt4.com wrote:
>>> > Examine the time stamps on the syslog and compare them to previous
>>> nominal boots. That should indicate where the issue is. If all log entries
>>> indicate long delays, then it is something systemic like memory, storage,
>>> CPU, a thermal issue, etc. (Note: A systemic issue is not necessarily a
>>> hardware fault because a HW device can be incorrectly configured when it is
>>> initialized.)
>>> >
>>> > If it was a one-time occurrence then it was most likely an overdue
>>> fsck, but syslog will indicate that if that's the case.
>>> >
>>> > Ronald Smith
>>> >
>>> > --
>>> >
>>> > On Wed, 30 Dec 2020 14:04:43 -0500
>>> > Bruce Labitt  wrote:
>>> >
>>> >> I think I have a SSD on the way out.  Last reboot took a REALLY long
>>> >> time.  Like 30 minutes.  I ran the smart data and self test and the
>>> SSD
>>> >> passes.  Overall assessment is disk is ok.  I really don't know how to
>>> >> interpret what the results are.
>>> >>
>>> >> I think the disk is in pre-fail based on the smartctl output below
>>> >>
>>> >> /snip
>>> >>
>>> >> === START OF INFORMATION SECTION ===
>>> >> Model Family: Crucial/Micron RealSSD m4/C400/P400
>>> >> Device Model: M4-CT256M4SSD2
>>> >> Serial Number:1247091DC2FF
>>> >> LU WWN Device Id: 5 00a075 1091dc2ff
>>> >> Firmware Version: 040H
>>> >> User Capacity:256,060,514,304 bytes [256 GB]
>>> >> Sector Size:  512 bytes logical/physical
>>> >> Rotation Rate:Solid State Device
>>> >> Form Factor:  2.5 inches
>>> >> Device is:In smartctl database [for details use: -P show]
>>> >> ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 6
>>> >> SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
>>> >> Local Time is:Wed Dec 30 13:49:17 2020 EST
>>> >> SMART support is: Available - device has SMART capability.
>>> >> SMART support is: Enabled
>>> >>
>>> >> ==

Re: SMART data & Self tests, not sure if my SSD is on it's last gasp

2021-01-06 Thread Bruce Labitt
Found out how to check the whole usb disk.  $ sudo sha256sum -b /dev/sdx
Sudo was required.  Hope to be back and running soon...  Sorry for all the
noise.
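
One caveat: hashing the whole device only matches the ISO's published 
checksum if the read is limited to the ISO's exact byte count, since the 
stick is almost always larger than the image.  A sketch, with placeholder 
file and device names:

$ sudo head -c "$(stat -c %s your.iso)" /dev/sdx | sha256sum
$ sha256sum your.iso     # the two digests should agree if the stick was written correctly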

On Wed, Jan 6, 2021 at 10:03 AM Bruce Labitt  wrote:

> System76 thinks it's the ssd.  Machine strangely got locked up while
> trying to start the arduino IDE, forcing me to power off the laptop.  Took
> 28 minutes to boot!  And 12 seconds after handing off to the OS.
> So it's time to do this.  I just backed up /home, /opt and /etc.  Anything
> else I should do before replacing the disk?  Just checked the sha256sum on
> the iso.  How do I check if the USB stick I burned is ok?
>
>
> On Sat, Jan 2, 2021 at 10:14 PM Bruce Labitt 
> wrote:
>
>> Think it's a driver issue.  Looked in journalctl and there's some errors
>> indicated.  One is a video issue, another is some sort of permissions
>> issue for user who isn't me.  The permissions issue is with
>> tracker-miner, which I find to be highly annoying.  Not quite sure how
>> to disable it cleanly with low system impact.
>>
>> Last fsck was 3 months ago.  Next one is due in 3 months.  So it wasn't
>> an overdue fsck...  So I'm not so sure it's disk related at all.
>>
>> Have contacted system76 and sent them logs.  If I recall correctly, the
>> issue seems to be closely related to a driver change (issued by
>> system76).  Of course, they are still on break...
>>
>> Nonetheless, waiting 8-10 minutes for boot is awful.  I don't even think
>> my first IBM PC was that slow, even with a boot from floppy disk.
>>
>>
>> On 1/2/21 9:15 PM, r...@mrt4.com wrote:
>> > Examine the time stamps on the syslog and compare them to previous
>> nominal boots. That should indicate where the issue is. If all log entries
>> indicate long delays, then it is something systemic like memory, storage,
>> CPU, a thermal issue, etc. (Note: A systemic issue is not necessarily a
>> hardware fault because a HW device can be incorrectly configured when it is
>> initialized.)
>> >
>> > If it was a one-time occurrence then it was most likely an overdue
>> fsck, but syslog will indicate that if that's the case.
>> >
>> > Ronald Smith
>> >
>> > --
>> >
>> > On Wed, 30 Dec 2020 14:04:43 -0500
>> > Bruce Labitt  wrote:
>> >
>> >> I think I have a SSD on the way out.  Last reboot took a REALLY long
>> >> time.  Like 30 minutes.  I ran the smart data and self test and the SSD
>> >> passes.  Overall assessment is disk is ok.  I really don't know how to
>> >> interpret what the results are.
>> >>
>> >> I think the disk is in pre-fail based on the smartctl output below
>> >>
>> >> /snip
>> >>
>> >> === START OF INFORMATION SECTION ===
>> >> Model Family: Crucial/Micron RealSSD m4/C400/P400
>> >> Device Model: M4-CT256M4SSD2
>> >> Serial Number:1247091DC2FF
>> >> LU WWN Device Id: 5 00a075 1091dc2ff
>> >> Firmware Version: 040H
>> >> User Capacity:256,060,514,304 bytes [256 GB]
>> >> Sector Size:  512 bytes logical/physical
>> >> Rotation Rate:Solid State Device
>> >> Form Factor:  2.5 inches
>> >> Device is:In smartctl database [for details use: -P show]
>> >> ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 6
>> >> SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
>> >> Local Time is:Wed Dec 30 13:49:17 2020 EST
>> >> SMART support is: Available - device has SMART capability.
>> >> SMART support is: Enabled
>> >>
>> >> === START OF READ SMART DATA SECTION ===
>> >> SMART overall-health self-assessment test result: PASSED
>> >>
>> >> /snip
>> >>
>> >> ID# ATTRIBUTE_NAME  FLAG VALUE WORST THRESH TYPE
>> >> UPDATED  WHEN_FAILED RAW_VALUE
>> >> 1 Raw_Read_Error_Rate 0x002f   100   100   050 Pre-fail
>> >> Always   -   0
>> >> 5 Reallocated_Sector_Ct   0x0033   100   100   010 Pre-fail
>> >> Always   -   0
>> >> 9 Power_On_Hours  0x0032   100   100   001 Old_age
>> >> Always   -   7294
>> >>12 Power_Cycle_Count   0x0032   100   100   001 Old_age
>> >> Always   -   2511
>> >> 170 Grown_Failing_Block_Ct  0x0033   100   100   010 Pre-fail
>> >> Always   -  

Re: SMART data & Self tests, not sure if my SSD is on it's last gasp

2021-01-06 Thread Bruce Labitt
System76 thinks it's the ssd.  Machine strangely got locked up while trying
to start the arduino IDE, forcing me to power off the laptop.  Took 28
minutes to boot!  And 12 seconds after handing off to the OS.
So it's time to do this.  I just backed up /home, /opt and /etc.  Anything
else I should do before replacing the disk?  Just checked the sha256sum on
the iso.  How do I check if the USB stick I burned is ok?


On Sat, Jan 2, 2021 at 10:14 PM Bruce Labitt 
wrote:

> Think it's a driver issue.  Looked in journalctl and there's some errors
> indicated.  One is a video issue, another is some sort of permissions
> issue for user who isn't me.  The permissions issue is with
> tracker-miner, which I find to be highly annoying.  Not quite sure how
> to disable it cleanly with low system impact.
>
> Last fsck was 3 months ago.  Next one is due in 3 months.  So it wasn't
> an overdue fsck...  So I'm not so sure it's disk related at all.
>
> Have contacted system76 and sent them logs.  If I recall correctly, the
> issue seems to be closely related to a driver change (issued by
> system76).  Of course, they are still on break...
>
> Nonetheless, waiting 8-10 minutes for boot is awful.  I don't even think
> my first IBM PC was that slow, even with a boot from floppy disk.
>
>
> On 1/2/21 9:15 PM, r...@mrt4.com wrote:
> > Examine the time stamps on the syslog and compare them to previous
> nominal boots. That should indicate where the issue is. If all log entries
> indicate long delays, then it is something systemic like memory, storage,
> CPU, a thermal issue, etc. (Note: A systemic issue is not necessarily a
> hardware fault because a HW device can be incorrectly configured when it is
> initialized.)
> >
> > If it was a one-time occurrence then it was most likely an overdue fsck,
> but syslog will indicate that if that's the case.
> >
> > Ronald Smith
> >
> > --
> >
> > On Wed, 30 Dec 2020 14:04:43 -0500
> > Bruce Labitt  wrote:
> >
> >> I think I have a SSD on the way out.  Last reboot took a REALLY long
> >> time.  Like 30 minutes.  I ran the smart data and self test and the SSD
> >> passes.  Overall assessment is disk is ok.  I really don't know how to
> >> interpret what the results are.
> >>
> >> I think the disk is in pre-fail based on the smartctl output below
> >>
> >> /snip
> >>
> >> === START OF INFORMATION SECTION ===
> >> Model Family: Crucial/Micron RealSSD m4/C400/P400
> >> Device Model: M4-CT256M4SSD2
> >> Serial Number:1247091DC2FF
> >> LU WWN Device Id: 5 00a075 1091dc2ff
> >> Firmware Version: 040H
> >> User Capacity:256,060,514,304 bytes [256 GB]
> >> Sector Size:  512 bytes logical/physical
> >> Rotation Rate:Solid State Device
> >> Form Factor:  2.5 inches
> >> Device is:In smartctl database [for details use: -P show]
> >> ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 6
> >> SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
> >> Local Time is:Wed Dec 30 13:49:17 2020 EST
> >> SMART support is: Available - device has SMART capability.
> >> SMART support is: Enabled
> >>
> >> === START OF READ SMART DATA SECTION ===
> >> SMART overall-health self-assessment test result: PASSED
> >>
> >> /snip
> >>
> >> ID# ATTRIBUTE_NAME  FLAG VALUE WORST THRESH TYPE
> >> UPDATED  WHEN_FAILED RAW_VALUE
> >> 1 Raw_Read_Error_Rate 0x002f   100   100   050 Pre-fail
> >> Always   -   0
> >> 5 Reallocated_Sector_Ct   0x0033   100   100   010 Pre-fail
> >> Always   -   0
> >> 9 Power_On_Hours  0x0032   100   100   001 Old_age
> >> Always   -   7294
> >>12 Power_Cycle_Count   0x0032   100   100   001 Old_age
> >> Always   -   2511
> >> 170 Grown_Failing_Block_Ct  0x0033   100   100   010 Pre-fail
> >> Always   -   0
> >> 171 Program_Fail_Count  0x0032   100   100   001 Old_age
> >> Always   -   0
> >> 172 Erase_Fail_Count0x0032   100   100   001 Old_age
> >> Always   -   0
> >> 173 Wear_Leveling_Count 0x0033   098   098   010 Pre-fail
> >> Always   -   66
> >> 174 Unexpect_Power_Loss_Ct  0x0032   100   100   001 Old_age
> >> Always   -   87
> >> 181 Non4k_Aligned_Access0x0022   100   100   001 Old_age
> >> Always   -   10250 5047 520

Re: SMART data & Self tests, not sure if my SSD is on it's last gasp

2021-01-02 Thread Bruce Labitt
Think it's a driver issue.  Looked in journalctl and there's some errors 
indicated.  One is a video issue, another is some sort of permissions 
issue for user who isn't me.  The permissions issue is with 
tracker-miner, which I find to be highly annoying.  Not quite sure how 
to disable it cleanly with low system impact.

Last fsck was 3 months ago.  Next one is due in 3 months.  So it wasn't 
an overdue fsck...  So I'm not so sure it's disk related at all.

Have contacted system76 and sent them logs.  If I recall correctly, the 
issue seems to be closely related to a driver change (issued by 
system76).  Of course, they are still on break...

Nonetheless, waiting 8-10 minutes for boot is awful.  I don't even think 
my first IBM PC was that slow, even with a boot from floppy disk.


On 1/2/21 9:15 PM, r...@mrt4.com wrote:
> Examine the time stamps on the syslog and compare them to previous nominal 
> boots. That should indicate where the issue is. If all log entries indicate 
> long delays, then it is something systemic like memory, storage, CPU, a 
> thermal issue, etc. (Note: A systemic issue is not necessarily a hardware 
> fault because a HW device can be incorrectly configured when it is 
> initialized.)
>
> If it was a one-time occurrence then it was most likely an overdue fsck, but 
> syslog will indicate that if that's the case.
>
> Ronald Smith
>
> --
>
> On Wed, 30 Dec 2020 14:04:43 -0500
> Bruce Labitt  wrote:
>
>> I think I have a SSD on the way out.  Last reboot took a REALLY long
>> time.  Like 30 minutes.  I ran the smart data and self test and the SSD
>> passes.  Overall assessment is disk is ok.  I really don't know how to
>> interpret what the results are.
>>
>> I think the disk is in pre-fail based on the smartctl output below
>>
>> /snip
>>
>> === START OF INFORMATION SECTION ===
>> Model Family: Crucial/Micron RealSSD m4/C400/P400
>> Device Model: M4-CT256M4SSD2
>> Serial Number:    1247091DC2FF
>> LU WWN Device Id: 5 00a075 1091dc2ff
>> Firmware Version: 040H
>> User Capacity:    256,060,514,304 bytes [256 GB]
>> Sector Size:  512 bytes logical/physical
>> Rotation Rate:    Solid State Device
>> Form Factor:  2.5 inches
>> Device is:    In smartctl database [for details use: -P show]
>> ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 6
>> SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
>> Local Time is:    Wed Dec 30 13:49:17 2020 EST
>> SMART support is: Available - device has SMART capability.
>> SMART support is: Enabled
>>
>> === START OF READ SMART DATA SECTION ===
>> SMART overall-health self-assessment test result: PASSED
>>
>> /snip
>>
>> ID# ATTRIBUTE_NAME  FLAG VALUE WORST THRESH TYPE
>> UPDATED  WHEN_FAILED RAW_VALUE
>>     1 Raw_Read_Error_Rate 0x002f   100   100   050 Pre-fail
>> Always   -   0
>>     5 Reallocated_Sector_Ct   0x0033   100   100   010 Pre-fail
>> Always   -   0
>>     9 Power_On_Hours  0x0032   100   100   001 Old_age
>> Always   -   7294
>>    12 Power_Cycle_Count   0x0032   100   100   001 Old_age
>> Always   -   2511
>> 170 Grown_Failing_Block_Ct  0x0033   100   100   010 Pre-fail
>> Always   -   0
>> 171 Program_Fail_Count  0x0032   100   100   001 Old_age
>> Always   -   0
>> 172 Erase_Fail_Count    0x0032   100   100   001 Old_age
>> Always   -   0
>> 173 Wear_Leveling_Count 0x0033   098   098   010 Pre-fail
>> Always   -   66
>> 174 Unexpect_Power_Loss_Ct  0x0032   100   100   001 Old_age
>> Always   -   87
>> 181 Non4k_Aligned_Access    0x0022   100   100   001 Old_age
>> Always   -   10250 5047 5203
>> 183 SATA_Iface_Downshift    0x0032   100   100   001 Old_age
>> Always   -   0
>> 184 End-to-End_Error    0x0033   100   100   050 Pre-fail
>> Always   -   0
>> 187 Reported_Uncorrect  0x0032   100   100   001 Old_age
>> Always   -   0
>> 188 Command_Timeout 0x0032   100   100   001 Old_age
>> Always   -   0
>> 189 Factory_Bad_Block_Ct    0x000e   100   100   001 Old_age
>> Always   -   81
>> 194 Temperature_Celsius 0x0022   100   100   000 Old_age
>> Always   -   0
>> 195 Hardware_ECC_Recovered  0x003a   100   100   001 Old_age
>> Always   -   0
>> 196 Reallocated_Event_Count 0x0032   100   100   001 Old_age
>> Always   -   0
>> 197 Current_Pending_Sector  0x0032   100   1

SMART data & Self tests, not sure if my SSD is on it's last gasp

2020-12-30 Thread Bruce Labitt
I think I have a SSD on the way out.  Last reboot took a REALLY long 
time.  Like 30 minutes.  I ran the smart data and self test and the SSD 
passes.  Overall assessment is disk is ok.  I really don't know how to 
interpret what the results are.

I think the disk is in pre-fail based on the smartctl output below

/snip

=== START OF INFORMATION SECTION ===
Model Family: Crucial/Micron RealSSD m4/C400/P400
Device Model: M4-CT256M4SSD2
Serial Number:    1247091DC2FF
LU WWN Device Id: 5 00a075 1091dc2ff
Firmware Version: 040H
User Capacity:    256,060,514,304 bytes [256 GB]
Sector Size:  512 bytes logical/physical
Rotation Rate:    Solid State Device
Form Factor:  2.5 inches
Device is:    In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 6
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed Dec 30 13:49:17 2020 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

/snip

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   100   050    Pre-fail Always   -           0
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail Always   -           0
  9 Power_On_Hours          0x0032   100   100   001    Old_age  Always   -           7294
 12 Power_Cycle_Count       0x0032   100   100   001    Old_age  Always   -           2511
170 Grown_Failing_Block_Ct  0x0033   100   100   010    Pre-fail Always   -           0
171 Program_Fail_Count      0x0032   100   100   001    Old_age  Always   -           0
172 Erase_Fail_Count        0x0032   100   100   001    Old_age  Always   -           0
173 Wear_Leveling_Count     0x0033   098   098   010    Pre-fail Always   -           66
174 Unexpect_Power_Loss_Ct  0x0032   100   100   001    Old_age  Always   -           87
181 Non4k_Aligned_Access    0x0022   100   100   001    Old_age  Always   -           10250 5047 5203
183 SATA_Iface_Downshift    0x0032   100   100   001    Old_age  Always   -           0
184 End-to-End_Error        0x0033   100   100   050    Pre-fail Always   -           0
187 Reported_Uncorrect      0x0032   100   100   001    Old_age  Always   -           0
188 Command_Timeout         0x0032   100   100   001    Old_age  Always   -           0
189 Factory_Bad_Block_Ct    0x000e   100   100   001    Old_age  Always   -           81
194 Temperature_Celsius     0x0022   100   100   000    Old_age  Always   -           0
195 Hardware_ECC_Recovered  0x003a   100   100   001    Old_age  Always   -           0
196 Reallocated_Event_Count 0x0032   100   100   001    Old_age  Always   -           0
197 Current_Pending_Sector  0x0032   100   100   001    Old_age  Always   -           0
198 Offline_Uncorrectable   0x0030   100   100   001    Old_age  Offline  -           0
199 UDMA_CRC_Error_Count    0x0032   100   100   001    Old_age  Always   -           0
202 Perc_Rated_Life_Used    0x0018   098   098   001    Old_age  Offline  -           2
206 Write_Error_Rate        0x000e   100   100   001    Old_age  Always   -           0

Replace the disk pronto?  Is that what this is telling me?  Or?

I recently copied over many important files to another disk.  And 
downloaded a new OS.  I just hate re-configuring things, and starting 
from scratch, it's such a pain.  Not as painful as a disk crash, but 
close.  I've got loads of stuff I've compiled from source and just 100's 
of things to check or update.  Yes, I'll just have to do it.  It's just 
the week plus of recovery that I'm rebelling against.

Anything else I should do first?  Check something?  Run a test? Any tips 
to make the "recovery" less painful?

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Known good wireless Linux Keyboard & Mouse combos?

2020-11-12 Thread Bruce Labitt
My old Logitech K270 keyboard and Performance MX mouse are long in the 
tooth and need replacing.  The paint is worn off several keys and 
sometimes keys either stick or the KB doesn't seem to transmit a 
keystroke.  The MX mouse charging circuit seems to have failed.  If I 
put in a charged NiMH battery it balks.  Red blinking light.  Which 
seems to mean the battery needs replacing, or needs a charge.  Found a 
hack on instructables, but seriously, it's time for a new one.  This one 
is worn.  Yes, I can clean out the KB as well, and have done this 
multiple times, but, it really is time to replace these guys.

Logitech seems to have abandoned their unifying receiver, especially at 
the lower end, which is a shame.  Their stuff worked for me and for a 
relatively long time.

What wireless KB & mouse combos have you found that work seamlessly with 
linux?  I don't need any thing super fancy, just something with no palm 
rests, and not a micro sized mouse.  The little mice are uncomfortable 
to use.  The K270 style layout is perfectly adequate.  Price range is 
$50 and under.

The MK850 claims multi-os friendly, but has the palm rests built in.  My 
desk has a palm rest built in, so the 850 won't work.

Is there any way to actually filter what you see on Amazon?  I swear, 
their search engine is deliberately terrible.  All sorts of irrelevant 
stuff.  20 pages for "linux wireless keyboard and mouse"  First hit is 
wired... Come on...

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How does Linux handle DST/ST? It's all about time...

2020-11-10 Thread Bruce Labitt
"Dumb" machine, while actually computer controlled, is closed source.  
No possibility of changing its behavior.
No ssh, no network.  It's a data logger to an SD card.  I have to use 
sneaker net to transport data to my PC.

Other possibility (after a SD card backup) is to change the dumb machine 
clock back to standard time, hopefully without messing any settings up.  
Fortunately I have recorded all the necessary settings.  The dumb 
machine has big warnings to not do such a thing as the instructions warn 
of data corruption.

Probably will end up just changing the dumb clock.  There does seem to 
be a struct (struct tm) with an int called tm_isdst, which should be useful; 
however, I'm not sure the solution would be platform independent.  As I 
understand it, linux uses UTC in the RTC and Windows uses 
localtime.  How messy.  Oh well, so much for a sw solution, going to 
change the time on dumbo.

Back to what you all were doing... Sorry for the noise.


On 11/10/20 1:23 PM, Michael ODonnell wrote:
>
> You can mess around with DST and such but this slightly sleazy hack
> might serve an alternative: find some way to get your "dumb" machine to
> tell your "smart" machine what time it thinks it is currently, and then
> force the smart machine to that time.  For example, if SSH works from
> the smart machine to the dumb one then from the smart one you might say:
>
>   % x="$(ssh dumb date)"
>   % sudo date --set="$x"
>
> ...or some variation on that theme.
>
>--M
>
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


How does Linux handle DST/ST? It's all about time...

2020-11-10 Thread Bruce Labitt
Still looking at a time related bug.  Wondering how (nowadays) linux 
handles TZ and DST/ST transition.

Does linux embed DST state into TZ?  Or is there a variable with a name 
like "DST"?

I want to set two different machines, one a PC, the other a dumb 
instrument to the same time.  The dumb one doesn't do time adjustment.  
My PC, obviously does.

I want to create a time, on my PC called: mytime = utc + utc_offset.  If 
utc_offset is invariant, then I am all set.  All I can see is (at least 
for ubuntu) there is TZ, which appears to be equal to utc_offset + DST*1.

If there was a variable called DST, I'd be done.  Then mytime = utc + 
utc_offset -DST*1, where DST=1 if now is daylight savings time, or DST=0 
if now is standard time.

Anyone have insight on this?  All I know is that my dumb machine was set 
to DST last month.  My PC is DST corrected.  Half the time, (all during 
ST) the device clock is ahead of the PC clock. Why is this bad - because 
the PC refuses to read files timestamped in the future.  I do not want 
to correct the dumb machine's clock twice a year, as there is documented 
potential for destroying data.  This should be correctable on the PC 
end, shouldn't it?
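
For the record, a minimal sketch of getting at both pieces on the PC side in 
Python (the DST flag the C library exposes is tm_isdst):

#!/usr/bin/env python3
# Current local UTC offset (DST included) and the DST flag itself.
import time
from datetime import datetime, timezone

now = datetime.now(timezone.utc).astimezone()   # local time with its UTC offset attached
print(now.utcoffset())             # offset from UTC as a timedelta (5 hours behind UTC under EST)
print(time.localtime().tm_isdst)   # 1 while DST is in effect, 0 otherwise, -1 if unknown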

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Simple git question

2020-11-04 Thread Bruce Labitt

Thanks all.  I found that the simple git pull did what I needed it to do.

Built an experimental version of an experimental version...  If anyone 
is interested I'm trying to hunt down a weird DST/ST time bug.  These 
are a bear.  It's really amazing how often people get this wrong.  And 
quite annoying when trying to sync up different machines running 
different software each with entirely different approaches to time shift 
(all of which were actually wrong BTW).  At one point the different 
devices were 2 hours apart, when they should have been within seconds.


Simple manifestation was a failed file read because the time stamp was 
allegedly in the future.  (The time stamp was not in the future!)  Only 
happened once to me and had no tools in place to find it.  Wouldn't you 
know it, error failed to repeat.  If I hadn't saved the debug log, I 
couldn't even prove it happened once.


Not actually thinking I'll trap the error, but if it happens again maybe 
I can learn a little more how to fix it.


Oh and thanks for the list below, very helpful.

On 11/4/20 5:06 PM, Bill Ricker wrote:

Dan's way is as good as any.
(Could also commit to the local branch instead of stashing, which 
would let you diff against your config tweaks.)


I find that understanding what Git is doing really helps me figure out 
what i want to do. My preferred intro for this is


  * Git from the inside out

My other strategic bookmarks -

  * Git - Book
  * Git - autocomplete
  * Specify an SSH key for git push for a given domain - Stack Overflow
  * rename git branch locally and remotely · GitHub
  * The Universe of Discourse : How to recover lost files added to Git but not committed
  * Git 2.5, including multiple worktrees and triangular workflows
  * Difference between git reset soft, mixed and hard
  * git revert-a-faulty-merge
  * 10 Common Git Problems and How to Fix Them – citizen428.blog
    [so good i bookmarked it twice, 2nd location Codementor]
  * git admin: An alias for running git commands as a privileged SSH identity – Noam Lewis





___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Simple git question

2020-11-04 Thread Bruce Labitt
Guys & Gals, sorry for the elementary question.

I have cloned a project that I am interested in.  Along the way, I fiddled 
with settings, mostly debug, but none of my changes are important.  I've 
built the project and am using it.  I'd like to re-download it from the 
repo again, abandoning any changes that I have made.  A different person 
has merged some changes and I want to try them.

What is the best way to accomplish this?  My head spins with all the 
pushing and pulling.  For some reason I am loath to nuke everything and 
start over again.  Is there a slightly more graceful way to do this in 
git without nuking?

github's man area is not illuminating, instead they talk about all sorts 
of corner case things rather than something this basic.

Thanks all.
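
For the record, two common recipes, depending on whether the local tweaks 
are worth keeping (this assumes the remote is named origin and the branch 
is master; adjust to match the repo):

$ git stash        # park the local edits out of the way (git stash drop discards them later)
$ git pull         # fetch and merge the new upstream commits

$ git fetch origin                  # or, to throw the local edits away outright
$ git reset --hard origin/master    # and make the working tree match the remote exactly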

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How was the get-together?

2020-02-24 Thread Bruce Labitt
As much as I enjoy eating out, the venue was too noisy for a meeting.  
If there's a meeting, it would be good to be able to hear folks.  It was 
difficult to hear people talking only 2 or 3 people away.  My 2 cents.  
What I was able to hear was both interesting and informative, and 
occasionally amusing :)

On 2/24/20 2:57 PM, Ben Scott wrote:
> On Fri, Feb 21, 2020 at 10:00 AM Ken D'Ambrosio  wrote:
>> Hey, all.  I'm deeply, deeply sorry I missed the fun.  Tow truck finally
>> got me to Amherst around 7:00, and I still had to walk home from the
>> shop.  But enough about me: I'm curious how things went!  Was a good
>> time had by all?
>Everyone was so devastated by your inability to attend, they all
> left after learning of the news.
>
>> Should we consider getting together again on a regular
>> (probably quarterly) basis, maybe with an actual agenda, etc.?
>My personal opinion (and not that of any other person, organization,
> or entity) has long been that regular meetings should come before
> formal meetings.  It seems like people get caught up in the desire for
> topics or speakers or other formalism, and seeing an inability to
> sustain such, give up.  My thought is that if a community is built and
> nourished, things like topics and speakers will follow naturally, as
> people discuss, discover, and want to do more.  But if there is no
> community, the opportunity for that synthesis is greatly diminished.
> (Others have theorized that a lack of formal structure means there is
> nothing to build on.  So maybe I'm wrong.)
>
>So I would suggest picking a date and recurrence interval and
> getting that going.
>
>Perhaps at the next meeting, the question of topics of interest
> could be the discussed.  (See?  Already the synthesis occurs.)
>
>One concern I do have is: It is often difficult to hear and be heard
> in a restaurant venue.  It certainly was the other night.  At the same
> time, it seems like food and drink are an appealing aspect for many.
> I know in the past, venues with a quiet corner or room, such that the
> celebration and the discussion could be colocated, or relocated to
> with a short walk, were sought, with some success.  Perhaps that is
> still a possibility?
>
> -- Ben
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Linking problem

2019-09-13 Thread Bruce Labitt
Puzzling over the use of ldconfig.  As I understand it ldconfig can be 
used to rebuild/locate all the shared libraries.  It looks in ld.so.conf 
for the directories to use. In my case ld.so.conf has one line in it:

"include /etc/ld.so.conf.d/*.conf"

I have 3 conf files in ld.so.conf.d.

libc.conf:

     /usr/local/lib

x86_64-linux-gnu.conf:

     /lib/x86_64-linux-gnu

     /usr/lib/x86_64-linux-gnu

i386-linux-gnu.conf:

     /lib/i386-linux-gnu

     /usr/lib/i386-linux-gnu

If I $ sudo rm /etc/ld.so.cache and $ sudo ldconfig -v, I get the message

     /sbin/ldconfig.real: Path `/lib/x86_64-linux-gnu' given more than once
     /sbin/ldconfig.real: Path `/usr/lib/x86_64-linux-gnu' given more 
than once

Why would this happen?  Is this ok?

I haven't gotten to my actual question yet, but this is puzzling me.  
Ubuntu 18.04 LTS, if this matters.  I'm trying to figure out if things 
are ok enough to ask why the linker can't find a file, even though I see 
it in ldconfig.  Maybe what I am asking is how to force a new 
configuration after deleting the ld.so.cache.
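
In case it helps the discussion, the checks I'm running boil down to 
something like this ("libfoo" and "myprog" are just placeholders for 
the real names):

$ sudo ldconfig                  # rebuild /etc/ld.so.cache from the ld.so.conf.d files
$ ldconfig -p | grep libfoo      # confirm the library actually landed in the cache
$ ldd ./myprog                   # see which shared objects the binary resolves, and from where

As far as I can tell, the "given more than once" lines are only 
warnings that the same directory was encountered twice while building 
the cache; the cache still gets rebuilt.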



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Arduino question?

2019-05-04 Thread Bruce Labitt

Hi Paul,

Based on both your suggestion and a few of Bill Freeman's, I found the 
problem.  It basically was a problem with the scope of the #ifndef 
statement.  The problem was that if moreutils was compiled prior to 
RunningMedian, then RunningMedian_h was defined.  Due to an error in 
placement of the #endif statement in RunningMedian.h, the whole body of 
the header file was not read in.  I changed the location of the #endif 
statement in RunningMedian.h so that the body of the header would be 
read.  Problem solved.
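
In case it saves someone else the head-scratching, the pattern that 
works is for each header to carry its own guard, instead of wrapping 
the #include in the included file's guard macro.  Roughly (the contents 
here are just illustrative):

// moreutils.h
#ifndef MOREUTILS_H            // guard this header with its *own* symbol
#define MOREUTILS_H

#include "RunningMedian.h"     // RunningMedian.h keeps its own RunningMedian_h guard

void doMedian(float abuf[], float runmed[], int medianlength);

#endif  // MOREUTILS_H

(If I understand the failure right, defining RunningMedian_h in 
moreutils.h before the include is what was making the guard inside 
RunningMedian.h skip its own body.)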


So, it was simple... once it was pointed out.  Thanks to you both.

I'll still be coming up to the MakerSpace on Monday.  I'd like to check 
out the setup.  There is one in Nashua, just wondering how the 
Manchester one compares.


Regards,
Bruce

On 5/3/19 7:34 PM, Bruce Labitt wrote:

Hi Paul,

It seems this is a variant of header hell that I am stewing in...  At 
the moment the project is not up on github, as I was under some sort 
of delusion that this might eventually have some commercial value.  
I'm not so sure about that, but I have put a lot of effort into it so 
far...


Arduino does indeed do some funky stuff.  If I could figure it out, 
I'd probably just write my own scripts to have things compile in the 
order I want.  A makefile isn't hard.  I just don't know all the crazy 
locations where Arduino stuff is stashed.  Suppose I could poke about.


Thanks for the tip on going through the #ifndef's.  It's got to be in 
that neighborhood.  I put in the #ifndef stuff to prevent multiple 
inclusions, which the Arduino 1.8.9 IDE does not seem to like - the 
compiler squawked at me when I tried that.  What is puzzling me is that 
I get "nearly" the same error whether I put #include "RunningMedian.h" 
in the file or comment it out.


WITH include:
#include "moreutils.h"
*#include "RunningMedian.h"*
/home/bruce/Arduino/adcdmafftM4bruce/adcdmafft/adcdmafftm4/moreutils.ino: 
In function 'void doMedian(float*, float*, int)':

moreutils:14:3: error: 'RunningMedian' was not declared in this scope
   RunningMedian samples = RunningMedian(medianlength);
   ^
moreutils:14:17: error: expected ';' before 'samples'
   RunningMedian samples = RunningMedian(medianlength);
 ^
moreutils:16:5: error: 'samples' was not declared in this scope
 samples.add(abuf[i]);
 ^
exit status 1
'RunningMedian' was not declared in this scope

WITHOUT include:
#include "moreutils.h"
/home/bruce/Arduino/adcdmafftM4bruce/adcdmafft/adcdmafftm4/moreutils.ino: 
In function 'void doMedian(float*, float*, int)':

moreutils:14:3: error: 'RunningMedian' was not declared in this scope
   RunningMedian samples = RunningMedian(medianlength);
   ^
moreutils:14:17: error: expected ';' before 'samples'
   RunningMedian samples = RunningMedian(medianlength);
 ^
moreutils:16:5: error: 'samples' was not declared in this scope
 samples.add(abuf[i]);
 ^
exit status 1
'RunningMedian' was not declared in this scope

In both cases, the top of the file states #include moreutils.h.  And 
inside of moreutils.h there is:

#ifndef RunningMedian_h
#define RunningMedian_h
#include "RunningMedian.h"
...
#endif

It's got to be simple, it's got to be simple,...  Wish I could see it 
what it was...


I think, I'll come up on Monday.  I haven't been to the Manchester 
Maker Space before.


-Bruce

On 5/3/19 6:50 PM, Paul Beaudet wrote:
The compiler is basically telling you the library is not imported or 
has yet to be imported for whatever reason.


The Arduinoy parts of the compilation process do some weird things 
behind the scenes to reorder the code before compiling so that it 
actually makes sense. Done with extra .ino files not named the same 
as the project and functions after the main loop in said 
project_name.ino file. Its possible for an Arduino developer/tinkerer 
to declare a function after the two primary loops that use those 
exact functions before actual deceleration because of this behavior. 
Which breaks some expectations of hardened engineers I think. All in 
the interest of catering to new people that would probably be 
frustrated by a strict order of operation. If this behaviour 
rearranges library functions before the "#include ", 
that might give an unexpected result, but that doesn't seem to be 
what's going on here. Under the hood strict order of operations still 
exist. Arduino does ultimately use the gcc compiler, though maybe its 
configured for that switcharoo magic, I didn't write the code, I've 
just fallen into the traps.


Extra .ino files in the same folder will be arranged before your main 
.ino file, but it could be right before setup. Not sure, its been a 
while since I

Re: Arduino question?

2019-05-03 Thread Bruce Labitt
n logic runs its 
course. Why do you need to prevent multiple instances from being 
called? #import should only call one instance, right? Also, spelling, 
that's normally my issue.. haha


If you are still having trouble after spending more time with it, feel 
free to stop by the Manchester Makerspace Monday during open house 
(6pm-8pm). I'll likely be in giving tours, I can take a look with you 
when tours simmer down.


Cheers,
Paul Beaudet

On Fri, May 3, 2019 at 4:57 PM Bruce Labitt 
<bruce.lab...@myfairpoint.net> 
wrote:


Can I ask an Arduino/C/C++ question here?  If not, where is a decent
place to ask?  Full code is just under 50KB (unzipped).

It's a "Variable was not declared in this scope" problem.
Basically, I'm
in over my head at the moment.  I'm not a good structured
programmer -
so let's get that out of the way.  I'm a hack, in the worst sense...

Everything was working... when I had a huge file.  I then decided,
wow,
this is a mess, let's break this up a bit into modules, so that it is 
more supportable and debug-able (for myself).  If anyone is remotely
interested, it is a homebrew radar based chronograph.  I've got
most of
the pieces working (or at least it worked before I recently busted
things).  The 100KHz sampling using DMA, the ping pong floating
point 1K
FFT's running in 'real' time, and some display stuff. Separately, I
have a live update of a tft screen (320x240) running with the FFT
output.  I'm running on an ARM M4F processor, but using the Arduino
IDE.  The Arduino way of doing things is a little confusing to me,
to be
honest.  It hides a lot of things.

Ok, here is the error.

/home/bruce/Arduino/adcdmafftM4bruce/adcdmafft/adcdmafftm4/moreutils.ino:

In function 'void doMedian(float*, float*, int)':
moreutils:14:3: error: 'RunningMedian' was not declared in this scope
    RunningMedian samples = RunningMedian(medianlength);
    ^
moreutils:14:17: error: expected ';' before 'samples'
    RunningMedian samples = RunningMedian(medianlength);
  ^
moreutils:16:5: error: 'samples' was not declared in this scope
  samples.add(abuf[i]);
  ^
exit status 1
'RunningMedian' was not declared in this scope

The code in moreutils.ino is:

// additional processing
#include "moreutils.h"

void doMedian( float abuf[], float runmed[], int medianlength) {  //
needs work!

   RunningMedian samples = RunningMedian(medianlength);
   for (int i=0; i< FFT_SIZE/2; i++) {
 samples.add(abuf[i]);
 if (i>medianlength-1) {
   runmed[i-medianlength] = samples.getMedian();
   // don't put value until the circ buffer is filled
 }
   }
   for (int i= (FFT_SIZE/2 -medianlength-7); i< (FFT_SIZE/2); i++) {
 runmed[i] = runmed[FFT_SIZE/4];  // hack for now
 // at tail end of median there are some bizarre numbers. root
cause has not been
 // determined, so we just fill the last samples from
'something close'
   }
}

Inside of moreutils.h, is #include RunningMedian.h with an
#ifndef/#define/#include statement, to prevent multiple includes
of the
same file (RunningMedian.h).

I'm really kind of confused as to where I need to do the declaration.

In

https://github.com/RobTillaart/Arduino/blob/master/libraries/RunningMedian/examples/RunningMedian/RunningMedian.ino

the declaration is simply done prior to setup.  Snippet below

#include <RunningMedian.h>

RunningMedian samples = RunningMedian(5);

RunningMedian samples2 = RunningMedian(9);

void setup() {

...

}

void loop() {

use samples here...

}

I'm sure this is trivial for most of you - but I'm both perplexed and
stuck.  If one of you kind souls could help me, I'd greatly
appreciate
it.  I'd even travel to see someone if that would work out better.

TIA, Bruce

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org <mailto:gnhlug-discuss@mail.gnhlug.org>
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Arduino question?

2019-05-03 Thread Bruce Labitt
Can I ask an Arduino/C/C++ question here?  If not, where is a decent 
place to ask?  Full code is just under 50KB (unzipped).

It's a "Variable was not declared in this scope" problem. Basically, I'm 
in over my head at the moment.  I'm not a good structured programmer - 
so let's get that out of the way.  I'm a hack, in the worst sense...

Everything was working... when I had a huge file.  I then decided, wow, 
this is a mess, let's break this up a bit into modules, so that it is 
more supportable and debug-able (for myself).  If anyone is remotely 
interested, it is a homebrew radar based chronograph.  I've got most of 
the pieces working (or at least it worked before I recently busted 
things).  The 100KHz sampling using DMA, the ping pong floating point 1K 
FFT's running in 'real' time, and some display stuff.  Separately, I 
have a live update of a tft screen (320x240) running with the FFT 
output.  I'm running on an ARM M4F processor, but using the Arduino 
IDE.  The Arduino way of doing things is a little confusing to me, to be 
honest.  It hides a lot of things.

Ok, here is the error.

/home/bruce/Arduino/adcdmafftM4bruce/adcdmafft/adcdmafftm4/moreutils.ino: 
In function 'void doMedian(float*, float*, int)':
moreutils:14:3: error: 'RunningMedian' was not declared in this scope
    RunningMedian samples = RunningMedian(medianlength);
    ^
moreutils:14:17: error: expected ';' before 'samples'
    RunningMedian samples = RunningMedian(medianlength);
  ^
moreutils:16:5: error: 'samples' was not declared in this scope
  samples.add(abuf[i]);
  ^
exit status 1
'RunningMedian' was not declared in this scope

The code in moreutils.ino is:

// additional processing
#include "moreutils.h"

void doMedian( float abuf[], float runmed[], int medianlength) {  // 
needs work!

   RunningMedian samples = RunningMedian(medianlength);
   for (int i=0; i< FFT_SIZE/2; i++) {
     samples.add(abuf[i]);
     if (i>medianlength-1) {
   runmed[i-medianlength] = samples.getMedian();
   // don't put value until the circ buffer is filled
     }
   }
   for (int i= (FFT_SIZE/2 -medianlength-7); i< (FFT_SIZE/2); i++) {
     runmed[i] = runmed[FFT_SIZE/4];  // hack for now
     // at tail end of median there are some bizarre numbers.  root 
cause has not been
     // determined, so we just fill the last samples from 'something close'
   }
}

Inside of moreutils.h, is #include RunningMedian.h with an 
#ifndef/#define/#include statement, to prevent multiple includes of the 
same file (RunningMedian.h).

I'm really kind of confused as to where I need to do the declaration.

In 
https://github.com/RobTillaart/Arduino/blob/master/libraries/RunningMedian/examples/RunningMedian/RunningMedian.ino
 
the declaration is simply done prior to setup.  Snippet below

#include <RunningMedian.h>

RunningMedian samples = RunningMedian(5);

RunningMedian samples2 = RunningMedian(9);

void setup() {

...

}

void loop() {

use samples here...

}

I'm sure this is trivial for most of you - but I'm both perplexed and 
stuck.  If one of you kind souls could help me, I'd greatly appreciate 
it.  I'd even travel to see someone if that would work out better.

TIA, Bruce

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


truncated: Recommendations on cloning a bootable main disk Now: grub and mounting said disk

2018-12-02 Thread Bruce Labitt

Hi Jerry,

I'm clearly seeing the merit of your approach by now.  Next time, I 
think I'll do it that way.


Since I'm so deep into this (spent way too much time already), I'd like 
to complete the process. I've learned about gparted (how to use it 
successfully) and now hopefully grub.  Found the ppa for grub-customizer 
and installed it.


I see that grub was set up to have 0 seconds delay, so that it was not 
possible to intervene.  So it seems that the menu should both be made 
visible and the timeout set to 10 or more seconds, at least for now.  
Does that make sense?
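
(For the record, on Ubuntu those settings live in /etc/default/grub, 
and a change along these lines - the values are just my guess at 
something sensible - takes effect after update-grub:

    GRUB_TIMEOUT=10
    GRUB_TIMEOUT_STYLE=menu      # show the menu instead of hiding it

$ sudo update-grub               # regenerates /boot/grub/grub.cfg

The catch for my situation is that this has to be done against the 
clone's /etc/default/grub, not the running disk's, which is part of 
what I'm trying to sort out below.)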


Grub customizer seems to be referring to the active disk (which is not 
the one I want to change).  How do I get it to refer to the other disk?  
I will be doing a physical disk change, and still am holding up hopes to 
not screw up the known good disk.


Actually, sdc (it's actually changed, but let's keep it consistent for 
the whole thread) isn't mounted, and it seems I'm having issues mounting 
it correctly.


From "man mount", I see that if the device isn't in fstab, one does 
"mount /dev/sdc1 ..."  (I'm not sure what goes in ... )
However, what is the dir that is referred to: one on the host machine, 
or one on sdc1?  If I want full access to sdc1, I would do


$ sudo mount -t ext4 /dev/sdc1 / (really sdb1 now, as seen below)

If I do this I get:

$ lsblk --fs
NAME   FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 ext4 6fb59d06-ec00-44f4-abcc-0da0f018be93 /
├─sda2
└─sda5 swap 616eaa19-299e-479b-8dcf-dfc36593f63a [SWAP]
sdb
├─sdb1 ext4 6fb59d06-ec00-44f4-abcc-0da0f018be93 / <--- this is the disk 
I attempted to mount

├─sdb2
└─sdb5 swap 38689ed0-1d07-416d-bd53-8dfb22554b3f
sdc
└─sdc1 ntfs A2DA53E7DA53B5EF
sr0
sdc and sdb have swapped.

Sorry for my confusion, but this stuff isn't obvious to me.  I seem to 
be missing a couple of ideas that tie this all together.


I cannot see the drive show up in Files.  All I see is sda (boot disk) 
and sdc (a data disk).


How do I mount this disk?  It seems like this is a necessary step, is it 
not?
If I can't mount it, then how can I modify grub on it?  Or for that 
matter do a new system install on it.
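
(Partially answering myself after more reading: the second argument to 
mount is just an empty directory on the running system where the other 
disk gets attached - not "/".  A sketch, assuming the clone is still 
showing up as /dev/sdb1:

$ sudo mkdir -p /mnt/clone
$ sudo mount /dev/sdb1 /mnt/clone    # attach the clone's root filesystem here
$ ls /mnt/clone                      # should show its etc, home, boot, ...
$ sudo umount /mnt/clone             # detach when finished

To work on grub for the clone, the usual route appears to be 
bind-mounting /dev, /proc and /sys under /mnt/clone, chrooting in, and 
running update-grub / grub-install from inside, rather than pointing 
Grub Customizer at the running disk.  One more caveat from the lsblk 
output above: the dd copy left both drives with the same filesystem 
UUID, so changing it on the clone while it is unmounted, e.g. 
"sudo tune2fs -U random /dev/sdb1", may avoid confusion with UUID-based 
fstab and grub entries.)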


For now, I have not made any changes in Grub Customizer, and I have 
unmounted sdb1.


Wow, this has been messy so far, and compounded by my lack of expertise 
in the area.


Bruce

Sent from the very machine I'm trying to fix...

On 12/2/18 11:50 AM, Jerry Feldman wrote:

Grub will still point to the old one.
The way I prefer to do it is to install a fresh os onto the new drive 
and copy /home and possibly /usr/local. But everyone has an individual 
setup. There is a grub utility, Grub Customizer. I use this when I set 
up triple boot.



Sent from Galaxy S9+

Jerry Feldman <gaf.li...@gmail.com>
Boston Linux and Unix
http://www.blu.org
PGP key id: 6F6BB6E7
PGP Key fingerprint: 0EDC 2FF5 53A6 8EED 84D1  3050 5715 B88D 6F6B B6E7



Had to truncate, as the message size grew too large.
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Recommendations on cloning a bootable main disk

2018-12-02 Thread Bruce Labitt
So after changing the boot order to boot from the new drive first, BIOS 
had to fall back to booting from the smaller (original) drive.  I had no 
warning or display; the BIOS switched over "silently".


What is the best way of diagnosing this and proceeding?  I presume 
something needs to be done to grub on the new disk.


Since I had done a dd copy, wouldn't grub already be there?


On 12/2/18 10:47 AM, Bruce Labitt wrote:
Thanks Dan!  That did the trick. Doing it the way you suggested 
worked, vs. the way in the video (which did not work).

Your way was simple and with no error.

Thanks everyone for your help.
Hopefully it will boot, and then I'll replace the disk in the laptop 
with this one.


Bruce

On 12/2/18 9:40 AM, Dan Jenkins wrote:
Booting from sda should be fine. I just wanted to make sure you were 
not resizing from a live file system, which, while it can work 
sometimes, is problematic many times.


The 1MiB at the end of the drive appears to be related to GPT. If you 
aren't using GPT, shouldn't be an issue. With a 1 TB drive, you don't 
need to use GPT, in any event, though you could choose to. I have 
also seen such fragments of unallocated space, which appear to have 
been created due to partition alignment issues. I have never needed 
to leave such space available. Your partitioning tool may leave such 
space available, again, due to alignment issues.


On 12/2/2018 9:32 AM, Bruce.Labitt wrote:
I'm booting on to sda, not sdc.  sda is a 240GB SSD.  sdc is not 
active and hasn't been mounted.  sdc is a 1TB drive.  When sdc is 
finally sorted out, I will physically remove sda (240) from my 
laptop and install sdc (1T).  ( The bigger sdc drive probably will 
turn into sda! ).


Just to make this explicit, the sdc drive is connected to the laptop 
via a USB3/SATA adapter.  I haven't opened up the laptop yet.


If you think I should boot from a USB Ubuntu flash drive I can do 
that as well.


Thanks for the tips on gparted.
Do I need to allocate 1MiB at the end of the drive?  I'm reading 
conflicting requirements on this.


I will try your suggestions and will report back.

Sent from Blue <http://www.bluemail.me/r?b=14063>
On Dec 2, 2018, at 8:42 AM, Dan Jenkins <d...@rastech.com> wrote:


First, you are running GParted from a bootable flash drive, not
from booting off the new sdc, correct?

I have had issues, in a few instances, with GParted, when taking
multiple steps at once.
Rather than do all the steps at once, I would do one step at a
time.
Apply it and let it complete.
Then do the next step.
GParted often works fine with multiple steps, except when it
doesn't. :-)

Further, you don't actually need to move the swap partition,
just recreate it in its final position.
That would save time, but doesn't explain the error.

These are the steps I would use, if I was doing it:
1. Delete the swap partition (sdc5)
2. Delete the extended partition (sdc2)
3. Apply steps 1 & 2.
4. Resize the data partition (sdc1), leaving 30 GB unallocated
at the end.
5. Apply step 4.
6. Create an extended partition in that 30 GB unallocated space.
7. Create a 30 GB swap partition in that new extended partition.
8. Apply steps 6 & 7.

On 12/1/2018 9:05 PM, Bruce Labitt wrote:

Thanks for the instructions on the BIOS - umm, nothing was
wrong. Having the USB stick prior to entering the BIOS made the
device show up.

OK, dd'd the disk.  Took a long time, 94 minutes, but
everything is transferred, except for this email.

Next is to resize in gparted - which didn't complete.
I followed a youtube video at
https://www.youtube.com/watch?v=cDgUwWkvuIY

Just to note, *sdc has never been mounted. *

The video is done in a virtual machine, but I followed the part
showing how to do the resizing.  The linux-swap was turned off.
The error is as follows:

GParted 0.30.0 --enable-libparted-dmraid --enable-online-resize

Libparted 3.2

*Grow /dev/sdc2 from 29.99 GiB to 723.03 GiB*  00:00:00(
ERROR )

calibrate /dev/sdc2  00:00:00( SUCCESS )

/path: /dev/sdc2 (partition)
start: 437226563
end: 500118191
size: 62891629 (29.99 GiB)/

grow partition from 29.99 GiB to 723.03 GiB  00:00:00( ERROR )

/old start: 437226563
old end: 500118191
old size: 62891629 (29.99 GiB)/

/requested start: 437226563
requested end: 1953523711
requested size: 1516297149 (723.03 GiB)/

libparted messages( INFO )

/Unable to satisfy all constraints on the partition./



*Move /dev/sdc5 to the right and grow it from 29.99 GiB to
29.99 GiB*



*Move /dev/sdc2 to the right and shrink it from 723.03 G

Re: Recommendations on cloning a bootable main disk

2018-12-02 Thread Bruce Labitt
Thanks Dan!  That did the trick.  Doing it the way you suggested worked, 
vs. the way in the video (which did not work).

Your way was simple and with no error.

Thanks everyone for your help.
Hopefully it will boot, and then I'll replace the disk in the laptop 
with this one.


Bruce

On 12/2/18 9:40 AM, Dan Jenkins wrote:
Booting from sda should be fine. I just wanted to make sure you were 
not resizing from a live file system, which, while it can work 
sometimes, is problematic many times.


The 1MiB at the end of the drive appears to be related to GPT. If you 
aren't using GPT, shouldn't be an issue. With a 1 TB drive, you don't 
need to use GPT, in any event, though you could choose to. I have also 
seen such fragments of unallocated space. which appear to have been 
created due to partition alignment issues. I have never needed to 
leave such space available. Your partitioning tool may leave such 
space available, again, due to alignment issues.


On 12/2/2018 9:32 AM, Bruce.Labitt wrote:
I'm booting on to sda, not sdc.  sda is a 240GB SSD.  sdc is not 
active and hasn't been mounted.  sdc is a 1TB drive.  When sdc is 
finally sorted out, I will physically remove sda (240) from my laptop 
and install sdc (1T).  ( The bigger sdc drive probably will turn into 
sda! ).


Just to make this explicit, the sdc drive is connected to the laptop 
via a USB3/SATA adapter.  I haven't opened up the laptop yet.


If you think I should boot from a USB Ubuntu flash drive I can do 
that as well.


Thanks for the tips on gparted.
Do I need to allocate 1MiB at the end of the drive?  I'm reading 
conflicting requirements on this.


I will try your suggestions and will report back.

Sent from Blue <http://www.bluemail.me/r?b=14063>
On Dec 2, 2018, at 8:42 AM, Dan Jenkins <d...@rastech.com> wrote:


First, you are running GParted from a bootable flash drive, not
from booting off the new sdc, correct?

I have had issues, in a few instances, with GParted, when taking
multiple steps at once.
Rather than do all the steps at once, I would do one step at a time.
Apply it and let it complete.
Then do the next step.
GParted often works fine with multiple steps, except when it
doesn't. :-)

Further, you don't actually need to move the swap partition, just
recreate it in its final position.
That would save time, but doesn't explain the error.

These are the steps I would use, if I was doing it:
1. Delete the swap partition (sdc5)
2. Delete the extended partition (sdc2)
3. Apply steps 1 & 2.
4. Resize the data partition (sdc1), leaving 30 GB unallocated at
the end.
5. Apply step 4.
6. Create an extended partition in that 30 GB unallocated space.
7. Create a 30 GB swap partition in that new extended partition.
8. Apply steps 6 & 7.

On 12/1/2018 9:05 PM, Bruce Labitt wrote:

Thanks for the instructions on the BIOS - umm, nothing was
wrong.  Having the USB stick prior to entering the BIOS made the
device show up.

OK, dd'd the disk.  Took a long time, 94 minutes, but everything
is transferred, except for this email.

Next is to resize in gparted - which didn't complete.
I followed a youtube video at
https://www.youtube.com/watch?v=cDgUwWkvuIY

Just to note, *sdc has never been mounted. *

The video is done in a virtual machine, but I followed the part
showing how to do the resizing.  The linux-swap was turned off. 
The error is as follows:

GParted 0.30.0 --enable-libparted-dmraid --enable-online-resize

Libparted 3.2

*Grow /dev/sdc2 from 29.99 GiB to 723.03 GiB*  00:00:00(
ERROR )

calibrate /dev/sdc2  00:00:00( SUCCESS )

/path: /dev/sdc2 (partition)
start: 437226563
end: 500118191
size: 62891629 (29.99 GiB)/

grow partition from 29.99 GiB to 723.03 GiB  00:00:00( ERROR )

/old start: 437226563
old end: 500118191
old size: 62891629 (29.99 GiB)/

/requested start: 437226563
requested end: 1953523711
requested size: 1516297149 (723.03 GiB)/

libparted messages( INFO )

/Unable to satisfy all constraints on the partition./



*Move /dev/sdc5 to the right and grow it from 29.99 GiB to 29.99
GiB*



*Move /dev/sdc2 to the right and shrink it from 723.03 GiB to
29.99 GiB*



*Grow /dev/sdc1 from 208.48 GiB to 901.52 GiB*



/dev/sdc1 is ext4 and what I want extended  208.48 GiB
/dev/sdc2 is the extended partition                  29.99 GiB
/dev/sdc5 is the linux swap which was turned off 29.99 GiB and
was inside the extended partition
unallocated was 693.04 GiB

P

Re: Recommendations on cloning a bootable main disk

2018-12-01 Thread Bruce Labitt
Thanks for the instructions on the BIOS - umm, nothing was wrong.  
Having the USB stick prior to entering the BIOS made the device show up.


OK, dd'd the disk.  Took a long time, 94 minutes, but everything is 
transferred, except for this email.


Next is to resize in gparted - which didn't complete.
I followed a youtube video at https://www.youtube.com/watch?v=cDgUwWkvuIY

Just to note, *sdc has never been mounted.*

The video is done in a virtual machine, but I followed the part showing 
how to do the resizing.  The linux-swap was turned off.  The error is as 
follows:


GParted 0.30.0 --enable-libparted-dmraid --enable-online-resize

Libparted 3.2

*Grow /dev/sdc2 from 29.99 GiB to 723.03 GiB*  00:00:00( ERROR )

calibrate /dev/sdc2  00:00:00( SUCCESS )

/path: /dev/sdc2 (partition)
start: 437226563
end: 500118191
size: 62891629 (29.99 GiB)/

grow partition from 29.99 GiB to 723.03 GiB  00:00:00( ERROR )

/old start: 437226563
old end: 500118191
old size: 62891629 (29.99 GiB)/

/requested start: 437226563
requested end: 1953523711
requested size: 1516297149 (723.03 GiB)/

libparted messages( INFO )

/Unable to satisfy all constraints on the partition./



*Move /dev/sdc5 to the right and grow it from 29.99 GiB to 29.99 GiB*



*Move /dev/sdc2 to the right and shrink it from 723.03 GiB to 29.99 GiB*



*Grow /dev/sdc1 from 208.48 GiB to 901.52 GiB*



/dev/sdc1 is ext4 and what I want extended  208.48 GiB
/dev/sdc2 is the extended partition               29.99 GiB
/dev/sdc5 is the linux swap which was turned off 29.99 GiB and was 
inside the extended partition

unallocated was  693.04 GiB

Partitions were dragged and moved per the basic instructions.

Can you give me a hint what went wrong?  I'm kind of surprised that it 
failed, essentially in the first step, growing the extended partition 
after turning linux-swap off.


The problem might be that gparted still has a problem with leaving 1MiB 
at the end for the duplicate boot information.  I found a comment in 
2017 for gparted: http://gparted-forum.surf4.info/viewtopic.php?id=17646


And: https://bugzilla.gnome.org/show_bug.cgi?id=738144

Is there a practical work around to my reported error?

Thanks,
Bruce


On 12/1/18 4:39 PM, Dan Jenkins wrote:

On some of the BIOSes, unless you have the USB drive connected, before
you go into the BIOS, it will not appear as a boot option.

Also, depending on the USB flash drive model, it may appear:
1) as a removable device (aka a floppy drive),
2) a hard drive (appearing as second choice under hard disk drives;
     you would need to change the 1st drive to USB and the 2nd drive to
your current boot drive), or
3) as a CDROM drive.

Also, if you have a UEFI BIOS, you may need to switch it to Legacy,
instead of UEFI.

Lastly, if you have a UEFI BIOS, you need a UEFI compatible boot device.
In the case of Clonezilla, you need to download an AMD64 alternative
version (Ubuntu-based), rather than the default Debian-based. (We have
both the UEFI and Legacy versions of Clonezilla to try when we run into
such issues.)

And, rarely, I encounter computers that simply cannot boot USB flash
drives, but those tend to be much older ones.

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/




___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Recommendations on cloning a bootable main disk

2018-12-01 Thread Bruce Labitt
I forgot just how slow DVD burning can be.  No wonder hardly anyone 
uses them anymore.  USB3 is truly a great invention.


On 12/1/18 4:24 PM, Bruce Labitt wrote:
Totally unexpected fly in the ointment.  Option in BIOS to boot to USB 
has disappeared. Arggghh!


American Megatrends BIOS Version 2.15.1226 ca. 2012

There were 4 boot options - now there is only 3, and no (apparent) 
option to get a 4th option back.  I took a picture of the screen, but 
I won't clutter up the list.


I *will* have to fix the bios, or I won't be able to boot from USB.

Temp fix - just to get on with life, is to create a boot DVD.  Not 
even sure where I keep the DVDs anymore...  Fortunately, my laptop has 
a DVD writer.



I did visit the AMI website, but it wasn't obvious how or if I could 
upgrade the BIOS.  I sent them a tech support request, on upgrade, and 
in particular about the disappearing option, maybe I will get an 
answer next week.


At the moment, I'm not feeling very confident.  This all seemed easy a 
day ago; now it's gotten complicated...


Bruce


On 12/1/18 3:18 PM, Tom Buskey wrote:

Clonezilla is awesome for that.

On Sat, Dec 1, 2018 at 2:07 PM Dan Jenkins <d...@rastech.com> wrote:


We use Clonezilla off a bootable USB flash drive.

On December 1, 2018 12:51:34 PM EST, Bruce Labitt
<bruce.lab...@myfairpoint.net> wrote:

It's apparent that one uses a variant of dd.  What isn't apparent is how
one goes about cloning one's primary disk (active).  From searching it
appears it is not recommended to use dd when the disk is active, either
the source or the destination.

I'm trying to clone my nearly full SSD with the OS (Ubuntu 18.04) to a
new larger SSD.

Is there a tiny linux I can boot into that I can run dd from?  Or can I
make the main disk ro?  What do you suggest?

I have backed up home.  I really don't want to re-install the OS, since
I have had troubles with gdm3 screwing up (different topic).  (Black
screen, no consoles)

Recommendations/recipes on the cloning process sought.

I was simply going to use

# dd if=/dev/sda of=/dev/sdc bs=4096 conv=sync,noerror

I've seen lots of comments about block size and optimal setting, but I'm
not sure what is optimal if there are unknown (but few) source drive 
errors.

Thanks,

Bruce

gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org  <mailto:gnhlug-discuss@mail.gnhlug.org>
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
<mailto:gnhlug-discuss@mail.gnhlug.org>
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/




___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Recommendations on cloning a bootable main disk

2018-12-01 Thread Bruce Labitt
Totally unexpected fly in the ointment.  Option in BIOS to boot to USB 
has disappeared. Arggghh!


American Megatrends BIOS Version 2.15.1226 ca. 2012

There were 4 boot options - now there is only 3, and no (apparent) 
option to get a 4th option back.  I took a picture of the screen, but I 
won't clutter up the list.


I *will* have to fix the bios, or I won't be able to boot from USB.

Temp fix - just to get on with life, is to create a boot DVD.  Not even 
sure where I keep the DVDs anymore...  Fortunately, my laptop has a DVD 
writer.



I did visit the AMI website, but it wasn't obvious how or if I could 
upgrade the BIOS.  I sent them a tech support request, on upgrade, and 
in particular about the disappearing option, maybe I will get an answer 
next week.


At the moment, I'm not feeling very confident.  This all seemed easy a 
day ago; now it's gotten complicated...


Bruce


On 12/1/18 3:18 PM, Tom Buskey wrote:

Clonezilla is awesome for that.

On Sat, Dec 1, 2018 at 2:07 PM Dan Jenkins <d...@rastech.com> wrote:


We use Clonezilla off a bootable USB flash drive.

On December 1, 2018 12:51:34 PM EST, Bruce Labitt
<bruce.lab...@myfairpoint.net> wrote:

It's apparent that one uses a variant of dd.  What isn't apparent is how
one goes about cloning one's primary disk (active).  From searching it
appears it is not recommended to use dd when the disk is active, either
the source or the destination.

I'm trying to clone my nearly full SSD with the OS (Ubuntu 18.04) to a
new larger SSD.

Is there a tiny linux I can boot into that I can run dd from?  Or can I
make the main disk ro?  What do you suggest?

I have backed up home.  I really don't want to re-install the OS, since
I have had troubles with gdm3 screwing up (different topic).  (Black
screen, no consoles)

Recommendations/recipes on the cloning process sought.

I was simply going to use

# dd if=/dev/sda of=/dev/sdc bs=4096 conv=sync,noerror

I've seen lots of comments about block size and optimal setting, but I'm
not sure what is optimal if there are unknown (but few) source drive 
errors.

Thanks,

Bruce

gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org  <mailto:gnhlug-discuss@mail.gnhlug.org>
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org <mailto:gnhlug-discuss@mail.gnhlug.org>
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Recommendations on cloning a bootable main disk

2018-12-01 Thread Bruce Labitt

Thanks for everyone's suggestions! Sometimes one misses the obvious.

Bruce

On 12/1/18 1:50 PM, Mac wrote:

Umm...make a bootable usb stick, boot from that and do the dd?

On Sat, Dec 1, 2018 at 12:51 PM Bruce Labitt 
<bruce.lab...@myfairpoint.net> 
wrote:


It' apparent that one uses a variant of dd.  What isn't apparent
is how
one goes about cloning one's primary disk (active).  From
searching it
appears it is not recommended to use dd when the disk is active,
either
the source or the destination.

I'm trying to clone my nearly full SSD with the OS (Ubuntu 18.04)
to a
new larger SSD.

Is there a tiny linux I can boot into that I can run dd from? Or
can I
make the main disk ro?  What do you suggest?

I have backed up home.  I really don't want to re-install the OS,
since
I have had troubles with gdm3 screwing up (different topic). (Black
screen, no consoles)

Recommendations/recipes on the cloning process sought.

I was simply going to use

# dd if=/dev/sda of=/dev/sdc bs=4096 conv=sync,noerror

I've seen lots of comments about block size and optimal setting,
but I'm
not sure what is optimal if there are unknown (but few) source
drive errors.

Thanks,

Bruce


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org <mailto:gnhlug-discuss@mail.gnhlug.org>
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Recommendations on cloning a bootable main disk

2018-12-01 Thread Bruce Labitt
It's apparent that one uses a variant of dd.  What isn't apparent is how 
one goes about cloning one's primary disk (active).  From searching it 
appears it is not recommended to use dd when the disk is active, either 
the source or the destination.

I'm trying to clone my nearly full SSD with the OS (Ubuntu 18.04) to a 
new larger SSD.

Is there a tiny linux I can boot into that I can run dd from?  Or can I 
make the main disk ro?  What do you suggest?

I have backed up home.  I really don't want to re-install the OS, since 
I have had troubles with gdm3 screwing up (different topic).  (Black 
screen, no consoles)

Recommendations/recipes on the cloning process sought.

I was simply going to use

# dd if=/dev/sda of=/dev/sdc bs=4096 conv=sync,noerror

I've seen lots of comments about block size and optimal setting, but I'm 
not sure what is optimal if there are unknown (but few) source drive errors.
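
For reference, the rough shape of what I have in mind once I'm booted 
from something other than these two disks (device names below are 
examples only - I'll double-check with lsblk before running anything 
destructive):

$ lsblk -o NAME,SIZE,MODEL       # confirm which disk is the source and which the target
$ sudo dd if=/dev/sda of=/dev/sdc bs=1M conv=sync,noerror status=progress
$ sync                           # flush buffers before disconnecting anything

From what I've read, the exact block size matters much less than not 
swapping if= and of=; bs=1M or 4M is plenty.  And if the source really 
does have read errors, GNU ddrescue is supposed to be a better tool 
than plain dd.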

Thanks,

Bruce


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: symlink confusion

2015-11-14 Thread Bruce Labitt

On 11/14/2015 04:07 PM, Joshua Judson Rosen wrote:

Bruce Labitt  wrote:

Pardon my denseness (density?), but what you have shown is still
confusing to me.

ln -s thing-I-want-a-symlink-to where-I-want-to-put-it  <-- I don't
understand this :(

In my case, I want any reference to cc to point to
/opt/compiler_cuda/gcc.  It turns out /opt/compiler_cuda/gcc will be a
symlink as well.  Eventually the cc reference will end up pointing to
gcc-4.9, since CUDA7.5 does not support gcc5.

Is it
1)  ln -s cc /opt/compiler_cuda/gcc  or
2)  ln -s /opt/compiler_cuda/gcc cc

Which one does what I want?  Seriously confused.

I suspect you don't quite want either of those, actually.

It sounds a little like you're expecting "ln -s" to create a shell
command-alias or something (you never said *what directory* you want
to contain the "cc" symlink, but that's important!). It just creates a file.
In order for that file to be recognised as a command, you need to put it
in one of the directories that the shell searches ($PATH).

If you want to use a symlink to create a "cc" command, you probably want
either:

 ln -s /opt/compiler_cuda/gcc /usr/local/bin/cc

... or:

 ln -s /opt/compiler_cuda/gcc ~/bin/cc

... depending on whether you're doing this for a system-wide default
or just for yourself. But I'm surprised that you're trying to do it
this way at all.

I usually just do something more like:

 export CC=/opt/compiler_cuda/gcc

... and then let the makefiles pick up that environment-variable.
Even if your Makefile is using implicit rules, it'll still pick up
and use the ${CC} value from your environment.

Are you actually using a Makefile or something that actually,
*explicitly*, has "cc" hardcoded rather than using "${CC}"?

I'd expect that you don't actually want to make a CUDA compiler
the default compiler for *all software* you build,
which is probably what you'll do by naming it "cc"
and putting it into your search-path




Original link:  from 
http://askubuntu.com/questions/693145/installing-cuda-7-5-toolkit-on-ubuntu-15-10


I wanna share my experience on installing CUDA 7.5 (in order to use 
with Theano) on Ubuntu 15.10.

1.  I installed Ubuntu 15.10 and the video driver (352.41) from the 
    "Additional Drivers" tab;

2.  Installed few dependencies like nvidia-modprobe (fix permissions 
    problems), and for the samples compiling freeglut3-dev libx11-dev 
    libxmu-dev libxi-dev libglu1-mesa-dev

3.  And because it needs GCC 4.9: sudo apt-get install gcc-4.9 g++-4.9, 
    then made symlinks in /opt/compiler_cuda (created the folder with 
    an arbitrary name of my choice) as follows:

    $ ls -la /opt/compiler_cuda/
    lrwxrwxrwx 1 root root 22 Nov 2 16:14 cc -> /opt/compiler_cuda/gcc
    lrwxrwxrwx 1 root root 16 Nov 2 16:13 g++ -> /usr/bin/g++-4.9
    lrwxrwxrwx 1 root root 16 Nov 2 16:12 gcc -> /usr/bin/gcc-4.9

    Registered update-alternatives with:

    sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 60 --slave /usr/bin/g++ g++ /usr/bin/g++-5
    sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 50 --slave /usr/bin/g++ g++ /usr/bin/g++-4.9

4.  Downloaded the "runfile (local)" 15.04 version, from CUDA 7.5 
    Downloads <https://developer.nvidia.com/cuda-downloads>; and 
    installed with:

    sudo sh cuda_7.5.18_linux.run --silent --toolkit --override
    sudo sh cuda_7.5.18_linux.run --silent --samples --override

    and appended in .bash_aliases (.bashrc reads it):

    export PATH=/usr/local/cuda-7.5/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64:$LD_LIBRARY_PATH

5.  Appended compiler-bindir = /opt/compiler_cuda in nvcc.profile, so 
    nvcc can use it.


I'm trying to do step 3.  I have seen similar instructions in the past.  
I may have done something like this in the past, but, I am temporarily 
suffering from CRS.  I believe this will work, although as you have 
stated, it may not be optimal.  For Wily Ubuntu 15.10, the default gcc 
is 5.2.  Nvidia is stuck at 4.9, hence the update alternatives command 
in step 3.


Is there any reason step 3) won't work?

To make the symlink cc -> /opt/compiler_cuda/gcc, what is the command?

Does *ln -s  /usr/bin/cc /opt/compiler_cuda/gcc*  do what the author 
states above?  Or do I have it backwards?  Let's just say this is a 
dyslexic moment. Please confirm if this makes sense, or I should be 
doing something else.
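
If I'm reading the man page right, the general shape is 
"ln -s TARGET LINKNAME", so reproducing the listing from step 3 above 
would be something like (assuming /opt/compiler_cuda already exists):

$ sudo ln -s /usr/bin/gcc-4.9 /opt/compiler_cuda/gcc        # gcc -> /usr/bin/gcc-4.9
$ sudo ln -s /usr/bin/g++-4.9 /opt/compiler_cuda/g++        # g++ -> /usr/bin/g++-4.9
$ sudo ln -s /opt/compiler_cuda/gcc /opt/compiler_cuda/cc   # cc  -> /opt/compiler_cuda/gcc
$ ls -la /opt/compiler_cuda/                                # should match the listing above

which would mean my guess above has the arguments backwards (and the 
wrong target).  Corrections welcome.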


As far as I know, nvcc has the smarts to use nvcc to compile cuda code, 
and gcc/g++ for everything else.


Best regards
-Bruce


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: symlink confusion

2015-11-14 Thread Bruce Labitt

On 11/14/2015 12:36 PM, Kyle Smith wrote:

I like to remember it as:

ln -s thing-I-want-a-symlink-to where-I-want-to-put-it

It's helps to remember, too, the reason for the order is that the 
second option isn't required. It will put a symlink with the same base 
name in your current working directory without it.
On Sat, Nov 14, 2015 at 12:31 PM Bruce Labitt 
<bruce.lab...@myfairpoint.net> 
wrote:


Confused about this, so I'd like to ask, before I mess things up. 
I am attempting to follow the instructions on


http://askubuntu.com/questions/693145/installing-cuda-7-5-toolkit-on-ubuntu-15-10

I'd like to create a symbolic link from cc (which is a symlink) to
/opt/compiler_cuda/gcc

cc -> /opt/compiler_cuda/gcc

So the command should be:  sudo ln -s cc /opt/compiler_cuda/gcc
?  Or reverse the arguments?

Sorry about this primitive question, sometimes I get confused
about the order.  As I have found online, the description is
ln -s /path/to/file path/to/symlink.  However, this still confuses
me.  Which is which in my example?

Can someone enlighten me?  TIA.
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org <mailto:gnhlug-discuss@mail.gnhlug.org>
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/



Pardon my denseness (density?), but what you have shown is still 
confusing to me.


ln -s thing-I-want-a-symlink-to where-I-want-to-put-it  <-- I don't 
understand this :(


In my case, I want any reference to cc to point to 
/opt/compiler_cuda/gcc.  It turns out /opt/compiler_cuda/gcc will be a 
symlink as well.  Eventually the cc reference will end up pointing to 
gcc-4.9, since CUDA7.5 does not support gcc5.


Is it
1)  ln -s cc /opt/compiler_cuda/gcc  or
2)  ln -s /opt/compiler_cuda/gcc cc

Which one does what I want?  Seriously confused.

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


symlink confusion

2015-11-14 Thread Bruce Labitt
Confused about this, so I'd like to ask, before I mess things up.  I am 
attempting to follow the instructions on

http://askubuntu.com/questions/693145/installing-cuda-7-5-toolkit-on-ubuntu-15-10

I'd like to create a symbolic link from cc (which is a symlink) to 
/opt/compiler_cuda/gcc


cc -> /opt/compiler_cuda/gcc

So the command should be:  sudo ln -s cc /opt/compiler_cuda/gcc ?  Or 
reverse the arguments?


Sorry about this primitive question, sometimes I get confused about the 
order.  As I have found online, the description is
ln -s /path/to/file path/to/symlink.  However, this still confuses me.  
Which is which in my example?


Can someone enlighten me?  TIA.
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: SDHC card locked?

2015-09-19 Thread Bruce Labitt
From mount:
/dev/mmcblk0p1 on /media/bruce/647E-E27B2 type vfat
(rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,showexec,utf8,flush,errors=remount-ro,uhelper=udisks2)

How would the command go?  mount ?? --options remount,rw   What is ??, is
it /media/bruce/647E-E27B2 ?
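
If I'm reading the man page right, either the device node or the mount 
point can go there, so presumably one of:

$ sudo mount -o remount,rw /media/bruce/647E-E27B2
$ sudo mount -o remount,rw /dev/mmcblk0p1

and "dmesg | tail" right afterwards should give a clue whether the 
kernel still considers the card write-protected.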

On Sat, Sep 19, 2015 at 4:33 PM, Bill Ricker  wrote:

>
> On Sat, Sep 19, 2015 at 4:09 PM, Bruce Labitt <
> bruce.lab...@myfairpoint.net> wrote:
>
>> .   I wanted to copy my data
>> from another SDHC card to it.  The card seems to be locked, and is
>> preventing writing to the card - although the little slider is set to
>> the unlocked position.  Short of returning the card, which may be my
>> best option, what can I do to check that the card is actually ok, or my
>> laptop's SD card reader is at fault.
>>
>> I checked the properties of the card - it is set to user -
>>
>
>
> ​Check the 'dmesg -T',  'mount', 'hdparm', 'fdisk -l', and
> '/var/log/messages'  output for clues. ​
>
> Some distros default fat, vfat, ntfs file-systems to read-only for safety,
> don't know if that's your case. Devices also remount r/o on error.  If
> 'mount' reports 'ro', but no errors listed, a 'mount --options remount,rw'
> should work.
>
> Sometimes a specific reader doesn't like a specific card, so try another
> one. E.g., my USB hub has slots for everything, but the uSD slot isn't
> reliable.
>
>
>
> --
> Bill Ricker
> bill.n1...@gmail.com
> https://www.linkedin.com/in/n1vux
>
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: SDHC card locked?

2015-09-19 Thread Bruce Labitt
Ubuntu 15.04.  SDHC card (no adapter) plugged into SDHC reader in my
laptop.  I just ordered a USB adapter.  Hope it helps.

On Sat, Sep 19, 2015 at 4:28 PM, R. Anthony Lomartire <
opensourcek...@gmail.com> wrote:

> what distro are you on? i've had similar issues on my macbook pro running
> ubuntu. i was using a microsd card via an adapter. i used a usb adapter
> instead and that worked fine.
>
> On Sat, Sep 19, 2015 at 4:10 PM Bruce Labitt 
> wrote:
>
>> I recently bought a Flash Air III SDHC card.   I wanted to copy my data
>> from another SDHC card to it.  The card seems to be locked, and is
>> preventing writing to the card - although the little slider is set to
>> the unlocked position.  Short of returning the card, which may be my
>> best option, what can I do to check that the card is actually ok, or my
>> laptop's SD card reader is at fault.
>>
>> I checked the properties of the card - it is set to user -
>> read/write/execute
>>
>> $ getfacl /media/bruce/647E-E27B2
>> getfacl: Removing leading '/' from absolute path names
>> # file: media/bruce/647E-E27B2
>> # owner: bruce
>> # group: bruce
>> user::rwx
>> group::r-x
>> other::r-x
>>
>> Thanks,
>> Bruce
>> ___
>> gnhlug-discuss mailing list
>> gnhlug-discuss@mail.gnhlug.org
>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>>
>
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


SDHC card locked?

2015-09-19 Thread Bruce Labitt
I recently bought a Flash Air III SDHC card.   I wanted to copy my data 
from another SDHC card to it.  The card seems to be locked, and is 
preventing writing to the card - although the little slider is set to 
the unlocked position.  Short of returning the card, which may be my 
best option, what can I do to check that the card is actually ok, or my 
laptop's SD card reader is at fault.

I checked the properties of the card - it is set to user - 
read/write/execute

$ getfacl /media/bruce/647E-E27B2
getfacl: Removing leading '/' from absolute path names
# file: media/bruce/647E-E27B2
# owner: bruce
# group: bruce
user::rwx
group::r-x
other::r-x

Thanks,
Bruce
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: FYI: Comcast Metro ethernet to the home

2015-07-17 Thread Bruce Labitt
I'm in Nashua (north end) and have fiber.  However, this fiber was installed 
when Verizon owned the landlines.  But Fairpoint did the pole to house drop.  
You sure there is no fiber downtown?

Best regards,
Bruce

Please excuse any typos, sent by my iPhone.

> On Jul 17, 2015, at 17:53, Joshua Judson Rosen  wrote:
> 
>> On 2015-07-17 15:53, Matt Minuti wrote:
>> If only someone offered such nice service in auburn... I'm still on 6/1 for 
>> $60...
> 
> At least you can blame your placement out in the boonies.
> 
> I'm stuck trying to do DSL over 90-year-old copper+paper+lead telephone-lines
> that semiregularly require a bucket-truck visit because they've delaminated,
> formed a new crack, got full of either rainwater or condensation,
> and shorted themselves out... *in downtown Nashua*, because AFAICT my only
> other options are Comcast cable (and I'd prefer not to do business with 
> Comcast),
> a high-latency Satellite link, or terrestrial wireless service via
> one of the wireless telcos--and somehow those all seem mostly worse to me.
> 
> All *I* have to blame my situation on is my own lousy personality :)
> 
> (but, really--how come fiber is available in places like Wilton and Chichester
> before it's available in here? Is it normal for cities to be the 
> cyber-boonies?)
> 
>> On Jul 16, 2015 7:07 PM, "Ted Roche" > > wrote:
>> 
>>Not sure where your local area is, but many towns served by the telecom 
>> TDS
>>have, or will soon have, TDSFiber available. For plain old residential
>>service at $49+fees, they are offering 100Mbps up to 1 Gbps, triple 
>> bundles
>>and some discounts during the rollout. A local billboard claims it's the
>>fastest residential service in the country, though I'm not sure if that
>>discounts Google Fiber or had some disclaimer in fine print. 
>> 
>>https://www.tdsfiber.com/where/
>> 
>> 
>> 
>>On Thu, Jul 16, 2015 at 6:16 PM, Steven C. Peterson >> wrote:
>> 
>>As an FYI, for anyone who wants major bandwidth at home, Comcast has in
>>our area a Metro Ethernet service for residences
>>505/125mb.
>> 
>>New Hampshire was the pilot test for the 1gb and 2gb services they are
>>rolling out down south. They have told all of the New England beta 
>>testers that they will be moved to 2gb service this fall
>> 
>> I have been on it since January and it is fantastic, catches $299 per
>>month + tax and lease (a cienea metro e switch) 3 year contract. and a
>>$250 installation fee
>> 
>>Need to be within an arbitrary distance of a Comcast splice or node (
>>they base this on the cost to get the 12 fiber single mode run into 
>>your home)
>> 
>>This is the same service and network they sell to enterprise 
>> customers.
>>they include block of 5 IPv4 and a /48 IPv6 static with the service 
>> fee
>> 
>>I have a contact in enterprise sales who can get more info to anyone
>>who's interested
>> 
>>--
>>Steven C. Peterson
>>Mainstream Technology Group
>>s...@mainstream.net 
>>Office: (603)966-4607 x 2409 
>>Cell/SMS: (603)913-7006 
>> 
>>
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>___
>>gnhlug-discuss mailing list
>>gnhlug-discuss@mail.gnhlug.org 
>>http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>> 
>> 
>> 
>> 
>>-- 
>>Ted Roche
>>Ted Roche & Associates, LLC
>>http://www.tedroche.com
>> 
>>___
>>gnhlug-discuss mailing list
>>gnhlug-discuss@mail.gnhlug.org 
>>http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>> 
>> 
>> 
>> ___
>> gnhlug-discuss mailing list
>> gnhlug-discuss@mail.gnhlug.org
>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> 
> -- 
> "Don't be afraid to ask (λf.((λx.xx) (λr.f(rr."
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> 

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Need some suggestions on a borked upgrade

2015-05-10 Thread Bruce Labitt
Thanks.  Will try tomorrow.

Bruce 

Please excuse any typos, sent by my iPhone.

> On May 10, 2015, at 22:04, Jeffry Smith  wrote:
> 
> Run "ls /dev/sd*" before & after inserting stick.   Then "point /dev/sd disk>"
> 
> Will show up in /media
> 
> Jeff
> 
>> On May 10, 2015 9:42 PM, "Bruce Labitt"  wrote:
>> Ok.  Seemed to have lost the ability to log into x.
>> 
>> I can login.  I ran the command and it created the log file.  How do you 
>> mount a usb stick when you don't know its name?  Then I can copy the file.
>> 
>> It's getting to the point of removing the drive, copying /home and doing a 
>> new installation.  
>> 
>> Bruce 
>> 
>>> On May 10, 2015, at 21:17, Jeffry Smith  wrote:
>>> 
>>> The -f flag tells apt to try and fix errors.
>>> 
>>> I run Debian. Sometimes either "apt-get -f install" works.  Also,  running 
>>> "dpkg --configure -a" (which tells dpkg to try & configure all the 
>>> partially installed packages) will unbork it.  Without seeing the exact 
>>> error,  I also can't give you better advice.
>>> 
>>>> On May 10, 2015 9:08 PM, "David Rysdam"  wrote:
>>>> Joshua Judson Rosen  writes:
>>>> > Can you run "apt-get install -f 2>&1 | tee apt-errors.log"
>>>> 
>>>> OT, but why not just:
>>>> 
>>>> apt-get install > apt-errors.log 2>&1
>>>> 
>>>> ?
>>>> ___
>>>> gnhlug-discuss mailing list
>>>> gnhlug-discuss@mail.gnhlug.org
>>>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Need some suggestions on a borked upgrade

2015-05-10 Thread Bruce Labitt
Ok.  Seemed to have lost the ability to log into x.

I can login.  I ran the command and it created the log file.  How do you mount 
a usb stick when you don't know its name?  Then I can copy the file.

It's getting to the point of removing the drive, copying /home and doing a new 
installation.  
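
For the archives, what the suggestion boils down to is roughly this 
(device name and mount point are examples - the stick may come up as 
something other than sdb1):

$ lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT   # plug the stick in and see which sdX appears
$ sudo mkdir -p /mnt/usb
$ sudo mount /dev/sdb1 /mnt/usb          # assuming the stick showed up as sdb1
$ cp apt-errors.log /mnt/usb/
$ sudo umount /mnt/usb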

Bruce 

> On May 10, 2015, at 21:17, Jeffry Smith  wrote:
> 
> The -f flag tells apt to try and fix errors.
> 
> I run Debian. Sometimes either "apt-get -f install" works.  Also,  running 
> "dpkg --configure -a" (which tells dpkg to try & configure all the partially 
> installed packages) will unbork it.  Without seeing the exact error,  I also 
> can't give you better advice.
> 
>> On May 10, 2015 9:08 PM, "David Rysdam"  wrote:
>> Joshua Judson Rosen  writes:
>> > Can you run "apt-get install -f 2>&1 | tee apt-errors.log"
>> 
>> OT, but why not just:
>> 
>> apt-get install > apt-errors.log 2>&1
>> 
>> ?
>> ___
>> gnhlug-discuss mailing list
>> gnhlug-discuss@mail.gnhlug.org
>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Need some suggestions on a borked upgrade

2015-05-10 Thread Bruce Labitt
Been quiet on the list.

Upgrade from Ubuntu 14.10 to 15.04.  Apt seems to be hung up with 
removing a single file - which is old and not needed.  But because of 
this the upgrade is severely screwed up.  Not sure if it will boot again 
properly.  I just downloaded an 15.04 iso to burn to a usbstick to act 
as a rescue if needed.

The package is octave3.2-info; for some reason it has dependencies on perl.  
Perl is apparently used by a lot of packages in some way, and all of 
these necessary packages are "half installed".  It got so bad that 
dist-upgrade hung because there were too many errors.

There is some sort of directory issue which generates an error message 
if I attempt to apt-get remove octave3.2-info.
I think if I can remove or delete this file (and remove references to 
it), perhaps the rest of the install will go through.  This of course 
sounds 'dangerous', but I have run out of ideas.

Any suggestions?  apt-get -f install returns the error message. apt-get 
remove returns the same error.  Looking for a few ideas. I'll try to use 
some of them tonight to attempt a fix.  Got to visit Mom now...

Regards,
Bruce
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
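
Not suggested in the thread, but a common way out of this kind of deadlock
is to have dpkg remove the offending package directly, bypassing apt; a
sketch, assuming the octave3.2-info package named in the post (the force
option can leave stray files behind, so it is a last resort):

  # Try removing just the broken package with dpkg itself
  sudo dpkg --remove octave3.2-info

  # If dpkg refuses because the package is in a "reinstall required" state,
  # override that one check (last resort)
  sudo dpkg --remove --force-remove-reinstreq octave3.2-info

  # Then let apt and dpkg clean up the remaining half-installed packages
  sudo apt-get -f install
  sudo dpkg --configure -a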


Re: Can this disk be repaired? Does it need to be?

2014-12-04 Thread Bruce Labitt
Yes, I'll get another disk to check this one😡. I just bought this SSD a few 
weeks ago.

Can FAT32 support "weird", aka Linux, file names?  If not, which is most 
likely true, I'll have to pull the blade server NFS "image" off this disk.  
I have directories with MAC IDs as directory names.  I'm sure that is 
messing up Windows.

At this time I can't dump Win7, as that is my work-issued machine.  Later, 
I'd like to change that...  One battle at a time...

-Bruce


> On Dec 4, 2014, at 16:35, r...@mrt4.com wrote:
> 
> Suggestions:
> 
> 1. Dump Win7 and use ext4.
> 
> 2. If that's impossible, use FAT32 (if the file size limit is not an issue) 
> -- it's been around forever and is widely compatible and reliable. If you're 
> only using it to warehouse backup data, you really don't need the features of 
> contemporary file systems. 
> 
> 3. If neither of those work for you, use separate drives for each OS.
> 
> You should get another drive anyway because the issue may be H/W related and 
> you can use the second drive to isolate the problem. Using USB3 it should 
> only take a couple of hours to transfer the data.
> 
> Ronald Smith
> r...@mrt4.com
> 
> 
> 
> On Thu, 4 Dec 2014 12:24:13 -0500
> Bruce Labitt  wrote:
> 
>> Have an SSD formatted to NTFS.  I had intended to use it between linux and
>> Win7 as a backup.  It worked for a while in both OS.  Yesterday Win7 asked
>> if I wanted to repair the disk.  Since there are directories and file names
>> that are not windows safe, I declined.
>> 
>> Later in the day, I reconnected the disk to my win7 machine, and the
>> computer could not recognize the drive.
>> 
>> *Location is not available*
>> E:\ is not accessible
>> The file or directory is corrupted and unreadable
>> 
>> It asked if I wanted to format the disk.  Umm, no.
>> 
>> Later in the evening, I connected my SSD to my linux laptop.  It opened the
>> SSD and files within it without problem.
>> 
>> I looked at the disk in gparted, and it showed a non-descriptive !, and
>> something about not being able to read it, suggesting I install ntfsprogs
>> and ntfs-3g.  However, ntfs-3g is already on my laptop (and clearly
>> running, along with fuse).  *The "!" is just that the disk is unmounted.*
>> gparted also showed a green box, and a key icon next to the SSD name
>> /dev/sdc1.
>> 
>> Is this fixable?  Am I headed to uncertain doom?  Seriously, is there a way
>> to get back to having win7 recognize the disk again?  Or should I get yet
>> another disk, transfer the cross-platform compatible files and directories
>> and start all over again?  Should I use a different file format?
>> 
>> Any suggestions?  Thanks.
>> 
>> Ubuntu 14.10.  i7, 32GB Ram, 240GB SSD main drive, Crucial 1TB SSD in
>> Inatech USB3 housing.
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> 

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
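
A sketch of the usual checks for an NTFS volume in this state, assuming the
partition is the /dev/sdc1 that gparted reported; ntfsfix ships with the
ntfs-3g tools mentioned above, and it only repairs minor inconsistencies and
schedules a full check, while the real repair has to happen on the Windows
side:

  # On Linux: unmount the volume, then clear common inconsistencies and
  # mark it so Windows will check it on next use
  sudo umount /dev/sdc1
  sudo ntfsfix /dev/sdc1

  # On Windows, the full repair would be "chkdsk E: /f" from an elevated
  # command prompt (assuming the drive still shows up as E:)

If Windows still refuses to mount the volume after that, the safer path is
the copy-off-and-reformat approach suggested in the replies.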


Re: Can this disk be repaired? Does it need to be?

2014-12-04 Thread Bruce Labitt
It is kind of funny: I just bought this disk because another one, a WD 
Passport, was starting to fail/get flaky.  The right thing to do is to get 
another disk.

This SSD was to be my carry-around disk, containing work I've done in the 
past as reference.  My brains, so to speak.  Pity that in Windows it (the 
disk) has lost its mind.  Linux/Ubuntu has no problems with the disk, so far.

I guess a NAS is in my future.

Thanks for your suggestions.

Bruce 

> On Dec 4, 2014, at 16:21, John Abreau  wrote:
> 
> Safest thing would be to get a spare disk, back up everything from the SSD to 
> the spare disk, then reformat the SSD so Win7 is happy, then restore 
> everything from the spare back to the SSD. 
> 
> It might make sense to replace it with a NAS drive so the two OSes aren't 
> accessing the drive at such a low level that corruption is likely to occur. 
> Connect to the NAS with an Ethernet cable and access it via NFS and Samba. 
> 
>> On Thu, Dec 4, 2014 at 12:24 PM, Bruce Labitt  wrote:
>> Have an SSD formatted to NTFS.  I had intended to use it between linux and 
>> Win7 as a backup.  It worked for a while in both OS.  Yesterday Win7 asked 
>> if I wanted to repair the disk.  Since there are directories and file names 
>> that are not windows safe, I declined.  
>> 
>> Later in the day, I reconnected the disk to my win7 machine, and the 
>> computer could not recognize the drive.  
>> 
>> Location is not available
>> E:\ is not accessible
>> The file or directory is corrupted and unreadable
>> 
>> It asked if I wanted to format the disk.  Umm, no.
>> 
>> Later in the evening, I connected my SSD to my linux laptop.  It opened the 
>> SSD and files within it without problem.  
>> 
>> I looked at the disk in gparted, and it showed a non-descriptive !, and 
>> something about not being able to read it, suggesting I install ntfsprogs 
>> and ntfs-3g.  However, ntfs-3g is already on my laptop (and clearly running, 
>> along with fuse).  The "!" is just that the disk is unmounted.
>> gparted also showed a green box, and a key icon next to the SSD name 
>> /dev/sdc1.
>> 
>> Is this fixable?  Am I headed to uncertain doom?  Seriously, is there a way 
>> to get back to having win7 recognize the disk again?  Or should I get yet 
>> another disk, transfer the cross-platform compatible files and directories 
>> and start all over again?  Should I use a different file format?
>> 
>> Any suggestions?  Thanks.
>> 
>> Ubuntu 14.10.  i7, 32GB Ram, 240GB SSD main drive, Crucial 1TB SSD in 
>> Inatech USB3 housing.
>> 
>> ___
>> gnhlug-discuss mailing list
>> gnhlug-discuss@mail.gnhlug.org
>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> 
> 
> 
> -- 
> John Abreau / Executive Director, Boston Linux & Unix
> Email j...@blu.org / WWW http://www.abreau.net / PGP-Key-ID 0x920063C6
> PGP-Key-Fingerprint A5AD 6BE1 FEFE 8E4F 5C23  C2D0 E885 E17C 9200 63C6
> 
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
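
A sketch of the back-up-and-reformat path John describes, assuming the SSD
is mounted at /mnt/ssd and the spare disk at /mnt/spare (illustrative paths)
and that the spare carries a Linux filesystem so nothing is lost in the
copy; rsync's archive options preserve the Linux-only names, permissions and
links that Windows was choking on:

  # Copy everything from the SSD to the spare disk, preserving permissions,
  # ownership, symlinks and hard links
  sudo rsync -aH --info=progress2 /mnt/ssd/ /mnt/spare/

  # Reformat the SSD (destroys its contents): mkfs.ntfs if Windows still
  # needs to read it directly, or mkfs.ext4 for a Linux-only disk
  sudo mkfs.ntfs -Q /dev/sdc1

  # Mount the fresh filesystem and copy the data back the same way
  sudo mount /dev/sdc1 /mnt/ssd
  sudo rsync -aH --info=progress2 /mnt/spare/ /mnt/ssd/

Restoring the non-Windows-safe directory names onto a fresh NTFS volume
would just recreate the original problem, which is where the NAS suggestion
(NFS for Linux, Samba for Windows) comes in.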

