Re: [GLLUG] British Gas DKIM failure?

2024-01-28 Thread Carles Pina i Estany via GLLUG

Hi,

On 28 Jan 2024 at 14:37:43, Marco van Beek via GLLUG wrote:
> On 27/01/2024 18:08, Henrik Morsing via GLLUG wrote:
> > 
> > I'm now getting the same from the Land Registry:
> > 
> > I wish there was a test I could do to check what is actually wrong...
> > 
> Okay, so this would indicate that it is more likely something wrong at your
> end rather than at theirs. I think, at this point, I would start to wonder
> if there is anything at your end that is altering the email before it gets
> to the DKIM check.

this makes sense. I would also check whether DKIM verifies for mail
coming from, for example, gmail.com.

In my mail client I view the headers to see the
"Authentication-Results". For an email from gmail.com to my mail server
with DKIM, I see that it says "dkim=pass". But note that DMARC for
gmail.com specifies policy "none", while for British Gas / Land
Registry I think it says "policy=reject".

So, I wonder: does DKIM verification always fail for Henrik?
(e.g. wrong DNS lookups for the DKIM key; it happened to me). For some
domains a failure then means a reject, while for other domains nothing
happens.
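
A quick way to test that outside the mail client is to run a saved raw
message through a DKIM verifier and see whether verification fails for
every sender or only for some. A minimal sketch in Python, assuming the
third-party "dkimpy" package (pip install dkimpy) and a hypothetical
message saved verbatim as "message.eml" before anything local touches
it:

# Verify the DKIM signature of a saved raw email.
import dkim

with open("message.eml", "rb") as f:
    raw = f.read()

try:
    print("dkim=pass" if dkim.verify(raw) else "dkim=fail")
except dkim.DKIMException as e:
    print(f"dkim=error: {e}")

If this fails even for gmail.com mail, the problem is local (DNS
resolution, or something rewriting messages) rather than with the
sender.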

[...]

> Maybe something in your system is altering something in a field that is
> being used by the British Gas and Land registry emails, like adding an
> "EXTERNAL" into the subject line before the DKIM test?

that's a good idea to check as well.

For reference, the headers of an email that I am checking say:
Subject:From:To:Date:message-id:x-mailer-recipientid:feedback-id:list-unsubscribe-post:list-unsubscribe:precedence:x-mailru-msgtype:x-campaignid:reply-to:MIME-Version:Content-Type

So, if the Subject changed before DKIM verification, the signature would not pass.
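
One way to check that is to pull the "h=" tag out of the DKIM-Signature
header and see whether Subject is among the signed fields. A minimal
sketch, standard library only; "message.eml" is again a hypothetical
saved message:

# List which headers the message's DKIM signature covers.
import email
import re

with open("message.eml", "rb") as f:
    msg = email.message_from_binary_file(f)

for sig in msg.get_all("DKIM-Signature", []):
    m = re.search(r"\bh=([^;]+)", sig)  # \b avoids matching the bh= tag
    if m:
        signed = [h.strip().lower() for h in m.group(1).split(":")]
        print("signed headers:", signed)
        if "subject" in signed:
            print("Subject is signed: altering it breaks the signature")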

Cheers,

-- 
Carles Pina i Estany
https://carles.pina.cat


signature.asc
Description: PGP signature
-- 
GLLUG mailing list
GLLUG@mailman.lug.org.uk
https://mailman.lug.org.uk/mailman/listinfo/gllug


Re: [GLLUG] Using LLM for support answers - please don't (Was Re: British Gas DKIM failure?)

2024-01-28 Thread bap--- via GLLUG
On 2024-01-28 14:37, Jan van Bergen via GLLUG wrote:

> I am normally not active on this forum, but I read most of the
> messages and have some knowledge of Linux. To state that a policy from
> another site should retrospectively apply here, just because you think
> it makes sense, is a wrong assumption. If you think that policy should
> apply here, let's discuss and agree to that. Just retrospectively
> applying it to a person who, for all intents and purposes, is trying
> to help Henrik is not conducive to free and open discussions.
> For me, the usage of LLMs, done correctly, is a great tool. As with
> any tool it has its limitations, the famous hallucinations to name but
> one.

I managed development of an AI project between 1995 and 2000. In my
role as "speaker to suits" I had to explain it to people with zero
understanding of IT. The more I came to understand it, the less I liked
it. For our project it was fine, because our requirement was for
credibility and not accuracy. We are currently at the "unrealistic
expectations" phase of the hype cycle, and blind acceptance is rife.

The problem is that it generates plausible answers and not necessarily
correct ones. In situations where correctness is important, the
error rate needs to be watched. If users are accustomed to blindly
accepting whatever the computer says, 99.99% right might be worse than
90%. My advice is that it's often a mistake to use an LLM for any job
that you can't do better yourself.

> Let's try to be nice to each other, especially when somebody is doing
> his/her/its best to help

Indeed. Although we don't necessarily have to agree.

I would prefer that LLM generated material should always be flagged as
such. 

I'm unsure whether I should go further and consider discussions in
which LLM-generated arguments have been presented as 'tainted'.

-- 
Bernard Peek
b...@shrdlu.com


-- 
GLLUG mailing list
GLLUG@mailman.lug.org.uk
https://mailman.lug.org.uk/mailman/listinfo/gllug


Re: [GLLUG] Using LLM for support answers - please don't (Was Re: British Gas DKIM failure?)

2024-01-28 Thread Andy Smith via GLLUG
Hello,

On Sun, Jan 28, 2024 at 02:37:45PM +, Jan van Bergen via GLLUG wrote:
> How Carles used it, with giving a reference stating that the lines in
> question came from a LLM, while still making sure that the info is correct
> to me is very much how you should use tools like this, and I had absolutely
> no issue with it.

My issues with it are pretty much the same as StackOverflow's.

Any of us could have done the same, including Henrik themselves. Do
we want support venues that are just people pasting ChatGPT to each
other, with web searches pulling back hits that are just more of that?

We have to spot that it's from an LLM and check the reference
ourselves. We don't know whether Carles did that for us. We can't
generally trust the LLM user to do that.

Carles could have asked the LLM the question, done the research
themselves to check that what the LLM came back with is correct, and
then written a response that they believe to be true and factual, in
which case that's fine. But we don't know that happened because it's
just a paste from ChatGPT.

> Maybe you're a language virtuoso and don't need tools to write,
> not everybody is like that.

Nice personal attack noted, but we aren't talking about writing prose.

> Let's try to be nice to each other, especially when somebody is doing
> his/her/its best to help

I think my request was politely phrased and backed up with good
reasoning, whether you agree with the reasoning or not. I don't
think that pasting ChatGPT responses is someone doing their best to
help people.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting

-- 
GLLUG mailing list
GLLUG@mailman.lug.org.uk
https://mailman.lug.org.uk/mailman/listinfo/gllug


Re: [GLLUG] Using LLM for support answers - please don't (Was Re: British Gas DKIM failure?)

2024-01-28 Thread Marco van Beek via GLLUG



On 28/01/2024 14:37, Jan van Bergen via GLLUG wrote:
> Let's try to be nice to each other, especially when somebody is doing
> his/her/its best to help


+1

--
GLLUG mailing list
GLLUG@mailman.lug.org.uk
https://mailman.lug.org.uk/mailman/listinfo/gllug


Re: [GLLUG] British Gas DKIM failure?

2024-01-28 Thread Marco van Beek via GLLUG

On 27/01/2024 18:08, Henrik Morsing via GLLUG wrote:
> I'm now getting the same from the Land Registry:
>
> I wish there was a test I could do to check what is actually wrong...

Okay, so this would indicate that it is more likely something wrong at
your end rather than at theirs. I think, at this point, I would start
to wonder if there is anything at your end that is altering the email
before it gets to the DKIM check.


So, I suggest you check good DKIM signatures against "bad" DKIM
signatures, and look at which headers are being used to create the
signature (the "h=" in the DKIM header) and see if there is a pattern.
On an email directly from me, you would see
"h=Date:Cc:Subject:To:References:From:In-Reply-To:From;" in the header.


Maybe something in your system is altering something in a field that is 
being used by the British Gas and Land registry emails, like adding an 
"EXTERNAL" into the subject line before the DKIM test?


Regards

Marco

--
GLLUG mailing list
GLLUG@mailman.lug.org.uk
https://mailman.lug.org.uk/mailman/listinfo/gllug


Re: [GLLUG] Using LLM for support answers - please don't (Was Re: British Gas DKIM failure?)

2024-01-28 Thread Jan van Bergen via GLLUG

On 2024-01-28 14:06, Carles Pina i Estany via GLLUG wrote:

> Hi,
>
> On 28 Jan 2024 at 13:23:26, Andy Smith via GLLUG wrote:
> > Hello,
> >
> > On Sun, Jan 28, 2024 at 12:42:20AM +, Carles Pina i Estany via
> > GLLUG wrote:
> > > (this is a copy-paste from a... ChatGPT conversation):
> >
> > Please don't.
>
> To clarify, that was only the list of things that could have gone wrong
> to make opendkim report "bad signature". To my knowledge, the list
> seems correct and can be useful. I am not a professional mail sysadmin
> (even though I have set up email servers, over the years, in different
> environments).
>
> The rest of the email is hand typed and brain thought!
>
> Anyway, I'll not do it again.
>
> > If this was a StackOverflow site, your response would not be
> > permitted because you used an LLM (ChatGPT).
>
> yes, but this is not StackOverflow, so I didn't think that adding four
> lines that I thought were well explained, stating the source, was a
> problem.
>
> I thought it was a good description of what could have happened, to
> help Henrik fix the issue.
>
> > I think that StackOverflow's reasoning for their policy is sound and
> > would apply here also:
> >
> > https://meta.stackoverflow.com/questions/421831/temporary-policy-generative-ai-e-g-chatgpt-is-banned
> >
> > In a nutshell, any of us, including Henrik, can easily use an LLM,
> > yet what we can't easily do without domain knowledge is tell when an
> > LLM is *incorrect*.
>
> I do have some (limited) domain knowledge and I thought that the four
> lines were quite correct and a good summary (otherwise, I wouldn't go
> answering things that I have no idea about).
>
> I might still be wrong, in which case I apologise and hope to learn.
>
> > When someone asks a question on a mailing list like this, I'd like
> > to think their question would be given as much respect as if it were
> > asked on a Stack site.
>
> This is my third email trying to help Henrik, including sharing some
> scripts that I use for a similar case. I really only want to help
> Henrik, and I used the tools that I had to hand to try to explain one
> of the errors. I will not do it again.
>
> Sorry for the confusion here!
I am normally not active on this forum, but I read most of the messages
and have some knowledge of Linux. To state that a policy from another
site should retrospectively apply here, just because you think it makes
sense, is a wrong assumption. If you think that policy should apply
here, let's discuss and agree to that. Just retrospectively applying it
to a person who, for all intents and purposes, is trying to help Henrik
is not conducive to free and open discussions.

For me, the usage of LLMs, done correctly, is a great tool. As with any
tool it has its limitations, the famous hallucinations to name but one.
However, many times already I have used one to create an outline of an
article or a presentation. Even if you're an expert in the area, just
getting past a blank page can be hard. As long as you check the outcome
and make sure that any mistakes are corrected and missing info is
added, it can be an extremely valuable tool for improving productivity.
I am actively encouraging all the developers in my department to use it.

How Carles used it, giving a reference stating that the lines in
question came from an LLM, while still making sure that the info is
correct, is to me very much how you should use tools like this, and I
had absolutely no issue with it.

I often ask LLMs to rewrite paragraphs I have written to make them
easier to read, as English is not my first language. Obviously I make
sure I still agree with what is said, and often it is a dialogue with
the LLM until I am happy with the proposed text. Maybe you're a
language virtuoso and don't need tools to write; not everybody is like
that.

Let's try to be nice to each other, especially when somebody is doing
his/her/its best to help

Jan

--
GLLUG mailing list
GLLUG@mailman.lug.org.uk
https://mailman.lug.org.uk/mailman/listinfo/gllug


Re: [GLLUG] Using LLM for support answers - please don't (Was Re: British Gas DKIM failure?)

2024-01-28 Thread Carles Pina i Estany via GLLUG

Hi,

On 28 Jan 2024 at 13:23:26, Andy Smith via GLLUG wrote:
> Hello,
> 
> On Sun, Jan 28, 2024 at 12:42:20AM +, Carles Pina i Estany via GLLUG 
> wrote:
> > (this is a copy-paste from a... ChatGPT conversation):
> 
> Please don't.

To clarify, that was only the list of things that could have gone wrong
to make opendkim report "bad signature". To my knowledge, the list
seems correct and can be useful. I am not a professional mail sysadmin
(even though I have set up email servers, over the years, in different
environments).

The rest of the email is hand typed and brain thought!

Anyway, I'll not do it again.

> If this was a StackOverflow site, your response would not be
> permitted because you used an LLM (ChatGPT).

yes, but this is not StackOverflow, so I didn't think that adding four
lines that I thought were well explained, stating the source, was a
problem.

I thought it was a good description of what could have happened, to
help Henrik fix the issue.

> I think that StackOverflow's reasoning for their policy is sound and
> would apply here also:
> 
> 
> https://meta.stackoverflow.com/questions/421831/temporary-policy-generative-ai-e-g-chatgpt-is-banned
> 
> In a nutshell, any of us, including Henrik, can easily use an LLM,
> yet what we can't easily do without domain knowledge is tell when an
> LLM is *incorrect*.

I do have some (limited) domain knowledge and I thought that the four
lines were quite correct and a good summary (otherwise, I wouldn't go
answering things that I have no idea about).

I might still be wrong, in which case I apologise and hope to learn.

> When someone asks a question on a mailing list like this, I'd like
> to think their question would be given as much respect as if it were
> asked on a Stack site.

This is my third email trying to help Henrik, including sharing some
scripts that I use for a similar case. I really only want to help
Henrik, and I used the tools that I had to hand to try to explain one
of the errors. I will not do it again.

Sorry for the confusion here!

-- 
Carles Pina i Estany
https://carles.pina.cat


signature.asc
Description: PGP signature
-- 
GLLUG mailing list
GLLUG@mailman.lug.org.uk
https://mailman.lug.org.uk/mailman/listinfo/gllug


[GLLUG] Using LLM for support answers - please don't (Was Re: British Gas DKIM failure?)

2024-01-28 Thread Andy Smith via GLLUG
Hello,

On Sun, Jan 28, 2024 at 12:42:20AM +, Carles Pina i Estany via GLLUG wrote:
> (this is a copy-paste from a... ChatGPT conversation):

Please don't.

If this was a StackOverflow site, your response would not be
permitted because you used an LLM (ChatGPT).

I think that StackOverflow's reasoning for their policy is sound and
would apply here also:


https://meta.stackoverflow.com/questions/421831/temporary-policy-generative-ai-e-g-chatgpt-is-banned

In a nutshell, any of us, including Henrik, can easily use an LLM,
yet what we can't easily do without domain knowledge is tell when an
LLM is *incorrect*.

When someone asks a question on a mailing list like this, I'd like
to think their question would be given as much respect as if it were
asked on a Stack site.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting

-- 
GLLUG mailing list
GLLUG@mailman.lug.org.uk
https://mailman.lug.org.uk/mailman/listinfo/gllug