Re: The sqlite3 timestamp conversion between unixepoch and localtime

2021-09-03 Thread Barry


> On 3 Sep 2021, at 13:40, Bob Martin  wrote:
> 
> On 2 Sep 2021 at 20:25:27, Alan Gauld  wrote:
>> On 02/09/2021 20:11, MRAB wrote:
>> 
>>>> In one of them (I can't recall which is which) they change on the 4th
>>>> weekend of October/March; in the other they change on the last weekend.
>>>>
>>>
>>> In the EU (and UK) it's the last Sunday in March/October.
>>> 
>>> In the US it's second Sunday in March and the first Sunday in November.
>>> 
>>> I know which one I find easier to remember!
>> 
>> Interesting. I remember it as closer than that. The bugs we found were
>> due to differences in the DST settings of the BIOS in the PCs. (They
>> were deliberately all sourced from DELL but the EU PCs had a slightly
>> different BIOS).
>> 
>> The differences you cite should have thrown up issues every year.
>> I must see if I can find my old log books...
>> 
> 
> ISTR that the USA changes were the same as the EU until a few years ago.

I recall that the DST change dates have differed by at least a week between
the UK and USA since the '80s.

Barry
> 
> I remember thinking at the time it changed "why would they do that?"
> 

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Add a method to list the current named logging levels

2021-09-03 Thread Barry


> On 2 Sep 2021, at 23:38, Dieter Maurer  wrote:
> 
> Edward Spencer wrote at 2021-9-2 10:02 -0700:
>> Sometimes I like to pass the logging level up to the command-line params so 
>> my user can specify what level of logging they want. However there is no 
>> easy method for pulling the named logging levels.
>> 
>> Looking into the code, it would actually be incredibly easy to implement;
>> 
>> in `logging/__init__.py`:
>> 
>> def listLevelNames():
>>   return _nameToLevel.keys()
>> 
>> You could obviously add some other features, like listing only the defaults, 
>> sorted by numerical level or alphabetically, etc. But really this basic 
>> implementation would be enough to expose the internal variables which 
>> shouldn't be accessed because they change (and in fact, between python 2 and 
>> 3, they did).
>> 
>> Any thoughts?
> 
> Usually, you use 5 well known log levels: "DEBUG", "INFO", "WARNING",
> "ERROR" and "CRITICAL".
> No need to provide a special function listing those levels.
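
A minimal sketch of wiring those five names into a command-line flag
(argparse and the flag name are my illustration, not from the original
posts):

import argparse, logging

parser = argparse.ArgumentParser()
parser.add_argument('--log-level', default='WARNING',
                    choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'])
args = parser.parse_args()
# The standard level names double as attributes of the logging module.
logging.basicConfig(level=getattr(logging, args.log_level))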

I add my own levels, but then I know I did it.

Barry

> 
> 
> 
> --
> Dieter

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: The sqlite3 timestamp conversion between unixepoch and localtime can't be done according to the timezone setting on the machine automatically.

2021-09-03 Thread Chris Angelico
On Sat, Sep 4, 2021 at 3:33 AM Alan Gauld via Python-list wrote:
>
> On 02/09/2021 19:30, Chris Angelico wrote:
>
> >> Without DST the schools opened in the dark so all the kids
> >> had to travel to school in the dark and the number of
> >> traffic accidents while crossing roads jumped.
> >
> > How do they manage in winter?
>
> That was the winter. Sunrise wasn't till 10:00 or so
> and the schools open at 9. With DST sunrise became
> 9:00 and with pre-dawn light it is enough to see by.

Are you saying that you had DST in winter, or that, when summer *and*
DST came into effect, there was more light at dawn? Because a *lot* of
people confuse summer and DST, and credit DST with the natural effects
of the season change.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Connecting python to DB2 database

2021-09-03 Thread Dennis Lee Bieber
On Fri, 3 Sep 2021 09:29:20 -0400, DFS  declaimed the
following:

>
>Now can you get DB2 to accept ; as a SQL statement terminator like the 
>rest of the world?   They call it "An unexpected token"...

I've never seen a semi-colon used for SQL statements via a db-api
adapter. The semi-colon is, in my experience, only required by basic
interactive query utilities -- to tell the utility that the statement is
fully entered, and can be sent to the server.
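
A minimal sketch of the usual DB-API pattern -- one complete statement per
execute() call, with no terminator (the table and column are hypothetical,
and I'm assuming ibm_db_dbi's qmark parameter style):

curs = conn.cursor()
curs.execute("SELECT name FROM customers WHERE id = ?", (42,))
print(curs.fetchall())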


-- 
Wulfraed Dennis Lee Bieber AF6VN
wlfr...@ix.netcom.com    http://wlfraed.microdiversity.freeddns.org/

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: The sqlite3 timestamp conversion between unixepoch and localtime can't be done according to the timezone setting on the machine automatically.

2021-09-03 Thread Alan Gauld via Python-list
On 02/09/2021 19:30, Chris Angelico wrote:

>> Without DST the schools opened in the dark so all the kids
>> had to travel to school in the dark and the number of
>> traffic accidents while crossing roads jumped.
> 
> How do they manage in winter? 

That was the winter. Sunrise wasn't till 10:00 or so
and the schools open at 9. With DST sunrise became
9:00 and with pre-dawn light it is enough to see by.

It's a recurring theme. Every now and then some smart
young politician from the south of the UK suggests
doing away with DST and a large crowd of northerners
jump up and say no way!

> Can that be solved with better street lighting?

They had street lighting but it casts dark shadows etc.
In fact modern LED-based street lighting is worse in
that respect than the old yellow sodium lights were.
But where it doesn't exist the cost of installing
street lighting in small villages is too high compared
to just changing the clocks. And of course street
lighting has a real running cost that would be reflected
in the local council taxes, and nobody wants to
pay more of them! After all, street lighting has
been available for over 150 years; if they haven't
installed it by now it probably isn't coming. (Nearly
everywhere has some lighting, at least on the main roads;
it's just the smaller back streets that tend to be dark.)

> That was fifty years ago now, and the negative consequences
> of DST are far stronger now.

But not apparent to most people. Most still see it
as a benefit because they get longer working daylight.

-- 
Alan G
Author of the Learn to Program web site
http://www.alan-g.me.uk/
http://www.amazon.com/author/alan_gauld
Follow my photo-blog on Flickr at:
http://www.flickr.com/photos/alangauldphotos


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: on floating-point numbers

2021-09-03 Thread Peter Pearson
On Thu, 2 Sep 2021 07:54:27 -0700 (PDT), Julio Di Egidio wrote:
> On Thursday, 2 September 2021 at 16:51:24 UTC+2, Christian Gollwitzer wrote:
>> Am 02.09.21 um 16:49 schrieb Julio Di Egidio:
>> > On Thursday, 2 September 2021 at 16:41:38 UTC+2, Peter Pearson wrote: 
>> >> On Thu, 02 Sep 2021 10:51:03 -0300, Hope Rouselle wrote: 
>> > 
>> >>> 39.61 
>> >> 
>> >> Welcome to the exciting world of roundoff error: 
>> > 
>> > Welcome to the exiting world of Usenet. 
>> > 
>> > *Plonk*
>> 
>> Pretty harsh, isn't it? He gave a concise example of the same inaccuracy 
>> right afterwards. 
>
> And I thought you were not seeing my posts...
>
> Given that I have already given a full explanation, you guys, that you
> realise it or not, are simply adding noise for the usual pub-level
> discussion I must most charitably guess.
>
> Anyway, just my opinion.  (EOD.)


Although we are in the world of Usenet, comp.lang.python is by
no means typical of Usenet.  This is a positive, helpful, welcoming
community in which "Plonk", "EOD", and "RTFM" (appearing in another
post) are seldom seen, and in which I have never before seen the
suggestion that everybody else should be silent so that the silver
voice of the chosen one can be heard.


-- 
To email me, substitute nowhere->runbox, invalid->com.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: The sqlite3 timestamp conversion between unixepoch and localtime

2021-09-03 Thread Michael F. Stemper

On 03/09/2021 01.14, Bob Martin wrote:

> On 2 Sep 2021 at 20:25:27, Alan Gauld  wrote:
>> On 02/09/2021 20:11, MRAB wrote:
>>
>>>> In one of them (I can't recall which is which) they change on the 4th
>>>> weekend of October/March; in the other they change on the last weekend.
>>>
>>> In the EU (and UK) it's the last Sunday in March/October.
>>>
>>> In the US it's second Sunday in March and the first Sunday in November.
>>>
>>> I know which one I find easier to remember!
>>
>> Interesting. I remember it as closer than that. The bugs we found were
>> due to differences in the DST settings of the BIOS in the PCs. (They
>> were deliberately all sourced from DELL but the EU PCs had a slightly
>> different BIOS).
>>
>> The differences you cite should have thrown up issues every year.
>> I must see if I can find my old log books...
>
> ISTR that the USA changes were the same as the EU until a few years ago.
>
> I remember thinking at the time it changed "why would they do that?"


It was part of the Energy Policy Act of 2005[1].

However, saying that doesn't explain "why".

The explanation given at the time was that it would save energy
because people wouldn't need to turn on their lights as early.
This ignored the fact that we needed to have them on later in
the morning.

The required studies were inconclusive, but some reported that
since it was light later in the day, people went driving around
in the evening, causing aggregate energy consumption to increase
rather than decrease.

One of the bill's sponsors said that having it be light later in
the day would "make people happy".

Talk at the time (which I never verified or refuted) said that he
got significant campaign contributions from a trade group for
outdoor cooking (grills, charcoal, usw) and that they wanted it
so that the grilling season would be longer, leading to more
revenue for them.

At the time, I was product manager for real-time control systems
for critical infrastructure. Having to collect the changes to
zoneinfo, whatever the equivalent file for Oracle was, revalidate
our entire system, and get information/patches to our North American
customers was annoying.


[1] , Sec 110

--
Michael F. Stemper
If you take cranberries and stew them like applesauce they taste much
more like prunes than rhubarb does.
--
https://mail.python.org/mailman/listinfo/python-list


Re: Select columns based on dates - Pandas

2021-09-03 Thread Martin Di Paola
You may want to reshape the dataset to a tidy format: Pandas works 
better with that format.


Let's assume the following dataset (this is what I understood from your 
message):


In [34]: df = pd.DataFrame({
...: 'Country': ['us', 'uk', 'it'],
...: '01/01/2019': [10, 20, 30],
...: '02/01/2019': [12, 22, 32],
...: '03/01/2019': [14, 24, 34],
...: })

In [35]: df
Out[35]:
  Country  01/01/2019  02/01/2019  03/01/2019
0      us          10          12          14
1      uk          20          22          24
2      it          30          32          34

Then, reshape it to a tidy format. Notice how each row now represents 
a single measure.


In [43]: pd.melt(df, id_vars=['Country'], var_name='Date', value_name='Cases')

Out[43]:
  Country        Date  Cases
0      us  01/01/2019     10
1      uk  01/01/2019     20
2      it  01/01/2019     30
3      us  02/01/2019     12
4      uk  02/01/2019     22
5      it  02/01/2019     32
6      us  03/01/2019     14
7      uk  03/01/2019     24
8      it  03/01/2019     34

I used strings to represent the dates but it is much handier to work
with real date objects.

In [44]: df2 = _
In [45]: df2['Date'] = pd.to_datetime(df2['Date'])

Now we can filter by date:

In [50]: df2[df2['Date'] < '2019-03-01']
Out[50]:
  Country       Date  Cases
0      us 2019-01-01     10
1      uk 2019-01-01     20
2      it 2019-01-01     30
3      us 2019-02-01     12
4      uk 2019-02-01     22
5      it 2019-02-01     32

With that you could create three dataframes, one per month.
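
A sketch of the specific split the original post asks for (March through
September of each year), assuming the df2 built above:

sub = df2[df2['Date'].dt.month.between(3, 9)]   # keep March..September
frames = {year: grp for year, grp in sub.groupby(sub['Date'].dt.year)}
# frames[2019], frames[2020], frames[2021] are the three dataframes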

Thanks,
Martin.


On Thu, Sep 02, 2021 at 12:28:31PM -0700, Richard Medina wrote:

Hello, forum,
I have a data frame with covid-19 cases per month from 2019 - 2021 like a 
header like this:

Country, 01/01/2019, 2/01/2019, 01/02/2019, 3/01/2019, ... 01/01/2021, 
2/01/2021, 01/02/2021, 3/01/2021

I want to filter my data frame for columns of a specific month range of March 
to September of 2019, 2020, and 2021 only (three data frames).

Any ideas?
Thank you


--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


RE: on floating-point numbers

2021-09-03 Thread Schachner, Joseph
Actually, Python's math module has an fsum function meant to address this issue.

>>> math.fsum([1e14, 1, -1e14])
1.0
>>>

Wow it works.
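
It also irons out the order-dependence from the original example (the same
check appears later in this digest):

>>> math.fsum([7.23, 8.41, 6.15, 2.31, 7.73, 7.77])
39.6
>>> math.fsum([8.41, 6.15, 2.31, 7.73, 7.77, 7.23])
39.6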

--- Joseph S.

Teledyne Confidential; Commercially Sensitive Business Data

-Original Message-
From: Hope Rouselle  
Sent: Thursday, September 2, 2021 9:51 AM
To: python-list@python.org
Subject: on floating-point numbers

Just sharing a case of floating-point numbers.  Nothing needed to be solved or 
to be figured out.  Just bringing up conversation.

(*) An introduction to me

I don't understand floating-point numbers from the inside out, but I do know 
how to work with base 2 and scientific notation.  So the idea of expressing a 
number as 

  mantissa * base^{power}

is not foreign to me. (If that helps you to perhaps instruct me on what's going 
on here.)

(*) A presentation of the behavior

>>> import sys
>>> sys.version
'3.8.10 (tags/v3.8.10:3d8993a, May  3 2021, 11:48:03) [MSC v.1928 64 bit 
(AMD64)]'

>>> ls = [7.23, 8.41, 6.15, 2.31, 7.73, 7.77]
>>> sum(ls)
39.594

>>> ls = [8.41, 6.15, 2.31, 7.73, 7.77, 7.23]
>>> sum(ls)
39.61

All I did was to take the first number, 7.23, and move it to the last position 
in the list.  (So we have a violation of the commutativity of
addition.)

Let me try to reduce the example.  It's not so easy.  Although I could display 
the violation of commutativity by moving just a single number in the list, I 
also see that 7.23 commutes with every other number in the list.

(*) My request

I would like to just get some clarity.  I guess I need to translate all these 
numbers into base 2 and perform the addition myself to see the situation coming 
up?
-- 
https://mail.python.org/mailman/listinfo/python-list


RE: on floating-point numbers

2021-09-03 Thread Schachner, Joseph
What's really going on is that you are printing out more digits than you are 
entitled to.  The sums were displayed to 16 decimal digits; 16 decimal digits 
is on the order of 4e16 distinct values, which should require 55 binary bits 
(in the mantissa) to represent, at least as I calculate it.

Double precision floating point has 52 bits in the mantissa, plus one assumed 
due to normalization.  So 53 bits.

The actual minor difference in sums that you see is because when you put the 
largest value 1st it makes a difference in the last few bits of the mantissa.

I recommend that you print out double precision values to at most 14 digits.  
Then you will never see this kind of issue.  If you don't like that suggestion, 
you can create your own floating point representation using a Python integer as 
the mantissa, so it can grow as large as you have memory to represent the 
value; and a sign and an exponent.  It would be slow, but it could have much 
more accuracy (if implemented to preserve accuracy).

By the way, this is why banks and other financial institutions use BCD (binary 
coded decimal).  They cannot tolerate sums that have fraction-of-a-cent errors.

I should also point out another float issue: subtractive cancellation.   Try 
1e14 + 0.1  - 1e14. The result clearly should be 0.1, but it won't be.  
That's because 0.1 cannot be accurately represented in binary, and it was only 
represented in the bottom few bits.  I just tried it:   I got 0.09375   
This is not a Python issue.  This is a well known issue when using binary 
floating point.   So, when you sum a large array of data, to avoid these 
issues, you could either
1) sort the data smallest to largest ... may be helpful, but maybe not.
2) Create multiple sums of a few of the values each.  Next layer: sum a few of 
the sums.  Top layer: sum the sums of sums to get the final sum.  This is much 
more likely to work accurately than adding up all the values in one summation 
except the last, and then adding the last (which could be a relatively small 
value).  A sketch of this layered idea follows.
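
A minimal sketch of that layered ("pairwise") summation in Python -- the
recursive halving is my illustration of the idea, not the poster's exact
scheme:

def pairwise_sum(values):
    # Recursively sum each half so intermediate results stay
    # comparable in magnitude, limiting accumulated rounding error.
    n = len(values)
    if n <= 2:
        return sum(values)
    return pairwise_sum(values[:n // 2]) + pairwise_sum(values[n // 2:])

print(pairwise_sum([7.23, 8.41, 6.15, 2.31, 7.73, 7.77]))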

--- Joseph S.

Teledyne Confidential; Commercially Sensitive Business Data

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: on floating-point numbers

2021-09-03 Thread MRAB

On 2021-09-03 16:13, Chris Angelico wrote:
> On Sat, Sep 4, 2021 at 12:08 AM o1bigtenor  wrote:
>> Hmmm - - - I would suggest that you haven't looked into
>> taxation yet!
>> In taxation you get a rational number that MUST be multiplied by
>> the amount in currency.
>
> (You can, of course, multiply a currency amount by any scalar. Just
> not by another currency amount.)
>
>> The error rate here is stupendous.
>> Some organizations track each transaction with its taxes rounded.
>> Then some track using untaxed amounts and then calculate the taxes
>> on the whole (when you have 2 or 3 or 4 (dunno about more but
>> who knows there are some seriously tax loving jurisdictions out there))
>> the differences between adding amounts and then calculating taxes
>> and calculating taxes on each amount and then adding all items
>> together can have some 'interesting' differences.
>>
>> So financial data MUST be able to handle rational numbers.
>> (I have been bit by the differences enumerated in the previous!)
>
> The worst problem is knowing WHEN to round. Sometimes you have to do
> intermediate rounding in order to make something agree with something
> else :(
>
> But if you need finer resolution than the cent, I would still
> recommend trying to use fixed-point arithmetic. The trouble is
> figuring out exactly how much precision you need. Often, 1c precision
> is actually sufficient.

At some point, some finance/legal person has to specify how any 
fractional currency should be handled.
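
A sketch of making that specification explicit with decimal (the 7% rate
and the half-up rule are assumptions for illustration):

from decimal import Decimal, ROUND_HALF_UP

def add_tax(price, rate=Decimal('0.07')):
    # Round to the cent at one well-defined point, per whatever
    # rule the finance/legal people specify.
    return (price * (1 + rate)).quantize(Decimal('0.01'),
                                         rounding=ROUND_HALF_UP)

print(add_tax(Decimal('19.99')))  # 21.39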

--
https://mail.python.org/mailman/listinfo/python-list


Re: on floating-point numbers

2021-09-03 Thread Chris Angelico
On Sat, Sep 4, 2021 at 12:08 AM o1bigtenor  wrote:
> Hmmm - - - I would suggest that you haven't looked into
> taxation yet!
> In taxation you get a rational number that MUST be multiplied by
> the amount in currency.

(You can, of course, multiply a currency amount by any scalar. Just
not by another currency amount.)

> The error rate here is stupendous.
> Some organizations track each transaction with its taxes rounded.
> Then some track using  use untaxed and then calculate the taxes
> on the whole (when you have 2 or 3 or 4 (dunno about more but
> who knows there are some seriously tax loving jurisdictions out there))
> the differences between adding amounts and then calculating taxes
> and calculating taxes on each amount and then adding all items
> together can have some 'interesting' differences.
>
> So financial data MUST be able to handle rational numbers.
> (I have been bit by the differences enumerated in the previous!)

The worst problem is knowing WHEN to round. Sometimes you have to do
intermediate rounding in order to make something agree with something
else :(

But if you need finer resolution than the cent, I would still
recommend trying to use fixed-point arithmetic. The trouble is
figuring out exactly how much precision you need. Often, 1c precision
is actually sufficient.
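
A toy sketch of the fixed-point idea -- money as an integer count of
cents, converting only at the edges (the parser assumes well-formed,
positive dollar strings):

def parse_cents(s):
    dollars, _, cents = s.partition('.')
    return int(dollars) * 100 + int(cents.ljust(2, '0')[:2])

total = sum(parse_cents(p) for p in ('7.23', '8.41', '6.15'))
print(f'{total // 100}.{total % 100:02d}')  # 21.79 -- exact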

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: on floating-point numbers

2021-09-03 Thread o1bigtenor
On Thu, Sep 2, 2021 at 2:27 PM Chris Angelico  wrote:

> On Fri, Sep 3, 2021 at 4:58 AM Hope Rouselle  wrote:
> >
> > Hope Rouselle  writes:
> >
> > > Just sharing a case of floating-point numbers.  Nothing needed to be
> > > solved or to be figured out.  Just bringing up conversation.
> > >
> > > (*) An introduction to me
> > >
> > > I don't understand floating-point numbers from the inside out, but I do
> > > know how to work with base 2 and scientific notation.  So the idea of
> > > expressing a number as
> > >
> > >   mantissa * base^{power}
> > >
> > > is not foreign to me. (If that helps you to perhaps instruct me on
> > > what's going on here.)
> > >
> > > (*) A presentation of the behavior
> > >
> > > >>> import sys
> > > >>> sys.version
> > > '3.8.10 (tags/v3.8.10:3d8993a, May 3 2021, 11:48:03) [MSC v.1928 64
> > > bit (AMD64)]'
> > >
> > > >>> ls = [7.23, 8.41, 6.15, 2.31, 7.73, 7.77]
> > > >>> sum(ls)
> > > 39.594
> > >
> > > >>> ls = [8.41, 6.15, 2.31, 7.73, 7.77, 7.23]
> > > >>> sum(ls)
> > > 39.61
> > >
> > > All I did was to take the first number, 7.23, and move it to the last
> > > position in the list.  (So we have a violation of the commutativity of
> > > addition.)
> >
> > Suppose these numbers are prices in dollar, never going beyond cents.
> > Would it be safe to multiply each one of them by 100 and therefore work
> > with cents only?  For instance
>
> Yes and no. It absolutely *is* safe to always work with cents, but to
> do that, you have to be consistent: ALWAYS work with cents, never with
> floating point dollars.
>
> (Or whatever other unit you choose to use. Most currencies have a
> smallest-normally-used-unit, with other currency units (where present)
> being whole number multiples of that minimal unit. Only in forex do
> you need to concern yourself with fractional cents or fractional yen.)
>
> But multiplying a set of floats by 100 won't necessarily solve your
> problem; you may have already fallen victim to the flaw of assuming
> that the numbers are represented accurately.
>
> > --8<---cut here---start->8---
> > >>> ls = [7.23, 8.41, 6.15, 2.31, 7.73, 7.77]
> > >>> sum(map(lambda x: int(x*100), ls)) / 100
> > 39.6
> >
> > >>> ls = [8.41, 6.15, 2.31, 7.73, 7.77, 7.23]
> > >>> sum(map(lambda x: int(x*100), ls)) / 100
> > 39.6
> > --8<---cut here---end--->8---
> >
> > Or multiplication by 100 isn't quite ``safe'' to do with floating-point
> > numbers either?  (It worked in this case.)
>
> You're multiplying and then truncating, which risks a round-down
> error. Try adding a half onto them first:
>
> int(x * 100 + 0.5)
>
> But that's still not a perfect guarantee. Far safer would be to
> consider monetary values to be a different type of value, not just a
> raw number. For instance, the value $7.23 could be stored internally
> as the integer 723, but you also know that it's a value in USD, not a
> simple scalar. It makes perfect sense to add USD+USD, it makes perfect
> sense to multiply USD*scalar, but it doesn't make sense to multiply
> USD*USD.
>
> > I suppose that if I multiply it by a power of two, that would be an
> > operation that I can be sure will not bring about any precision loss
> > with floating-point numbers.  Do you agree?
>
> Assuming you're nowhere near 2**53, yes, that would be safe. But so
> would multiplying by a power of five. The problem isn't precision loss
> from the multiplication - the problem is that your input numbers
> aren't what you think they are. That number 7.23, for instance, is
> really
>
> >>> 7.23.as_integer_ratio()
> (2035064081618043, 281474976710656)
>
> ... the rational number 2035064081618043 / 281474976710656, which is
> very close to 7.23, but not exactly so. (The numerator would have to
> be ...8042.88 to be exactly correct.) There is nothing you can do at
> this point to regain the precision, although a bit of multiplication
> and rounding can cheat it and make it appear as if you did.
>
> Floating point is a very useful approximation to real numbers, but
> real numbers aren't the best way to represent financial data. Integers
> are.
>
>
Hmmm - - - I would suggest that you haven't looked into
taxation yet!
In taxation you get a rational number that MUST be multiplied by
the amount in currency.
The error rate here is stupendous.
Some organizations track each transaction with its taxes rounded.
Then some track using untaxed amounts and then calculate the taxes
on the whole (when you have 2 or 3 or 4 (dunno about more but
who knows there are some seriously tax loving jurisdictions out there))
the differences between adding amounts and then calculating taxes
and calculating taxes on each amount and then adding all items
together can have some 'interesting' differences.

So financial data MUST be able to handle rational numbers.
(I have been bit by the differences enumerated in the previous!)

Regards
-- 
https://mail.python.org/mailman/listinfo/python-list

Re: Connecting python to DB2 database

2021-09-03 Thread Chris Angelico
On Fri, Sep 3, 2021 at 11:37 PM DFS  wrote:
>
> On 9/3/2021 1:47 AM, Chris Angelico wrote:
> > On Fri, Sep 3, 2021 at 3:42 PM DFS  wrote:
> >>
> >> Having a problem with the DB2 connector
> >>
> >> test.py
> >> 
> >> import ibm_db_dbi
> >> connectstring =
> >> 'DATABASE=xxx;HOSTNAME=localhost;PORT=5;PROTOCOL=TCPIP;UID=xxx;PWD=xxx;'
> >> conn = ibm_db_dbi.connect(connectstring,'','')
> >>
> >> curr  = conn.cursor
> >> print(curr)
> >
> > According to PEP 249, what you want is conn.cursor() not conn.cursor.
> >
> > I'm a bit surprised as to the repr of that function though, which
> > seems to be this line from your output:
> >
> > 
> >
> > I'd have expected it to say something like "method cursor of
> > Connection object", which would have been an immediate clue as to what
> > needs to be done. Not sure why the repr is so confusing, and that
> > might be something to report upstream.
> >
> > ChrisA
>
>
> Thanks.  I must've done it right, using conn.cursor(), 500x.
> Bleary-eyed from staring at code too long I guess.

Cool cool! Glad that's working.

> Now can you get DB2 to accept ; as a SQL statement terminator like the
> rest of the world?   They call it "An unexpected token"...
>

Hmm, I don't know that the execute() method guarantees to allow
semicolons. Some implementations will strip a trailing semi, but they
usually won't allow interior ones, because that's a good way to worsen
SQL injection vulnerabilities. It's entirely possible - and within the
PEP 249 spec, I believe - for semicolons to be simply rejected.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Connecting python to DB2 database

2021-09-03 Thread DFS

On 9/3/2021 1:47 AM, Chris Angelico wrote:

> On Fri, Sep 3, 2021 at 3:42 PM DFS  wrote:
>>
>> Having a problem with the DB2 connector
>>
>> test.py
>> 
>> import ibm_db_dbi
>> connectstring =
>> 'DATABASE=xxx;HOSTNAME=localhost;PORT=5;PROTOCOL=TCPIP;UID=xxx;PWD=xxx;'
>> conn = ibm_db_dbi.connect(connectstring,'','')
>>
>> curr  = conn.cursor
>> print(curr)
>
> According to PEP 249, what you want is conn.cursor() not conn.cursor.
>
> I'm a bit surprised as to the repr of that function though, which
> seems to be this line from your output:
>
>
> I'd have expected it to say something like "method cursor of
> Connection object", which would have been an immediate clue as to what
> needs to be done. Not sure why the repr is so confusing, and that
> might be something to report upstream.
>
> ChrisA



Thanks.  I must've done it right, using conn.cursor(), 500x. 
Bleary-eyed from staring at code too long I guess.


Now can you get DB2 to accept ; as a SQL statement terminator like the 
rest of the world?   They call it "An unexpected token"...


--
https://mail.python.org/mailman/listinfo/python-list


Re: on floating-point numbers

2021-09-03 Thread Oscar Benjamin
On Fri, 3 Sept 2021 at 13:48, Chris Angelico  wrote:
>
> On Fri, Sep 3, 2021 at 10:42 PM jak  wrote:
> >
> > On 03/09/2021 09:07, Julio Di Egidio wrote:
> > > On Friday, 3 September 2021 at 01:22:28 UTC+2, Chris Angelico wrote:
> > >> On Fri, Sep 3, 2021 at 8:15 AM Dennis Lee Bieber  wrote:
> > >>> On Fri, 3 Sep 2021 04:43:02 +1000, Chris Angelico 
> > >>> declaimed the following:
> > >>>
> > >>>> The naive summation algorithm used by sum() is compatible with a
> > >>>> variety of different data types - even lists, although it's documented
> > >>>> as being intended for numbers - but if you know for sure that you're
> > >>>> working with floats, there's a more accurate algorithm available to
> > >>>> you.
> > >>>>
> > >>>> >>> math.fsum([7.23, 8.41, 6.15, 2.31, 7.73, 7.77])
> > >>>> 39.6
> > >>>> >>> math.fsum([8.41, 6.15, 2.31, 7.73, 7.77, 7.23])
> > >>>> 39.6
> > >>>>
> > >>>> It seeks to minimize loss to repeated rounding and is, I believe,
> > >>>> independent of data order.
> > >>>
> > >>> Most likely it sorts the data so the smallest values get summed first,
> > >>> and works its way up to the larger values. That way it minimizes the
> > >>> losses that occur when denormalizing a value (to set the exponent equal
> > >>> to that of the next larger value).
> > >>>
> > >> I'm not sure, but that sounds familiar. It doesn't really matter
> > >> though - the docs just say that it is an "accurate floating point
> > >> sum", so the precise algorithm is an implementation detail.
> > >
> > > The docs are quite misleading there, it is not accurate without further
> > > qualifications.
> > >
> >
> > https://en.wikipedia.org/wiki/IEEE_754
>
> I believe the definition of "accurate" here is that, if you take all
> of the real numbers represented by those floats, add them all together
> with mathematical accuracy, and then take the nearest representable
> float, that will be the exact value that fsum will return. In other
> words, its accuracy is exactly as good as the final result can be.

It's as good as it can be if the result must fit into a single float.
Actually the algorithm itself maintains an exact result for the sum
internally using a list of floats whose exact sum is the same as that
of the input list. In essence it compresses a large list of floats to
a small list of say 2 or 3 floats while preserving the exact value of
the sum.

Unfortunately fsum does not give any way to access the internal exact
list so using fsum repeatedly suffers the same problems as plain float
arithmetic e.g.:
>>> x = 10**20
>>> fsum([fsum([1, x]), -x])
0.0
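
A sketch of keeping that exact internal list yourself, using the classic
two-sum step (a simplified version of the well-known Shewchuk partials
algorithm; the names are mine):

def add_partial(partials, x):
    # Fold x into a list of non-overlapping floats whose exact
    # mathematical sum is preserved at every step.
    result = []
    for p in partials:
        if abs(x) < abs(p):
            x, p = p, x
        hi = x + p
        lo = p - (hi - x)   # the exact rounding error of hi = x + p
        if lo:
            result.append(lo)
        x = hi
    result.append(x)
    return result

parts = []
for v in [1.0, 1e20, -1e20]:
    parts = add_partial(parts, v)
print(sum(parts))  # 1.0 -- the small term survives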

--
Oscar
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: on floating-point numbers

2021-09-03 Thread Chris Angelico
On Fri, Sep 3, 2021 at 10:42 PM jak  wrote:
>
> On 03/09/2021 09:07, Julio Di Egidio wrote:
> > On Friday, 3 September 2021 at 01:22:28 UTC+2, Chris Angelico wrote:
> >> On Fri, Sep 3, 2021 at 8:15 AM Dennis Lee Bieber  wrote:
> >>> On Fri, 3 Sep 2021 04:43:02 +1000, Chris Angelico 
> >>> declaimed the following:
> >>>
> >>>> The naive summation algorithm used by sum() is compatible with a
> >>>> variety of different data types - even lists, although it's documented
> >>>> as being intended for numbers - but if you know for sure that you're
> >>>> working with floats, there's a more accurate algorithm available to
> >>>> you.
> >>>>
> >>>> >>> math.fsum([7.23, 8.41, 6.15, 2.31, 7.73, 7.77])
> >>>> 39.6
> >>>> >>> math.fsum([8.41, 6.15, 2.31, 7.73, 7.77, 7.23])
> >>>> 39.6
> >>>>
> >>>> It seeks to minimize loss to repeated rounding and is, I believe,
> >>>> independent of data order.
> >>>
> >>> Most likely it sorts the data so the smallest values get summed first,
> >>> and works its way up to the larger values. That way it minimizes the
> >>> losses that occur when denormalizing a value (to set the exponent equal
> >>> to that of the next larger value).
> >>>
> >> I'm not sure, but that sounds familiar. It doesn't really matter
> >> though - the docs just say that it is an "accurate floating point
> >> sum", so the precise algorithm is an implementation detail.
> >
> > The docs are quite misleading there, it is not accurate without further
> > qualifications.
> >
>
> https://en.wikipedia.org/wiki/IEEE_754

I believe the definition of "accurate" here is that, if you take all
of the real numbers represented by those floats, add them all together
with mathematical accuracy, and then take the nearest representable
float, that will be the exact value that fsum will return. In other
words, its accuracy is exactly as good as the final result can be.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: on floating-point numbers

2021-09-03 Thread jak

On 03/09/2021 09:07, Julio Di Egidio wrote:
> On Friday, 3 September 2021 at 01:22:28 UTC+2, Chris Angelico wrote:
>> On Fri, Sep 3, 2021 at 8:15 AM Dennis Lee Bieber  wrote:
>>> On Fri, 3 Sep 2021 04:43:02 +1000, Chris Angelico 
>>> declaimed the following:
>>>
>>>> The naive summation algorithm used by sum() is compatible with a
>>>> variety of different data types - even lists, although it's documented
>>>> as being intended for numbers - but if you know for sure that you're
>>>> working with floats, there's a more accurate algorithm available to
>>>> you.
>>>>
>>>> >>> math.fsum([7.23, 8.41, 6.15, 2.31, 7.73, 7.77])
>>>> 39.6
>>>> >>> math.fsum([8.41, 6.15, 2.31, 7.73, 7.77, 7.23])
>>>> 39.6
>>>>
>>>> It seeks to minimize loss to repeated rounding and is, I believe,
>>>> independent of data order.
>>>
>>> Most likely it sorts the data so the smallest values get summed first,
>>> and works its way up to the larger values. That way it minimizes the
>>> losses that occur when denormalizing a value (to set the exponent equal
>>> to that of the next larger value).
>>>
>> I'm not sure, but that sounds familiar. It doesn't really matter
>> though - the docs just say that it is an "accurate floating point
>> sum", so the precise algorithm is an implementation detail.
>
> The docs are quite misleading there, it is not accurate without further
> qualifications.
>
> That said, fucking pathetic, when Dunning-Kruger is a compliment...
>
> *Plonk*
>
> Julio

https://en.wikipedia.org/wiki/IEEE_754
--
https://mail.python.org/mailman/listinfo/python-list


Re: The sqlite3 timestamp conversion between unixepoch and localtime

2021-09-03 Thread Bob Martin
On 2 Sep 2021 at 20:25:27, Alan Gauld  wrote:
> On 02/09/2021 20:11, MRAB wrote:
>
>>> In one of them (I can't recall which is which) they change on the 4th
>>> weekend of October/March in the other they change on the last weekend.
>>>
>>>
>> In the EU (and UK) it's the last Sunday in March/October.
>>
>> In the US it's second Sunday in March and the first Sunday in November.
>>
>> I know which one I find easier to remember!
>
> Interesting. I remember it as closer than that. The bugs we found were
> due to differences in the DST settings of the BIOS in the PCs. (They
> were deliberately all sourced from DELL but the EU PCs had a slightly
> different BIOS).
>
> The differences you cite should have thrown up issues every year.
> I must see if I can find my old log books...
>

ISTR that the USA changes were the same as the EU until a few years ago.

I remember thinking at the time it changed "why would they do that?"

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: on floating-point numbers

2021-09-03 Thread Christian Gollwitzer

On 02.09.21 at 21:02, Julio Di Egidio wrote:
> On Thursday, 2 September 2021 at 20:43:36 UTC+2, Chris Angelico wrote:
>> On Fri, Sep 3, 2021 at 4:29 AM Hope Rouselle  wrote:
>>>
>>> All I did was to take the first number, 7.23, and move it to the last
>>> position in the list. (So we have a violation of the commutativity of
>>> addition.)
>>
>> It's not about the commutativity of any particular pair of operands -
>> that's always guaranteed.
>
> Nope, that is rather *not* guaranteed, as I have quite explained up thread.


No, you haven't explained that. You linked to the famous Goldberg paper. 
Where in the paper does it say that operations on floats are not 
commutative?


I'd be surprised, because it is generally wrong.
Unless you have special numbers like NaN or signed zeros etc., a+b == b+a
and a*b == b*a hold for floats too. A quick check follows.
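
What the original example actually shows is a failure of associativity,
not commutativity -- a short demonstration:

a, b, c = 0.1, 0.2, 0.3
print(a + b == b + a)              # True: swapping one pair is safe
print((a + b) + c == a + (b + c))  # False: regrouping changes the result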


Christian
--
https://mail.python.org/mailman/listinfo/python-list


Re: on floating-point numbers

2021-09-03 Thread Roel Schroeven

On 2/09/2021 at 17:08, Hope Rouselle wrote:
> >>> ls = [7.23, 8.41, 6.15, 2.31, 7.73, 7.77]
> >>> sum(ls)
> 39.594
>
> >>> ls = [8.41, 6.15, 2.31, 7.73, 7.77, 7.23]
> >>> sum(ls)
> 39.61
>
> All I did was to take the first number, 7.23, and move it to the last
> position in the list.  (So we have a violation of the commutativity of
> addition.)
>
> Suppose these numbers are prices in dollar, never going beyond cents.
> Would it be safe to multiply each one of them by 100 and therefore work
> with cents only?
For working with monetary values, or any value that needs accurate 
correspondence to decimal values, best use Python's Decimal; see the 
documentation: https://docs.python.org/3.8/library/decimal.html


Example:

from decimal import Decimal as D
ls1 = [D('7.23'), D('8.41'), D('6.15'), D('2.31'), D('7.73'), D('7.77')]
ls2 = [D('8.41'), D('6.15'), D('2.31'), D('7.73'), D('7.77'), D('7.23')]
print(sum(ls1), sum(ls2))

Output:
39.60 39.60

(Note that I initialized the values from strings instead of numbers, to 
give Decimal the exact value without it first being converted to a float 
that doesn't necessarily correspond exactly to the decimal value; the 
contrast below shows why.)
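
A quick illustration of that difference:

from decimal import Decimal
print(Decimal('7.23'))  # 7.23 -- exactly the intended value
print(Decimal(7.23))    # 7.2300000000000004... -- the float's exact value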


--
"Your scientists were so preoccupied with whether they could, they didn't
stop to think if they should"
-- Dr. Ian Malcolm

--
https://mail.python.org/mailman/listinfo/python-list