Bug in 3.12.5

2024-09-20 Thread Martin Nilsson via Python-list
Dear Sirs !

The attached program doesn't work in 3.12.5, but it worked in 3.9.

Best Regards
Martin Nilsson
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: scipy.optimize.least_squares for more than one dimension?

2023-07-09 Thread Martin Schöön via Python-list
Den 2023-06-30 skrev Martin Schöön :
> Yesterday I wanted to move from optimize.leastsq to 
> least_squares. I have data depending on four variables
> and want to fit a function in four variables to this
> data. This works with leastsq but not with least_squares.
>
> Am I trying to do something least_squares is not capable
> of?
>
> Disclaimer: I was burning midnight oil...
>
Problem solved.

Yes, least_squares can, of course, handle multi-dimensional situations.

Me burning midnight oil was the problem. I have been tinkering a
bit with scipy.optimize.least_squares tonight. All simple examples
I tried worked regardless of number of dimensions. I went back to
my old code and found a couple of basic mistakes. Done.

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


scipy.optimize.least_squares for more than one dimension?

2023-06-30 Thread Martin Schöön via Python-list
Yesterday I wanted to move from optimize.leastsq to 
least_squares. I have data depending on four variables
and want to fit a function in four variables to this
data. This works with leastsq but not with least_squares.

Am I trying to do something least_squares is not capable
of?

Disclaimer: I was burning midnight oil...

TIA

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Problem with Matplotlib example

2023-04-14 Thread Martin Schöön
Den 2023-04-13 skrev MRAB :
> On 2023-04-13 19:41, Martin Schöön wrote:
>> Anyone had success running this example?
>> https://tinyurl.com/yhhyc9r
>> 

>> As far as I know I have an up-to-date matplotlib installed. Pip has
>> nothing more modern to offer me.
>> 
> All I can say is that it works for me!
>
> Python 3.10 and 3.11, matplotlib 3.6.1 and then 3.7.1 after updating it.
>
Thanks are due to both you and Thomas. Your replies put me on the right
scent. I soon learned what I should have known for ages: pip reporting
that there is no newer version of a package does not mean there is *no
newer* version. It means there is no newer version for the version of
Python I use. In my case I am on Python 3.7. At work, where I am on
Python 3.8, this matplotlib example works just fine.
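The selection rule can be sketched like this (the package metadata below is entirely made up for illustration; real pip reads the Requires-Python field of each release on PyPI):

```python
import sys

# Made-up metadata: version -> minimum (major, minor) Python it requires.
RELEASES = {"3.5.3": (3, 7), "3.6.3": (3, 8), "3.7.1": (3, 8)}

def visible_releases(releases, py=None):
    """Return the versions pip would offer to this interpreter."""
    py = py or sys.version_info[:2]
    return [v for v, min_py in releases.items() if py >= min_py]

# On Python 3.7 only "3.5.3" is offered; on 3.8 all three are.
```

So "no newer version" from pip really means "no newer version whose Requires-Python this interpreter satisfies".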

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Problem with Matplotlib example

2023-04-13 Thread Martin Schöön
Anyone had success running this example?
https://tinyurl.com/yhhyc9r

When I try I get this error:
"TypeError: __init__() got an unexpected keyword argument 'transform'"

This is for the line
"m = MarkerStyle(SUCESS_SYMBOLS[mood], transform=t)"

Yes, I know, I could dive into the documentation myself but I hope
some kind soul here will help out.

As far as I know I have an up-to-date matplotlib installed. Pip has
nothing more modern to offer me.

TIA

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: on the python paradox

2022-12-11 Thread Martin Di Paola

On Mon, Dec 05, 2022 at 10:37:39PM -0300, Sabrina Almodóvar wrote:

The Python Paradox
   Paul Graham
   August 2004

[SNIP]

Hence what, for lack of a better name, I'll call the Python paradox: 
if a company chooses to write its software in a comparatively 
esoteric language, they'll be able to hire better programmers, 
because they'll attract only those who cared enough to learn it. And 
for programmers the paradox is even more pronounced: the language to 
learn, if you want to get a good job, is a language that people 
don't learn merely to get a job.


[SNIP]


I don't think that an esoteric language leads to better programmers.

I know really good people who work mostly in assembly, which by today's
standards would be considered "esoteric".

They are really good in their field, but they write shitty code in
higher-level languages such as Python.

The same goes in the other direction: I saw Ruby programmers writing C
code and, trust me, it didn't result in good quality code.

I would be more inclined to think that a good programmer is not the one
that knows an esoteric language but the one that can jump from one
programming paradigm to another.

And when I say "jump" I mean that he/she can understand the problem to
solve, find the best tech stack to solve it and do it in an efficient
manner using that tech stack correctly.

It is in the "using that tech stack correctly" where some programmers
that "think" they know languages A, B and C get it wrong.

Just writing code that "compiles" and "it does not immediately crash" is
not enough to say that "you are using the tech stack correctly".


On Wed, Dec 07, 2022 at 10:58:09AM -0500, David Lowry-Duda wrote:

On Mon, Dec 05, 2022 at 10:37:39PM -0300, Sabrina Almodóvar wrote:

The Python Paradox
   Paul Graham
   August 2004

[SNIP]

Hence what, for lack of a better name, I'll call the Python paradox: 
if a company chooses to write its software in a comparatively 
esoteric language, they'll be able to hire better programmers, 
because they'll attract only those who cared enough to learn it. And 
for programmers the paradox is even more pronounced: the language to 
learn, if you want to get a good job, is a language that people 
don't learn merely to get a job.


[SNIP]


I wonder what the appropriately esoteric language is today?

We can sort of think of Go/Rust as esoteric versions of C/C++. But 
what would be the esoteric Python?


Perhaps Julia? I don't know of any large software projects happening 
in the Julia world that aren't essentially scientific computing libraries 
(but this is because *I* work mostly with scientific computing 
libraries and sometimes live under a rock).


- DLD
--
https://mail.python.org/mailman/listinfo/python-list



Re: Which architectures to support in a CI like Travis?

2022-09-19 Thread Martin Di Paola

It would depend on the project.

In the cryptanalysis tool that I'm developing, "cryptonita", I just
manipulate bytes. Nothing that could depend on the distro, so my CI picks
one OS and runs the tests there.

Project: https://github.com/cryptonitas/cryptonita
CI: 
https://github.com/cryptonitas/cryptonita/blob/master/.github/workflows/test.yml

On the other extreme I have "byexample", a tool that takes the examples
in your docs and runs them as automated tests. It supports different
languages (Python, Ruby, Java, ...) and it works using the interpreter
of each language.

And there is the challenge for its CI. Because byexample highly depends
on the version of the interpreter, the CI config tries a lot of
different scenarios.

Project: https://byexamples.github.io/
CI: 
https://github.com/byexamples/byexample/blob/master/.github/workflows/test.yml

I don't test different distros, but I should for some cases where I
suspect there could be differences in how some interpreters behave.

And about the OS: I'm planning to add MacOS to the CI because I know that
some users had problems on that platform in the past because of how
byexample interacts with the terminal.

So, two projects, both in Python, but with two totally different
dependencies on the environment where they run, so their CIs are
different.

The two examples are using GitHub Actions, but the same applies to
TravisCI.

Thanks,
Martin.


On Sun, Sep 18, 2022 at 09:46:45AM +, c.bu...@posteo.jp wrote:

Hello,

I am using TravisCI for my project on GitHub. The project is packaged
for Debian, Ubuntu, Arch and several other distros.

All these distros support multiple architectures, and they have their own
test machines to make sure that all packages work on all archs.

On my side (upstream) I wonder which archs I should "support" in my
TravisCI configuration. I want to speed up Travis, and I want to save
energy and carbon.

I suspect that my Python code should run on pretty much every platform that
offers a Python interpreter. Of course there are edge cases, but they
would be caught by the distros' own test environments.

So, is there a good and objective reason to support multiple (and maybe
exotic) platforms in a CI pipeline upstream?

Kind
Christian
--
https://mail.python.org/mailman/listinfo/python-list



Re: Simple TCP proxy

2022-07-27 Thread Martin Di Paola



On Wed, Jul 27, 2022 at 08:32:31PM +0200, Morten W. Petersen wrote:

You're thinking of the backlog argument of listen?


From my understanding, yes: when you set up the "accepter" socket (the
one that you use to listen for and accept new connections), you can define
the length of the queue of incoming connections that are not accepted
yet.

This is the equivalent of your SimpleQueue, which basically puts a
limit on how many incoming connections are "accepted" to do real work.

With skt.listen(N) the incoming connections are put on hold by the OS,
while in your implementation they are formally accepted but not
allowed to do any meaningful work: they are put on the SimpleQueue and
only when they are popped will they work (send/recv data).

The difference, then, between the OS and your impl is minimal. The only
case I can think of is that on the clients' side there may exist a
timeout for the acceptance of the connection, so your proxy server will
eagerly accept these connections and no timeout is possible(*)

On a side note, your implementation is too thread-naive: it uses plain
Python lists, integers and boolean variables, which are not thread-safe.
It is a matter of time until your server starts behaving weirdly.

One option is to use thread-safe objects. I'd encourage you to read
about thread-safety in general and then about which sync mechanisms Python
offers.

Another option is to remove the SimpleQueue and the background function
that allows a connection to be "active".

If you think about it, the handlers are 99% independent, except that you
want to allow only N of them to progress (establish and forward the
connection) and, when a handler finishes, another handler "waiting" is
activated, "in a queue fashion" as you said.

If you allow me to not have a strict queue discipline here, you can achieve
the same results coordinating the handlers using semaphores. Once again,
take this email as a starting point for your own research.

On a second side note, the use of handlers and threads is inefficient:
while you have N active handlers sending/receiving data, you are eagerly
accepting new connections, so you will have many more handlers created
and (if I'm not wrong), each will be a thread.

A more efficient solution could be

1) accept as many connections as you can, saving the socket (not the
handler) in the thread-safe queue.
2) have N threads in the background popping from the queue a socket and
then doing the send/recv stuff. When the thread is done, the thread
closes the socket and pops another from the queue.

So the queue length will be the count of accepted connections, but at any
moment your proxy will not activate (forward) more than N connections.

This idea is thread-safe, simpler, efficient and has the queue
discipline (I leave aside its usefulness).

I encourage you to take time to read about the different things
mentioned, as concurrency and thread-related stuff is not easy to
master.

Thanks,
Martin.

(*) make your proxy server slow enough and yes, you will get timeouts
anyways.
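A minimal sketch of points 1) and 2) above (all names invented here; the "work" is a one-shot echo, whereas a real proxy would forward the bytes to the backend server):

```python
import queue
import socket
import threading

N_WORKERS = 4                  # the "N" above (25 in the STP example)
pending = queue.Queue()        # thread-safe queue of accepted sockets

def worker():
    # 2) pop a socket, do the send/recv work, close it, pop another.
    while True:
        conn = pending.get()
        if conn is None:       # sentinel: stop this worker
            break
        try:
            data = conn.recv(1024)   # placeholder for the real forwarding
            if data:
                conn.sendall(data)
        finally:
            conn.close()

def start_proxy(host="127.0.0.1"):
    # 1) accept as many connections as we can, saving only the socket.
    srv = socket.create_server((host, 0))    # port 0: pick a free port
    for _ in range(N_WORKERS):
        threading.Thread(target=worker, daemon=True).start()

    def acceptor():
        while True:
            try:
                conn, _ = srv.accept()
            except OSError:    # server socket closed: stop accepting
                break
            pending.put(conn)

    threading.Thread(target=acceptor, daemon=True).start()
    return srv
```

Only N_WORKERS connections are serviced at any moment; the rest sit in the queue as bare sockets, which is much cheaper than one thread per accepted connection.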



Well, STP will accept all connections, but can limit how many of the
accepted connections that are active at any given time.

So when I bombed it with hundreds of almost simultaneous connections, all
of them were accepted, but only 25 were actively sending and receiving data
at any given time. First come, first served.

Regards,

Morten

On Wed, Jul 27, 2022 at 8:00 PM Chris Angelico  wrote:


On Thu, 28 Jul 2022 at 02:15, Morten W. Petersen 
wrote:
>
> Hi.
>
> I'd like to share with you a recent project, which is a simple TCP proxy
> that can stand in front of a TCP server of some sort, queueing requests
and
> then allowing n number of connections to pass through at a time:

How's this different from what the networking subsystem already does?
When you listen, you can set a queue length. Can you elaborate?

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list




--
I am https://leavingnorway.info
Videos at https://www.youtube.com/user/TheBlogologue
Twittering at http://twitter.com/blogologue
Blogging at http://blogologue.com
Playing music at https://soundcloud.com/morten-w-petersen
Also playing music and podcasting here:
http://www.mixcloud.com/morten-w-petersen/
On Google+ here https://plus.google.com/107781930037068750156
On Instagram at https://instagram.com/morphexx/
--
https://mail.python.org/mailman/listinfo/python-list



Re: exec() and locals() puzzle

2022-07-20 Thread Martin Di Paola

I did a few tests

# test 1
def f():
    i = 1
    print(locals())
    exec('y = i; print(y); print(locals())')
    print(locals())
    a = eval('y')
    print(locals())
    u = a
    print(u)
f()

{'i': 1}
1
{'i': 1, 'y': 1}
{'i': 1, 'y': 1}
{'i': 1, 'y': 1, 'a': 1}
1

# test 2
def f():
    i = 1
    print(locals())
    exec('y = i; print(y); print(locals())')
    print(locals())
    a = eval('y')
    print(locals())
    y = a
    print(y)
f()

{'i': 1}
1
{'i': 1, 'y': 1}
{'i': 1}
Traceback (most recent call last):
NameError: name 'y' is not defined


So tests 1 and 2 are the same except that the variable 'y' is
absent/present in f's code.

When it is not present, exec() modifies f's locals and adds a 'y'
to them, but when the variable 'y' is present in the code (even if not
present in locals()), exec() does not add any 'y' (and the next
eval() then fails).

The interesting part is that if the 'y' variable is in f's code
*and* it is defined in f's locals, no error occurs, but once again
exec() does not modify f's locals:

# test 3
def f():
    i = 1
    y = 42
    print(locals())
    exec('y = i; print(y); print(locals())')
    print(locals())
    a = eval('y')
    print(locals())
    y = a
    print(y)
f()

{'i': 1, 'y': 42}
1
{'i': 1, 'y': 1}
{'i': 1, 'y': 42}
{'i': 1, 'y': 42, 'a': 42}
42

Why does this happen? No idea.

It may be related to this:

# test 4
def f():
    i = 1
    print(locals())
    exec('y = i; print(y); print(locals())')
    print(locals())
    print(y)
f()

Traceback (most recent call last):
NameError: name 'y' is not defined

Although exec() adds the 'y' variable to f's locals, the variable is not
accessible/visible from f's code.

So, a few observations (by no means is this how the VM works):

1) each function has a set of variables defined by the code (let's call
these "code-defined locals" or "cdef-locals").
2) each function also has a set of "runtime locals" accessible from
locals().
3) exec() can add variables to the locals() (runtime) set but it cannot
add any to the cdef-locals.
4) locals() may be a superset of the cdef-locals (but entries in the
cdef-locals whose value is still undefined are not shown in locals()).
5) due to rule 4, exec() cannot add a variable to locals() if it is
already present in the cdef-locals.
6) when eval() runs, it uses the locals() set for lookups.

Perhaps rule 5 is to prevent exec() from modifying an arbitrary variable
of the caller...

Anyways, nice food for our brains.
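One way to sidestep the puzzle entirely (a behaviour-safe route, not a fix for the snippets above) is to give exec() and eval() an explicit namespace dict instead of letting them touch the function's locals:

```python
def f():
    ns = {'i': 1}              # an explicit, ordinary dict
    exec('y = i * 2', ns)      # exec writes y into ns, not into f's locals
    return eval('y', ns)       # eval reads from the same dict

print(f())                     # prints: 2
```

With an explicit dict there is no interaction with the function's compiled ("cdef") locals at all, so the results are predictable regardless of which names appear in f's code.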

On Wed, Jul 20, 2022 at 04:56:02PM +, george trojan wrote:

I wish I could understand the following behaviour:

1. This works as I expect it to work:

def f():
   i = 1
   print(locals())
   exec('y = i; print(y); print(locals())')
   print(locals())
   exec('y *= 2')
   print('ok:', eval('y'))
f()

{'i': 1}
1
{'i': 1, 'y': 1}
{'i': 1, 'y': 1}
ok: 2

2. I can access the value of y with eval() too:

def f():
   i = 1
   print(locals())
   exec('y = i; print(y); print(locals())')
   print(locals())
   u = eval('y')
   print(u)
f()

{'i': 1}
1
{'i': 1, 'y': 1}
{'i': 1, 'y': 1}
1

3. When I change variable name u -> y, somehow locals() in the body of
the function loses an entry:

def f():
   i = 1
   print(locals())
   exec('y = i; print(y); print(locals())')
   print(locals())
   y = eval('y')
   print(y)
f()

{'i': 1}
1
{'i': 1, 'y': 1}
{'i': 1}

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
Input In [1], in <module>
      7 print(y)
      8 # y = eval('y')
      9 # print('ok:', eval('y'))
---> 10 f()

Input In [1], in f()
      4 exec('y = i; print(y); print(locals())')
      5 print(locals())
----> 6 y = eval('y')
      7 print(y)

File <string>:1, in <module>

NameError: name 'y' is not defined

Another thing: within the first exec(), the print order seems
reversed. What is going on?

BTW, I am using python 3.8.13.

George
--
https://mail.python.org/mailman/listinfo/python-list



Re: list indices must be integers or slices, not str

2022-07-20 Thread Martin Di Paola

Off-topic:

If you want a pure-Python but definitely more hacky implementation,
you can play around with inspect.stack() and get the variables from
the outer frames.


# code:
x = 32
y = 42
printf("Hello x={x}, y={y}", x=27)

# output:
Hello x=27, y=42


The implementation of printf() was never released on PyPI (I guess I never
saw it as more than a challenge).

But the implementation is quite simple (I did a post about it):
https://book-of-gehn.github.io/articles/2021/07/11/Home-Made-Python-F-String.html
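The gist of the trick (a hand-made sketch along the lines of the post, not the code from it): climb the stack with inspect, merge the caller's variables, and let explicit keyword arguments override them:

```python
import inspect

def printf(fmt, **overrides):
    # Hacky on purpose: read the variables of the caller's stack frame,
    # then let explicit keyword arguments take precedence.
    caller = inspect.stack()[1].frame
    namespace = {**caller.f_globals, **caller.f_locals, **overrides}
    print(fmt.format(**namespace))

x = 32
y = 42
printf("Hello x={x}, y={y}", x=27)   # prints: Hello x=27, y=42
```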

Thanks,
Martin.
On Wed, Jul 20, 2022 at 10:46:35AM -0600, Mats Wichmann wrote:

On 7/20/22 05:04, Frank Millman wrote:


I think the preferred style these days is f'{x[-1]}' which works."

Unfortunately the 'f' option does not work for me in this case, as I am
using a string object, not a string literal.


For that you could consider

https://pypi.org/project/f-yeah/

(straying a bit off thread subject by now, admittedly)
--
https://mail.python.org/mailman/listinfo/python-list



Re: Simple message passing system and thread safe message queue

2022-07-18 Thread Martin Di Paola

Hi, I couldn't read your posts; every time I try to open one I'm
redirected to an index page.

I took a look at the smps project and, as far as I understand, it is an
SSL client that sends messages to a server that implements a store of
messages.

I would suggest removing the sleep() calls and, as a challenge for you,
making the server single-threaded using asyncio and friends.
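As a rough starting point for that challenge (a sketch only: the names and the one-line protocol are invented here, not taken from smps, and TLS is left out; asyncio.start_server accepts an ssl= argument for that), a single-threaded message store could look like:

```python
import asyncio

MESSAGES = []                        # the in-memory "store of messages"

async def handle(reader, writer):
    # One message per connection: read a line, store it, acknowledge.
    data = await reader.readline()
    if data:
        MESSAGES.append(data.decode().rstrip("\n"))
        writer.write(b"OK\n")
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main(host="127.0.0.1", port=8888):
    server = await asyncio.start_server(handle, host, port)
    async with server:
        await server.serve_forever()

# To run the server: asyncio.run(main())
```

No threads and no sleep() calls: the event loop interleaves the clients in a single thread.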

Thanks,
Martin.

On Mon, Jul 18, 2022 at 06:31:28PM +0200, Morten W. Petersen wrote:

Hi.

I wrote a couple of blog posts as I had to create a message passing system,
and these posts are here:

http://blogologue.com/search?category=1658082823X26

Any comments or suggestions?

Regards,

Morten

--
I am https://leavingnorway.info
Videos at https://www.youtube.com/user/TheBlogologue
Twittering at http://twitter.com/blogologue
Blogging at http://blogologue.com
Playing music at https://soundcloud.com/morten-w-petersen
Also playing music and podcasting here:
http://www.mixcloud.com/morten-w-petersen/
On Google+ here https://plus.google.com/107781930037068750156
On Instagram at https://instagram.com/morphexx/
--
https://mail.python.org/mailman/listinfo/python-list



Re: TENGO PROBLEMAS AL INSTALAR PYTHON

2022-07-08 Thread Martin Di Paola




On Fri, Jul 08, 2022 at 04:15:35PM -0600, Mats Wichmann wrote:


In addition... there is no "Python 10.0" ...



Mmm, perhaps that's the problem :D

@Angie Odette Lima Banguera, we are going to need some traceback or
something to guide you. You can also search the internet (YouTube);
there are several tutorials covering the first steps.

Good luck!
--
https://mail.python.org/mailman/listinfo/python-list


Re: Filtering XArray Datasets?

2022-06-07 Thread Martin Di Paola

Hi, I'm not an expert on this so this is an educated guess:

You are calling drop=True and I presume that you want to delete the rows
of your dataset that satisfy a condition.

That's a problem.

If the underlying original data is stored in a dense contiguous array,
deleting chunks of it will leave it with "holes". Unless the backend
supports sparse implementations, it is likely that it will go for the
easiest solution: copy the non-deleted rows in a new array.

I don't know the details of your particular problem, but most of the time
the trick is in not letting the whole dataset be loaded.

See if, instead of loading the whole dataset and then performing the
filtering/selection, you can do the filtering during the loading.

An alternative is to do the filtering "before" doing the real work. For
example, if you have a CSV of >100GB you could write a program X that
copies the dataset into a new CSV, applying the filter on the way. Then
you load the filtered dataset and do the real work in a program Y.

I explicitly named X and Y because, in principle, they are 2 different
programs, possibly using 2 different technologies.
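A sketch of program X, under the assumption that the data lives in a CSV and the filter is the longitude example from the original post (the column name and bounds are made up):

```python
import csv

def filter_csv(src, dst, column="longitude", lo=50.0, hi=60.0):
    # Stream one row at a time so memory use stays constant no matter
    # how large the source file is.
    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if lo < float(row[column]) < hi:
                writer.writerow(row)
```

Program Y then loads the (much smaller) dst file and never sees the rows that would have been dropped.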

I hope this email gives you hints on how to fix it. In my last
project I had a similar problem and I ended up doing the filtering in
Python and the "real work" in Julia.

Thanks!
Martin.


On Mon, Jun 06, 2022 at 02:28:41PM -0800, Israel Brewster wrote:

I have some large (>100GB) datasets loaded into memory in a two-dimensional (X 
and Y) NumPy-array-backed XArray dataset. At one point I want to filter the data 
using a boolean array created by performing a boolean operation on the dataset; 
that is, I want to filter the dataset for all points with a longitude value 
greater than, say, 50 and less than 60, just to give an example (hopefully that 
all makes sense?).

Currently I am doing this by creating a boolean array (data[‘latitude’]>50, for 
example), and then applying that boolean array to the dataset using .where(), with 
drop=True. This appears to work, but has two issues:

1) It’s slow. On my large datasets, applying where can take several minutes 
(vs. just seconds to use a boolean array to index a similarly sized numpy array)
2) It uses large amounts of memory (which is REALLY a problem when the array is 
already using 100GB+)

What it looks like is that values corresponding to True in the boolean array 
are copied to a new XArray object, thereby potentially doubling memory usage 
until it is complete, at which point the original object can be dropped, 
thereby freeing the memory.

Is there any solution for these issues? Some way to do an in-place filtering?
---
Israel Brewster
Software Engineer
Alaska Volcano Observatory
Geophysical Institute - UAF
2156 Koyukuk Drive
Fairbanks AK 99775-7320
Work: 907-474-5172
cell:  907-328-9145

--
https://mail.python.org/mailman/listinfo/python-list



Re: Automatic Gain Control in Python?

2022-05-29 Thread Martin Schöön
Den 2022-05-29 skrev Christian Gollwitzer :
> Am 29.05.22 um 00:45 schrieb Stefan Ram:
>> "Steve GS"  writes:
>>> Subject: Automatic Gain Control in Python?
>> 
>>Automatic Gain Control in Python is trivial. You have a list
>>of samples and normalize them, i.e., divide by max. Slightly
>>simplified
>> 
>> [ s/max( samples )for s in samples ]
>> 
>>(where sample values are normalized to the range -1,+1.)
>
> No, that's not it. Loudness is perceived in a different way, the crudest 



> music is incredibly loud, and you might have wondered, how they do that.
>
> Google for "Loudness war" and "dynamic range compression" if you want to 
> understand it in detail.
>
I have no suggestions for solving the problem but it struck me that
you might be interested in looking up a standard called EBU R128.
Start with youtube and you find lectures/demos.

Python connection; There is a Python package called ffmpeg-normalize
which contains an implementation of EBU R128. AFAIK it works on files,
not streaming audio.

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Request for assistance (hopefully not OT)

2022-05-17 Thread Martin Di Paola

Try to reinstall python and only python, and if you succeed, then try to
reinstall the other tools.

For this, use "apt-get" instead of "apt":

$ sudo apt-get reinstall python3

When a system is heavily broken, be extra careful and read the output of
the programs. If "apt-get" says that in order to reinstall python it will
have to remove half of your computer, abort. Better to ask than be sorry.

Best of luck.

Martin.

On Tue, May 17, 2022 at 06:20:39AM -0500, o1bigtenor wrote:

Greetings

I was having space issues in my /usr directory so I deleted some
programs thinking that the space taken was more an issue than having
older versions of the program.

So one of the programs I deleted (using rm -r) was python3.9.
Python3.10 was already installed so I thought (naively!!!) that things
should continue working.
(Python 3.6, 3.7 and 3.8 were also part of this cleanup.)

So now I have problems.

Following is the system barf that I get when I run '# apt upgrade'.

What can I do to correct this self-inflicted problem?

(running on debian testing 5.17

Setting up python2.7-minimal (2.7.18-13.1) ...
Could not find platform independent libraries 
Could not find platform dependent libraries 
Consider setting $PYTHONHOME to [:]
/usr/bin/python2.7: can't open file
'/usr/lib/python2.7/py_compile.py': [Errno 2] No such file or
directory
dpkg: error processing package python2.7-minimal (--configure):
installed python2.7-minimal package post-installation script
subprocess returned error exit status 2
Setting up python3.9-minimal (3.9.12-1) ...
update-binfmts: warning: /usr/share/binfmts/python3.9: no executable
/usr/bin/python3.9 found, but continuing anyway as you request
/var/lib/dpkg/info/python3.9-minimal.postinst: 51: /usr/bin/python3.9: not found
dpkg: error processing package python3.9-minimal (--configure):
installed python3.9-minimal package post-installation script
subprocess returned error exit status 127
dpkg: dependency problems prevent configuration of python3.9:
python3.9 depends on python3.9-minimal (= 3.9.12-1); however:
 Package python3.9-minimal is not configured yet.

dpkg: error processing package python3.9 (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of python2.7:
python2.7 depends on python2.7-minimal (= 2.7.18-13.1); however:
 Package python2.7-minimal is not configured yet.

dpkg: error processing package python2.7 (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of python3.9-dev:
python3.9-dev depends on python3.9 (= 3.9.12-1); however:
 Package python3.9 is not configured yet.

dpkg: error processing package python3.9-dev (--configure):
dependency problems - leaving unconfigured
. . .
Errors were encountered while processing:
python2.7-minimal
python3.9-minimal
python3.9
python2.7
python3.9-dev
--
https://mail.python.org/mailman/listinfo/python-list



Re: Changing calling sequence

2022-05-13 Thread Martin Di Paola

You probably want something like overloading/multiple dispatch. A quick
search on PyPI yields a 'multipledispatch' package.

I've never used it, however.
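Sticking to the stdlib, functools.singledispatch also covers the case from this thread, since the two calling styles differ in the type of the first argument (the function name and return values here are invented for illustration):

```python
from functools import singledispatch
import datetime

@singledispatch
def temps_one_day(year, month, day):
    # Default implementation: called with plain year/month/day numbers.
    return ("by-parts", year, month, day)

@temps_one_day.register
def _(when: datetime.date):
    # Alternative entry point: called with a single datetime.date.
    return ("by-date", when.year, when.month, when.day)
```

Dispatch happens on the first argument's type only, which is enough here: an int selects the default, a datetime.date selects the registered overload.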

On Wed, May 11, 2022 at 08:36:26AM -0700, Tobiah wrote:

On 5/11/22 06:33, Michael F. Stemper wrote:

I have a function that I use to retrieve daily data from a
home-brew database. Its calling sequence is;

def TempsOneDay( year, month, date ):

After using it (and its friends) for a few years, I've come to
realize that there are times where it would be advantageous to
invoke it with a datetime.date as its single argument.


You could just use all keyword args:

def TempsOneDay(**kwargs):

    if 'date' in kwargs:
        handle_datetime(kwargs['date'])
    elif 'year' in kwargs and 'month' in kwargs and 'day' in kwargs:
        handle_args(kwargs['year'], kwargs['month'], kwargs['day'])
    else:
        raise Exception("Bad keyword args")

TempsOneDay(date=datetime.datetime.now())

TempsOneDay(year=2022, month=11, day=30)

--
https://mail.python.org/mailman/listinfo/python-list



Re: Accuracy of multiprocessing.Queue.qsize before any Queue.get invocations?

2022-05-13 Thread Martin Di Paola

If the queue was not shared with any other process, I would guess that its
size is reliable.

However, a plain counter could be much simpler/safer.

The developer of multiprocessing.Queue implemented
qsize() thinking about how to share the size and maintain a reasonable
consistency between processes.
He/she probably didn't care how well it works in a single-process
scenario, as this is a very special case.

Thanks,
Martin.


On Thu, May 12, 2022 at 06:07:02PM -0500, Tim Chase wrote:

The documentation says[1]


Return the approximate size of the queue. Because of
multithreading/multiprocessing semantics, this number is not
reliable.


Are there any circumstances under which it *is* reliable?  Most
germane, if I've added a bunch of items to the Queue, but not yet
launched any processes removing those items from the Queue, does
Queue.qsize accurately (and reliably) reflect the number of items in
the queue?

 q = Queue()
 for fname in os.listdir():
   q.put(fname)
 file_count = q.qsize() # is this reliable?
 # since this hasn't yet started fiddling with it
 for _ in range(os.cpu_count()):
   Process(target=myfunc, args=(q, arg2, arg3)).start()

I'm currently tracking the count as I add them to my Queue,

 file_count = 0
 for fname in os.listdir():
   q.put(fname)
   file_count += 1

but if .qsize is reliably accurate before anything has a chance to
.get data from it, I'd prefer to tidy the code by removing the
redunant counting code if I can.

I'm just not sure what circumstances the "this number is not
reliable" holds.  I get that things might be asynchronously
added/removed once processes are running, but is there anything that
would cause unreliability *before* other processes/consumers run?

Thanks,

-tkc

[1]
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Queue.qsize





--
https://mail.python.org/mailman/listinfo/python-list



Re: windows 11 what is wrong?

2022-04-27 Thread Lars Martin Hambro
The repair passed, but pip3 and pip did not find pip.exe, and modules are missing.

Lars Martin hambro


Fra: Lars Martin Hambro 
Sendt: onsdag 27. april 2022, 21:31
Til: python-list@python.org 
Emne: windows 11 what is wrong?

Lars Martin Hambro


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Could frozendict or frozenmap be of some use for PEP 683 (Immortal objects)?

2022-03-09 Thread Martin Di Paola

I perhaps didn't understand the PEP completely, but I think that the goal
of marking some objects as immortal is to remove the refcount from them.

For immutable objects that would make them truly immutable.

However, I don't think that immortality could be applied to every
immutable object by default.

Think of immutable strings (str). What would happen with a program
that does heavy parsing? I imagine that it will generate thousands of
little strings. If those are immortal, the program will fill its memory
very quickly, as the GC will not reclaim their memory.

The same could happen with any frozenfoo object.

Leaving immortality to only a few objects, like True and
None, makes more sense, as they are few (bounded, if you want).

On Wed, Mar 09, 2022 at 09:16:00PM +0100, Marco Sulla wrote:

As title. dict can't be an immortal object, but hashable frozendict
and frozenmap can. I think this can increase their usefulness.

Another advantage: frozen dataclass will be really immutable if they
could use a frozen(dict|map) instead of a dict as __dict__
--
https://mail.python.org/mailman/listinfo/python-list


Re: Execute in a multiprocessing child dynamic code loaded by the parent process

2022-03-08 Thread Martin Di Paola

Then, you must put the initialization (dynamically loading the modules)
into the function executed in the foreign process.

You could wrap the payload function into a class instance to achieve this.
In the foreign process, you call the instance which first performs
the initialization and then executes the payload.


That's what I have in mind: loading the modules first, and then unpickle
and call the real target function.
--
https://mail.python.org/mailman/listinfo/python-list


Re: Execute in a multiprocessing child dynamic code loaded by the parent process

2022-03-07 Thread Martin Di Paola

I understand that yes, pickle.loads() imports any necessary module but
only if it can be found in sys.path (like in any "import" statement).

Dynamic code loaded from a plugin (which we presume it is *not* in
sys.path) will not be loaded.

Quick check. Run in one console the following:

import multiprocessing
import multiprocessing.reduction

import pickle
pickle.dumps(multiprocessing.reduction.ForkingPickler)


In a separated Python console run the following:

import pickle
import sys

'multiprocessing' in sys.modules
False

pickle.loads()

'multiprocessing' in sys.modules
True

So the last check proves that pickle.loads imports any necessary module.
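The same behaviour can be checked in a single process by dropping a module from sys.modules before unpickling. A minimal sketch using the stdlib json module (any importable module would do):

```python
import pickle
import sys
import json

payload = pickle.dumps(json.JSONDecoder)

del sys.modules['json']   # simulate a process that never imported it
assert 'json' not in sys.modules

obj = pickle.loads(payload)   # unpickling re-imports 'json' by name
assert 'json' in sys.modules
assert obj.__name__ == 'JSONDecoder'
```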

Martin.

On Mon, Mar 07, 2022 at 08:28:15AM +, Barry wrote:




On 7 Mar 2022, at 02:33, Martin Di Paola  wrote:

Yes but I think that unpickle (pickle.loads()) does that plus
importing any module needed


Are you sure that unpickle will import code? I thought it did not do that.

Barry

--
https://mail.python.org/mailman/listinfo/python-list


Re: Execute in a multiprocessing child dynamic code loaded by the parent process

2022-03-06 Thread Martin Di Paola





Yeup, that would be my first choice but the catch is that "sayhi" may
not be a function of the given module. It could be a static method of
some class or any other callable.


Ah, fair. Are you able to define it by a "path", where each step in
the path is a getattr() call?


Yes but I think that unpickle (pickle.loads()) does that plus
importing any module needed
in the path which it is handy because I can preload the plugins
(modules) before the unpickle but the path may contain others
more-standard modules as well.

Something like "myplugin.re.match". unpickle should import the 're' module
automatically while it is loading the function "match".
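A sketch of how such a lookup could work by hand; resolve() is a hypothetical helper (not part of the plugin engine) that imports the longest importable prefix and then walks the remaining names with getattr(), roughly what pickle does internally:

```python
import importlib

def resolve(path):
    """Resolve a dotted path like 'os.path.join' to an object."""
    parts = path.split('.')
    # try the longest importable prefix first
    for i in range(len(parts), 0, -1):
        try:
            obj = importlib.import_module('.'.join(parts[:i]))
        except ImportError:
            continue
        # walk the remaining attributes (classes, methods, callables)
        for attr in parts[i:]:
            obj = getattr(obj, attr)
        return obj
    raise ImportError(f"cannot resolve {path!r}")
```

For example, resolve('os.path.join') returns the same object as os.path.join.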


Fair. I guess, then, that the best thing to do is to preload the
modules, then unpickle. So, basically what you already have, but with
more caveats.


Yes, this will not be transparent for the user, just trying to minimize
the changes needed.

And it will require some documentation for those caveats. And tests.

Thanks for the brainstorming!
Martin.
--
https://mail.python.org/mailman/listinfo/python-list


Re: Execute in a multiprocessing child dynamic code loaded by the parent process

2022-03-06 Thread Martin Di Paola





I'm not so sure about that. The author of the plugin knows they're
writing code that will be dynamically loaded, and can therefore
expect the kind of problem they're having. It could be argued that
it's their responsibility to ensure that all the needed code is loaded
into the subprocess.


Yes but I try to always make my libs/programs as usable as
possible. "Ergonomic" would be the word.

In the case of the plugin-engine I'm trying to hide any side-effect or
unexpected behaviour of the engine so the developer of the plugin
does not have to take that into account.

I agree that if the developer uses multiprocessing he/she needs to know
its implications. But if I can "smooth" any rough corner, I will try to
do it.

For example, the main project (developed by me) uses threads for
concurrency. It would be simpler to load the plugins and instantiate
them *once* and ask the plugins developers to take care of any
race condition (RC) within their implementation.

Because the plugins would be instantiated *once*, it is almost guaranteed
that they would suffer from race conditions and they would require
some sort of locking.

This is quite risky: you may forget to protect something and you will
end up with a RC and/or you may put the lock in the wrong place and the
whole thing will not work concurrently.

My decision back then was to instantiate each plugin N+1 times: once in
the main thread and then once per worker thread.

With this, no single plugin instance will be shared so there is no risk
of RC and no need for locking. (Yes, I know, the developer just needs to
use a module variable or a class attribute, which are shared, to get a RC,
but that is definitely not the default scenario).
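The one-instance-per-thread idea can be sketched like this (Plugin here is a hypothetical class, not byexample's actual API): each worker thread builds its own instance, so per-instance state needs no locks at all:

```python
import threading

class Plugin:
    """Hypothetical plugin with per-instance state only."""
    def __init__(self):
        self.count = 0

    def work(self):
        self.count += 1   # no lock needed: nothing is shared

results = []   # list.append is atomic under the GIL

def worker():
    plugin = Plugin()          # one fresh instance per thread
    for _ in range(10000):
        plugin.work()
    results.append(plugin.count)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# every thread counted to 10000 with no races and no locking
```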

If sharing is required I provide an object that minimizes the locking
needed.

It was much more complex for me at the design and at the implementation level
but I think that it is safer and requires less from the plugin
developer.

Reference: https://byexamples.github.io/byexample/contrib/concurrency-model
--
https://mail.python.org/mailman/listinfo/python-list


Re: Execute in a multiprocessing child dynamic code loaded by the parent process

2022-03-06 Thread Martin Di Paola

Try to use `fork` as "start method" (instead of "spawn").


Yes but no. Indeed with `fork` there is no need to pickle anything. In
particular the child process will be a copy of the parent so it will
have all the modules loaded, including the dynamic ones. Perfect.

The problem is that `fork` is the default only on Linux. It works on
MacOS but it may lead to crashes if the parent process is multithreaded
(and mine is!) and `fork` does not work on Windows.
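A minimal sketch of selecting the start method explicitly; note this is POSIX only ('fork' is unavailable on Windows, and since 3.8 macOS defaults to 'spawn' for the crash reason above):

```python
import multiprocessing as mp

def sayhi():
    print("hi from the child")

# 'fork' copies the parent: the child inherits every module already
# loaded, so nothing needs to be pickled
ctx = mp.get_context('fork')
p = ctx.Process(target=sayhi)
p.start()
p.join()
assert p.exitcode == 0
```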
--
https://mail.python.org/mailman/listinfo/python-list


Re: Execute in a multiprocessing child dynamic code loaded by the parent process

2022-03-06 Thread Martin Di Paola






The way you've described it, it's a hack. Allow me to slightly redescribe it.

modules = loader()
objs = init(modules)

def invoke(mod, func):
   # I'm assuming that the loader is smart enough to not load
   # a module that's already loaded. Alternatively, load just the
   # module you need, if that's a possibility.
   loader()
   target = getattr(modules[mod], func)
   target()

ch = multiprocessing.Process(target=invoke, args=("some_module", "sayhi"))
ch.start()



Yeup, that would be my first choice but the catch is that "sayhi" may
not be a function of the given module. It could be a static method of
some class or any other callable.

And doing the lookup by hand sounds complex.

The thing is that the use of multiprocessing is not something required by me
(by my plugin-engine), it was a decision of the developer of a particular
plugin so I don't have any control over that.

Using multiprocessing.reduction was a practical decision: if the user
wants to call something non-pickleable, it is not my fault, it is
multiprocessing's fault.

It *would* be my fault if multiprocessing.Process fails only because I'm
loading the code dynamically.


[...] I won't say "the" correct way, as there are other valid
ways, but there's certainly nothing wrong with this idea.


Do you have some in mind? Or may be a project that I could read?

Thanks!
Martin
--
https://mail.python.org/mailman/listinfo/python-list


Execute in a multiprocessing child dynamic code loaded by the parent process

2022-03-06 Thread Martin Di Paola

Hi everyone. I implemented time ago a small plugin engine to load code
dynamically.

So far it worked well but a few days ago a user told me that he wasn't
able to run a piece of code in parallel on MacOS.

He was using multiprocessing.Process to run the code and on MacOS, the
default start method for such a process is "spawn". My understanding
is that Python spawns an independent Python server (the child) which
receives what to execute (the target function) from the parent process.

In pseudo code this would be like:

modules = loader() # load the plugins (Python modules at the end)
objs = init(modules) # initialize the plugins

# One of the plugins wants to execute part of its code in parallel
# In MacOS this fails
ch = multiprocessing.Process(target=objs[0].sayhi)
ch.start()

The code fails with "ModuleNotFoundError: No module named 'foo'" (where
'foo' is the name of the loaded plugin).

This is because the parent program sends to the server (the child) what
needs to be executed (objs[0].sayhi) using pickle as the serialization
mechanism.

Because Python does not really serialize code but only enough
information to reload it, the serialization of "objs[0].sayhi" just
points to its module, "foo".

A module which cannot be imported by the child process.

So the question is, what would be the alternatives and workarounds?

I came up with a hack: use a trampoline() function to load the plugins
in the child before executing the target function.

In pseudo code it is:

modules = loader() # load the plugins (Python modules at the end)
objs = init(modules) # initialize the plugins

def trampoline(target_str):
   loader() # load the plugins now that we are in the child process

   # deserialize the target and call it
   target = reduction.loads(target_str)
   target()

# Serialize the real target function, but call in the child
# trampoline(). Because it can be accessed by the child it will
# not fail
target_str = reduction.dumps(objs[0].sayhi)
ch = multiprocessing.Process(target=trampoline, args=(target_str,))
ch.start()

The hack works but is this the correct way to do it?

The following gist has the minimal example code that triggers the issue
and its workaround:
https://gist.github.com/eldipa/d9b02875a13537e72fbce4cdb8e3f282

Thanks!
Martin.
--
https://mail.python.org/mailman/listinfo/python-list


Re: Saving/exporting plots from Jupyter-labs?

2022-02-21 Thread Martin Schöön
Den 2022-02-14 skrev Martin Schöön :
>
> Now I am trying out Jupyter-labs. I like it. I have two head-
> scratchers for now:
>

> 2) Why is Jupyter-labs hooking up to Google-analytics?

Now I can answer this one myself. In a tab I had been working my
way through a Holoviews tutorial. The tutorial demonstrated how
one can open the online Holoviews documentation inside a 
notebook cell. This is how google-analytics and twitter got
'involved'. Jupyter-labs was not guilty.

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: library not initialized (pygame)

2022-02-19 Thread Martin Di Paola

Could you share the traceback / error that you are seeing?


On Sun, May 02, 2021 at 03:23:21PM -0400, Quentin Bock wrote:

Code:
#imports and variables for game
import pygame
from pygame import mixer
running = True

#initializes pygame
pygame.init()

#creates the pygame window
screen = pygame.display.set_mode((1200, 800))

#Title and Icon of window
pygame.display.set_caption("3.02 Project")
icon = pygame.image.load('3.02 icon.png')
pygame.display.set_icon(icon)

#setting up font
pygame.font.init()
font = pygame.font.Font('C:\Windows\Fonts\OCRAEXT.ttf', 16)
font_x = 10
font_y = 40
items_picked_up = 0
items_left = 3

def main():
   global running, event

   #Game Loop
   while running:
   #sets screen color to black
   screen.fill((0, 0, 0))

   #checks if the user exits the window
   for event in pygame.event.get():
   if event.type == pygame.QUIT:
   running = False
   pygame.quit()

   def display_instruction(x, y):
   instructions = font.render("Each level contains 3 items you
must pick up in each room.", True, (255, 255, 255))
   instructions_2 = font.render("When you have picked up 3 items,
you will advance to the next room, there are 3.", True, (255, 255, 255))
   instructions_3 = font.render("You will be able to change the
direction you are looking in the room, this allows you to find different
objects.", True, (255, 255, 255))
   clear = font.render("Click to clear the text.", True, (255,
255, 255))
   screen.blit(instructions, (10, 40))
   screen.blit(instructions_2, (10, 60))
   screen.blit(instructions_3, (10, 80))
   screen.blit(clear, (10, 120))

   if event.type == pygame.MOUSEBUTTONDOWN:
   if event.type == pygame.MOUSEBUTTONUP:
   screen.fill(pygame.Color('black'))  # clears the screen text

   display_instruction(font_x, font_y)
   pygame.display.update()


main()

the error apparently comes from the first instructions variable saying
"library not initialized"; not sure why, it worked before but not now :/
--
https://mail.python.org/mailman/listinfo/python-list


Re: venv and executing other python programs

2022-02-17 Thread Martin Di Paola


That's correct. I tried to be systematic in the analysis so I tested all
the possibilities.


Your test results were unexpected for `python3 -m venv xxx`. By
default, virtual environments exclude the system and user site
packages. Including them should require the command-line argument
`--system-site-packages`. I'd check sys.path in the environment. Maybe
you have PYTHONPATH set.


Nope, I checked with "echo $PYTHONPATH" and nothing. I also checked 
"sys.path" within and without the environment:


Inside the environment:

['', '/usr/lib/python37.zip', '/usr/lib/python3.7', 
 '/usr/lib/python3.7/lib-dynload', 
 '/home/user/tmp/xxx/lib/python3.7/site-packages']


Outside the environment:

['', '/usr/lib/python37.zip', '/usr/lib/python3.7', 
 '/usr/lib/python3.7/lib-dynload', 
 '/home/user/.local/lib/python3.7/site-packages', 
 '/usr/local/lib/python3.7/dist-packages', 
 '/usr/lib/python3/dist-packages']


Indeed the "sys.path" inside the environment does not include system's 
site-packages.


I'll keep looking


A virtual environment is configured by a "pyvenv.cfg" file that's
either beside the executable or one directory up. Activating an
environment is a convenience, not a requirement.


Thanks, that makes a little more sense!
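The pyvenv.cfg mechanism can be checked from a script. A small sketch (POSIX paths assumed; the venv is created without pip to keep it fast, and the "demo-venv" name is arbitrary):

```python
import os
import subprocess
import sys
import tempfile

# create a throw-away environment; its interpreter finds its own
# site-packages through the pyvenv.cfg file, no "activate" needed
target = os.path.join(tempfile.mkdtemp(), "demo-venv")
subprocess.run([sys.executable, "-m", "venv", "--without-pip", target],
               check=True)

assert os.path.isfile(os.path.join(target, "pyvenv.cfg"))

vpython = os.path.join(target, "bin", "python")   # Scripts\ on Windows
out = subprocess.run([vpython, "-c", "import sys; print(sys.prefix)"],
                     capture_output=True, text=True, check=True)
# the venv's python reports the venv directory as its prefix
assert out.stdout.strip().endswith("demo-venv")
```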


--
https://mail.python.org/mailman/listinfo/python-list


Re: venv and executing other python programs

2022-02-15 Thread Martin Di Paola
If you have activated the venv then any script that uses /usr/bin/env
will use executables from the venv bin folder.


That's correct. I tried to be systematic in the analysis so I tested all 
the possibilities.


I avoid all these issues by not activating the venv. Python has code to know
how to use the venv libraries that are installed in it when invoked. It does not
depend on the activate script being run.


I had to test this myself because I didn't believe it but you are right.  
Without having the venv activated, if the shebang explicitly points to 
the python executable of the venv, the program will have access to the 
libs installed in the environment.


The same if I do:

/home/user/venv/bin/python foo.py

Thanks for the info!


Barry




Do you have a particular context where you are having trouble? Maybe there is
something else going on...

Thanks,
Martin.

On Tue, Feb 15, 2022 at 06:35:18AM +0100, Mirko via Python-list wrote:

Hi,

I have recently started using venv for my hobby-programming. There
is an annoying problem. Since venv modifies $PATH, python programs
that use the "#!/usr/bin/env python" variant of the hashbang often
fail since their additional modules aren't installed inside the venv.

How do people here deal with that?

Please note: I'm not interested in discussing whether the
env-variant is good or bad. ;-) It's not that *I* use it, but
several progs in /usr/bin/.

Thanks for your time.
--
https://mail.python.org/mailman/listinfo/python-list




--
https://mail.python.org/mailman/listinfo/python-list


Re: Saving/exporting plots from Jupyter-labs?

2022-02-15 Thread Martin Schöön
Den 2022-02-15 skrev Reto :
> On Mon, Feb 14, 2022 at 08:54:01PM +0000, Martin Schöön wrote:
>> 1) In notebooks I can save a plot by right-clicking on it and do
>> save image as. In Jupyter-lab that does not work and so far I
>> have not been able to figure out how to do it. Yes, I have looked
>> in the documentation.
>
> Shift + right click brings up the usual browser menu

Thanks,

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: venv and executing other python programs

2022-02-15 Thread Martin Di Paola

I did a few experiments in my machine. I created the following foo.py

  import pandas
  print("foo")

Now "pandas" is installed under Python 3 outside the venv. I can run it 
successfully calling "python3 foo.py".


If I add the shebang "#!/usr/bin/env python3" (notice the 3), I can also 
run it as "./foo.py".


Calling it as "python foo.py" or using the shebang "#!/usr/bin/env
python" does not work and it makes sense since "pandas" is installed
only for Python 3.

Next I create a virtual env with "python3 -m venv xxx" and activate it.

Once inside I can run foo.py in 4 different ways:

 - python foo.py
 - python3 foo.py
 - ./foo.py (using the shebang "#!/usr/bin/env python")
 - ./foo.py (using the shebang "#!/usr/bin/env python3")

Now all of that was with "pandas" installed outside of venv but not 
inside.


I did the same experiments with another library, "cryptonita", which is
not installed outside but inside, and I was able to execute it in
4 different ways too (inside the venv of course):


 - python foo.py
 - python3 foo.py
 - ./foo.py (using the shebang "#!/usr/bin/env python")
 - ./foo.py (using the shebang "#!/usr/bin/env python3")

Do you have a particular context where you are having trouble? Maybe
there is something else going on...


Thanks,
Martin.

On Tue, Feb 15, 2022 at 06:35:18AM +0100, Mirko via Python-list wrote:

Hi,

I have recently started using venv for my hobby-programming. There
is an annoying problem. Since venv modifies $PATH, python programs
that use the "#!/usr/bin/env python" variant of the hashbang often
fail since their additional modules aren't installed inside the venv.

How do people here deal with that?

Please note: I'm not interested in discussing whether the
env-variant is good or bad. ;-) It's not that *I* use it, but
several progs in /usr/bin/.

Thanks for your time.
--
https://mail.python.org/mailman/listinfo/python-list


Saving/exporting plots from Jupyter-labs?

2022-02-14 Thread Martin Schöön
I have used Jupyter notebooks for some time now. I am not a heavy
or advanced user. I find the notebook format a nice way to create 
python documents.

Now I am trying out Jupyter-labs. I like it. I have two head-
scratchers for now:

1) In notebooks I can save a plot by right-clicking on it and do
save image as. In Jupyter-lab that does not work and so far I
have not been able to figure out how to do it. Yes, I have looked
in the documentation.

2) Why is Jupyter-labs hooking up to Google-analytics?

TIA,

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How do you log in your projects?

2022-02-10 Thread Martin Di Paola

? Logs are not intended to be read by end users. Logs are primarily
used to understand what the code is doing in a production environment.
They could also be used to gather metrics data.

Why should you log to give a message instead of simply using a print?


You are assuming that logs and prints are different but they aren't. It 
is the same technology: some string formatted in a particular way sent 
to some place (console or file generally).


But using the logging machinery instead of a plain print() you win a few
features like thread safety and log levels (think of a user who wants
to increase the verbosity of the output).


When communicating with a user, the logs that are intended for him/her
can be sent to the console (like a print) in addition to a file.


From the user's perspective, they look just like a print.
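The print-like console output plus a detailed file can be had with two handlers at different levels. A minimal sketch (the logger name "demo", the messages, and the log path are all arbitrary):

```python
import logging
import os
import tempfile

logpath = os.path.join(tempfile.mkdtemp(), "app.log")

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)

console = logging.StreamHandler()       # what the user sees, print-like
console.setLevel(logging.INFO)
filelog = logging.FileHandler(logpath)  # full detail, for developers
filelog.setLevel(logging.DEBUG)
logger.addHandler(console)
logger.addHandler(filelog)

logger.info("Loading...")           # goes to console and file
logger.debug("parser state: idle")  # goes to the file only
```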


Why? Traceback is vital to understand what and where the problem is. I
think you're confusing logs with messages. The stack trace can be
logged (I would say must), but the end user generally sees a vague
message with some hints, unless the program is used internally only.


Yes, that's exactly why the traceback is hidden by default: the
user doesn't care about it. If the error is due to something that the user
did wrong, then the message should say that and, if possible, add
a suggestion of how to do it correctly.


For example "The file 'foo.md' was not found." is quite descriptive. If 
you add to that message a traceback, that will just clutter the console.


Tracebacks and other errors and warnings must be logged in a file.
I totally agree with that. Especially true when we are talking about
server-like software.
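Both goals combine naturally: print the short message for the user and send the full traceback to the log file. A sketch (all file names here are made up):

```python
import logging
import os
import sys
import tempfile

logdir = tempfile.mkdtemp()
logpath = os.path.join(logdir, "app.log")
missing = os.path.join(logdir, "foo.md")   # guaranteed not to exist

logging.basicConfig(filename=logpath, level=logging.DEBUG, force=True)

try:
    with open(missing) as f:
        f.read()
except FileNotFoundError as e:
    # the user gets a short, actionable message on the console...
    print(f"The file '{e.filename}' was not found.", file=sys.stderr)
    # ...while the full traceback lands in the log file only
    logging.exception("failed to load input file")
```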


Tracebacks can be printed to the user if a more verbose output is 
enabled. In that case you could even pretty print the traceback with 
syntax highlighting.


I guess that this should be discussed case by case. Maybe you are
thinking more of a case where you have a service running and logging and
I am thinking more of a program that a human executes by hand.


What info is presented to the user, and how, changes quite a bit.

--
https://mail.python.org/mailman/listinfo/python-list


Re: How do you log in your projects?

2022-02-09 Thread Martin Di Paola

- On a line per line basis? on a function/method basis?


In general I prefer logging line by line instead per function.

It is easy to add a bunch of decorators to the functions and get the
logs of the whole program but most of the time I end up with very
confusing logs.


There are exceptions, yes, but I prefer the line-by-line approach where the log
should explain what the code is doing.



- Which kind of variable contents do you write into your logfiles?
- How do you decide, which kind of log message goes into which level?
- How do you prevent logging cluttering your actual code?


These three comes to the same answer: I think on whom is going to read 
the logs.


If the logs are meant to be read by my users I log high-level messages,
especially before parts that can take a while (like the classic
"Loading...").


If I log variables, those must be the ones set by the users so he/she 
can understand how he/she is controlling the behaviour of the program.


For exceptions I print the message but not the traceback. Across the
code I tag some important functions to put an extra message that will
enhance the final message printed to the user.


https://github.com/byexamples/byexample/blob/master/byexample/common.py#L192-L238

For example:

for example in examples:
with enhance_exceptions(example, ...):
foo()

So if an exception is raised by foo(), enhance_exceptions() will attach 
to it useful information for the user from the example variable.


In the main, then I do the pretty print
https://github.com/byexamples/byexample/blob/master/byexample/byexample.py#L17-L22

If the user of the logs is me or any other developer I write more debugging 
stuff.

My approach is to not log anything and when I have to debug something
I use a debugger + some prints. When the issue is fixed I review which
prints would be super useful, turn them into logs and delete the
rest.



On Tue, Feb 08, 2022 at 09:40:07PM +0100, Marco Sulla wrote:

These are a lot of questions. I hope we're not off topic.
I don't know if mine are best practices. I can tell what I try to do.

On Tue, 8 Feb 2022 at 15:15, Lars Liedtke  wrote:

- On a line per line basis? on a function/method basis?


I usually log the start and end of functions. I could also log inside
a branch or in other parts of the function/method.


- Do you use decorators to mark beginnings and ends of methods/functions
in log files?


No, since I put the function parameters in the first log. But I think
that such a decorator it's not bad.


- Which kind of variable contents do you write into your logfiles? Of
course you shouldn't leak secrets...


Well, all the data that is useful to understand what the code is
doing. It's better to repeat the essential data to identify a specific
call in all the logs of the function, so if it is called
simultaneously by more clients you can distinguish them


- How do you decide, which kind of log message goes into which level?


It depends on the importance, the verbosity and the occurrences of the logs.


- How do you prevent logging cluttering your actual code?


I have the opposite problem, I should log more. So I can't answer your question.
--
https://mail.python.org/mailman/listinfo/python-list


Re: What do you think about my repeated_timer class

2022-02-04 Thread Martin Di Paola




In _run I first set the new timer and then I execute the function. So
that will go mostly OK.


Yes, that's correct however you are not taking into consideration the 
imprecision of the timers.


Timer will call the next _run() after self._interval *plus* some unknown
arbitrary time (an extra delay).


Let's assume that when you setup an 1 sec Timer but the Timer calls 
_run() after 1.01 secs due this unknown extra delay.


The first time, fn() should be called 1 sec after the beginning but it
is called after 1.01 secs, so the extra delay was 0.01 sec.


The second time, fn() should be called 2 secs after the beginning but
it is called after 2.02 secs. The second fn() was delayed not by 0.01
but by 0.02 secs.


The third fn() will be delayed by 0.03 secs and so on.

This arbitrary delay is very small however it will sum up on each 
iteration and depending of your application can be a serious problem.


I wrote a post about this and how to create "constant rate loops" which 
fixes this problem:

https://book-of-gehn.github.io/articles/2019/10/23/Constant-Rate-Loop.html

In the post I also describe two solutions (with their trade-offs) for 
when the target function fn() takes longer than the self._interval time.
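The core idea of a constant-rate loop is to sleep until an absolute deadline computed from the start time, so the small per-iteration delays never accumulate. A minimal sketch (the 0.05 s interval is chosen arbitrarily; this is the general technique, not the post's exact code):

```python
import time

interval = 0.05
t0 = time.monotonic()
ticks = []
for i in range(1, 11):
    # sleep until t0 + i*interval (absolute), not for `interval` (relative):
    # a delay in this iteration does not shift the later deadlines
    time.sleep(max(0.0, t0 + i * interval - time.monotonic()))
    ticks.append(time.monotonic() - t0)

# the last tick stays close to 10 * 0.05 = 0.5 s from the start,
# instead of drifting by ten accumulated scheduling delays
```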


See if it helps.

Thanks,
Martin.


On Thu, Feb 03, 2022 at 11:41:42PM +0100, Cecil Westerhof via 
Python-list wrote:

Barry  writes:


On 3 Feb 2022, at 04:45, Cecil Westerhof via Python-list 
 wrote:

Have to be careful that timing keeps correct when target takes a 'lot'
of time.
Something to ponder about, but can wait.


You have noticed that your class does not call the function at the repeat interval
but rather at the repeat interval plus processing time.


Nope:
   def _next(self):
   self._timer = Timer(self._interval, self._run)
   self._timer.start()

   def _run(self):
   self._next()
   self._fn()

In _run I first set the new timer and then I execute the function. So
that will go mostly OK.



The way to fix this is to subtract the last processing elapsed time from the
next interval.
Sort of a software phase locked loop.

Just before you call the run function record the time.time() as start_time.
Then you can calculate next_interval = max(.001, interval - (time.time() -
start_time)).
I use 1ms as the min interval.


But I am working on a complete rewrite to create a more efficient
class. (This means I have to change also the code that uses it.) There
I have to do something like you suggest. (I am already working on it.)


Personally I am also of the opinion that the function should finish in
less than 10% of the interval. (That was one of my rewrites.)

--
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: sharing data across Examples docstrings

2022-01-14 Thread Martin Di Paola

Hello,

I understand that you want to share data across examples (docstrings) 
because you are running doctest to validate them (and test).


The doctest implementation evaluates each docstring separately without 
sharing the context so the short answer is "no".
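The isolation is easy to demonstrate: doctest gives each docstring its own copy of the globals, so a name bound in one example is gone in the next. A self-contained sketch using a throw-away module (the "demo" module and its functions are made up):

```python
import doctest
import types

mod = types.ModuleType("demo")
exec('''
def foo():
    """
    >>> x = 1
    """

def bar():
    """
    >>> x
    Traceback (most recent call last):
    ...
    NameError: name 'x' is not defined
    """
''', mod.__dict__)

# both docstrings pass: bar() really does NOT see foo()'s x
results = doctest.testmod(mod)
assert results.failed == 0
```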


This is a limitation of doctest but it is not the only testing engine 
that you can use.


You could use "byexample" ( https://byexamples.github.io ) which
shares the context by default.


byexample has more features and fixes other caveats of doctest, but
don't take me too seriously, I have a natural bias because I'm its author.


If you want to go with byexample, you may want to try its "doctest 
compatibility mode" first so you don't have to rewrite any test.

( https://byexamples.github.io/byexample/recipes/python-doctest )

Let me know if it is useful for you.

Thanks,
Martin.

On Tue, Jan 11, 2022 at 04:09:57PM -0600, Sebastian Luque wrote:

Hello,

I am searching for a mechanism for sharing data across Examples sections
in docstrings within a class.  For instance:

class Foo:

   def foo(self):
   """Method foo title

   The example generating data below may be much more laborious.

   Examples
   
   >>> x = list("abc")  # may be very long and tedious to generate

   """
   pass

   def bar(self):
   """Method bar title

   Examples
   
   >>> # do something else with x from foo Example

   """
   pass


Thanks,
--
Seb
--
https://mail.python.org/mailman/listinfo/python-list


Re: Call julia from Python: which package?

2021-12-18 Thread Martin Di Paola
I played with Julia a few months ago. I was doing some data-science 
stuff with Pandas and the performance was terrible so I decided to give 
Julia a try.


My plan was to do the slow part in Julia and call it from Python.
I tried juliacall (if I remember correctly) but I couldn't set it up.


It wasn't as smooth as advertised (but hey! you may have better
luck than me).


The other thing that I couldn't figure out is *how the data is shared 
between Python and Julia*. Basically numpy/pandas structures cannot be
used by Julia's own libraries as they are; they need to be copied at least,
and this can be a real performance hit.


I gave up on the idea but you may still consider this a real option.
Just validate how much data you need to share (in my case these were quite
large datasets, hence the issue).


Having said that, is Julia a real option? May be.

In my case the processing that I needed was quite basic and Julia did 
a really good job.


But I felt that the library ecosystem is too fragmented. In Python you can rely
on Pandas/numpy for processing and matplotlib/seaborn for plotting and
you will be 99% covered.


In Julia I need DataFrames.jl, Parquet.jl, CategoricalArrays.jl, 
StatsBase.jl, Statistics.jl and Peaks.jl


And I'm not including any plotting stuff.

This fragmentation is not just "inconvenient" because you need to
install more packages; it also makes the code harder to write.


The API is not consistent so you need to be careful and double check 
that what you are calling is really doing what you think.


About the performance: Julia is not magic. It depends on how well
each package was coded.


In my case I had a good experience except with Parquet.jl, which
didn't understand how to handle categories in the dataset and ended up
loading a lot of duplicated strings and blew up the memory a few times.


I suggest you write down what you need to speed up and see if it is
implemented in Julia (do a proof of concept). Only then consider doing
the switch.


Good luck and share your results!
Martin.


On Fri, Dec 17, 2021 at 07:12:22AM -0800, Dan Stromberg wrote:

On Fri, Dec 17, 2021 at 7:02 AM Albert-Jan Roskam 
wrote:


Hi,

I have a Python program that uses Tkinter for its GUI. It's rather slow so
I hope to replace many or all of the non-GUI parts by Julia code. Has
anybody experience with this? Any packages you can recommend? I found three
alternatives:

* https://pyjulia.readthedocs.io/en/latest/usage.html#
* https://pypi.org/project/juliacall/
* https://github.com/JuliaPy/PyCall.jl

Thanks in advance!



I have zero Julia experience.

I thought I would share this though:
https://stromberg.dnsalias.org/~strombrg/speeding-python/

Even if you go the Julia route, it's probably still best to profile your
Python code to identify the slow ("hot") spots, and rewrite only them.
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: threading and multiprocessing deadlock

2021-12-06 Thread Martin Di Paola

Hi! In short, your code should work.

I think that the join-joined problem is just an interpretation problem.

In pseudo code the background_thread function does:

def background_thread()
  # bla
  print("join?")
  # bla
  print("joined")

When running this function in parallel using threads, you will probably
get a few "join?" first before receiving any "joined.". That is because
the functions are running in parallel.

The order "join?" then "joined" is preserved within a thread but not
preserved globally.

Now, I see another issue in the output (and perhaps this is the one you
were asking about):


join?
join?
myfnc
myfnc
join?
join?
joined.
joined.

So you have 4 "join?" that correspond to the 4 background_thread 
function calls in threads but only 2 "myfnc" and 2 "joined".


Could it be possible that the output was truncated by accident?

I ran the same program and I got a reasonable output (4 "join?", "myfnc" 
and "joined"):


join?
join?
myfnc
join?
myfnc
join?
joined.
myfnc
joined.
joined.
myfnc
joined.

Another issue that I see is that you are not joining the threads that 
you spawned (background_thread functions).
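
A minimal sketch of that last point: keep a reference to each spawned
thread so the caller can join it later (threads only, leaving the
multiprocessing part aside):

```python
import threading

def myfnc():
    print("myfnc")

def start(fnc):
    t = threading.Thread(target=fnc)
    t.start()
    return t  # keep a reference so the caller can join it later

threads = [start(myfnc) for _ in range(4)]
for t in threads:
    t.join()  # wait for every background thread to finish
```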


I hope that this can guide you to fix or at least narrow the issue.

Thanks,
Martin.


On Mon, Dec 06, 2021 at 12:50:11AM +0100, Johannes Bauer wrote:

Hi there,

I'm a bit confused. In my scenario I a mixing threading with
multiprocessing. Threading by itself would be nice, but for GIL reasons
I need both, unfortunately. I've encountered a weird situation in which
multiprocessing Process()es which are started in a new thread don't
actually start and so they deadlock on join.

I've created a minimal example that demonstrates the issue. I'm running
on x86_64 Linux using Python 3.9.5 (default, May 11 2021, 08:20:37)
([GCC 10.3.0] on linux).

Here's the code:


import time
import multiprocessing
import threading

def myfnc():
print("myfnc")

def run(result_queue, callback):
result = callback()
result_queue.put(result)

def start(fnc):
def background_thread():
queue = multiprocessing.Queue()
proc = multiprocessing.Process(target = run, args = (queue, 
fnc))
proc.start()
print("join?")
proc.join()
print("joined.")
result = queue.get()
threading.Thread(target = background_thread).start()

start(myfnc)
start(myfnc)
start(myfnc)
start(myfnc)
while True:
time.sleep(1)


What you'll see is that "join?" and "joined." nondeterministically does
*not* appear in pairs. For example:

join?
join?
myfnc
myfnc
join?
join?
joined.
joined.

What's worse is that when this happens and I Ctrl-C out of Python, the
started Thread is still running in the background:

$ ps ax | grep minimal
370167 pts/0S  0:00 python3 minimal.py
370175 pts/2S+ 0:00 grep minimal

Can someone figure out what is going on there?

Best,
Johannes
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Alternatives to Jupyter Notebook

2021-11-16 Thread Martin Schöön
Den 2021-11-15 skrev Loris Bennett :
> Martin Schöön  writes:
>
>> Den 2021-10-20 skrev Shaozhong SHI :
>>>
>>> My Jupyter notebook becomes unresponsive in browsers.
>>>
>> Odd, I never had any problems like that. I use Firefox on Linux.
>>
>>> Are there alternatives to read, edit and run Jupyter Notebook?
>>>
>> I know some people use emacs orgmode. I have never tried it
>> myself and do not know how well it works.
>
> I don't know Jupyter well enough to know whether Org mode is a real
> alternative.  However, with Org mode you can combine text and code in
> multiple languages, run the code within Emacs and then export the
> results in various formats such as PDF and HTML.
>
I realise I was not making myself clear. Orgmode is great. Full stop.

What I don't have experience of is combining orgmode with Jupyter.

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Alternatives to Jupyter Notebook

2021-11-14 Thread Martin Schöön
Den 2021-10-20 skrev Shaozhong SHI :
>
> My Jupyter notebook becomes unresponsive in browsers.
>
Odd, I never had any problems like that. I use Firefox on Linux.

> Are there alternatives to read, edit and run Jupyter Notebook?
>
I know some people use emacs orgmode. I have never tried it
myself and do not know how well it works.

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: walrus with a twist :+

2021-10-29 Thread Bob Martin
On 28 Oct 2021 at 18:52:26, "Avi Gross"  wrote:
>
> Ages ago, IBM used a different encoding than ASCII called EBCDIC (Extended
> Binary Coded Decimal Interchange Code ) which let them use all 8 bits and
> thus add additional symbols. ± ¦ ¬

IBM started using EBCDIC with System 360 and it is still used on mainframes.
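
The difference is easy to see from Python, since the standard codecs
include EBCDIC code pages (cp500 is used here purely as an example):

```python
# encode the same text in ASCII and in EBCDIC (code page 500)
text = "HELLO"
ascii_bytes = text.encode("ascii")
ebcdic_bytes = text.encode("cp500")

print(ascii_bytes.hex())   # 48454c4c4f
print(ebcdic_bytes.hex())  # c8c5d3d3d6
```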

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: The task is to invent names for things

2021-10-28 Thread Martin Di Paola

IMHO, I prefer really weird names.

For example if I'm not sure how to name a class that I'm coding, I name it
like XXXYYY (literally). Really ugly.

This is a way to avoid the so-called "naming paralysis".

Once I finish coding the class I look back and it should be easy to see "what
it does" and from there, the correct name.

If the "what it does" results in multiple things I refactor it, splitting it
into two or more pieces and name each separately.

Some people prefer using more generic names like "Manager", "Helper",
"Service" but those names are problematic.

Yes, they fit in any place but that's the problem. If I'm coding a class and I
name it as "FooHelper", I may not realize later that the class is doing too
many things (unrelated things), because "it is a helper".

Things go wrong over time; I bet most of us have seen a "Helper" class
with thousands of lines (~5000 lines was my record) that just grows over time.


On Wed, Oct 27, 2021 at 12:41:56PM +0200, Karsten Hilbert wrote:

Am Tue, Oct 26, 2021 at 11:36:33PM + schrieb Stefan Ram:


xyzzy = lambda x: 2 * x

  . Sometimes, this can even lead to "naming paralysis", where
  one thinks excessively long about a good name. To avoid this
  naming paralysis, one can start out with a mediocre name. In
  the course of time, often a better name will come to one's mind.


In that situation, is it preferable to choose a nonsensical
name over a mediocre one ?

Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Select columns based on dates - Pandas

2021-09-03 Thread Martin Di Paola
You may want to reshape the dataset to a tidy format: Pandas works 
better with that format.


Let's assume the following dataset (this is what I understood from your 
message):


In [34]: df = pd.DataFrame({
...: 'Country': ['us', 'uk', 'it'],
...: '01/01/2019': [10, 20, 30],
...: '02/01/2019': [12, 22, 32],
...: '03/01/2019': [14, 24, 34],
...: })

In [35]: df
Out[35]:
  Country  01/01/2019  02/01/2019  03/01/2019
0  us  10  12  14
1  uk  20  22  24
2  it  30  32  34

Then, reshape it to a tidy format. Notice how each row now represents 
a single measure.


In [43]: pd.melt(df, id_vars=['Country'], var_name='Date', 
value_name='Cases')

Out[43]:
  CountryDate  Cases
0  us  01/01/2019 10
1  uk  01/01/2019 20
2  it  01/01/2019 30
3  us  02/01/2019 12
4  uk  02/01/2019 22
5  it  02/01/2019 32
6  us  03/01/2019 14
7  uk  03/01/2019 24
8  it  03/01/2019 34

I used strings to represent the dates but it is much handier to work
with real date objects.

In [44]: df2 = _
In [45]: df2['Date'] = pd.to_datetime(df2['Date'])

Now we can filter by date:

In [50]: df2[df2['Date'] < '2019-03-01']
Out[50]:
  Country   Date  Cases
0  us 2019-01-01 10
1  uk 2019-01-01 20
2  it 2019-01-01 30
3  us 2019-02-01 12
4  uk 2019-02-01 22
5  it 2019-02-01 32

With that you could create three dataframes, one per month.
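
For instance, the three per-year, March-to-September frames the original
question asks about can be built with a dict comprehension; a sketch on
a made-up tidy dataset in the same shape as df2 above:

```python
import pandas as pd

# tiny tidy dataset in the same shape as df2 above (made-up numbers)
tidy = pd.DataFrame({
    'Country': ['us', 'uk'],
    'Date': pd.to_datetime(['2019-03-01', '2019-10-01']),
    'Cases': [10, 20],
})

# one dataframe per year, keeping only the March..September rows
frames = {
    year: tidy[(tidy['Date'].dt.year == year)
               & tidy['Date'].dt.month.between(3, 9)]
    for year in (2019, 2020, 2021)
}
```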

Thanks,
Martin.


On Thu, Sep 02, 2021 at 12:28:31PM -0700, Richard Medina wrote:

Hello, forum,
I have a data frame with covid-19 cases per month from 2019 - 2021 like a 
header like this:

Country, 01/01/2019, 2/01/2019, 01/02/2019, 3/01/2019, ... 01/01/2021, 
2/01/2021, 01/02/2021, 3/01/2021

I want to filter my data frame for columns of a specific month range of march 
to September of 2019, 2020, and 2021 only (three data frames).

Any ideas?
Thank you


--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: The sqlite3 timestamp conversion between unixepoch and localtime

2021-09-03 Thread Bob Martin
On 2 Sep 2021 at 20:25:27, Alan Gauld  wrote:
> On 02/09/2021 20:11, MRAB wrote:
>
>>> In one of them (I can't recall which is which) they change on the 4th
>>> weekend of October/March in the other they change on the last weekend.
>>>
>>>
>> In the EU (and UK) it's the last Sunday in March/October.
>>
>> In the US it's second Sunday in March and the first Sunday in November.
>>
>> I know which one I find easier to remember!
>
> Interesting. I remember it as closer than that. The bugs we found were
> due to differences in the DST settings of the BIOS in the PCs. (They
> were deliberately all sourced from DELL but the EU PCs had a slightly
> different BIOS).
>
> The differences you cite should have thrown up issues every year.
> I must see if I can find my old log books...
>

ISTR that the USA changes were the same as the EU until a few years ago.

I remember thinking at the time it changed "why would they do that?"
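
The two rules quoted above can be checked directly with Python's
zoneinfo module (2021 dates; Europe/London follows the EU rule here):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

london = ZoneInfo("Europe/London")       # EU rule: last Sunday of March
new_york = ZoneInfo("America/New_York")  # US rule: second Sunday of March

def offset(zone, y, m, d):
    # UTC offset at noon local time on the given date
    return datetime(y, m, d, 12, tzinfo=zone).utcoffset()

# In 2021 the EU switched on March 28, the US on March 14
print(offset(london, 2021, 3, 27), offset(london, 2021, 3, 28))
print(offset(new_york, 2021, 3, 13), offset(new_york, 2021, 3, 14))
```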

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: basic auth request

2021-08-21 Thread Martin Di Paola
While it is correct to say that Basic Auth without HTTPS is absolutely 
insecure, using Basic Auth *and* HTTPS is not secure either.


Well, the definition of "secure" depends on your threat model.

HTTPS ensures encryption so the content, including the Basic Auth 
username and password, is secret for any external observer.


But it is *not* secret for the receiver (the server): if it is 
compromised, an adversary will have access to your password. It is much 
easier to print a captured password than to crack the hashes.


Other authentication mechanisms exist, like OAuth, which are more 
"secure".


Thanks,
Martin


On Wed, Aug 18, 2021 at 11:05:46PM -, Jon Ribbens via Python-list wrote:

On 2021-08-18, Robin Becker  wrote:

On 17/08/2021 22:47, Jon Ribbens via Python-list wrote:
...

That's only true if you're not using HTTPS - and you should *never*
not be using HTTPS, and that goes double if forms are being filled
in and double again if passwords are being supplied.


I think I agree with most of the replies; I understood from reading
the rfc that the charset is utf8 (presumably without ':')


The username can't contain a ':'. It shouldn't matter in the password.


and that basic auth is considered insecure. It is being used over
https so should avoid the simplest net scanning.


It's not insecure over HTTPS. Bear in mind the Basic Auth RFC was
written when HTTP was the standard and HTTPS was unusual. The positions
are now effectively reversed.


I googled a bunch of ways to do this, but many come down to 1) using
the requests package or 2) setting up an opener. Both of these seem to
be much more complex than is required to add the header.

I thought there might be a shortcut or more elegant way to replace the
old code, but it seems not


It's only a trivial str/bytes difference, it shouldn't be any big deal.
But using 'requests' instead is likely to simplify things and doesn't
tend to be an onerous dependency.
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: on perhaps unloading modules?

2021-08-17 Thread Martin Di Paola
This may not answer your question but it may provide an alternative 
solution.


I had the same challenge as you a year ago, so maybe my solution will 
work for you too.


Imagine that you have a Markdown file that *documents* the expected 
results.


--8<---cut here---start->8---
This is the final exam, good luck!

First I'm going to load your code (the student's code):

```python

import student

```

Let's see if you programmed correctly a sort algorithm

```python

data = [3, 2, 1, 3, 1, 9]
student.sort_numbers(data)

[1, 1, 2, 3, 3, 9]
```

Let's now if you can choose the correct answer:

```python

t = ["foo", "bar", "baz"]
student.question1(t)

"baz"
```
--8<---cut here---end--->8---

Now you can run the snippets of code with:

  byexample -l python the_markdown_file.md

What byexample does is to run the Python code, capture the output and 
compare it with the expected result.


In the above example "student.sort_numbers" must return the list sorted.  
That output is compared by byexample with the list written below.


Advantages? Each byexample run is independent of the others, and the 
snippets of code are executed in a separate Python process. byexample 
takes care of the IPC.


I don't know the details of your questions so I'm not sure if byexample 
will be the tool for you. In my case I evaluate my students by giving them 
the Markdown and asking them to code the functions so they return the 
expected values.


Depending on how many students you have, you may consider complementing 
this with INGInious. It is designed to run students' assignments 
while assuming nothing about the untrusted code.


Links:

https://byexamples.github.io/byexample/
https://docs.inginious.org/en/v0.7/
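
The isolation idea behind byexample can also be sketched with the
standard library alone: grade each student in a fresh interpreter so
that monkey-patching cannot leak between runs. The throw-away module
below mirrors the quoted example (it answers question 1 incorrectly):

```python
import os
import subprocess
import sys
import tempfile

# a throw-away "student" module that answers question 1 incorrectly
src = "def question1(t):\n    return t[1]\n"

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "student.py"), "w") as f:
        f.write(src)
    # grade in a separate process: nothing done there can pollute
    # the grader's own interpreter or the next student's run
    code = ("import student\n"
            "print(0 if student.question1((0, 1, 2, 3)) == 2 else 10)\n")
    out = subprocess.run([sys.executable, "-c", code],
                         capture_output=True, text=True, cwd=d)
    losses = int(out.stdout)
```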


On Sun, Aug 15, 2021 at 12:09:58PM -0300, Hope Rouselle wrote:

Hope Rouselle  writes:

[...]


Of course, you want to see the code.  I need to work on producing a
small example.  Perhaps I will even answer my own question when I do.


[...]

Here's a small-enough case.  We have two students here.  One is called
student.py and the other is called other.py.  They both get question 1
wrong, but they --- by definition --- get question 2 right.  Each
question is worth 10 points, so they both should get losses = 10.

(*) Student student.py

--8<---cut here---start->8---
def question1(t): # right answer is t[2]
 return t[1] # lack of attention, wrong answer
--8<---cut here---end--->8---

(*) Student other.py

--8<---cut here---start->8---
def question1(t): # right answer is t[2]
 return t[0] # also lack of attention, wrong answer
--8<---cut here---end--->8---

(*) Grading

All is good on first run.

Python 3.5.2 [...] on win32
[...]

reproducible_problem()

student.py, total losses 10
other.py, total losses 10

The the problem:


reproducible_problem()

student.py, total losses 0
other.py, total losses 0

They lose nothing because both modules are now permanently modified.

(*) The code of grading.py

--8<---cut here---start->8---
# -*- mode: python; python-indent-offset: 2 -*-
def key_question1(t):
 # Pretty simple.  Student must just return index 2 of a tuple.
 return t[2]

def reproducible_problem(): # grade all students
 okay, m = get_student_module("student.py")
 r = grade_student(m)
 print("student.py, total losses", r) # should be 10
 okay, m = get_student_module("other.py")
 r = grade_student(m)
 print("other.py, total losses", r) # should be 10

def grade_student(m): # grades a single student
 losses  = question1_verifier(m)
 losses += question2_verifier(m)
 return losses

def question1_verifier(m):
 losses = 0
 if m.question1( (0, 1, 2, 3) ) != 2: # wrong answer
   losses = 10
 return losses

def question2_verifier(m):
 m.question1 = key_question1
 # To grade question 2, we overwrite the student's module by giving
 # it the key_question1 procedure.  This way we are able to let the
 # student get question 2 even if s/he got question 1 incorrect.
 losses = 0
 return losses

def get_student_module(fname):
 from importlib import import_module
 mod_name = basename(fname)
 try:
   student = import_module(mod_name)
 except Exception as e:
   return False, str(e)
 return True, student

def basename(fname): # drop the the .py extension
 return "".join(fname.split(".")[ : -1])
--8<---cut here---end--->8---
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Empty list as a default param - the problem, and my suggested solution

2021-08-14 Thread Martin Di Paola
I don't know if it is useful but it is an interesting 
metaprogramming/reflection challenge.


You used `inspect` but you didn't use its full potential. Try to see if 
you can simplify your code and see if you can come up with a decorator
that does not require special parameters.


>>> from new import NEW
>>> @NEW
... def new_func(a=[]):
...     a.append('new appended')
...     return a
...
>>> new_func()
['new appended']
>>> new_func()
['new appended']

Spoiler - My solution is at 
https://book-of-gehn.github.io/articles/2021/08/14/Fresh-Python-Defaults.html
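
One possible shape of such a decorator, as a sketch (not necessarily the
linked solution): deep-copy any default that the caller didn't supply,
so mutable defaults stay fresh on every call.

```python
import copy
import functools
import inspect

def fresh_defaults(func):
    """Deep-copy unsupplied defaults on every call (hypothetical helper)."""
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, param in sig.parameters.items():
            # fill in every missing argument with a *copy* of its default
            if name not in bound.arguments and param.default is not param.empty:
                bound.arguments[name] = copy.deepcopy(param.default)
        return func(*bound.args, **bound.kwargs)
    return wrapper

@fresh_defaults
def new_func(a=[]):
    a.append('new appended')
    return a
```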



On Fri, Aug 13, 2021 at 03:44:20PM -0700, guruyaya wrote:

I am fairly sure all of us know about this python quirk:

>>> def no_new_func(a=[]):
...     a.append('new')
...     return a
...
>>> no_new_func()
['new']
>>> no_new_func()
['new', 'new']




For some time I was bothered about that there's no elegant way to use empty 
list or dict as a default parameter. While this can be solved like this:

>>> def no_new_func(a=None):
...     if a == None:
...         a = []
...     a.append('new')
...     return a

I have to say I find this solution very far from the spirit of python. Kinda 
ugly, and not explicit. So I've decided to try and create a new module, that 
will try and make, what I think, is a more beautiful and explicit:


>>> from new import NEW
>>> @NEW.parse
... def new_func(a=NEW.new([])):
...     a.append('new appended')
...     return a
...
>>> new_func()
['new appended']
>>> new_func()
['new appended']

I'd like to hear your thoughts on my solution and code. You can find and give 
your feedback in this project
https://github.com/guruyaya/new
If I see that people like this, I will upload it to pip. I'm not fully sure about the 
name I choose (I thought about the "new" keyword used in JAVA, not sure it 
applies here as well)

Thanks in advance for your feedback
Yair
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: [ANN] Austin -- CPython frame stack sampler v3.0.0 is now available

2021-07-02 Thread Martin Di Paola

Very nice. I used rbspy for Ruby programs https://rbspy.github.io/
and it can give you some insights about the running code that other 
profiling techniques may not give you.


I'll use it in my next performance-bottleneck challenge.

On Fri, Jul 02, 2021 at 04:04:24PM -0700, Gabriele Tornetta wrote:

I am delighted to announce the release 3.0.0 of Austin. If you haven't heard of 
Austin before, it is an open-source frame stack sampler for CPython, 
distributed under the GPLv3 license. It can be used to obtain statistical 
profiling data out of a running Python application without a single line of 
instrumentation. This means that you can start profiling a Python application 
straight away, even while it's running in a production environment, with 
minimal impact on performance.

The best way to leverage Austin is to use the new extension for VS Code, which 
brings interactive flame graphs straight into the text editor to allow you to 
quickly jump to the source code with a simple click. You can find the extension 
on the Visual Studio Marketplace and install it directly from VS Code:

https://marketplace.visualstudio.com/items?itemName=p403n1x87.austin-vscode

To see how to make the best of Austin with VS Code to find and fix performance 
issues, check out this blog post, which shows you the editor extension in 
action on a real Python project:

https://p403n1x87.github.io/how-to-bust-python-performance-issues.html

The latest release comes with many improvements, including a re-worked 
sleepless mode that now gives an estimate of CPU time, initial support for 
Python 3.10, better support for Python-based binaries like gunicorn, uWSGI, 
etc. on all supported platforms.

Austin is a pure C application that has no dependencies other than the C 
standard library. Its source code is hosted on GitHub at

https://github.com/P403n1x87/austin

The README contains installation and usage details, as well as some examples of 
Austin in action. Details on how to contribute to Austin's development can be 
found at the bottom of the page.

Austin can be installed easily on the following platforms and from the 
following sources:

Linux:
- Snap Store
- Debian repositories

macOS:
- Homebrew

Windows:
- Chocolatey
- Scoop

An Austin image, based on Ubuntu 20.04, is also available from Docker Hub:

https://hub.docker.com/r/p403n1x87/austin

Austin is also simple to compile from sources as it only depends on the 
standard C library if you don't have access to the above-listed sources.


You can stay up-to-date with the project's development by following Austin on 
Twitter (https://twitter.com/AustinSampler).

All the best,
Gabriele
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: optimization of rule-based model on discrete variables

2021-06-14 Thread Martin Di Paola
From what I'm understanding it is an "optimization problem" like the 
ones that you find in "linear programming".


But in your case the variables are not Real (they are Integers) and the 
function to minimize g() is not linear.


You could try/explore CVXPY (https://www.cvxpy.org/) which is a solver 
for different kinds of "convex programming". I don't have experience 
with it however.


The other weapon in my arsenal would be Z3 
(https://theory.stanford.edu/~nikolaj/programmingz3.html) which is 
an SMT/SAT solver with a built-in extension for optimization problems.


I have more experience with this, so here is a "draft" of what you may be 
looking for.



from z3 import Ints, Optimize, And, If

# create a Python list X with 3 Z3 Integer variables named x0, x1, x2
X = Ints('x0 x1 x2')
Y = Ints('y0 y1')

# create the solver
solver = Optimize()

# add some restrictions like lower and upper bounds
for x in X:
  solver.add(And(0 <= x, x <= 2)) # each x is between 0 and 2
for y in Y:
  solver.add(And(0 <= y, y <= 2))

def f(X):
  # Conditional expressions can be modeled too with "If".
  # These are *not* evaluated like a normal Python "if" but
  # modeled as a whole. It'll be the solver which will "run" them.
  return If(
    And(X[0] == 0, X[1] == 0),  # the condition
    Y[0] == 0,  # Y[0] *must* be 0 *if* the condition holds
    Y[0] == 2   # Y[0] *must* be 2 *if* the condition doesn't hold
  )

solver.add(f(X))

# let's define the function to optimize
g = Y[0]**2
solver.maximize(g)

# check if we have a solution
solver.check() # this should return 'sat'

# get one of the many optimum solutions
solver.model()


I would recommend you to write a very tiny problem with 2 or 3 variables 
and a very simple f() and g() functions, make it work (manually and with 
Z3) and only then build a more complex program.


You may find useful (or not) these two posts that I wrote a month ago 
about Z3. These are not tutorials, just personal experience with 
a concrete example.


Combine Real, Integer and Bool variables:
https://book-of-gehn.github.io/articles/2021/05/02/Planning-Space-Missions.html

Lookup Tables (this may be useful for programming a "variable" f() 
function where the code of f() (the decision tree) is set by Z3 and not 
by you, such that f() leads to the optimum of g()):

https://book-of-gehn.github.io/articles/2021/05/26/Casting-Broadcasting-LUT-and-Bitwise-Ops.html


Happy hacking.
Martin.


On Mon, Jun 14, 2021 at 12:51:34PM +, Elena via Python-list wrote:

Il Mon, 14 Jun 2021 19:39:17 +1200, Greg Ewing ha scritto:


On 14/06/21 4:15 am, Elena wrote:

Given a dataset of X={(x1... x10)} I can calculate Y=f(X) where f is
this rule-based function.

I know an operator g that can calculate a real value from Y: e = g(Y)
g is too complex to be written analytically.

I would like to find a set of rules f able to minimize e on X.


There must be something missing from the problem description.
 From what you've said here, it seems like you could simply find
a value k for Y that minimises g, regardless of X, and then f would
consist of a single rule: y = k.

Can you tell us in more concrete terms what X and g represent?


I see what you mean, so I try to explain it better: Y is a vector say [y1,
y2, ... yn], with large (n>>10), where yi = f(Xi) with Xi = [x1i, x2i, ...
x10i] 1<=i<=n. All yi and xji assume discrete values.

I already have a dataset of X={Xi} and would like to find the rules f able
to minimize a complicated-undifferenciable Real function g(f(X)).
Hope this makes more sense.

x1...x10 are 10 chemical components that can be absent (0), present (1),
modified (2). yi represent a quality index of the mixtures and g is a
global quality of the whole process.

Thank you in advance

ele
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Recommendation for drawing graphs and creating tables, saving as PDF

2021-06-11 Thread Martin Di Paola

You could try https://plantuml.com and http://ditaa.sourceforge.net/.

Plantuml may not sound like the right tool but it is quite flexible, and 
after a few tweaks you can create a block diagram like the one you showed.


And the good thing is that you *write* which elements and relations are 
in your diagram and it is Plantuml which will draw it for you.


On the other hand, in Ditaa you have to do the layout but contrary to 
most of the GUI apps, Ditaa processes plaintext (ascii art if you want).


For simple things, Ditaa is probably a good option too.

Finally, I use https://pandoc.org/ to transform my markdowns into PDFs 
for a textbook that I'm writing (and in the short term for my blog).


None of those are "libraries" in the sense that you can load them in Python; 
however, nothing should prevent you from calling them from Python with 
`subprocess`.


By the way, I'm interested too in knowing about other tools for making 
diagrams.


On Fri, Jun 11, 2021 at 08:52:20AM -0400, Neal Becker wrote:

Jan Erik Moström wrote:


I'm doing something that I've never done before and need some advise for
suitable libraries.

I want to

a) create diagrams similar to this one
https://www.dropbox.com/s/kyh7rxbcogvecs1/graph.png?dl=0 (but with more
nodes) and save them as PDFs or some format that can easily be converted
to PDFs

b) generate documents that contains text, lists, and tables with some
styling. Here my idea was to save the info as markdown and create PDFs
from those files, but if there is some other tools that gives me better
control over the tables I'm interested in knowing about them.

I looked around around but could only find two types of libraries for a)
libraries for creating histograms, bar charts, etc, b) very basic
drawing tools that requires me to figure out the layout etc. I would
prefer a library that would allow me to state "connect A to B", "connect
C to B", "connect B to D", and the library would do the whole layout.

The closest I've found it to use markdown and mermaid or graphviz but
... PDFs (perhaps I should just forget about PDFs, then it should be
enough to send people to a web page)

(and yes, I could obviously use LaTeX ...)

= jem


Like this?
https://pypi.org/project/blockdiag/

--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Data structure for plotting monotonically expanding data set

2021-06-05 Thread Martin Di Paola
One way to go is using Pandas as it was mentioned before and Seaborn for 
plotting (built on top of matplotlib)


I would approach this prototyping first with a single file and not with 
the 1000 files that you have.


Using the code that you have for parsing, add the values to a Pandas 
DataFrame (aka, a table).


# load pandas and create a 'date' object to represent the file date
# You'll have "pip install pandas" to use it
import pandas as pd

file_date = pd.to_datetime('20210527')

# data that you parsed as list of lists with each list being
# each line in your file.
data = [
["alice", 123, file_date],
["bob", 4, file_date],
["zebedee", 999, file_date]
]

# then, load it as a pd.DataFrame
df = pd.DataFrame(data, columns=['name', 'kb', 'date'])

# print it
print(df)
name   kb   date
  0alice  123 2021-05-27
  1  bob4 2021-05-27
  2  zebedee  999 2021-05-27

Now, this is the key point: You can save the dataframe in a file
so you don't have to process the same file over and over.

Pandas has different formats, some are more suitable than others.

# I'm going to use "parquet" format which compress really well
# and it is quite fast. You'll have "pip install pyarrow" to use it
df.to_parquet('df-20210527.pq')

Now you repeat this for all your files so you will end up with ~1000 
parquet files.


So, let's say that you want to plot some lines. You'll need to load 
those dataframes from disk.


You read each file, get a Pandas DataFrame for each and then
"concatenate" them into a single Pandas DataFrame

import glob

all_dfs = [pd.read_parquet(path) for path in glob.glob('df-*.pq')]
df = pd.concat(all_dfs, ignore_index=True)

Now, the plotting part. You said that you wanted to use matplotlib. I'll 
go one step further and use seaborn (which is implemented on top of 
matplotlib).


import matplotlib.pyplot as plt
import seaborn as sns

# plot the mean of 'kb' per date as a point. Per each point
# plot a vertical line showing the "spread" of the values and connect
# the points with lines to show the slope (changes) between days
sns.pointplot(data=df, x="date", y="kb")
plt.show()

# plot the distribution of the 'kb' values per each user 'name'.
sns.violinplot(data=df, x="name", y="kb")
plt.show()

# plot the 'kb' per day for the 'alice' user
sns.lineplot(data=df.query('name == "alice"'), x="date", y="kb")
plt.show()

That's all, a very quick intro to Pandas and Seaborn.

Enjoy the hacking.

Thanks,
Martin.


On Thu, May 27, 2021 at 08:55:11AM -0700, Edmondo Giovannozzi wrote:

Il giorno giovedì 27 maggio 2021 alle 11:28:31 UTC+2 Loris Bennett ha scritto:

Hi,

I currently a have around 3 years' worth of files like

home.20210527
home.20210526
home.20210525
...

so around 1000 files, each of which contains information about data
usage in lines like

name kb
alice 123
bob 4
...
zebedee 999

(there are actually more columns). I have about 400 users and the
individual files are around 70 KB in size.

Once a month I want to plot the historical usage as a line graph for the
whole period for which I have data for each user.

I already have some code to extract the current usage for a single from
the most recent file:

for line in open(file, "r"):
columns = line.split()
if len(columns) < data_column:
logging.debug("no. of cols.: %i less than data col", len(columns))
continue
regex = re.compile(user)
if regex.match(columns[user_column]):
usage = columns[data_column]
logging.info(usage)
return usage
logging.error("unable to find %s in %s", user, file)
return "none"

Obviously I will want to extract all the data for all users from a file
once I have opened it. After looping over all files I would naively end
up with, say, a nested dict like

{"20210527": { "alice" : 123, , ..., "zebedee": 999},
"20210526": { "alice" : 123, "bob" : 3, ..., "zebedee": 9},
"20210525": { "alice" : 123, "bob" : 1, ..., "zebedee": 999},
"20210524": { "alice" : 123, ..., "zebedee": 9},
"20210523": { "alice" : 123, ..., "zebedee": 999},
...}

where the user keys would vary over time as accounts, such as 'bob', are
added and latter deleted.

Is creating a potentially rather large structure like this the best way
to go (I obviously could limit the size by, say, only considering the
last 5 years)? Or is there some better approach for this kind of
problem? For plotting I would probably use matplotlib.

Cheers,

Loris

--
This signature is currently under construction.


Have you tried to use pandas to read the data?
Then you may try to add a column with the date and then join the datasets.
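
A rough sketch of that suggestion, assuming the home.YYYYMMDD naming from above (the helper name and the exact separator are my own guesses):

```python
import glob

import pandas as pd

def load_usage(paths):
    """Read whitespace-separated usage files and stack them,
    tagging every row with the date taken from the file name
    (home.YYYYMMDD)."""
    frames = []
    for path in paths:
        df = pd.read_csv(path, sep=r"\s+")   # columns: name, kb, ...
        stamp = path.rsplit(".", 1)[-1]
        df["date"] = pd.to_datetime(stamp, format="%Y%m%d")
        frames.append(df)
    return pd.concat(frames, ignore_index=True)

# usage = load_usage(sorted(glob.glob("home.*")))
# usage.query('name == "alice"').plot(x="date", y="kb")
```

One long frame with a date column avoids the nested dict entirely; users that appear and disappear simply have rows for some dates and not others.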
--
https://mail.python.org/mailman/listinfo/python-list


Re: Use Chrome's / Firefox's dev-tools in python

2021-05-23 Thread Martin Di Paola
"unselectable text" not necessary means that it is an image. There is 
a CSS property that you can change to make a text 
selectable/unselectable.


And if it is an image, it is very likely that it comes from the server as 
such, so "intercepting" the packet coming from there will be for 
nothing: you will have the same image.


About the "packet listener", you could setup a proxy between your 
browser and the server and use the proxy to see the messages. "Burp" is 
the classical tool for this.


But I have the feeling that the solution is easier.

Try the following: do it manually but take note of the steps you do.

Example:

1) Go to page https://www.parisclassenumerique.fr
2) Click in the upper-right menu button and choose "Tutoriels". Now the 
URL is 
https://www.parisclassenumerique.fr/lutece/jsp/site/Portal.jsp?page_id=9

3) Then click in "Comment démarrer sur PCN ?", on the left panel

... and so on.

Basically you can then translate those steps to Selenium/selectq and 
automate them. It's here where I could help you but I cannot do much 
without more info because I don't know which page you are looking and in 
which link you are trying to click and stuff like that.


On Sun, May 23, 2021 at 01:36:48AM -0700, max pothier wrote:

Hi,
Seems like that could be a method of doing things. Just one clarification: the 
website has unselectable text, looks like it's an image strangely generated, so 
if I can get the packet with it, it would be perfect. As I said (I think), 
logging in with Selenium was already possible, and I could get a screenshot of 
the page after logging in.
If you got this working like a packet listener in browser capable of seeing 
packet data, I'd gladly accept the code.
I've tried to do this for 3 years now (since I came into that school 
basically), looks like it's coming to an end!
Thanks!
--
https://mail.python.org/mailman/listinfo/python-list


Re: Use Chrome's / Firefox's dev-tools in python

2021-05-22 Thread Martin Di Paola

Hello,

I'm not 100% sure but I think that I understand what you are trying to
do. I faced the same problem a few months ago.

I wanted to know when a particular blog posted a new article.

My plan was to query the blog every day running a python script, get the
list of articles it has and comparing them with the list of the previous
day.

I used Selenium (https://selenium-python.readthedocs.io/) and on top of
that I implemented a thin layer to manipulate the web page called
"selectq" (https://github.com/SelectQuery/sQ)

You could write a similar script.

Using Selenium or selectq will open a web browser but given that it is
fully automated, it should not be a problem (well, yes, it may run a
little slow however).

The good side is that both can inject javascript if you have to.
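
A rough sketch of such a daily check; the comparison part is plain Python, and the fetch step is only a placeholder for whatever Selenium/selectq query ends up matching the blog's markup:

```python
import json
from pathlib import Path

def fetch_titles():
    """Placeholder for the Selenium part: roughly driver.get(blog_url)
    followed by collecting the text of the article links (the actual
    selector depends on the blog's HTML)."""
    raise NotImplementedError

def new_articles(today, state=Path("articles.json")):
    """Return titles not seen on the previous run and save today's
    list for tomorrow's comparison."""
    previous = json.loads(state.read_text()) if state.exists() else []
    state.write_text(json.dumps(today))
    return [title for title in today if title not in previous]

# run daily, e.g. from cron:
# print(new_articles(fetch_titles()))
```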

Would this work for you or am I saying nonsense?

Thanks!
Martin.

On Fri, May 21, 2021 at 03:46:50AM -0700, max pothier wrote:

Hello,
Thanks for you answer!
Actually my goal is not to automatically get the file once I open the page, but 
more to periodically check the site and get a notification when there's new 
homework or, at the morning, know when an hour is cancelled, so I don't want to 
have to open the browser every time.
I have pretty good javascript knowledge so if you could better explain that 
idea, it would be a great help.
--
https://mail.python.org/mailman/listinfo/python-list


byexample: free, open source tool to find snippets of code in your docs and execute them as regression tests

2021-05-03 Thread Martin Di Paola

Hi everyone, I would like to share a free, open source tool with you that
I've been developing in the last few years.

You'll be probably familiar with things like this in the Python
documentation:

```
  >>> 1 + 3
  4
```

byexample will find those snippets, it will execute "1 + 3" and the
output will be compared with the expected one (the "4") so you can know
that your docs are in sync with your code.

If you are familiar with Python's doctest module, it is the same idea
but after a few years of using it I found some limitations that I tried
to break.

That's how byexample was born: it allows you find and execute
the snippets/examples written in different languages in different files.

You could run Ruby and C++ code written in the docstrings of your Python
source code or in a fenced code block of a Markdown file.

You could "capture" the output of one example and "paste" it into
another as a way to share data between examples.

```
  $ cat somefile   # a Shell example (<dolor> will capture a word)
  Lorem ipsum <dolor> sit amet.

  >>> "<dolor>" == "dolor"  # Python example  # byexample: +paste
  True
```

You could even "type" text when your example is interactive and requires
some input:

```
  >>> name = input("your name please: ")  # byexample: +type
  your name please: [john]

  >>> print(name)
  john
```

There are a few more features but this email is long enough.

The full set of features and tutorials are in https://byexamples.github.io
(by the way, the examples in that web page are the tests of byexample!)

Repo: https://github.com/byexamples/byexample (feel free to submit any
issue or question)

You can install it with pip:

  pip install byexample

And if you are a fan of Python's doctest (as I am), there is a
compatibility mode that you may want to check:
https://byexamples.github.io/byexample/recipes/python-doctest

I would like to receive your feedback.

Thanks for your time!
Martin.
--
https://mail.python.org/mailman/listinfo/python-list


Python 3.9. 1 not working

2021-02-09 Thread Martin Lopez
Where do I inquire about installation support?
-- 
https://mail.python.org/mailman/listinfo/python-list


installation issues

2021-02-09 Thread Martin Lopez
Hello,

My name is Martin Lopez. I just downloaded Python 3.9.1 (64 bit) Setup.

After I install the program, I try to run it with no success.

I've uninstalled all previous versions and reinstalled them, but it does
not seem to help.

Can you please assist?

Thank you,
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Jupyter notebooks to A4 (again)

2021-01-28 Thread Martin Schöön
Den 2021-01-25 skrev tommy yama :
> Hi Martin,
>
> I noticed that I did use the same,
> formats are mentioned in git already.
>
> https://github.com/jupyter/nbconvert
>
Are you telling me there are instruction for how to get A4paper
format there? I have looked around but...

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Jupyter notebooks to A4 (again)

2021-01-28 Thread Martin Schöön
Den 2021-01-28 skrev Pieter van Oostrum :
> Martin Schöön  writes:
>
>> Hello all,
>>
>> Some years ago I asked about exporting notebooks to pdf in
>> A4 rather than US Letter. I got help, rather detailed
>> instructions from you in general and Piet von Oostrum in
>
> Who now calls himself Pieter van Oostrum, just like his passport says :)

Sorry, my bad.

Any ideas regarding my question?

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Jupyter notebooks to A4 (again)

2021-01-24 Thread Martin Schöön
Hello all,

Some years ago I asked about exporting notebooks to pdf in
A4 rather than US Letter. I got help, rather detailed
instructions from you in general and Piet von Oostrum in
particular. Following the advice helped and I was happy.

Now it does not work any longer:

 nbconvert failed: A4article

I am stumped. I have not changed anything and all
looks OK.

Today I tried up-dating all things Python and Jupyter
but that did not help.

I have also tried removing the A4 stuff and after 
restarting Jupyter I can export to PDF and get US Letter
paper format.

A quick and (obviously) not very clever internet search
yielded nothing helpful.

Any ideas?

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: dayofyear is not great when going into a new year

2021-01-10 Thread Martin Schöön
Den 2021-01-09 skrev Michael F. Stemper :
>
> A week is like a piece of string. It has two ends.
>
The control line of the main sheet traveler on my boat is spliced into
an endless loop.
http://hem.bredband.net/b262106/pages/controls/index.html

I am glad work weeks are not like that :-)

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: dayofyear is not great when going into a new year

2021-01-08 Thread Martin Schöön
Den 2021-01-05 skrev Stefan Ram :
> Martin Schöön  writes:
>>I have had some Python fun with COVID-19 data. I have done
>>some curve fitting and to make that easier I have transformed
>>date to day of year. Come end of 2020 and beginning of 2021
>>and this idea falls on its face.
>
> import datetime
>
> continuous_day_of_the_year = \
> ( datetime.date.today() - datetime.date( 2020, 1, 1 )).days
>
>
Thanks guys, you got me on the right track. After some further
meandering I did something close to what Stefan suggests above.
I added a column to my Pandas data frame and populated it with

content of date column - datetime(2020, 1, 1)
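
In pandas terms, that might look something like this (column names assumed):

```python
from datetime import datetime

import pandas as pd

df = pd.DataFrame({"date": pd.to_datetime(["2020-12-30", "2020-12-31",
                                           "2021-01-01", "2021-01-02"])})
# Days since a fixed origin keep growing across the year boundary,
# unlike dayofyear, which wraps back to 1 on January 1st.
df["day"] = (df["date"] - datetime(2020, 1, 1)).dt.days
print(df["day"].tolist())   # → [364, 365, 366, 367]
```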

"regardless of what you have been told, recreational use of
mathematics is harmless"

I hope that is true for recreational programming as well :-)

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


dayofyear is not great when going into a new year

2021-01-05 Thread Martin Schöön
Hello,

I have had some Python fun with COVID-19 data. I have done
some curve fitting and to make that easier I have transformed
date to day of year. Come end of 2020 and beginning of 2021
and this idea falls on its face.

There must be a better way of doing this.

I am using Pandas for reading and manipulating data coming
from a csv file. Scipy, numpy and matplotlib are used
for the curve fitting and plotting.

TIA,

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Any timeline for PIL for Python 3.4

2020-08-10 Thread Martin

Hi,

I am running Python 3.4.4, and would like to
use the Python Imaging Library (PIL).  This
is currently not available for Python
Version 3.  Does anybody know when it will
become available?

Plan B is to install Python 2.7.18.  I just
need an idea of how long I would need to
wait for Plan A.

--
Regards,
Martin Leese
E-mail: ple...@see.web.for.e-mail.INVALID
Web: http://members.tripod.com/martin_leese/
--
https://mail.python.org/mailman/listinfo/python-list


Re: type annotations for xpath list

2020-04-08 Thread Martin Alaçam
Hi,

Thanks for the answer. I just discovered the problem had nothing to do with
xpath, but with the initial value of the descriptor.
This fixed it:

self._pi: List[etree.Element] = []


On Thu, Apr 9, 2020 at 2:29 AM DL Neil via Python-list <
python-list@python.org> wrote:

> On 9/04/20 10:02 AM, Martin Alaçam wrote:
> > Hello,
> >
> > I have the following descriptor:
> >
> >  self._pi = None
> >  @property
> >  def pi(self) -> list:
> >  self._pi = self._root.xpath('processing-instruction()')
> >  return self._pi
> >
> > Mypy says: "Incompatible return value type (got "None", expected
> > "List[Any]")"
> > The xpath expression always returns a list, if it doesn't find anything
> it
> > is an empty list. I am just getting started with type hinting, would
> > appreciate any help.
>
>
> Out of interest, what happens if you change the function to:
>
> self._pi:list = self._root.xpath('processing-instruction()')
>
> Mypy *seems* to be remembering the type from the outer namespace and
> noting that the function's use of the same name differs. (yes, you had
> probably worked-out that for yourself)
>
>
> Interestingly-enough, in a recent (off-line) communication with another
> list-member, we noted similar behavior from (?) a linter or was it Black
> ("the uncompromising code formatter"), and concluded that because using
> the same identifier in both 'inner' and 'outer' name-spaces can lead to
> awkward 'gotchas' in certain situations, a general advice/'rule' of:
> 'don't do it' is being applied.
>
> Perhaps there is another/a better reason that someone else will provide...
>
>
> Disclaimers:
> - Python typing (and thus: mypy) has a somewhat experimental approach
> and is not a required part of the language, nor even particularly
> integrated into the language, as-such.
> - linters are useful to some people, particularly those which have an
> option to turn-off individual aspects which otherwise become 'nagging'.
> - some find Black useful, but to me its "uncompromising" philosophy
> seems non-pythonic (IMHO) - and I won't recommend anything that thinks
> it should make decisions because I'm too stupid (see also Apple, MSFT,
> Google, ...).
> - the latter assessment may be correct, but not IMHO.
> --
> Regards =dn
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


type annotations for xpath list

2020-04-08 Thread Martin Alaçam
Hello,

I have the following descriptor:

self._pi = None
@property
def pi(self) -> list:
self._pi = self._root.xpath('processing-instruction()')
return self._pi

Mypy says: "Incompatible return value type (got "None", expected
"List[Any]")"
The xpath expression always returns a list, if it doesn't find anything it
is an empty list. I am just getting started with type hinting, would
appreciate any help.

Best regards,
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Jupyter Notebook -> PDF with A4 pages?

2019-11-22 Thread Martin Schöön
Den 2019-11-01 skrev Andrea D'Amore :
> On Thu, 31 Oct 2019 at 22:08, Martin Schöön  wrote:
>> Den 2019-10-16 skrev Piet van Oostrum :
>>> Why should that not work?
>> pip install --user pip broke pip.  I have not been able to repair pip
>
> I guess that's just the local pip shadowing the system one when you
> let the command "pip" to be resolved by the shell.
> , try calling the absolute path /usr/bin/pip .
>
Thanks, that seems to work -- "seems" because I have only had time try
out list and search functions. Which explains the slow response on my
part...

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Jupyter Notebook -> PDF with A4 pages?

2019-10-31 Thread Martin Schöön
Den 2019-10-16 skrev Piet van Oostrum :
> Martin Schöön  writes:
>
>> Den 2019-10-15 skrev Piet van Oostrum :
>>>


>> pip is version 8.1.1 which is what Ubuntu 16.04 comes
>> with. I have learnt -- the hard way -- that pip should be
>> used with the --user option. Does this mean I am stuck with
>> pip version 8.1.1? I mean, pip install --user pip seems like
>> cheating...
>
> Why should that not work?
>
I have been too busy to look into this but today I did and found
that pip install --user pip broke pip. I have not been able to
repair pip but then I only had some ten minutes to spend on it.

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Jupyter Notebook -> PDF with A4 pages?

2019-10-16 Thread Martin Schöön
Den 2019-10-15 skrev Piet van Oostrum :
>
> What does this report? Compare if there is a difference between home and work.
>
> from jupyter_core.paths import jupyter_path
> print(jupyter_path('notebook','templates'))
>
In both cases I get (with different usernames):

/home/username/.local/share/jupyter/notebook/templates
/usr/local/share/jupyter/notebook/templates
/usr/share/jupyter/notebook/templates

> And maybe also
> print(jupyter_path('nbconvert','templates'))

Same as above but with "nbconvert" substituting "notebook".

Pretty much all jupyter components are of older versions at
work. pip is version 8.1.1 which is what Ubuntu 16.04 comes
with. I have learnt -- the hard way -- that pip should be
used with the --user option. Does this mean I am stuck with
pip version 8.1.1? I mean, pip install --user pip seems like
cheating...

For a moment I thought that maybe pdflatex was missing at work
but not so.

Disclaimer: I only had a few minutes to spend on this today.

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Jupyter Notebook -> PDF with A4 pages?

2019-10-14 Thread Martin Schöön
Den 2019-10-13 skrev Piet van Oostrum :
> Martin Schöön  writes:
>
>> Is there a way to do "Download as PDF" and get A4 pages instead
>> of Letter? Yes, I know I can do "Download as LaTeX" and edit the
>
< snip >

> Make a directory ~/.jupyter/templates and put a file A4article.tplx inside it:
>
> #
> ((=- Default to the notebook output style -=))
> ((* if not cell_style is defined *))
> ((* set cell_style = 'style_jupyter.tplx' *))
> ((* endif *))
>
< snip >

> c.LatexExporter.template_file = 'A4article'
> c.PDFExporter.latex_count = 3
> c.PDFExporter.template_file = 'A4article'
> c.PDFExporter.latex_command = ['pdflatex', '{filename}']
> #
> Replace 'pdflatex' with 'xelatex' if you prefer that.
> You can leave out the c.LatexExporter.template_file line if
> you don't want the LaTeX exporter to generate A4.
>
Thanks a lot.

That worked right away -- at home but not at work. Both are 
Linux systems but there are differences in both Python and
LaTeX installations. Shouldn't be too hard to figure out --
I hope.

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Jupyter Notebook -> PDF with A4 pages?

2019-10-13 Thread Martin Schöön
Is there a way to do "Download as PDF" and get A4 pages instead
of Letter? Yes, I know I can do "Download as LaTeX" and edit the
result to get A4 but if there is a setting I have missed I save
work and time.

Yes, I have looked through the documentation and searched the
Internet but so far to no avail.

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python shows error on line 15 that i cant fix

2019-09-21 Thread Dave Martin
On Saturday, September 21, 2019 at 2:46:15 PM UTC-4, boB Stepp wrote:
> On Sat, Sep 21, 2019 at 1:01 PM Dave Martin  wrote:
> >
> > On Saturday, September 21, 2019 at 1:33:12 PM UTC-4, Terry Reedy wrote:
> > > On 9/21/2019 11:53 AM, Dave Martin wrote:
> [...]
> > > > #get the combined data and load the fits files
> > > >
> > > > fits_filename="Gaia_DR2/gaiadr2_100pc.fits"
> > > > df=pd.DataFrame()
> > > > with fits.open(fits_filename) as data:
> > > > df=pd.DataFrame(data[1].data)
> > >
> > > A 'with' statement is a compound statement.  It must be followed by a
> > > 'suite', which usually consists of an indented block of statements.
> > > This is line 17 from the first non-blank line you posted.
> [...]
> 
> > Can you provide an example of how to use the suite feature. Thank you.
> 
> Dave, you seem to have some expectation that you should be given the
> answer.  That is not how help is given in this forum.  You are
> expected to be doing the needed to work before being helped further.
> You have been referred to the tutorial multiple times.  Please read
> it!  Indentation is so fundamental to structuring Python code that it
> is clear that you need grounding in Python fundamentals.  Otherwise
> you are essentially Easter-egging through a code sample that you have
> no true understanding of.
> 
> If you must continue to Easter-egg Python instead of reading the
> tutorial (or something equivalent) then check the section of the
> tutorial on files.  You will find examples of the use of "with" there.
> 
> 
> -- 
> boB
Bob,
You seem to have the expectation that you know more about coding than me and 
that you can insult me without me retaliating. If I were you, I would leave 
this forum and never respond to another person question again, if you think 
that you can rudely ransack your way through what is supposed to be a helpful 
tool.

-Dave
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python shows error on line 15 that i cant fix

2019-09-21 Thread Dave Martin
On Saturday, September 21, 2019 at 1:33:12 PM UTC-4, Terry Reedy wrote:
> On 9/21/2019 11:53 AM, Dave Martin wrote:
> > 
> > # starAbsMags=df['radial_velocity']
> > 
> > #GaiaPandasEscapeVelocityCode
> > 
> > import pandas as pd
> > import numpy as np
> > from astropy.io import fits
> > import astropy
> > import matplotlib.pyplot as plt
> > 
> > 
> > #get the combined data and load the fits files
> > 
> > fits_filename="Gaia_DR2/gaiadr2_100pc.fits"
> > df=pd.DataFrame()
> > with fits.open(fits_filename) as data:
> > df=pd.DataFrame(data[1].data)
> 
> A 'with' statement is a compound statement.  It must be followed by a 
> 'suite', which usually consists of an indented block of statements.
> This is line 17 from the first non-blank line you posted.
> 
> Please stop spamming the list with multiple posts.  Do spend a few hours 
> reading the tutorial until you understand my answer.
> https://docs.python.org/3/tutorial/index.html  Also read 
> https://stackoverflow.com/help/minimal-reproducible-example
> so you can ask better questions.
> 
> I presume you got "SyntaxError: expected an indented block".
> A minimal example getting this error is, for instance,
> 
> while True:
> a = 1
> 
> -- 
> Terry Jan Reedy

Can you provide an example of how to use the suite feature. Thank you. 
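
For reference, the 'suite' is not a separate feature: it is simply the indented block that must follow the colon of a compound statement. A minimal example (using StringIO as a stand-in file):

```python
from io import StringIO

fake_file = StringIO("name kb\nalice 123\n")

with fake_file as fh:
    # These indented lines are the 'suite' of the with statement;
    # leave them unindented and you get "expected an indented block".
    header = fh.readline()
    body = fh.read()

print(header.strip())   # → name kb
```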
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python is bugging

2019-09-21 Thread Dave Martin
On Saturday, September 21, 2019 at 12:44:27 PM UTC-4, Brian Oney wrote:
> On Sat, 2019-09-21 at 08:57 -0700, Dave Martin wrote:
> > On Saturday, September 21, 2019 at 11:55:29 AM UTC-4, Dave Martin
> > wrote:
> > > what does expected an indented block
> > 
> > *what does an indented block mean?
> 
> It means that the line of code belongs to a certain body as defined
> above its position.  
> 
> Please follow the tutorial.
> 
> https://docs.python.org/3/tutorial/index.html

df.to_csv(r"faststars.csv", index=None,header=True)
# starAbsMags=df['radial_velocity']

#GaiaPandasEscapeVelocityCode

import pandas as pd
import numpy as np
from astropy.io import fits
import astropy
import matplotlib.pyplot as plt


#get the combined data and load the fits files

fits_filename="Gaia_DR2/gaiadr2_100pc.fits"
df=pd.DataFrame()
with fits.open(fits_filename) as data:
df=pd.DataFrame(data[1].data)
df.columns=[c.lower() for c in df.columns]
print("Columns.")
print(df.columns.values)
print("n/n")
#print out some data meta info to see what we're working with
print("Number of stars:")
nstars=len(df)
print(nstars)
distances = (df['parallax']/1000)
starAbsMags =df['phot_g_mean_mag']
df = df[(df.parallax_over_error > 10 ) ]
print("Left after filter: " +str(len(df)/float(nstars)*100)+" %")
df.hist(column='radial_velocity')
#fastdf=df[(df.radial_velocity > 200) | (df.radial_velocity < -200)]
fastdf=df[(df.radial_velocity > 550)|(df.radial_velocity<-550)]
print(len(fastdf))
#print(fastdf)# starTemps=df['astrometric_weight_al']
# df.plot.scatter("radial_velocity", "astrometric_weight_al", s=1, 
c="radial_velocity", colormap="plasma")
# #df=df[(df.radial_velocity>=-550)]
# #plt.axis([0,400,-800,-550])
# #plt.axis([0,400,550,800])
# plt.xlabel('weight(Au)')
# plt.ylabel('Speed')
# plt.title('Gaia Speed vs Weight')

this is my code the error is on line 15
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python is bugging

2019-09-21 Thread Dave Martin
On Saturday, September 21, 2019 at 11:55:29 AM UTC-4, Dave Martin wrote:
> what does expected an indented block

*what does an indented block mean?
-- 
https://mail.python.org/mailman/listinfo/python-list


python is bugging

2019-09-21 Thread Dave Martin
what does expected an indented block
-- 
https://mail.python.org/mailman/listinfo/python-list


Python shows error on line 15 that i cant fix

2019-09-21 Thread Dave Martin


# starAbsMags=df['radial_velocity']

#GaiaPandasEscapeVelocityCode

import pandas as pd
import numpy as np
from astropy.io import fits
import astropy
import matplotlib.pyplot as plt


#get the combined data and load the fits files

fits_filename="Gaia_DR2/gaiadr2_100pc.fits"
df=pd.DataFrame()
with fits.open(fits_filename) as data:
df=pd.DataFrame(data[1].data)
df.columns=[c.lower() for c in df.columns]
print("Columns.")
print(df.columns.values)
print("n/n")
#print out some data meta info to see what we're working with
print("Number of stars:")
nstars=len(df)
print(nstars)
distances = (df['parallax']/1000)
starAbsMags =df['phot_g_mean_mag']
df = df[(df.parallax_over_error > 10 ) ]
print("Left after filter: " +str(len(df)/float(nstars)*100)+" %")
df.hist(column='radial_velocity')
#fastdf=df[(df.radial_velocity > 200) | (df.radial_velocity < -200)]
fastdf=df[(df.radial_velocity > 550)|(df.radial_velocity<-550)]
print(len(fastdf))
#print(fastdf)# starTemps=df['astrometric_weight_al']
# df.plot.scatter("radial_velocity", "astrometric_weight_al", s=1, 
c="radial_velocity", colormap="plasma")
# #df=df[(df.radial_velocity>=-550)]
# #plt.axis([0,400,-800,-550])
# #plt.axis([0,400,550,800])
# plt.xlabel('weight(Au)')
# plt.ylabel('Speed')
# plt.title('Gaia Speed vs Weight')

-- 
https://mail.python.org/mailman/listinfo/python-list


How do I purge pip intsall --user packages?

2019-09-17 Thread Martin Schöön
I have installed a bunch of packages using pip install --user and
I went for a non-standard location for the install. Now I regret
this and would like to wipe this and start all over again using
the standard location. Is it enough to delete the folder I
specified or am I missing something? Having to uninstall each
and every package would be tedious...
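
For what it's worth, deleting the folder should be enough: pip keeps its metadata (the *.dist-info directories) inside the install tree itself, not in any central registry. This snippet shows where the default --user scheme lives, so you can confirm nothing else is affected (paths differ per machine):

```python
import site

# Default destination of `pip install --user`; a custom location set
# via PYTHONUSERBASE or --target is entirely separate from this tree.
print(site.getusersitepackages())
print(site.getuserbase())    # console scripts land under here
```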

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


problem about the installation

2019-06-17 Thread David Martin
Hello there!


My computer runs Windows 8 with a 32-bit operating system on an x64
processor, but I got a problem during installation. Could you please
assist me with this?
I appreciate your efforts in advance.
Thanks




Yours sincerely
YarDel Daudy



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: subprocess svn checkout password issue

2019-03-15 Thread Martin De Kauwe
On Saturday, 16 March 2019 16:50:23 UTC+11, dieter  wrote:
> Martin De Kauwe  writes:
> 
> > I'm trying to write a script that will make a checkout from a svn repo and 
> > build the result for the user. However, when I attempt to interface with 
> > the shell it asks the user for their filename and I don't know how to 
> > capture this with my implementation. 
> >
> > user = "XXX578"
> > root="https://trac.nci.org.au/svn/cable"
> > repo_name = "CMIP6-MOSRS"
> >
> > cmd = "svn checkout %s/branches/Users/%s/%s" % (root, user, repo_name)
> > p = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE,
> >   stdout=subprocess.PIPE, 
> > stderr=subprocess.PIPE)
> > error = subprocess.call(cmd, shell=True)
> > if error is 1:
> > raise("Error downloading repo")
> >
> > I tried adding .wait(timeout=60) to the subprocess.Popen command but that 
> > didn't work.
> >
> > Any advice on whether there is an augmentation to the above, or a better 
> > approach, would be much appreciated. I need to solve this with standard 
> > python libs as I'm trying to make this as simple as possible for the user.
> 
> That is non-trivial.
> 
> Read the "svn" documentation. You might be able to pass in the
> required information by other means, maybe an option, maybe
> an envvar, maybe via a configuration file.
> 
> Otherwise, you must monitor what is written to the subprocess'
> "stdout" and "stderr", recognize the interaction request,
> perform the interaction with the user and send the result
> to the subprocess' stdin.

Thanks, I think this solution will work.

import subprocess
import getpass

user = "XXX578"
root="https://trac.nci.org.au/svn/cable"
repo_name = "CMIP6-MOSRS"

pswd = "'" + getpass.getpass('Password:') + "'"
cmd = "svn checkout %s/branches/Users/%s/%s --password %s" %\
 (root, user, repo_name, pswd)
error = subprocess.call(cmd, shell=True)
if error == 1:
    raise RuntimeError("Error checking out repo")
-- 
https://mail.python.org/mailman/listinfo/python-list


subprocess svn checkout password issue

2019-03-15 Thread Martin De Kauwe
Hi,

I'm trying to write a script that will make a checkout from a svn repo and 
build the result for the user. However, when I attempt to interface with the 
shell it asks the user for their filename and I don't know how to capture this 
with my implementation. 

user = "XXX578"
root="https://trac.nci.org.au/svn/cable"
repo_name = "CMIP6-MOSRS"

cmd = "svn checkout %s/branches/Users/%s/%s" % (root, user, repo_name)
p = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE,
  stdout=subprocess.PIPE, 
stderr=subprocess.PIPE)
error = subprocess.call(cmd, shell=True)
if error is 1:
raise("Error downloading repo")

I tried adding .wait(timeout=60) to the subprocess.Popen command but that 
didn't work.

Any advice on whether there is an augmentation to the above, or a better 
approach, would be much appreciated. I need to solve this with standard python 
libs as I'm trying to make this as simple as possible for the user.

The full script is here if that helps:

https://github.com/mdekauwe/CABLE_benchmarking/blob/master/scripts/get_cable.py

Thanks
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Best way to (re)load test data in Mongo DB

2019-02-25 Thread Martin Sand Christensen
Nagy László Zsolt  writes:

> We have a system where we have to create an exact copy of the original
> database for testing. The database size is over 800GB. [...]

That all sounds pretty cool, but it's precisely the opposite of what I'm
trying to acheive: keeping things as simple as possible. Snapshotting is
neat for testing, especially for the type of snapshots that writes
deltas on top of some base. ZopeDB offers precisely this sort of
feature directly, I've read; that's what I'd wish from every database.

> For much smaller databases, you can (of course) use pure python code to
> insert test data into a test database. If it only takes seconds, then it
> is not a problem, right? I believe that for small tests (e.g. unit
> tests), using python code to populate a test database is just fine.

Yeah... I'd just hoped to push it further down given that I only have
about a handful entries for each collection.

> Regarding question #2, you can always directly give an _id for documents
> if you want:
>
> https://api.mongodb.com/python/current/api/bson/objectid.html#bson.objectid.ObjectId

Cheers. I'll give it another go.


Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Best way to (re)load test data in Mongo DB

2019-02-24 Thread Martin Sand Christensen
breamore...@gmail.com writes:
> I was going to suggest mocking. I'm no expert so I suggest that you
> search for "python test mock database" and go from there. Try to run
> tests with a real db and you're likely to be at it until domesday.

Mocking is definitely the order of the day for most tests, but I'd like
to test the data layer itself, too, and I want a number of comprehensive
function tests as well, and these need to exercise the whole stack.
Mocking is great so long as you also remember to test the things that
you mock.

The point of this exercise is to eventually release it as a sort of
example project of how to build a complex web application. Testing is
particularly important to me since it's too often being overlooked in
tutorials, or it only deals with trivial examples.


Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Best way to (re)load test data in Mongo DB

2019-02-23 Thread Martin Sand Christensen
Hi!

I'm toying around with Pyramid and am using Mongo via MongoEngine for
storage. I'm new to both Pyramid and MongoEngine. For every test case in
the part of my suite that tests the data access layer I want to reload
the database from scratch, but it feels like there should be a better
and faster way than what I'm doing now.

I have two distinct issues:
1. What's the fastest way of resetting the database to a clean state?
2. How do I load data with Mongo's internal _id being kept persistent?

For issue #1:
First of all I'd very much prefer to avoid having to use external client
programs such as mongoimport to keep the number of dependencies minimal.
Thus if there's a good way to do it through MongoEngine or PyMongo,
that'd be preferable.

My first shot at populating the database was simply to load data from a
JSON file, use this to create my model objects (based on
MongoEngine.Document) and save them to the DB. With a single-digit
number of test cases and very limited data, this approach already takes
close to a second, so I'm thinking there should be a faster way. It's
Mongo, after all, not Oracle.

My second version uses the underlying PyMongo module's insert_many()
function to add all the documents for each collection in one go, but for
this small amount of data it doesn't seem any faster.

Which brings us to issue #2:
For both of these strategies I'm unable to insert the Mongo ObjectId
type _id. I haven't made _id properties part of my models, because they
seem a bit... alien. I'd rather not include them solely to be able to
load my test data properly. How can I populate _id as an ObjectId, not
just as a string? (I'm assuming there's a difference, but it's never
come up until now.)


Am I being too difficult? I haven't been able to find much written about
this topic: discussions about mocking drown out everything else the
moment you mention 'mongo' and 'test' in the same search.


Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Reading 'scientific' csv using Pandas?

2018-11-20 Thread Martin Schöön
On 2018-11-19, Martin Schöön wrote:
> I spoke too early. Upon closer inspection I get the first column with
> decimal '.' and the rest with decimal ','. I have tried the converter
> thing to no avail :-(
>
Problem solved!

This morning I woke up with the idea of testing whether all this fuss
might be caused by the 'objectionable' files missing some data in
their first few rows. I tested by replacing the blank spaces in one
file with zeros. Bingo! For real this time!

Missing data threw read_csv off course.

Fortunately, only a few files needed a handful of zeros to work so
I could do it manually without breaking too much sweat.
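For the record, the gaps can also be filled after reading instead of
editing the files by hand; a small sketch with a made-up two-column
sample (io.StringIO stands in for a real file):

```python
import io
import pandas as pd

# Made-up sample: the first row is missing its col1 value,
# decimals use the European ',' and TAB separates the columns.
data = "col1\tcol2\n\t0\n1,5e-03\t1\n"

df = pd.read_csv(io.StringIO(data), sep="\t", decimal=",")
df = df.fillna(0.0)  # instead of typing zeros into the files manually
```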

Thanks for the keen interest shown.

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Reading 'scientific' csv using Pandas?

2018-11-19 Thread Martin Schöön
On 2018-11-19, Martin Schöön wrote:
> On 2018-11-19, Peter Otten <__pete...@web.de> wrote:
>>
>> The engine="python" produces an exception over here:
>>
>> """
>> ValueError: The 'decimal' option is not supported with the 'python' engine
>> """
>>
>> Maybe you can try and omit that option?
>
> Bingo!
> No, I don't remember why I added that engine thing. It was two days ago!
>
>> If that doesn't work you can specify a converter:
>>
>>>>> pd.read_csv("file.csv", sep="\t", converters={0: lambda s: 
>> float(s.replace(",", "."))})
>>col1  col2
>> 0  1.10e+00 0
>> 1  1.024000e-04 1
>> 2  9.492000e-10 2
>>
>> [3 rows x 2 columns]
>
I spoke too early. Upon closer inspection I get the first column with
decimal '.' and the rest with decimal ','. I have tried the converter
thing to no avail :-(

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Reading 'scientific' csv using Pandas?

2018-11-19 Thread Martin Schöön
On 2018-11-19, Peter Otten <__pete...@web.de> wrote:
> Martin Schöön wrote:
>
>> My pandas is up to date.
>> 
>
> The engine="python" produces an exception over here:
>
> """
> ValueError: The 'decimal' option is not supported with the 'python' engine
> """
>
> Maybe you can try and omit that option?

Bingo!
No, I don't remember why I added that engine thing. It was two days ago!

> If that doesn't work you can specify a converter:
>
>>>> pd.read_csv("file.csv", sep="\t", converters={0: lambda s: 
> float(s.replace(",", "."))})
>col1  col2
> 0  1.10e+00 0
> 1  1.024000e-04 1
> 2  9.492000e-10 2
>
> [3 rows x 2 columns]
>
I'll save that one for later. One never knows...

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Reading 'scientific' csv using Pandas?

2018-11-19 Thread Martin Schöön
Too many files to go through them with an editor :-(

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Reading 'scientific' csv using Pandas?

2018-11-19 Thread Martin Schöön
On 2018-11-18, Stefan Ram wrote:
> Martin Schöön writes:
>>to read from such files. This works so so. 'Common floats' (3,1415 etc)
>>works just fine but 'scientific' stuff (1,6023e23) does not work.
>
>   main.py
>
> import sys
> import pandas
> import locale
> print( sys.version )
> print( pandas.__version__ )
> with open( 'schoon20181118232102.csv', 'w' ) as file:
> print( 'col0\tcol1', file=file, flush=True )
> print( '1,1\t0', file=file, flush=True )
> print( '10,24e-05\t1', file=file, flush=True )
> print( '9,492e-10\t2', file=file, flush=True )
> EUData = pandas.read_csv\
> ( 'schoon20181118232102.csv', sep='\t', decimal=',', engine='python' )
> locale.setlocale( locale.LC_ALL, 'de' )
> print( 2 * locale.atof( EUData[ 'col0' ][ 1 ]))
>
>   transcript
>
> 3.7.0
> 0.23.4
> 0.0002048
>
Thanks, I just tried this. The line locale.setlocale... throws an
error:

"locale.Error: unsupported locale setting"

Trying other ideas instead of 'de' results in more of the same.
'' results in no errors.

The output I get is this:

3.4.2 (default, Oct  8 2014, 10:45:20) 
[GCC 4.9.1]
0.22.0
0.0002048

Scratching my head and speculating: I run this in a virtualenv
I created for Jupyter, pandas and whatever else I feel I need
for this. Could the locale be undefined, or could something like
that be causing the error?

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Reading 'scientific' csv using Pandas?

2018-11-18 Thread Martin Schöön
On 2018-11-18, Shakti Kumar wrote:
> On Sun, 18 Nov 2018 at 18:18, Martin Schöön  wrote:
>>
>> Now I hit a bump in the road when some of the data is not in plain
>> decimal notation (xxx,xx) but in 'scientific' (xx,xxxe-xx) notation.
>>
>
> Martin, I believe this should be done by pandas itself while reading
> the csv file,
> I took an example in scientific notation and checked this out,
>
> my sample.csv file is,
> col1,col2
> 1.1,0
> 10.24e-05,1
> 9.492e-10,2
>
That was a quick answer!

My pandas is up to date.

In your example you use the US convention of using "." for decimals
and "," to separate data. This works perfect for me too.

However, my data files use European conventions: decimal "," and TAB
to separate data:

col1    col2
1,1 0
10,24e-05   1
9,492e-10   2

I use 

EUData = pd.read_csv('file.csv', skiprows=1, sep='\t',
decimal=',', engine='python')

to read from such files. This works so so. 'Common floats' (3,1415 etc)
works just fine but 'scientific' stuff (1,6023e23) does not work.
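Here is a minimal in-memory version of the pattern (values made up to
match the files; io.StringIO stands in for a real file). Curiously,
this minimal case parses fine when I leave out the engine='python'
option:

```python
import io
import pandas as pd

# Made-up sample mirroring the files: decimal ',' and TAB as separator,
# including 'scientific' entries.
data = "col1\tcol2\n1,1\t0\n10,24e-05\t1\n9,492e-10\t2\n"

df = pd.read_csv(io.StringIO(data), sep="\t", decimal=",")
print(df["col1"].tolist())
```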

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Reading 'scientific' csv using Pandas?

2018-11-18 Thread Martin Schöön
I am in this project where I try to get an overview of a bunch of
computer generated (finite element program) data. I have it stored in a
number of csv files.

Reading the data into spreadsheet programs works fine but is very labour
intensive so I am working with Pandas in Jupyter notebooks which I find
much more efficient.

Now I hit a bump in the road when some of the data is not in plain
decimal notation (xxx,xx) but in 'scientific' (xx,xxxe-xx) notation.

I use read_csv and have read its documentation and poked around for
information on this, but so far I have failed. Either I have already
found it but don't understand it, or I am asking the search engines
the wrong question.

My experience of Pandas is limited and I would appreciate some guidance.

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Program to find Primes of the form prime(n+2) * prime(n+1) - prime(n) +- 1.

2018-10-02 Thread Martin Musatov
I am drafting a sequence for OEIS.

I was told Python was most accesible for beginners.

On Tue, Oct 2, 2018, 4:48 PM Bob Gailer  wrote:

> On Oct 2, 2018 4:59 PM, "Musatov"  wrote:
> >
> >  Primes of the form prime(n+2) * prime(n+1) - prime(n) +- 1.
> > DATA
> >
> > 31, 71, 73, 137, 211, 311, 419, 421, 647, 877, 1117, 1487, 1979, 2447,
> 3079, 3547, 4027, 7307, 7309, 12211, 14243, 18911, 18913, 23557, 25439,
> 28729, 36683, 37831, 46853, 50411, 53129, 55457, 57367, 60251, 67339,
> 70489, 74797, 89669, 98909, 98911
> >
> > EXAMPLE
> >
> > 7*5 - 3 - 1 = 31
> >
> > 11*7 - 5 - 1 = 71
> >
> > 11*7 - 5 + 1 = 73
> >
> > 13*11 - 7 + 1 = 137
> >
> > Can someone put this in a Python program and post?
>
> It is our policy to not write code at others requests. We are glad to help
> if you've started writing a program and are stuck.
>
> Out of curiosity where does this request come from?
>
> If you want to hire one of us to write the program, in other words pay us
> for our time and expertise, that's a different matter. We would be happy to
> comply.
> > --
> > https://mail.python.org/mailman/listinfo/python-list
>
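FWIW, a rough sketch of the construction described above, leaning on
sympy (an assumption on my part) for the prime generation and testing;
the function name is my own invention:

```python
from sympy import isprime, prime

def musatov_terms(count):
    """First `count` primes of the form
    prime(n+2) * prime(n+1) - prime(n) +- 1, in order of n (1-indexed)."""
    found = []
    n = 1
    while len(found) < count:
        base = prime(n + 2) * prime(n + 1) - prime(n)
        # Either neighbour of `base` may be prime; both can be, as with
        # 11*7 - 5 giving 71 and 73.
        for candidate in (base - 1, base + 1):
            if isprime(candidate):
                found.append(candidate)
        n += 1
    return found[:count]

print(musatov_terms(5))  # [31, 71, 73, 137, 211]
```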
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Fumbling with emacs + elpy + flake8

2018-09-15 Thread Martin Schöön
On 2018-09-14, Toni Sissala wrote:
> I'm on Ubuntu 16.04. I found out that flake8 did not play well with 
> emacs if installed with --user option, nor when installed in a virtual 
> environment. Didn't research any further, since I got it working with 
> plain pip3 install flake8
>


Toni, your advice did not work out of the box, but it put me on the
right track. When I revert to installing flake8 from Debian's repo,
it works. Strange, as I have not done it like that on my primary
computer. Both are Debian installations, but a generation apart.

Case closed, I think.

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Fumbling with emacs + elpy + flake8

2018-09-15 Thread Martin Schöön
On 2018-09-13, Brian Oney wrote:
> Hi Martin,
>
> I have messed around a lot with the myriad emacs configurations out 
> there. I found spacemacs and threw out my crappy but beloved .emacs
> config. I have looked back, but will stay put. http://spacemacs.org/
>


Thanks Brian but not the answer I was looking for this time. I will
investigate though.

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list


Fumbling with emacs + elpy + flake8

2018-09-13 Thread Martin Schöön
I am trying to set up emacs for Python coding on my secondary computer.
I follow these instructions but fail to make flake8 play with elpy:

https://realpython.com/emacs-the-best-python-editor/#elpy-python-development

I have done this some time in the past on my main computer and there it
works just fine. I have compared the set-up of this on the two computers
and fail to figure out why it works on one and not the other.

There are a couple of things that are not the same on the computers:

1) The secondary computer has a later version of emacs installed.

2) I used pip3 install --user flake8 on the secondary computer, but on
the primary computer I think I left out the --user flag (not knowing
about it at the time). I have added the path to .local/bin, and
M-x elpy-config finds flake8 on both computers. Yet it does not
work on my secondary computer...

Any leads are greatly appreciated.

/Martin

PS Debian on both computers.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Guido van Rossum resigns as Python leader

2018-07-13 Thread Bob Martin
in 796624 20180714 064331 Gregory Ewing  wrote:
>Larry Martell wrote:
>> And while we're talking about the Dutch, why is the country called
>> Holland, but then also The Netherlands, but the people are Dutch?
>
>And Germany is called Deutschland?

The real question is why do English speakers refer to Deutschland as Germany.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Meaning of abbreviated terms

2018-05-11 Thread Bob Martin
in 793617 20180511 072806 Steven D'Aprano 
 wrote:
>On Fri, 11 May 2018 07:20:36 +0000, Bob Martin wrote:
>
>> in 793605 20180511 044309 T Berger  wrote:
>>>On Saturday, May 5, 2018 at 6:45:46 PM UTC-4, MRAB wrote:
>>>> On 2018-05-05 17:57, T Berger wrote:
>>>> > What does the "p" in "plist" stand for? Is there a python glossary
>>>> > that spells out the meanings of abbreviated terms?
>>>> >
>>>> "plist" is "property list". It's listed in the Python documentation.
>>>
>>>Thanks for the answer. Missed it till now.
>>
>> In IBM-speak it was parameter list.
>
>
>
>But that's not where plists came from, was it? As I understand it, the
>plist data format was invented by Apple, and they called it a property
>list.

How old is Apple?
I was using plist for parameter list in OS/360 in 1965.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Meaning of abbreviated terms

2018-05-10 Thread Bob Martin
in 793605 20180511 044309 T Berger  wrote:
>On Saturday, May 5, 2018 at 6:45:46 PM UTC-4, MRAB wrote:
>> On 2018-05-05 17:57, T Berger wrote:
>> > What does the "p" in "plist" stand for?
>> > Is there a python glossary that spells out the meanings of abbreviated 
>> > terms?
>> >
>> "plist" is "property list". It's listed in the Python documentation.
>
>Thanks for the answer. Missed it till now.

In IBM-speak it was parameter list.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Issue with python365.chm on window 7

2018-04-23 Thread Bob Martin
in 793268 20180423 223830 "Brian Gibbemeyer"  wrote:
>From:   Brian Gibbemeyer/Detroit/IBM
>To: python-list@python.org, d...@python.org
>Date:   04/23/2018 03:35 PM
>Subject:Issue with python365.chm on window 7
>
>
>Not sure which email this should go to.
>
>But I downloaded the .chm version of the Python guide and found that it is
>not working in Windows 7
>
>
>
>
>
>
>Thank you,
>
>Brian Gibbemeyer
>Sr Software Engineer
>Watson Health - Value Based Care
>
>
>Phone: 1-7349133594 | Mobile: 1-7347258319
>E-mail: bgibb...@us.ibm.com
>
>
>100 Phoenix Dr
>Ann Arbor, MI 48108-2202
>United States

Senior Software Engineer?
Seriously?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Mayavi website?

2018-02-18 Thread Martin Schöön
On 2018-02-17, Martin Schöön wrote:
> Anyone else having problems with interacting with
> http://code.enthought.com/pages/mayavi-project.html
> ?
>
Later yesterday I found this:
http://docs.enthought.com/mayavi/mayavi/
and it works without a hitch.

/Martin
-- 
https://mail.python.org/mailman/listinfo/python-list

