Re: Time Date Conversion?

2020-11-04 Thread Cameron Simpson
On 04Nov2020 18:02, Steve  wrote:
>The text File entry is:
>   BPd 2020-11-04 17:28:03.352027  66
>
>I bring it into the program using:
>with open("_TIME-DATE.txt" , 'r') as infile:
> for lineEQN in infile: # loop to find each line in the file for that
>dose
>and set it in a variable as follows:
>ItemDateTime = lineEQN[7:36].strip()
>
>When I print ItemDateTime, it looks like:
>  2020-11-04 17:28:03.352027
>
>How do I display it as "Wednesday, November 4, 2020 5:28pm" ?

Larry has pointed you at strptime and strftime, which read ("parse") a 
string for date information and write ("format") a string for 
presentation. The intermediate form is usually either a timestamp (an 
offset from some point in time, in seconds) or a datetime object (see 
the datetime module).

I'd also point out that your source format looks like a nice ISO8601 
format time, and the datetime module has a handy fromisoformat function 
for parsing the basic forms of that.

As programmers we like the ISO8601 presentation because it has the most 
significant values first and also naively sorts lexically into the time 
ordering.
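
For example (untested; note that strftime has no portable flag for 
dropping leading zeroes - %-d is platform specific - and %p gives "PM" 
rather than "pm"):

    from datetime import datetime

    dt = datetime.fromisoformat("2020-11-04 17:28:03.352027")
    # %A weekday name, %B month name, %I 12-hour clock, %p AM/PM
    print(dt.strftime("%A, %B %d, %Y %I:%M%p"))
    # -> Wednesday, November 04, 2020 05:28PM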

A less fragile way to parse your example line is to use split() to break 
it into whitespace-separated fields and then parse field[1]+" "+field[2].

That also gets you the first word as field[0], which might be useful - 
it likely helps classify the input lines in some way.
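
Something like this (untested; "label" and "when" are just illustrative 
names), reusing your lineEQN variable:

    from datetime import datetime

    lineEQN = "BPd 2020-11-04 17:28:03.352027  66"
    fields = lineEQN.split()  # ['BPd', '2020-11-04', '17:28:03.352027', '66']
    label = fields[0]
    when = datetime.fromisoformat(fields[1] + " " + fields[2])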

Cheers,
Cameron Simpson 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Time Date Conversion?

2020-11-04 Thread MRAB

On 2020-11-04 23:02, Steve wrote:

The text File entry is:
BPd 2020-11-04 17:28:03.352027  66

I bring it into the program using:
with open("_TIME-DATE.txt" , 'r') as infile:
  for lineEQN in infile: # loop to find each line in the file for that
dose
and set it in a variable as follows:
ItemDateTime = lineEQN[7:36].strip()

When I print ItemDateTime, it looks like:
   2020-11-04 17:28:03.352027

How do I display it as "Wednesday, November 4, 2020 5:28pm" ?

Use the datetime module. Parse it with datetime.strptime and then format 
it with the .strftime method of the resultant datetime object.
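
For example, something like this (untested):

    from datetime import datetime

    dt = datetime.strptime("2020-11-04 17:28:03.352027",
                           "%Y-%m-%d %H:%M:%S.%f")
    print(dt.strftime("%A, %B %d, %Y %I:%M%p"))
    # -> Wednesday, November 04, 2020 05:28PM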

--
https://mail.python.org/mailman/listinfo/python-list


Re: Time Date Conversion?

2020-11-04 Thread Larry Martell
On Wed, Nov 4, 2020 at 6:21 PM Steve  wrote:
>
> The text File entry is:
>BPd 2020-11-04 17:28:03.352027  66
>
> I bring it into the program using:
> with open("_TIME-DATE.txt" , 'r') as infile:
>  for lineEQN in infile: # loop to find each line in the file for that
> dose
> and set it in a variable as follows:
> ItemDateTime = lineEQN[7:36].strip()
>
> When I print ItemDateTime, it looks like:
>   2020-11-04 17:28:03.352027
>
> How do I display it as "Wednesday, November 4, 2020 5:28pm" ?

Look at strptime/strftime
-- 
https://mail.python.org/mailman/listinfo/python-list


Time Date Conversion?

2020-11-04 Thread Steve
The text File entry is: 
   BPd 2020-11-04 17:28:03.352027  66  

I bring it into the program using:
with open("_TIME-DATE.txt", 'r') as infile:
    for lineEQN in infile:  # loop to find each line in the file for that dose
and set it in a variable as follows:
        ItemDateTime = lineEQN[7:36].strip()

When I print ItemDateTime, it looks like:
  2020-11-04 17:28:03.352027

How do I display it as "Wednesday, November 4, 2020 5:28pm" ?
Steve
-
Footnote:
Seatbelts are very dangerous.
I cannot tell you how many times I almost
got into an accident trying to buckle one.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: IDEL from Windows It does not work

2020-11-04 Thread Igor Korot
Hi,

On Wed, Nov 4, 2020 at 2:47 PM David Ruíz Domínguez
 wrote:
>
>IDEL from Windows does not work; I started the program and it won’t open.
>I already uninstalled and reinstalled it and it still does not open.

Do you mean IDLE?
If yes - please define "does not work". Are you trying to start it
from the Desktop? Start Menu?
Does it give you any error? Which one?
Are you trying to execute some script with it?

Please give us more info...

I also presume you are working under Windows 10.

Thank you.

>
>thanks for your service
>
>
>
>Sent from [1]Mail for Windows 10
>
>
>
> References
>
>Visible links
>1. https://go.microsoft.com/fwlink/?LinkId=550986
> --
> https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


IDEL from Windows It does not work

2020-11-04 Thread David Ruíz Domínguez
   IDEL from Windows does not work; I started the program and it won’t open.
   I already uninstalled and reinstalled it and it still does not open.

   thanks for your service

    

   Sent from [1]Mail for Windows 10

    

References

   Visible links
   1. https://go.microsoft.com/fwlink/?LinkId=550986
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Find word by given characters

2020-11-04 Thread duncan smith
On 04/11/2020 19:12, Avi Gross wrote:
> My comments at end:
> 
> -Original Message-
> From: Python-list  On
> Behalf Of duncan smith
> Sent: Wednesday, November 4, 2020 1:09 PM
> To: python-list@python.org
> Subject: Re: Find word by given characters
> 
> On 04/11/2020 04:21, Avi Gross wrote:
>> Duncan, my comments below yours at end.
>>
>> ---YOURS---
>> The Counter approach only requires iterating over the letters once to 
>> construct the letters bag, then each word once to create the relevant 
>> word bag. After that it's (at worst) a couple of lookups and a 
>> comparison for each unique character in letters (for each word).
>>
>> Your approach requires iteration over the words to create the lists of 
>> characters. Then they are (potentially) iterated over multiple times 
>> looking for the characters in letters. There's also the cost of 
>> removing items from arbitrary positions in the lists. Also, it seems 
>> that the character frequencies must be equal, and your approach only 
>> ensures that the words contain at least the required numbers of
> characters.
>>
>> In computational terms, if the first approach is something like O(n+m) 
>> for n letters and words of length m, your algorithm is more like O(nm).
>> Not to say that it will be slower for all possible letters and 
>> dictionaries, but probably for any realistic cases and a lot slower 
>> for large enough dictionaries.
>>
>> Duncan
>>
>> --MINE---
>>
>> I appreciate your analysis. I have not looked at the "counter"
>> implementation and suspect it does some similar loops within, albeit 
>> it may be implemented in a compiled language like C++.
>>
> 
> Before the introduction of counters I would have used a dict to create a
> mapping of characters to counts. That would require iterating over the
> characters in a string only once to create / update the dict entries, and I
> doubt counters are implemented less efficiently than that.
> 
>> I did not write out my algorithm in Python but have done it for 
>> myself. It runs fast enough with most of the time spent in the slow I/O
> part.
>>
>> We can agree all algorithms have to read in all the words in a data file.
>> There may be ways to store the data such as in a binary tree and even 
>> ways to thus prune the search as once a node is reached where all 
>> required letters have been found, all further words qualify below that 
>> point. If you match say "instant" then instants and instantiation 
>> would be deeper in the tree and also qualify assuming extra letters are
> allowed.
> 
> I don't see how a binary tree would be useful. As I've pointed out in
> another post, there are other data structures that could be useful. What I
> had in mind was a trie (prefix tree). But it's only the distinct characters
> and frequencies that are relevant and so I'd exploit that (and one or two
> other things) to reduce space and improve search.
> 
> We may differ on
>> the requirement as I think that the number of repeats for something 
>> like a,t,t require to be at least as big as in "attend" but that 
>> "attention" with yet another "t" would also be OK. If I am wrong, 
>> fine, but I note the person requesting this has admitted a certain 
>> lack of credentials while also claiming he made up a scenario just for 
>> fun. So this is not actually a particularly worthy challenge let alone
> with a purpose.
>>
>> My impression is that the average word length would be something small 
>> like 5-7. The number of words in a dictionary might be 100,000 or 
>> more. So if you want efficiency, where do you get the bang for the buck?
>>
>> I would argue that a simple test run on all the words might often 
>> narrow the field to a much smaller number of answers like just a few 
>> thousand or even much less. Say you test for the presence of "aeiou" 
>> in words, in whatever order. That might be done from reading a file 
>> and filtering out a relatively few potential answers. You can save 
>> those for a second round to determine if they are fully qualified by 
>> any additional rules that may involve more expensive operations.
>>
> 
> Your proposed approach didn't involve any trees (or tries) or filtering of
> words. So I don't see how any of this justifies it.
> 
>> How fast (or slow) are regular expressions for this purpose? Obviously 
>> it depends on complexity and something like "^[^aeiou]*[aeiou] 
>> [^aeiou]*[aeiou] [^aeiou]*[aeiou] [^aeiou]*[aeiou] [^aeiou]*[aeiou] 
>> [^aeiou]*$"
>>
>> would be easy to construct once but likely horribly inefficient in 
>> searching and a bit of overkill here. I suspect there is already some 
>> simple C function that could be used from within python that looks 
>> like findall(choices, word) that might return how many of the letters 
>> in choices were found in word and you simply compare that to 
>> length(word) perhaps more efficiently.
>>
>> It looks easier to check if a character exists in one of the ways 
>> already discussed within python using a loop as d

RE: Find word by given characters

2020-11-04 Thread Avi Gross via Python-list
My comments at end:

-Original Message-
From: Python-list  On
Behalf Of duncan smith
Sent: Wednesday, November 4, 2020 1:09 PM
To: python-list@python.org
Subject: Re: Find word by given characters

On 04/11/2020 04:21, Avi Gross wrote:
> Duncan, my comments below yours at end.
> 
> ---YOURS---
> The Counter approach only requires iterating over the letters once to 
> construct the letters bag, then each word once to create the relevant 
> word bag. After that it's (at worst) a couple of lookups and a 
> comparison for each unique character in letters (for each word).
> 
> Your approach requires iteration over the words to create the lists of 
> characters. Then they are (potentially) iterated over multiple times 
> looking for the characters in letters. There's also the cost of 
> removing items from arbitrary positions in the lists. Also, it seems 
> that the character frequencies must be equal, and your approach only 
> ensures that the words contain at least the required numbers of
> characters.
> 
> In computational terms, if the first approach is something like O(n+m) 
> for n letters and words of length m, your algorithm is more like O(nm).
> Not to say that it will be slower for all possible letters and 
> dictionaries, but probably for any realistic cases and a lot slower 
> for large enough dictionaries.
> 
> Duncan
> 
> --MINE---
> 
> I appreciate your analysis. I have not looked at the "counter"
> implementation and suspect it does some similar loops within, albeit 
> it may be implemented in a compiled language like C++.
> 

Before the introduction of counters I would have used a dict to create a
mapping of characters to counts. That would require iterating over the
characters in a string only once to create / update the dict entries, and I
doubt counters are implemented less efficiently than that.

> I did not write out my algorithm in Python but have done it for 
> myself. It runs fast enough with most of the time spent in the slow I/O
> part.
> 
> We can agree all algorithms have to read in all the words in a data file.
> There may be ways to store the data such as in a binary tree and even 
> ways to thus prune the search as once a node is reached where all 
> required letters have been found, all further words qualify below that 
> point. If you match say "instant" then instants and instantiation 
> would be deeper in the tree and also qualify assuming extra letters are
> allowed.

I don't see how a binary tree would be useful. As I've pointed out in
another post, there are other data structures that could be useful. What I
had in mind was a trie (prefix tree). But it's only the distinct characters
and frequencies that are relevant and so I'd exploit that (and one or two
other things) to reduce space and improve search.

We may differ on
> the requirement as I think that the number of repeats for something 
> like a,t,t require to be at least as big as in "attend" but that 
> "attention" with yet another "t" would also be OK. If I am wrong, 
> fine, but I note the person requesting this has admitted a certain 
> lack of credentials while also claiming he made up a scenario just for 
> fun. So this is not actually a particularly worthy challenge let alone
> with a purpose.
> 
> My impression is that the average word length would be something small 
> like 5-7. The number of words in a dictionary might be 100,000 or 
> more. So if you want efficiency, where do you get the bang for the buck?
> 
> I would argue that a simple test run on all the words might often 
> narrow the field to a much smaller number of answers like just a few 
> thousand or even much less. Say you test for the presence of "aeiou" 
> in words, in whatever order. That might be done from reading a file 
> and filtering out a relatively few potential answers. You can save 
> those for a second round to determine if they are fully qualified by 
> any additional rules that may involve more expensive operations.
> 

Your proposed approach didn't involve any trees (or tries) or filtering of
words. So I don't see how any of this justifies it.

> How fast (or slow) are regular expressions for this purpose? Obviously 
> it depends on complexity and something like "^[^aeiou]*[aeiou] 
> [^aeiou]*[aeiou] [^aeiou]*[aeiou] [^aeiou]*[aeiou] [^aeiou]*[aeiou] 
> [^aeiou]*$"
> 
> would be easy to construct once but likely horribly inefficient in 
> searching and a bit of overkill here. I suspect there is already some 
> simple C function that could be used from within python that looks 
> like findall(choices, word) that might return how many of the letters 
> in choices were found in word and you simply compare that to 
> length(word) perhaps more efficiently.
> 
> It looks easier to check if a character exists in one of the ways 
> already discussed within python using a loop as discussed. Something 
> as simple as
> this:
> 
>   needed = "aeiou"
>   trying = "education"
>   found = all([trying.find(each) >= 0  for each in needed ])
>   

Re: Find word by given characters

2020-11-04 Thread duncan smith
On 04/11/2020 04:21, Avi Gross wrote:
> Duncan, my comments below yours at end.
> 
> ---YOURS---
> The Counter approach only requires iterating over the letters once to
> construct the letters bag, then each word once to create the relevant word
> bag. After that it's (at worst) a couple of lookups and a comparison for
> each unique character in letters (for each word).
> 
> Your approach requires iteration over the words to create the lists of
> characters. Then they are (potentially) iterated over multiple times looking
> for the characters in letters. There's also the cost of removing items from
> arbitrary positions in the lists. Also, it seems that the character
> frequencies must be equal, and your approach only ensures that the words
> contain at least the required numbers of characters.
> 
> In computational terms, if the first approach is something like O(n+m) for n
> letters and words of length m, your algorithm is more like O(nm).
> Not to say that it will be slower for all possible letters and dictionaries,
> but probably for any realistic cases and a lot slower for large enough
> dictionaries.
> 
> Duncan
> 
> --MINE---
> 
> I appreciate your analysis. I have not looked at the "counter"
> implementation and suspect it does some similar loops within, albeit it may
> be implemented in a compiled language like C++.
> 

Before the introduction of counters I would have used a dict to create a
mapping of characters to counts. That would require iterating over the
characters in a string only once to create / update the dict entries,
and I doubt counters are implemented less efficiently than that.
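
FWIW, a minimal sketch of the bag/Counter check (untested; the function
name is just illustrative), using the "word must contain at least the
given letters" reading; the exact-frequency variant would simply compare
the two Counters with ==:

    from collections import Counter

    def contains_letters(word, letters):
        # True if `word` has every given letter at least as many times
        # as it occurs in `letters`
        need = Counter(letters)
        have = Counter(word)
        return all(have[ch] >= n for ch, n in need.items())

    print(contains_letters("attend", "att"))     # True
    print(contains_letters("attention", "att"))  # True (an extra 't' is fine)
    print(contains_letters("eaten", "att"))      # False (only one 't')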

> I did not write out my algorithm in Python but have done it for myself. It
> runs fast enough with most of the time spent in the slow I/O part. 
> 
> We can agree all algorithms have to read in all the words in a data file.
> There may be ways to store the data such as in a binary tree and even ways
> to thus prune the search as once a node is reached where all required
> letters have been found, all further words qualify below that point. If you
> match say "instant" then instants and instantiation would be deeper in the
> tree and also qualify assuming extra letters are allowed. 

I don't see how a binary tree would be useful. As I've pointed out in
another post, there are other data structures that could be useful. What
I had in mind was a trie (prefix tree). But it's only the distinct
characters and frequencies that are relevant and so I'd exploit that
(and one or two other things) to reduce space and improve search.

We may differ on
> the requirement as I think that the number of repeats for something like
> a,t,t require to be at least as big as in "attend" but that "attention" with
> yet another "t" would also be OK. If I am wrong, fine, but I note the person
> requesting this has admitted a certain lack of credentials while also
> claiming he made up a scenario just for fun. So this is not actually a
> particularly worthy challenge let alone with a purpose.
> 
> My impression is that the average word length would be something small like
> 5-7. The number of words in a dictionary might be 100,000 or more. So if you
> want efficiency, where do you get the bang for the buck? 
> 
> I would argue that a simple test run on all the words might often narrow the
> field to a much smaller number of answers like just a few thousand or even
> much less. Say you test for the presence of "aeiou" in words, in whatever
> order. That might be done from reading a file and filtering out a relatively
> few potential answers. You can save those for a second round to determine if
> they are fully qualified by any additional rules that may involve more
> expensive operations.
> 

Your proposed approach didn't involve any trees (or tries) or filtering
of words. So I don't see how any of this justifies it.

> How fast (or slow) are regular expressions for this purpose? Obviously it
> depends on complexity and something like 
> "^[^aeiou]*[aeiou] [^aeiou]*[aeiou] [^aeiou]*[aeiou] [^aeiou]*[aeiou]
> [^aeiou]*[aeiou] [^aeiou]*$"
> 
> would be easy to construct once but likely horribly inefficient in searching
> and a bit of overkill here. I suspect there is already some simple C
> function that could be used from within python that looks like
> findall(choices, word) that might return how many of the letters in choices
> were found in word and you simply compare that to length(word) perhaps more
> efficiently.
> 
> It looks easier to check if a character exists in one of the ways already
> discussed within python using a loop as discussed. Something as simple as
> this:
> 
>   needed = "aeiou"
>   trying = "education"
>   found = all([trying.find(each) >= 0  for each in needed ])
>   print(found)
> 
>   trying = "educated"
>   found = all([trying.find(each) >= 0  for each in needed ])
>   print(found)
>   The above prints True and then False.
> 
> My point is you can use the above to winnow down possible answers and only
> subject that smaller

Frontend Developer | Job position at CMCC Foundation, Italy

2020-11-04 Thread info cmcc
Please feel free to circulate to anyone you think may be interested.
--

*Frontend Developer (code 12294)*

*Deadline: 10/11/2020*

The CMCC is considering hiring a talented, motivated and proactive
Frontend Developer to support its digital ocean applications.
This job announcement is a public invitation to express interest in the
above-mentioned CMCC position.

The location is *CMCC Headquarters in Lecce, Italy*.

The primary purpose for this position is to support both the research and
operational activities of the OPA division.
The desired, mandatory qualifications are:

   - M.Sc. degree (or candidate for graduation in the next couple of
   months) or equivalent working experience in Computer Science, Engineering;
   - web services and web application development;
   - REST api;
   - HTML and CSS (with cross-browser development);
   - JavaScript frameworks (jQuery, ReactJS);
   - Modern authorization mechanisms, such as JSON Web Token;
   - Programming languages (such as Python or Java)
   - Version Control Management Systems
   - Fluency in the English language

Furthermore, it is welcome to have as much as possible of the following
experience:

   - UNIX/Linux operating systems and script;
   - authorization mechanisms, such as JSON Web Token;
   - Web Feature Service (WFS), Web Map Service (WMS), Web GIS;
   - DBMS (mySQL);
   - mobile applications languages;
   - Object-oriented design and developmental skills;
   - Platforms for publishing spatial data and interactive mapping
   applications to the web (e.g. MapServer);
   - experience in managing/manipulating NetCDF data.

Belonging to legally protected categories (ex L. 68/99) will constitute a
preferential condition.

The initial appointment is for 24 months starting as soon as possible at an
annual salary ranging from 24 to 36K Euros for Junior Research Associates
and from 32 to 50K Euros for Senior Research Associates, inclusive of
benefits, depending on qualification and experience.

APPLY NOW:
https://cmccfoundation.applytojob.com/apply/ABzlrQovtP/Frontend-Developer

-- 

Fondazione CMCC - Centro Euro-Mediterraneo sui Cambiamenti Climatici
Via Augusto Imperatore, 16 - 73100 Lecce
i...@cmcc.it - www.cmcc.it
-- 
https://mail.python.org/mailman/listinfo/python-list


Seeking guidance to start a career in python programming

2020-11-04 Thread ankur gupta
Good Morning to All,
My name is Ankur Gupta and I wish to seek guidance from you. I come from a
non-computer-science background but have always been attracted to this
field. I had computer science in class 12th (where I learned C++ and
Python), but did Mechanical Engineering in college instead. I wish to
pursue a career in Python programming and have therefore taken two online
certification courses in Python, but beyond this my progress has almost
stalled.

I request your guidance on how I can move forward with my current
learning of the language, and on the steps I can take to pursue a career
in this field.

Once again, thanks to you all for your time and consideration. I look
forward to your responses.


Regards
Ankur Gupta
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Please help test astral char display in tkinter Text (especially *nix)

2020-11-04 Thread Menno Holscher

On 03-11-2020 at 04:04, Terry Reedy wrote:
Perhaps half of the assigned chars in the first plane are printed 
instead of being replaced with a narrow box. This includes emoticons as 
foreground color outlines on background color.  Maybe all of the second 
plane of extended CJK chars are printed.  The third plane is unassigned 
and prints as unassigned boxes (with an X).


If you get errors, how many.  If you get a hang or crash, how far did 
the program get?



openSuse Linux 15.2 Leap, Python 3.6.10, tcl and tk 8.6.7

The program runs fine, but complains in the text scrollbox:
0x1 character U+1 is above the range (U+0000-U+FFFF) allowed by Tcl

until

0x3ffe0 character U+3ffe0 is above the range (U+0000-U+FFFF) allowed by Tcl

Menno Hölscher


--
https://mail.python.org/mailman/listinfo/python-list