Re: [Tutor] An Advice to a dedicated beginner

2016-01-19 Thread Anshu Kumar
Hey,

This sounds very common. Your next step should be something that gets you
feedback, such as applying for a job or internship, or taking part in coding
contests. Also try looking for some small projects on GitHub to contribute to.

Most importantly, never lose your zeal; it will take you there.

Good luck
Anshu
On Jan 19, 2016 7:03 AM, "Michael Appiah Boachie"  wrote:

> Hello Tutors,
>
> I am a programming beginner. Throughout college I never properly grasped
> anything the professors taught, but when I sat down on my own with a couple
> of materials and videos, I became very good at the basics of Python,
> completing various beginner courses and projects with ease. However, I have
> run into some kind of “beginner wall” where I don’t know where or what to
> take on next. This is killing my excitement. I think this isn’t something
> new for experienced programmers to hear. That’s why I am asking for help.
> Please, any advice would help a dedicated one here. I don’t know what to do
> next with the knowledge I have acquired. People keep saying “get into open
> source”, “do this and that”. I wish they actually knew how someone like me
> feels. There are so many videos, articles and materials to get you to know
> the basics and also to become a top expert, but almost nothing on how to
> transition between the two. That’s exactly how I’m feeling.
>
> Hoping to receive some kind words.
>
> Michael


Re: [Tutor] Simultaneous read and write on file

2016-01-19 Thread Anshu Kumar
Hello All,

Thanks so much for your responses.

Here is my actual scenario. I have a CSV file that will already be present.
I need to read it and remove some rows based on some logic. I had earlier
written this with two separate file opens, which I think was nice and clean.

actual code:

with open(file_path, 'rb') as fr:
    for row in csv.DictReader(fr):
        #Skip for those segments which are part of overridden_ids
        if row['id'] not in overriden_ids:
            segments[row['id']] = {
                'id': row['id'],
                'attrib': json.loads(row['attrib']),
                'stl': json.loads(row['stl']),
                'meta': json.loads(row['meta']),
            }

#rewriting files with deduplicated segments
with open(file_path, 'wb') as fw:
    writer = csv.UnicodeWriter(fw)
    writer.writerow(["id", "attrib", "stl", "meta"])
    for seg in segments.itervalues():
        writer.writerow([seg['id'], json.dumps(seg["attrib"]),
                         json.dumps(seg["stl"]), json.dumps(seg["meta"])])


I have received review comments asking me to improve this block by using just
a single file open and minimal memory usage.


Thanks and Regards,

Anshu



On Tue, Jan 19, 2016 at 11:04 AM, Cameron Simpson  wrote:

> On 18Jan2016 20:41, Martin A. Brown  wrote:
>
>> Yes and so have I. Maybe twice in 30 years of programming. [...]

>>>
>>> I may have done it a little more than that; I agree it is very
>>> rare. I may be biased because I was debugging exactly this last
>>> week. (Which itself is an argument against mixed read/write with
>>> one file - it was all my own code and I spent over a day chasing
>>> this because I was looking in the wrong spot).
>>>
>>
>> Oh yes.  Ooof.  Today's decisions are tomorrow's albatross.
>>
>
> Actually I have good reason to mix these in this instance, and now that it
> is debugged it is reliable and more efficient to boot.
>
> [...]
>
>> Tip for new players: if you do any .write()s, remember to do a
> .flush() before doing a seek or a read
>

 That's exactly my point. There are so many extra things you have to do
 when working in mixed mode. It's too easy to treat them like normal-mode
 files and get it wrong. Experts can do it and make it work, but mostly
 it's just not needed.

>>>
>>> Yes. You're write - for simplicity and reliability two distinct
>>> open file instances are much easier.
>>>
>>
>> Yes, he's write [sic].  He writes a bunch!  ;)
>>
>
> Alas, I have a tendency to substitute homophones, or near homophones, when
> typing in a hurry. You'll see this in a bunch of my messages. More
> annoyingly, some are only visible when I reread a posted message instead of
> when I was proofreading prior to send.
>
> [Homonyms mess me up when I'm typing, all sew.]
>>
>
> Homonyms too.
>
> Cheers,
> Cameron Simpson 
>


Re: [Tutor] Beautiful Soup

2016-01-19 Thread Peter Otten
Crusier wrote:

> Hi Python Tutors,
> 
> I am currently able to strip down to the string I want. However, I
> have problems with the JSON script and I am not sure how to slice it
> into a dictionary.
> 
> import urllib
> import json
> import requests
> 
> from bs4 import BeautifulSoup
> 
> 
> url =
> 
'https://bochk.etnet.com.hk/content/bochkweb/eng/quote_transaction_daily_history.php?code=6881\
> 
=F=09=16=S=44c99b61679e019666f0570db51ad932=0=0'
> 
> def web_scraper(url):
>     response = requests.get(url)
>     html = response.content
>     soup = BeautifulSoup(html, 'lxml')
> 
>     stock1 = soup.findAll('script')[4].string
>     stock2 = stock1.split()
>     stock3 = stock2[3]
>     # is stock3 sufficient to process as JSON or need further cleaning??
> 
>     text = json.dumps(stock3)
>     print(text)
> 
> 
> web_scraper(url)
> 
> If it is possible, please give me some pointers. Thank you

- You need json.loads(), not dumps() to convert text into a python data
  structure
- It looks like you have to remove a trailing ";" from stock3 for loads() to
  succeed
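
A minimal sketch of both points, using a made-up stand-in for stock3 (the
real string depends on the page you scrape):

import json

# Hypothetical example of what soup.findAll('script')[4].string might yield
# after split(); the actual content depends on the site.
stock3 = '{"code": "6881", "trades": [{"price": 1.23, "volume": 1000}]};'

cleaned = stock3.rstrip().rstrip(';')  # drop the trailing semicolon
data = json.loads(cleaned)             # loads() turns JSON text into Python objects
print(data['trades'][0]['price'])      # dumps() would have gone the other way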




Re: [Tutor] Simultaneous read and write on file

2016-01-19 Thread Alan Gauld
On 19/01/16 05:41, Anshu Kumar wrote:

> Here is my actual scenario. I have a csv file and it would already be
> present. I need to read and remove some rows based on some logic. I have
> written earlier two separate file opens which I think was nice and clean.

Yes, it looks straightforward. The only possible issue is that
it reads the entire input file in before writing the output
which could become a memory hog.

> with open(file_path, 'rb') as fr:
>     for row in csv.DictReader(fr):
>         #Skip for those segments which are part of overridden_ids
>         if row['id'] not in overriden_ids:
>             segments[row['id']] = {
>                 'id': row['id'],
>                 'attrib': json.loads(row['attrib']),
>                 'stl': json.loads(row['stl']),
>                 'meta': json.loads(row['meta']),
>             }
> 
> #rewriting files with deduplicated segments
> with open(file_path, 'wb') as fw:
>     writer = csv.UnicodeWriter(fw)
>     writer.writerow(["id", "attrib", "stl", "meta"])
>     for seg in segments.itervalues():
>         writer.writerow([seg['id'], json.dumps(seg["attrib"]),
>                          json.dumps(seg["stl"]), json.dumps(seg["meta"])])
> 
> 
> I have got review comments to improve this block by having just single
> file open and minimum memory usage.

I'd ignore the advice to use a single file. One extra file
handle is insignificant in memory terms and the extra simplicity
two handles brings is worth far more.

What I would do is open both files at the start and, instead
of building up the segments dict, just write the data directly to the
output file. That will slash your memory footprint.
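
A minimal sketch of that two-handle approach (using Python 3's csv module
rather than the custom UnicodeWriter; since the input and output share
file_path, this writes to a hypothetical temporary file and swaps it in
afterwards; it keeps the first row seen for each id, whereas your dict kept
the last):

import csv
import os

def drop_overridden(file_path, overridden_ids):
    tmp_path = file_path + '.tmp'             # hypothetical scratch file
    with open(file_path, 'r', newline='') as fr, \
         open(tmp_path, 'w', newline='') as fw:
        writer = csv.writer(fw)
        writer.writerow(["id", "attrib", "stl", "meta"])
        seen = set()                          # only the ids are held in memory
        for row in csv.DictReader(fr):
            if row['id'] in overridden_ids or row['id'] in seen:
                continue
            seen.add(row['id'])
            # stream the row straight out; the json round trip is skipped
            # because the text is copied through unchanged
            writer.writerow([row['id'], row['attrib'], row['stl'], row['meta']])
    os.replace(tmp_path, file_path)           # swap the new file into place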

Contrast that with using a single file:
You need to read a line, check its length, then seek back to
the beginning of the line.
Create the new output string and check its length.
If it is the same length (miracles happen!) just write the line.
If it is shorter than the original, write the new line,
then write spaces to fill the gap.
If it is longer than the original - oh dear. If you write it you will
overwrite part of your next line. So you need to do a look-ahead to grab
the next line of data before writing.
But now that next line has to be compared against
data.length - overlap.length, and if the new line
is longer than that, repeat.
And if your new line is longer than two old lines it gets even worse.
On top of that you now have a file that is partially full of new-style
data while the rest is old-style. Anyone trying to read it will get
very confused.
And we haven't even considered what to do about the lines you
want to delete...
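
A tiny standalone demonstration of that overwrite problem (a made-up
two-line file, nothing to do with your CSV data):

# Build a small two-line file, then try to replace the first line in place.
with open('demo.txt', 'w') as f:
    f.write('first\n')
    f.write('second line with more text\n')

with open('demo.txt', 'r+') as f:
    f.readline()                         # read the first line
    f.seek(0)                            # seek back to its start
    f.write('a much longer first line')  # longer than 'first\n'...

with open('demo.txt') as f:
    print(f.read())  # ...so it clobbers the start of the second line,
                     # leaving its tail behind as garbage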

In short this is not a situation where + mode is a good idea.

-- 
Alan G
Author of the Learn to Program web site
http://www.alan-g.me.uk/
http://www.amazon.com/author/alan_gauld
Follow my photo-blog on Flickr at:
http://www.flickr.com/photos/alangauldphotos




Re: [Tutor] Simultaneous read and write on file

2016-01-19 Thread Peter Otten
Anshu Kumar wrote:

> Hello All,
> 
> So much Thanks for your response.
> 
> Here is my actual scenario. I have a csv file and it would already be
> present. I need to read and remove some rows based on some logic. I have
> written earlier two separate file opens which I think was nice and clean.
> 
> actual code:
> 
> with open(file_path, 'rb') as fr:
>     for row in csv.DictReader(fr):
>         #Skip for those segments which are part of overridden_ids
>         if row['id'] not in overriden_ids:

Oops typo; so probably not your actual code :(

>             segments[row['id']] = {
>                 'id': row['id'],
>                 'attrib': json.loads(row['attrib']),
>                 'stl': json.loads(row['stl']),
>                 'meta': json.loads(row['meta']),
>             }
> 
> #rewriting files with deduplicated segments
> with open(file_path, 'wb') as fw:
>     writer = csv.UnicodeWriter(fw)
>     writer.writerow(["id", "attrib", "stl", "meta"])
>     for seg in segments.itervalues():
>         writer.writerow([seg['id'], json.dumps(seg["attrib"]),
>                          json.dumps(seg["stl"]), json.dumps(seg["meta"])])
> 
> 
> I have got review comments to improve this block by having just single
> file open and minimum memory usage.

Are the duplicate ids stored in overridden_ids or are they implicitly 
removed by overwriting them in

segments[row["id"]] = ...

? If the latter, does it matter whether the last or the first row with a 
given id is kept?
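
For reference, a tiny sketch of the difference between those two behaviours
(made-up rows, not your data):

rows = [{"id": "a", "v": 1}, {"id": "a", "v": 2}]

last_wins = {}
for row in rows:
    last_wins[row["id"]] = row               # plain assignment: last duplicate kept

first_wins = {}
for row in rows:
    first_wins.setdefault(row["id"], row)    # setdefault: first occurrence kept

print(last_wins["a"]["v"])   # 2
print(first_wins["a"]["v"])  # 1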



[Tutor] python plotting

2016-01-19 Thread Bachir Bachir via Tutor
Dear all,

I have some data taken at specific times of the day and I want to display
those data according to the time. Attached are the csv file and the display
output from python pandas. I used the following script:

v2 = read_csv('v2_12.dat')  # data frame for v2
v2.plot(kind='bar', x='Time_hhmmss', y='Av_phase', figsize=(12,1))  # display for v2 only

I want to see a gap on the display because there was no data recorded
between 08:20:56 and 14:55:33, but on my display I see them side by side.
Is there any way to do this using python display options? Your help is
highly appreciated. Thanks much


Re: [Tutor] Fwd:

2016-01-19 Thread Danny Yoo
My apologies for this ugliness.  Follow-up to the mailing list: I've
contacted the organizers of the Amrita InCTF competition and told them
that one of their members was trying to use us to cheat for answers.
I'll follow up if I hear back from the organizers.


Apparently, this is an endemic problem, if one can generalize from the
multiple posts the organizers have made about folks breaking the
rules:

https://www.facebook.com/cybergurukulam


Re: [Tutor] python plotting

2016-01-19 Thread Francois Dion
I'm guessing you did "from pandas import *"... It is better to import
pandas as pd and then use pd.read_csv. I also tend to name my data frames df
or a variation, and time series ts.

Speaking of series, if your data is not a series with a datetime type, it
will be plotted as categorical data, meaning each different x value is
represented at a constant interval, in index order. You need to convert
your time string to a datetime (pd.to_datetime is your friend).
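
A minimal sketch of that conversion, reusing the column names from your
script and assuming Time_hhmmss holds hh:mm:ss strings. A line plot is used
here because pandas bar plots always space the x values evenly, even with a
datetime index:

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('v2_12.dat')
df['Time_hhmmss'] = pd.to_datetime(df['Time_hhmmss'], format='%H:%M:%S')
df = df.set_index('Time_hhmmss').sort_index()

# With a real datetime index the x axis is continuous, so the stretch with
# no samples (e.g. 08:20:56 to 14:55:33) shows up as an empty gap.
df['Av_phase'].plot(figsize=(12, 1))
plt.show()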

Francois

On Tue, Jan 19, 2016 at 2:28 PM, Bachir Bachir via Tutor 
wrote:

> Dear all,
> I have some data taken at specific times of the day and I want to display
> those data according to the time. Attached are the csv file and the display
> output from python pandas. I used the following script:
>  v2 = read_csv('v2_12.dat')  # data frame for v2
>  v2.plot(kind='bar', x='Time_hhmmss', y='Av_phase', figsize=(12,1))
> # display for v2 only
> I want to see a gap on the display because there was no data recorded
> between 08:20:56 and 14:55:33, but on my display I see them side by side.
> Is there any way to do this using python display options? Your help is
> highly appreciated. Thanks much



-- 
raspberry-python.blogspot.com - www.pyptug.org - www.3DFutureTech.info -
@f_dion


Re: [Tutor] Source of MySQL Command Interpreter

2016-01-19 Thread Alan Gauld
On 16/01/16 23:27, Ricardo Martínez wrote:
> Hi, I wrote a small app to execute MySQL commands and retrieve the results into a Treeview

I finally got round to looking at this.

Here are a couple of comments.

I don't understand what the else part is supposed to do here:

if self.cursor.description is not None:
    self.resultset = self.cursor.fetchall()
else:
    print("DES: ", self.cursor.description, "\n")

Surely it only executes if description is None, in
which case what do you expect to print?

Also, why is this a second if/else when it's the same test?

if self.cursor.description is not None:
    columns = [x[0] for x in self.cursor.description]
else:
    columns = []

Why not just set the columns in the branches of the first test?
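
One possible shape for the combined version (a sketch only; treating the
"no description" case as an empty result set is an assumption about what
you intend):

def fetch_results(cursor):
    """Call after cursor.execute(); works with any DB-API cursor."""
    if cursor.description is not None:        # SELECT-style statement: rows available
        columns = [col[0] for col in cursor.description]
        rows = cursor.fetchall()
    else:                                      # INSERT/UPDATE/DDL: no result set
        columns = []
        rows = []
    return columns, rows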

Also at the top of that function:

def executeSQL(self, sqlstr, grdResult):
    if sqlstr is not None:
        self.txtCommand.clipboard_clear()
        self.txtCommand.clipboard_append(sqlstr)
        print("QUERY: ", sqlstr, "\n")
    if self.cursor is not None:
        try:
            self.cursor.execute(sqlstr)

If sqlstr is None you still try to execute it? Is that correct?

Finally you use the \ line continuation in a few places where
it is not needed because you are inside parens.

You can use as many newlines as you like inside parens:
eg.

def foo(bar,   # a watering hole
baz,   # an English Barry
bash,  # a shell
bob):  # Blackadder's new servant
pass


Sorry, not much time but those were just some quick observations.


-- 
Alan G
Author of the Learn to Program web site
http://www.alan-g.me.uk/
http://www.amazon.com/author/alan_gauld
Follow my photo-blog on Flickr at:
http://www.flickr.com/photos/alangauldphotos




Re: [Tutor] Fwd:

2016-01-19 Thread Danny Yoo
No.  This is definitely wrong.
On Jan 19, 2016 2:51 AM, "Deepak Nn"  wrote:

> Finding the answer is very important; that's why I'm asking, for the competition.
>
> On Tue, Jan 19, 2016 at 4:19 PM, Deepak Nn 
> wrote:
>
>> This is for an online competition I am now participating in, Amrita InCTF
>> Junior. Please don't misunderstand, and send me the code.
>>
>> On Tue, Jan 19, 2016 at 12:29 AM, Danny Yoo 
>> wrote:
>>
>>> > Please provide a python program to run a program (.exe) and get a hash
>>> > *exactly* as:
>>> >
>>> >  160 106 182 190 228 64 68 207 248 109 67 88 41. The username to be
>>> > used is admin. The *password* is what is to be found out. The hash
>>> > provided is of the correct password. Mostly the password will be
>>> > *13 chars long* and has a small chance of being all alpha characters.
>>>
>>>
>>> Hi Deepak,
>>>
>>> This doesn't seem like a beginner-level question.  If I had to guess,
>>> it sounds more like something out of a shady rent-a-coder kind of
>>> thing.
>>>
>>> Unfortunately, I don't think we can help with this.  Even if we did
>>> have the technical expertise, I still don't think we should help with
>>> this in the first place.  If I'm understanding the question correctly,
>>> you're asking for brute-force password cracking, which goes against most
>>> professional codes of conduct.  Example:
>>> https://www.acm.org/about-acm/acm-code-of-ethics-and-professional-conduct
>>> .
>>>
>>
>>
>