AES Encryption/Decryption
Python newbie here, looking for code samples for encryption and decryption functions using AES. I see lots of stuff on the interwebs, but also lots of comments back and forth about bugs, incorrect implementations, etc. I need to encrypt some strings that will be passed around in URLs, and also some PII data at rest. Thanks. -- https://mail.python.org/mailman/listinfo/python-list
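[A common answer on the list: don't hand-roll AES; use a vetted recipe. Below is a minimal sketch using the third-party `cryptography` package (`pip install cryptography`). Its Fernet recipe wraps AES-128-CBC plus an HMAC, and its tokens are URL-safe base64, which suits values passed in URLs. The string `"some PII here"` and the in-memory key handling are illustrative only; where you store the key is up to you.]

```python
# Sketch: authenticated symmetric encryption with Fernet
# (from the third-party `cryptography` package).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # 32-byte url-safe base64 key; store it securely
f = Fernet(key)

token = f.encrypt(b"some PII here")   # url-safe token (includes timestamp + HMAC)
plain = f.decrypt(token)              # raises InvalidToken if tampered with
print(plain)
```

Decryption with the wrong key, or of a modified token, raises `cryptography.fernet.InvalidToken` rather than silently returning garbage, which is the main advantage over a bare AES primitive.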
Re: Is pypi the best place to find external python packages?
On Tuesday, August 21, 2018 at 9:48:50 AM UTC-6, Jeff M wrote:
> are there other places also? On pypi I did not see anywhere the status of
> defects or downloads, if it's actively supported.

Never mind, I see the info I was looking for on the left side. -- https://mail.python.org/mailman/listinfo/python-list
Is pypi the best place to find external python packages?
Are there other places also? On PyPI I did not see anywhere the status of defects, download counts, or whether a package is actively supported. -- https://mail.python.org/mailman/listinfo/python-list
Project Structure for Backend ETL Project
Is this a good example to follow for a project that mostly uses Python to interact with external data sources including files, transform the data, and import it into Postgres? https://github.com/bast/somepackage I have an SWE background, but not with Python, and I want to make sure my team is following good practices. I am aware of PEP 8, but does it also have project structure examples? -- https://mail.python.org/mailman/listinfo/python-list
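[For reference, a layout sketch in the spirit of the linked repo. PEP 8 covers code style only, not project layout; structure conventions come from packaging docs and community practice. All names below (`myetl`, `extract.py`, etc.) are hypothetical placeholders, not taken from bast/somepackage.]

```
myetl/
    myetl/               # the importable package
        __init__.py
        extract.py       # read files / external sources
        transform.py     # cleaning and reshaping
        load.py          # write into Postgres
    tests/
        test_transform.py
    pyproject.toml       # or setup.py in older projects
    requirements.txt
    README.md
```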
Re: multi-core software
On Jun 10, 12:49 pm, Seamus MacRae wrote:
> Jeff M. wrote:
> > On Jun 9, 9:08 pm, Arved Sandstrom wrote:
> >> Jon Harrop wrote:
> >>> Arved Sandstrom wrote:
> >>>> Jon, I do concurrent programming all the time, as do most of my peers.
> >>>> Way down on the list of why we do it is the reduction of latency.
> >>> What is higher on the list?
> >> Correctness.
> >
> > IMO, that response is a bit of a cop-out. Correctness is _always_ most
> > important, no matter what application you are creating; without it,
> > you don't have a job and the company you work for goes out of business.
>
> And when, exactly, did Microsoft go out of business? I hadn't heard it
> mentioned in the news. :)

Touche. :) Jeff M. -- http://mail.python.org/mailman/listinfo/python-list
Re: multi-core software
On Jun 9, 9:08 pm, Arved Sandstrom wrote:
> Jon Harrop wrote:
> > Arved Sandstrom wrote:
> >> Jon, I do concurrent programming all the time, as do most of my peers.
> >> Way down on the list of why we do it is the reduction of latency.
> >
> > What is higher on the list?
>
> Correctness.

IMO, that response is a bit of a cop-out. Correctness is _always_ most important, no matter what application you are creating; without it, you don't have a job and the company you work for goes out of business.

But assuming that your program works and does what it's supposed to, I agree with Jon that performance needs to be right near the top of the list of concerns. Why? Performance isn't about looking good as a programmer, or having fun making a function run in 15 cycles instead of 24, or coming up with some neat bit-packing scheme so that your app now uses only 20K instead of 200K. Performance is, pure and simple, about one thing only: money. Programs that use more memory require more money for hardware from every user. Programs that run slower eat more time per day. If you have 100,000 users, all doing an operation once per day that takes 20 seconds, being able to shave 5 seconds off that saves 5.78 man-days of work. Hell, for some applications, that 20 seconds is just startup time spent at a splash screen.

Just imagine if every Google search took even 5 seconds to resolve: how much time would be wasted every day around the world, ignoring the fact that Google wouldn't exist if that were the case ;-). Obviously Google engineers work incredibly hard every day to ensure correct results, but performance had better be right up there at the top of the list as well.

Jeff M. -- http://mail.python.org/mailman/listinfo/python-list
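[The 5.78 figure quoted above checks out, but only if a "man-day" means a full 24-hour day; a quick back-of-the-envelope in Python, with the 8-hour-workday version alongside for comparison:]

```python
# Back-of-the-envelope check of the "5.78 man-days" figure above.
users = 100_000
seconds_saved = 5                      # saved per user, per day

total_seconds = users * seconds_saved  # 500,000 seconds per day
days_24h = total_seconds / (24 * 3600)     # ~5.79 if a man-day is 24 hours
workdays_8h = total_seconds / (8 * 3600)   # ~17.4 eight-hour workdays

print(round(days_24h, 2), round(workdays_8h, 1))
```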
Re: multi-core software
On Jun 7, 3:19 pm, Arved Sandstrom wrote:
> Jon Harrop wrote:
> > Arved Sandstrom wrote:
> >> Jon Harrop wrote:
> >>> I see no problem with mutable shared state.
> >> In which case, Jon, you're in a small minority.
> >
> > No. Most programmers still care about performance and performance means
> > mutable state.
>
> Quite apart from performance and mutable state, I believe we were
> talking about mutable _shared_ state. And this is something that gets a
> _lot_ of people into trouble.

Mutable shared state gets _bad_ (or perhaps "inexperienced" would be a better adjective) programmers, who don't know what they are doing, into trouble. There are many problem domains that either benefit greatly from mutable shared state or can't [easily] be done without it. Unified memory management is an obvious example; there are many more. Unshared state has its place. Immutable state has its place. Shared immutable state has its place. Shared mutable state has its place. Jeff M. -- http://mail.python.org/mailman/listinfo/python-list
Re: multi-core software
On Jun 7, 1:56 am, Paul Rubin <http://phr...@nospam.invalid> wrote:
> "Jeff M." writes:
> > > > Even the lightest weight
> > > > user space ("green") threads need a few hundred instructions, minimum,
> > > > to amortize the cost of context switching
> > There's always a context switch. It's just whether or not you are
> > switching in/out a virtual stack and registers for the context or the
> > hardware stack/registers.
>
> I don't see the hundreds of instructions in that case.
> http://shootout.alioth.debian.org/u32q/benchmark.php?test=threadring&;...
> shows GHC doing 50 million lightweight thread switches in 8.47
> seconds, passing a token around a thread ring. Almost all of that is
> probably spent acquiring and releasing the token's lock as the token
> is passed from one thread to another. That simply doesn't leave time
> for hundreds of instructions per switch.

Who said there have to be? Sample code below (just to get the point across):

    struct context {
        vir_reg pc, sp, bp, ... ;   // saved virtual registers
        object* stack;
        // ...
        context* next;
    };

    struct vm {
        context* active_context;
    };

    void switch_context(vm* v)
    {
        // maybe GC v->active_context before switching
        v->active_context = v->active_context->next;
    }

Also, there aren't "hundreds of instructions" with multiplexing, either; it's all done in hardware. Take a look at the disassembly of any application that uses native threads on a platform that supports preemption. You won't see any instructions anywhere in the program that perform a context switch. If you did, that would be absolutely horrible. Imagine if the compiler did something like this:

    while (1) {
        // whatever
    }
    do_context_switch_here();

That would suck. ;-) That's not to imply that there isn't a cost; there's always a cost. The example above just goes to show that for green threads, the cost [of the switch] can be reduced down to a single pointer assignment. Jeff M. -- http://mail.python.org/mailman/listinfo/python-list
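[A runnable Python analogue of the sketch above, since this is python-list: generators play the role of green-thread contexts, each `yield` is a voluntary switch point, and the scheduler's "context switch" is just moving to the next entry in a queue, comparable to the single pointer assignment in the C sketch. All names here are made up for illustration.]

```python
# Cooperative "green threads" modeled with Python generators.
def worker(name, out):
    for i in range(3):
        out.append((name, i))
        yield  # hand control back to the scheduler

def run(tasks):
    # Round-robin scheduler: the "switch" is just re-queueing the task.
    while tasks:
        task = tasks.pop(0)
        try:
            next(task)          # resume the task where it last yielded
            tasks.append(task)  # this re-queue *is* the context switch
        except StopIteration:
            pass                # task finished; drop it

out = []
run([worker("a", out), worker("b", out)])
print(out)  # the two workers interleave: a0, b0, a1, b1, a2, b2
```

The generator object holds the suspended frame (the "virtual stack and registers" from the earlier post), so no instructions for saving state appear in the scheduler itself.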
Re: multi-core software
On Jun 6, 9:58 pm, Paul Rubin <http://phr...@nospam.invalid> wrote:
> George Neuner writes:
> > Even the lightest weight
> > user space ("green") threads need a few hundred instructions, minimum,
> > to amortize the cost of context switching.
>
> I thought the definition of green threads was that multiplexing them
> doesn't require context switches.

There's always a context switch. It's just whether or not you are switching in/out a virtual stack and registers for the context or the hardware stack/registers. Jeff M. -- http://mail.python.org/mailman/listinfo/python-list
Re: The fundamental concept of continuations
> (6) any good readable references that explain it lucidly ?

This is something that has been very interesting to me for a while now, and I'm actually still having a difficult time wrapping my head around it completely. The best written explanation I've come across is in "The Scheme Programming Language" (http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=9946), but perhaps others have better references. I'll attempt my own little explanation of call/cc. I'll butcher some of it, I'm sure, but hopefully those more knowledgeable will politely correct me. I'll start with a loose analogy and point out a couple of examples I came across that did make a lot of sense. First, the analogy I have (if you are coming from C programming like me) is setjmp and longjmp. It's a bad analogy in that it deals with hardware and stack states as opposed to functions, but a good one in that it saves the current state of execution and returns to that same state at a later time, with a piece of data attached. My first example would be to create a return function in Scheme. I hope I don't get this wrong, but it would be something like this:

    (define (my-test x)
      (call/cc
        (lambda (return)
          (return x))))

Now, here's my understanding of what is happening under the hood:

1. call/cc stores the current execution state and creates a function to restore to that state.
2. call/cc then calls its own argument with the function it created.

The key here is that "return" is a function (created by call/cc) taking one argument, and it restores execution to the state it was in when the call/cc began (or immediately after it?). The line (return x) is really just calling the function created by call/cc, which restores the execution state to what it was just prior to the call/cc, along with a parameter (in this case, the value of x).
My next example I don't follow 100%, and I won't attempt to reproduce it here, but it generates a continuation that modifies itself (bad?) to define a list iterator. http://blog.plt-scheme.org/2007/07/callcc-and-self-modifying-code.html I recommend putting that code into a Scheme interpreter and running it. You'll get it. Hope this helps, and I look forward to better explanations than mine that will help me along as well. :) Jeff M. -- http://mail.python.org/mailman/listinfo/python-list
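[Since this is python-list, here is a rough Python analogue of the `my-test` example above. Python has no call/cc; this models only the *escape* (early-exit) use of a continuation, using an exception, and the names `call_ec` and `_Escape` are invented for the sketch. Real call/cc is strictly more powerful (a captured continuation can be re-invoked later, even multiple times).]

```python
# An escape-only continuation: calling `escape(v)` aborts `f`
# and makes call_ec return v, much like the Scheme `return` above.
class _Escape(Exception):
    def __init__(self, value):
        self.value = value

def call_ec(f):
    def escape(value):
        raise _Escape(value)   # unwind back to call_ec
    try:
        return f(escape)       # normal return if escape is never called
    except _Escape as e:
        return e.value         # value passed to the "continuation"

def my_test(x):
    return call_ec(lambda ret: ret(x))

print(my_test(42))  # prints 42
```

The crucial difference from Scheme: an exception can only unwind *outward* while `call_ec` is still on the stack, whereas a Scheme continuation can be stored and invoked long after `call/cc` has returned.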