[issue30449] Improve __slots__ datamodel documentation

2017-12-31 Thread mpb

mpb <mpb.m...@gmail.com> added the comment:

@rhettinger

I disagree (but you're the boss).  If a function can take type X as a 
parameter, I believe docs should also say what the expected behavior is when 
you call the function and pass it type X, especially when type X is 
fundamentally different from every other type the function accepts.  (And yes, 
__slots__ is not a function, but I still find the metaphor apt.)

Cheers!

--




[issue30449] Improve __slots__ datamodel documentation

2017-12-29 Thread mpb

mpb <mpb.m...@gmail.com> added the comment:

Can __slots__ be assigned a string?  If so, what are the consequences?  I find 
the current docs lack clarity.  For example, from the docs for 3.7.0a3:

https://docs.python.org/3.7/reference/datamodel.html#slots

>  3.3.2.4. __slots__
>
>  [snip]
>
>  object.__slots__
>      This class variable can be assigned a string,
>      iterable, or sequence of strings [snip]

However, "3.3.2.4.1. Notes on using __slots__" does not discuss what happens 
when a string is assigned to __slots__.

The "notes" section does discuss assigning a "sequence of strings" to __slots__.

The "notes" section also says: "Any non-string iterable may be assigned to 
__slots__."

Based on quick experimentation, it appears that the string must be a single 
identifier.  I get a TypeError if I try to assign "foo bar" to __slots__.  The 
consequence of assigning a string appears to be that only a single slot is 
created.  It would be nice if this were stated clearly in the docs.
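
For concreteness, here is a minimal sketch of what I observed (CPython 3.x; 
the exact TypeError wording is from memory and may differ across versions):

class A:
    __slots__ = 'x'             # a plain string creates exactly one slot, named 'x'

a = A()
a.x = 1                         # fine
try:
    a.y = 2                     # no 'y' slot (and no __dict__), so this fails
except AttributeError as exc:
    print ('AttributeError:', exc)

try:
    class B:
        __slots__ = 'foo bar'   # not a valid identifier
except TypeError as exc:
    print ('TypeError:', exc)   # e.g. "__slots__ must be identifiers"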

The docs for 2.7 seem similar to version 3.7.0a3.  So maybe all versions of the 
docs could be improved regarding the specifics of what happens when you assign 
a string to __slots__.

--
nosy: +mpb




[issue17145] memoryview(array.array)

2013-11-21 Thread mpb

Changes by mpb <mpb.m...@gmail.com>:


--
nosy: +mpb




[issue19530] cross thread shutdown of UDP socket exhibits unexpected behavior

2013-11-15 Thread mpb

mpb added the comment:

> It's just a patch to avoid returning garbage in the address.

Right, which is why I pursued the point.  recvfrom should not return ambiguous 
data (the ambiguity being between shutdown and receiving a zero 
length message).  It is now possible to distinguish the two by looking at the 
src_addr.  (Arguably this could have been done before, but garbage in src_addr 
is not a reliable indicator, IMO.)

> But AFAICT, recvfrom() returning 0 is enough to know that the socket
> was shut down.

My example code clearly shows a zero length UDP message being sent and received 
prior to shutdown.

I admit, sending a zero length UDP message is probably pretty rare, but it is 
allowed and it does work.  And it makes more sense than returning garbage in 
src_addr.

> But two things to keep in mind:
> - it'll only work on connected datagram sockets

What will only work on connected datagram sockets?  Shutdown *already* works 
(i.e., wakes up blocked threads) on non-connected datagram sockets on Linux.  
Shutdown does wake them up (it just happens to return an error *after* waking 
them up).  So... the only reason to connect the UDP socket (prior to calling 
shutdown) is to avoid the error (or, in Python, to avoid the raised exception).

> - even then, I'm not sure it's supported by POSIX: I can't think of
> any spec specifying the behavior in case of cross-thread shutdown (and
> close won't unblock for example).  Also, I think HP-UX doesn't wake up
> the waiting thread in that situation.

Do you consider the POSIX specifications to be robust when it comes to 
threading?  It would not surprise me if there are other threading-related grey 
areas in POSIX.

> So I'd still advise you to either use a timeout or a select().

My application only needs to run on Linux.  If I cared about portability, I 
might well do something else.

--




[issue19530] cross thread shutdown of UDP socket exhibits unexpected behavior

2013-11-14 Thread mpb

mpb added the comment:

Someone wrote a kernel patch based on my bug report.

http://www.spinics.net/lists/netdev/msg257653.html

--




[issue19577] memoryview bind (the opposite of release)

2013-11-13 Thread mpb

New submission from mpb:

I'm writing Python code to parse binary (byte-oriented) data.

I am (at least somewhat) aware of the performance implications of various 
approaches to doing the parsing.  With performance in mind, I would like to 
avoid unnecessary creation/destruction/copying of memory/objects.

An example:

Let's say I am parsing b'0123456789'.
I want to extract and return the substring b'234'.

Now let's say I do this with memoryviews, to avoid unnecessary creation and 
copying of memory.

m0 = memoryview (b'0123456789')
m1 = m0[2:5]    # m1 == b'234'

Let's say I do this 1000 times.  Each time I use readinto to load the next data 
into m0.  So I can create m0 only once and reuse it.

But if the relative position of m1 inside m0 changes with each parse, then I 
need to create a new m1 for each parse.
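
Roughly, the pattern I have in mind looks like this (a sketch only; the 
bytearray buffer and the io.BytesIO source stand in for my real input):

import io

buf = bytearray (10)                      # reusable buffer, allocated once
m0  = memoryview (buf)                    # created once

stream = io.BytesIO (b'0123456789' * 3)   # stand-in for the real data source

while stream.readinto (m0):               # refill the same buffer each iteration
    m1 = m0[2:5]                          # today: a brand new memoryview per parse
    print (bytes (m1))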

In the context of the above example, I think it might be nice if I could rebind 
an existing memoryview to a new object.  For example:

m0 = memoryview (b'0123456789')
m1.bind (m0, 2, 5)    # m1 == b'234'

Is this an idea worth considering?

(Possibly related: Issue 9789, 9757, 3506; PEP 3118)

--
messages: 202806
nosy: mpb
priority: normal
severity: normal
status: open
title: memoryview bind (the opposite of release)
type: enhancement
versions: Python 3.5




[issue19577] memoryview bind (the opposite of release)

2013-11-13 Thread mpb

mpb added the comment:

It would be nice in terms of avoiding malloc()s and free()s.

I could estimate it in terms of memoryview creations per message parse.  I'll 
be creating 10-20 memoryviews to parse each ~100 byte message.

So... I guess I'd have to build a test to see how long a memoryview 
creation/free takes, and then perhaps compare it with plain variable-to-variable 
assignment instead.
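
Something along these lines, I suppose (a rough sketch; the timings are only 
meaningful relative to each other on a given machine):

import timeit

setup = 'm0 = memoryview (bytes (100))'

t_slice  = timeit.timeit ('m1 = m0[2:5]', setup = setup, number = 1000000)
t_rebind = timeit.timeit ('m1 = m0',      setup = setup, number = 1000000)

print ('slice (new memoryview each time): %.3f s' % t_slice)
print ('plain name rebinding:             %.3f s' % t_rebind)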

If Python pools and recycles unused objects by type (the way Lisp recycles cons 
cells) without free()ing them back to the heap, then there would be minimal 
speed improvement from my suggestion.  I don't know how CPython works 
internally, however.

--




[issue18240] hmac unnecessarily restricts input to bytes

2013-11-12 Thread mpb

Changes by mpb <mpb.m...@gmail.com>:


--
nosy: +mpb




[issue19530] cross thread shutdown of UDP socket exhibits unexpected behavior

2013-11-11 Thread mpb

mpb added the comment:

> Connecting a UDP socket doesn't established a duplex connection like
> in TCP:

Stream and duplex are orthogonal concepts.

I still contend that connected UDP sockets are a duplex communication channel 
(under every definition of duplex I have read).

The Linux connect manpage and the behavior of the Linux connect and shutdown 
system calls agree with me.  (So does the OpenBSD shutdown manpage.)

But we agree that this is not a Python issue (unless Python wants to improve 
its documentation to explicitly mention the benefits of cross thread shutdowns 
of TCP sockets).

--




[issue19530] cross thread shutdown of UDP socket exhibits unexpected behavior

2013-11-09 Thread mpb

mpb added the comment:

After some research...

> Which is normal, since UDP sockets aren't connected.

But UDP sockets can be connected!

If I connect the UDP sockets, then shutdown succeeds (no exception is raised), 
but recvfrom still appears to succeed, returning a zero length message with a 
bogus address family, IP address and port.  (Bogus even if I set them to zero 
before the call!)

FYI, the FreeBSD (and OpenBSD) shutdown manpages anticipate calling shutdown on 
DGRAM sockets.  And the Linux connect manpage discusses connecting DGRAM 
sockets.

Here is the updated Python code.  I do expect to try to report this upstream.  
(Also, I now have C/pthreads code, if you want to see it.  As expected, C 
behaves identically.)



import socket, threading, time

fd_0 = socket.socket (socket.AF_INET, socket.SOCK_DGRAM)
fd_0.bind(('localhost', 8000))
fd_0.connect (('localhost', 8001))

fd_1 = socket.socket (socket.AF_INET, socket.SOCK_DGRAM)
fd_1.bind(('localhost', 8001))
fd_1.connect (('localhost', 8000))

def thread_main ():
  for i in range (3) :
    # print ('recvfrom  blocking ...')
    recv, remote_addr = fd_0.recvfrom (1024)
    print ('recvfrom  %s  %s' % (recv, remote_addr))

def main ():
  fd_1.send (b'test')
  fd_1.send (b'')
  fd_0.shutdown (socket.SHUT_RD)

thread = threading.Thread ( target = thread_main )
thread.start ()
time.sleep (0.5)
main ()
thread.join ()
print ('exiting')



And the code outputs:
recvfrom  b'test'  ('127.0.0.1', 8001)
recvfrom  b''  ('127.0.0.1', 8001)
recvfrom  b''  (36100, b'\xe4\xc6\xf0^7\xe2\x85\xf8\x07\xc1\x04\x8d\xe4\xc6')
exiting

--




[issue19530] cross thread shutdown of UDP socket exhibits unexpected behavior

2013-11-08 Thread mpb

New submission from mpb:

I have a multi-threaded application.
A background thread is blocked, having called recvfrom on a UDP socket.
The main thread wants to cause the background thread to unblock.
With TCP sockets, I can achieve this by calling:
sock.shutdown (socket.SHUT_RD)

When I try this with a UDP socket, the thread calling shutdown raises an 
OSError (transport endpoint not connected).

The blocked thread does unblock (which is helpful), but recvfrom appears to 
return successfully, returning a zero length byte string and a bogus address!

(This is the opposite of the TCP case, where the blocked thread raises the 
exception, and the call to shutdown succeeds.)

In contrast, sock.close does not cause the blocked thread to unblock.  (This is 
the same for both TCP and UDP sockets.)

I suspect Python is just exposing the underlying C behavior of shutdown and 
recvfrom.  I'd test it in C, but I'm not fluent in writing multi-threaded code 
in C.

It would be nice if the recvfrom thread could raise some kind of exception, 
rather than appearing to return successfully.  It might also be worth reporting 
this bug upstream (wherever upstream is for recvfrom).  I'm running Python 
3.3.1 on Linux.

See also this similar bug.
http://bugs.python.org/issue8831

The Python socket docs could mention that to unblock a reading thread, sockets 
should be shutdown, not closed.  This might be implied in the current docs, but 
it could be made explicit.  See:

http://docs.python.org/3/library/socket.html#socket.socket.close

For example, the following sentence could be appended to the Note at the above 
link: "Note: (...)  Specifically, in multi-threaded programming, if a thread 
is blocked performing a read or write on a socket, calling shutdown from 
another thread will unblock the blocked thread."  Unblocking via shutdown seems 
to work with TCP sockets, but may result in strange behavior with UDP sockets.

Here is sample Python code that demonstrates the behavior.

import socket, threading, time

sock = socket.socket (socket.AF_INET, socket.SOCK_DGRAM)
sock.bind (('localhost', 8000))

def recvfrom ():
  for i in range (2) :
    print ('recvfrom  blocking ...')
    recv, remote_addr = sock.recvfrom (1024)
    print ('recvfrom  %s  %s' % (recv, remote_addr))

thread = threading.Thread ( target = recvfrom )
thread.start ()
time.sleep (0.5)

sock2 = socket.socket (socket.AF_INET, socket.SOCK_DGRAM)
sock2.sendto (b'test', ('localhost', 8000))

time.sleep (0.5)

try:  sock.shutdown (socket.SHUT_RD)
except OSError as exc :  print ('shutdown  os error  %s' % str (exc))

sock.close ()

thread.join ()
print ('exiting')




And here is the output of the above code:

recvfrom  blocking ...
recvfrom  b'test'  ('127.0.0.1', 48671)
recvfrom  blocking ...
shutdown  os error  [Errno 107] Transport endpoint is not connected
recvfrom  b''  (59308, b'\xaa\xe5\xec\xde3\xe6\x82\x02\x00\x00\xa8\xe7\xaa\xe5')
exiting

--
components: IO
messages: 202457
nosy: mpb
priority: normal
severity: normal
status: open
title: cross thread shutdown of UDP socket exhibits unexpected behavior
type: behavior
versions: Python 3.3




[issue19438] Where is NoneType in Python 3?

2013-10-29 Thread mpb

New submission from mpb:

types.NoneType seems to have disappeared in Python 3.  This is probably 
intentional, but I cannot figure out how to test if a variable is of type 
NoneType in Python 3.

Specifically, I want to write:
assert  type (v) in ( bytes, types.NoneType )

Yes, I could write:
assert  v is None or type (v) is bytes

But the first assert statement is easier to read (IMO).

Here are links to various Python 3 documentation about None:

[1] http://docs.python.org/3/library/stdtypes.html#index-2

[2] http://docs.python.org/3/library/constants.html#None

Link [2] says None is "The sole value of the type NoneType."  However, NoneType 
is not listed in the Python 3 documentation index.  (As compared with the 
Python 2 index, where NoneType is listed.)

[3] http://docs.python.org/3/library/types.html

If NoneType is gone in Python 3, mention of NoneType should probably be removed 
from link [2].  If NoneType is present in Python 3, the docs (presumably at 
least one of the above links, and hopefully also the index) should tell me how 
to use it.

Here is another link:

[4] http://docs.python.org/3/library/stdtypes.html#bltin-type-objects

"The standard module types defines names for all standard built-in types."  
(Except <class 'NoneType'> ???)

None is a built-in constant.  It has a type.  If None's type is not considered 
to be a standard built-in type, then IMO this is surprising(!!) and should be 
documented somewhere (for example, at link [4], and possibly elsewhere as well).

Thanks!

--
assignee: docs@python
components: Documentation
messages: 201666
nosy: docs@python, mpb
priority: normal
severity: normal
status: open
title: Where is NoneType in Python 3?
versions: Python 3.3




[issue19438] Where is NoneType in Python 3?

2013-10-29 Thread mpb

mpb added the comment:

Of your 4 suggestions, I mentioned #3 and #4 in my post.  They are less 
readable, IMO.

Suggestions 1 and 2 are nicer, but both have an extra set of nested parentheses.

While I appreciate the suggestions, I submitted this as a documentation bug, 
because I think I should be able to find these suggestions somewhere in the 
Python 3 documentation, at one (or more) of the links I included in my bug 
report.  Also, the Python 3 documentation does mention NoneType, and if 
NoneType is not part of Python 3, I claim this is an error in the documentation.

And then, there is my personal favorite work-around:

NoneType = type (None)    # only needed once
assert type (v) in ( bytes, NoneType )

Or (perhaps more confusingly, LOL!):

none = type (None)
assert type (v) in ( bytes, none )

isinstance is more confusing because it takes two arguments.  Whenever I use it 
I have to think, isinstance vs instanceof, which is Python, which is Java?  
(And I haven't used Java seriously in years!)  And then I have to think about 
the order of the arguments (v, cls) vs (cls, v).  type is just simpler than 
isinstance.

--




[issue19438] Where is NoneType in Python 3?

2013-10-29 Thread mpb

mpb added the comment:

Regarding http://www.python.org/dev/peps/pep-0294/ ...

Complete removal of the types module makes more sense to me than letting types 
continue, but removing NoneType from it!

If type(None) is the one_true_way, then the docs should say that, possibly in 
multiple locations.

--




[issue19167] sqlite3 cursor.description varies across Linux (3.3.1), Win32 (3.3.2), when selecting from a view.

2013-10-08 Thread mpb

mpb added the comment:

No, I have not checked to see if it is a bug in the Windows version of SQLite.

How would I even test that?

I just tried running the command line version of SQLite (version 3.8.0.2 
2013-09-03) on Windows (XP SP2, in VirtualBox).

I manually ran the same statements from the Python script.  I turned on headers 
(.headers ON).  The headers did not contain the quotes around foo_id.

That's probably all the testing I can do easily, unless there is some other way 
to access the cursor description.  I don't have a C development environment 
installed on Windows, nor have I ever written C code that calls SQLite.

--




[issue19167] sqlite3 cursor.description varies across Linux (3.3.1), Win32 (3.3.2), when selecting from a view.

2013-10-04 Thread mpb

New submission from mpb:

On Win32, when I select from an SQLite view, and enclose the column name in 
double quotes in the select query, the cursor description (erroneously?) 
contains the double quotes.

On Linux (or on Win32 when selecting from a table rather than a view) the 
cursor description does not contain the double quotes.  I expect the Linux 
behavior, not the Win32 behavior.

The following code demonstrates the problem.

import sqlite3, sys

print (sys.platform)
print (sys.version)

conn = sqlite3.connect (':memory:')
cur  = conn.cursor ()

cur.execute ('create table Foo ( foo_id integer primary key ) ;')
cur.execute ('create view  Foo_View as select * from Foo ;')

cur.execute ('select  foo_id  from Foo;')
print (cur.description[0][0])
cur.execute ('select "foo_id" from Foo;')
print (cur.description[0][0])
cur.execute ('select  foo_id  from Foo_View;')
print (cur.description[0][0])
cur.execute ('select "foo_id" from Foo_View;')
print (cur.description[0][0])


Sample output on Linux and Win32.

linux
3.3.1 (default, Apr 17 2013, 22:32:14) 
[GCC 4.7.3]
foo_id
foo_id
foo_id
foo_id

win32
3.3.2 (v3.3.2:d047928ae3f6, May 16 2013, 00:03:43) [MSC v.1600 32 bit (Intel)]
foo_id
foo_id
foo_id
"foo_id"


Above, please note the (erroneous?) double quotes around the final foo_id.

--
components: Library (Lib)
messages: 198964
nosy: mpb
priority: normal
severity: normal
status: open
title: sqlite3 cursor.description varies across Linux (3.3.1), Win32 (3.3.2), 
when selecting from a view.
type: behavior
versions: Python 3.3




[issue18915] ssl.wrap_socket, pass in certfile and keyfile as PEM strings

2013-09-03 Thread mpb

New submission from mpb:

It would be nice to be able to pass ssl.wrap_socket the key and certificate as 
PEM encoded strings, rather than as paths to files.

Similarly for SSLContext.load_cert_chain.

--
components: Library (Lib)
messages: 196878
nosy: mpb
priority: normal
severity: normal
status: open
title: ssl.wrap_socket, pass in certfile and keyfile as PEM strings
type: enhancement
versions: Python 3.4




[issue18293] ssl.wrap_socket (cert_reqs=...), getpeercert, and unvalidated certificates

2013-06-25 Thread mpb

mpb added the comment:

Christian wrote:
> sslsocket gives you access to the peer's cert and chain (with
> #18233).

Very interesting (and useful).  I've mostly been working with Python
2.7, and I had not fully noticed that Python 3.2+ has a ssl.SSLContext
class.

> I'd rather not implement a full wrapper for X509_STORE_CTX and X509
> certs. It's way too much code, super complex and easily confuses even
> experienced developers. Python's ssl module is limited to core
> functionality by design and choice.
>
> However I might be intrigue to implement support for
> SSL_CTX_set_cert_verify_callback() or SSL_CTX_set_verify().

SSL_CTX_set_verify() seems (mostly) redundant with SSLContext.verify_mode.  
Or am I missing something?

> SSL_CTX_set_cert_verify_callback() has more potential, e.g.
>
> def cert_verify_callback(sslsocket, storectx, verify_ok):
>     context = sslsocket.context
>
> storectx is a minimal X509_STORE_CTX object and verify_ok the boolean
> return value of X509_verify_cert(). Without a cert verify callback
> OpenSSL just uses the return value of X509_verify_cert()
> (ssl/ssl_cert.c:ssl_verify_cert_chain()).

I believe support for SSL_CTX_set_cert_verify_callback() would allow
customized certificate verification, which is what I am looking for.

--




[issue18293] ssl.wrap_socket (cert_reqs=...), getpeercert, and unvalidated certificates

2013-06-24 Thread mpb

New submission from mpb:

At present (Python 2.7.[45] and 3.3.[12]), the cert_reqs parameter of 
ssl.wrap_socket can be one of:

ssl.CERT_NONE
ssl.CERT_OPTIONAL
ssl.CERT_REQUIRED

I would find the following additional modes to be useful:
ssl.CERT_OPTIONAL_NO_VERIFY
ssl.CERT_REQUIRED_NO_VERIFY

In these cases, the server's certificate would be available via the 
.getpeercert () method, even if the certificate does not pass verification.
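
Hypothetical usage of one of the proposed modes (to be clear, 
CERT_REQUIRED_NO_VERIFY does not exist in any released Python, and the host 
name below is only illustrative):

import socket, ssl

sock = socket.create_connection (('example.org', 443))
tls  = ssl.wrap_socket (sock, cert_reqs = ssl.CERT_REQUIRED_NO_VERIFY)
cert = tls.getpeercert (binary_form = True)   # available even if verification failed
tls.close ()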

The use case for these modes would be connecting to servers, some of which may 
use valid certificates, and others of which may be using self-signed 
certificates.

Another use case might be an ssh-like mode of operation.  ssh will warn the 
user of possible man-in-the-middle attacks if a server's public key has changed.

Thanks!

--
components: Library (Lib)
messages: 191796
nosy: mpb
priority: normal
severity: normal
status: open
title: ssl.wrap_socket (cert_reqs=...), getpeercert, and unvalidated 
certificates
type: enhancement
versions: Python 2.7, Python 3.3




[issue18293] ssl sni

2013-06-24 Thread mpb

Changes by mpb <mpb.m...@gmail.com>:


--
title: ssl.wrap_socket (cert_reqs=...), getpeercert, and unvalidated 
certificates -> ssl sni




[issue18293] ssl.wrap_socket (cert_reqs=...), getpeercert, and unvalidated certificates

2013-06-24 Thread mpb

mpb added the comment:

(Oops, I changed the title when I meant to do a search.  Changing it back now.)

--
title: ssl sni -> ssl.wrap_socket (cert_reqs=...), getpeercert, and unvalidated 
certificates




[issue8109] Server-side support for TLS Server Name Indication extension

2013-06-24 Thread mpb

Changes by mpb <mpb.m...@gmail.com>:


--
nosy: +mpb




[issue18293] ssl.wrap_socket (cert_reqs=...), getpeercert, and unvalidated certificates

2013-06-24 Thread mpb

mpb added the comment:

Hi Christian, thanks for the prompt response.

Sorry about choosing the wrong versions - I wasn't thinking that enhancements 
should target future versions, but of course that makes sense.

After submitting the enhancement request, I did dig into the OpenSSL docs, and, 
as Christian points out, I discovered that OpenSSL is not designed in a way 
that makes it easy to implement the enhancement.

Aside: Interestingly, it looks easier to implement the enhancement in PolarSSL, 
and probably also in MatrixSSL and CyaSSL.  Of course, that's not really an 
option.  I did not look at GnuTLS.

Thanks for the pointer about being able to get the server's DER certificate.  
That will be useful.  Is there some reason to return DER but not PEM?  Or is 
this perhaps a bug that could be fixed in a future version of Python's ssl 
module?

Christian wrote: "Optional and required verification makes only a differen[ce] 
for client side certs."

I believe there is one small exception:  With SSL_VERIFY_NONE, a client will 
continue talking with a server with an invalid certificate.  With 
SSL_VERIFY_PEER, when a client fails to verify the server's certificate, the 
client will terminate the connection.

Ideally, I would like a client to be able to get both of the following from 
the API: (a) the server's certificate (and chain?), and (b) whether or not the 
certificate (and chain?) is valid (against a given set of root certs).
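
For (a), something like the following already seems to work today (a sketch 
against the 3.3-era ssl module; the host name is illustrative).  It is (b), a 
separate validity verdict against a chosen set of root certs, that I do not 
see how to get:

import socket, ssl

ctx = ssl.SSLContext (ssl.PROTOCOL_SSLv23)
ctx.verify_mode = ssl.CERT_NONE               # the handshake will not reject the cert

sock = socket.create_connection (('example.org', 443))
tls  = ctx.wrap_socket (sock)
der  = tls.getpeercert (binary_form = True)   # raw DER bytes, even though unverified
pem  = ssl.DER_cert_to_PEM_cert (der)         # ssl can convert the DER form to PEM
tls.close ()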

Similarly, I would like a Python server to be able to get both of: (a) the 
client's certificate, and (b) whether the certificate is valid (against a given 
set of root certs).

In the latter case, it seems that OpenSSL is even more restrictive!  With 
SSL_VERIFY_NONE, the server will not request (and presumably therefore not even 
receive) a certificate.  With SSL_VERIFY_PEER, the server will terminate the 
connection if the client's certificate does not validate!  Very inconvenient!

Interestingly, I believe I have worked around this limitation in OpenSSL using 
M2Crypto (which is built on top of OpenSSL), by installing my own verifier that 
overrides the built-in verifier.  This is done as follows:

import M2Crypto.SSL
ctx  = M2Crypto.SSL.Context ()
ctx.load_cert ('var/cert.pem')
def verify (*args):  return True
ctx.set_verify (M2Crypto.SSL.verify_peer, 10, verify)

After doing this, both the client and the server can see each other's 
certificates, even if those certificates are invalid.  (Of course I'll have to 
write my own verifier.  return True is only useful for testing purposes.)

I'm not sure how much of this functionality the Python developers might be 
interested in putting into Python 3.4?  Given that M2Crypto does not work with 
Python 3.x at all (at least not yet?), I am interested in finding something 
that will work with Python 3.x and give me the functionality I want.

I can probably help with the C OpenSSL code, if needed.  However, I have no 
experience writing Python bindings.

Your thoughts?  Thanks!

--




[issue18293] ssl.wrap_socket (cert_reqs=...), getpeercert, and unvalidated certificates

2013-06-24 Thread mpb

mpb added the comment:

Oh, I see.  getpeercert (binary_form) is not DER vs. PEM; it is DER vs. dict.

--
