Re: Django Integration

2016-05-05 Thread Andrew Godwin
On Thu, May 5, 2016 at 9:28 PM, Anssi Kääriäinen  wrote:

> On Thursday, May 5, 2016, Andrew Godwin  wrote:
>>
>> Do you have a link to the presentation about them removing it?
>>
>
> https://youtu.be/839rskyJaro around 34 minutes and onwards, another
> version is https://youtu.be/2dG5HeM7MvA at 24 minutes.
>

Summarising their issues here for future people to save on video watching:

- Large response bodies/streamed/chunked bodies are hard to serialise into
Redis
- Working out the order to restart things in is trickier than with standard
HTTP loadbalancing
- Healthchecking the Rails processes was difficult as they didn't listen on
HTTP (only on Redis)
- Hard to do rate-limiting and monitoring as it lacked context
- HAProxy got a lot more featureful and grew the features they needed
(hot-swapping of backends), and was a lot more battle-tested

In general, it seems they were mainly using it to achieve HA, and it didn't
quite pull through for them on that front. Some of these issues also apply
to Channels, some less so, as Channels is not designed as an HA solution,
merely sometimes used as one.

In particular, they were doing a lot more to keep requests around, whereas
Channels will drop them if it's unsure what's going on, which gives it a
lot more leeway (but makes it less "perfect" at HA).


>
> They were tackling a somewhat different problem, so their lessons might
> not apply. Most importantly, they weren't aiming for websockets at all,
> just for high availability and throughput for normal HTTP traffic. On the
> other hand, the architecture of broxy itself is pretty much the same as
> Daphne's, and they felt they had problems with it.
>

I need to start writing up my preferred large-scale deployment architecture
for Channels more, which is "small clusters of interfaces and workers with
an external loadbalancer across all clusters"; in all of the stories I've
heard, this one included, the common theme is trying to push all of the site
traffic through a single bus and then having serious trouble when it keels
over under unexpected load or machine failure.

I'm also starting to think we should have a local shared-memory channel
layer that only works between processes on one machine, so that one could in
theory just run clusters like this on decently sized multicore boxes; that
would make life easier for the many people who don't want to run Redis in
production but still want the multi-process behaviour Channels gives.
Running Channels would then just mean two processes: Daphne and workers.
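As a purely illustrative sketch of what such a layer's core could look like (class and method names here are hypothetical, not the real asgiref/asgi_redis API; a true shared-memory version would also need multiprocessing-safe storage rather than plain dicts):

```python
from collections import defaultdict, deque


class LocalChannelLayer:
    """Illustrative in-process channel layer: named FIFO channels,
    plus groups that fan a message out to every member channel."""

    def __init__(self):
        self.channels = defaultdict(deque)  # channel name -> queued messages
        self.groups = defaultdict(set)      # group name -> member channel names

    def send(self, channel, message):
        self.channels[channel].append(message)

    def receive(self, channel):
        # Non-blocking check, mirroring ASGI's "check in every so often" model
        queue = self.channels[channel]
        return queue.popleft() if queue else None

    def group_add(self, group, channel):
        self.groups[group].add(channel)

    def group_send(self, group, message):
        # Emulated broadcast: one copy per member channel, no pub/sub involved
        for channel in self.groups[group]:
            self.send(channel, message)
```

The interface servers and workers on the box would share one instance of this; swapping in the Redis layer for multi-machine deployments would not change the calling code.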


>
> The way we have approached recent feature additions is that we let them
> prove themselves outside of core. I think the crucial question is why
> Channels can't do this, that is why can't we wait some time and let actual
> production deployments prove the design?
>
> I know South is an example of why some features have a hard time living
> outside core. But DRF is an example that a feature crucial to modern web
> development can work very well outside of core.
>
>
Channels sits somewhere between these two; it's not quite as deeply
involved in hacking around core parts of Django as South was, but it's a
bit deeper than DRF gets since it technically inserts itself under the
entire view/URL routing system and sits alongside the WSGI handling code.

My reason for pushing to get it in is that a lot of the utility of Channels
is not in the code itself but what it enables; most of the things I would
like to build can sit on top as third-party packages and get pulled in
later if they make sense (e.g. a nice set of generic websocket consumers,
or a high-level model-changes streaming solution).

If it lives as a third-party package for another 8 months, it's a lot
harder to sound the rallying cry needed to build up the community of
layers on top of it that complete the ecosystem; plus, there's the obvious
human factor that it will likely place the same workload on a much smaller
set of main contributors.

The situation is helped by the fact that the code that will likely need
most polish and iteration - Daphne and asgi_redis - will still live outside
Django's core codebase, giving us some more flexibility in how we handle
changes (though I would like to still provide similar levels of stability
and security assurances on them) - and that we'll be labelling it as
"provisional" in 1.10 (similar to how PEP 411 describes it for Python),
meaning we get one last chance to tweak things in 1.11 before we lock the
API in for good.

Andrew

-- 
You received this message because you are subscribed to the Google Groups 
"Django developers  (Contributions to Django itself)" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-developers+unsubscr...@googlegroups.com.
To post to this group, send email to django-developers@googlegroups.com.
Visit this group at https://groups.google.com/group/django-developers.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-developers/CAFwN1upU%2B_jm4VHP9

Re: Django Integration

2016-05-05 Thread Anssi Kääriäinen
On Thursday, May 5, 2016, Andrew Godwin  wrote:
>
> Do you have a link to the presentation about them removing it?
>

https://youtu.be/839rskyJaro around 34 minutes and onwards, another version
is https://youtu.be/2dG5HeM7MvA at 24 minutes.

They were tackling a somewhat different problem, so their lessons might not
apply. Most importantly, they weren't aiming for websockets at all, just for
high availability and throughput for normal HTTP traffic. On the other
hand, the architecture of broxy itself is pretty much the same as Daphne's,
and they felt they had problems with it.

The way we have approached recent feature additions is that we let them
prove themselves outside of core. I think the crucial question is why
Channels can't do this, that is why can't we wait some time and let actual
production deployments prove the design?

I know South is an example of why some features have a hard time living
outside core. But DRF is an example that a feature crucial to modern web
development can work very well outside of core.

 - Anssi



Re: My Take on Django Channels

2016-05-05 Thread Andrew Godwin
On Thu, May 5, 2016 at 5:13 PM, Mark Lavin  wrote:

> Yes I agree with the value of a standardized way of communicating between
> these processes and I listed that as a highlight of Channels, though it
> quickly shifted into criticism. I think that's where we are crossing paths
> with relation to Kombu/AMQP as well. I find the messaging aspect of
> Channels far more interesting and valuable than ASGI as a larger
> specification. Messaging I do think needs to be network transparent. I just
> don't like that aspect tied into the HTTP handling. At this point I'm not
> sure how to decouple the messaging aspect from the HTTP layer since I feel
> they are very tightly bound in ASGI.
>

I see what you mean; HTTP is definitely less of a fit to ASGI than
WebSockets, and it wasn't even in there at all initially, but I felt that
the ability to unify everything inside Django to be a consumer was too
strong to pass up (plus the fact that it allowed long-polling HTTP which I
still use a lot in lieu of WebSocket support, mostly for work reasons).


>
> Honestly I don't think Django *needs* tightly integrated websocket support
> but I do see the value in it so we aren't at a complete impasse. I suppose
> that's why it's my general preference to see a third-party solution gain
> traction before it's included. I played with integrating Django + aiohttp a
> few months ago. Nothing serious and I wouldn't call it an alternate
> proposal. It's barely a proof of concept:
> https://github.com/mlavin/aiodjango. My general inclination is that
> (insert wild hand waving)
> django.contrib.aiohttp/django.contrib.twisted/django.contrib.tornado would
> be the way forward for Django + websockets without a full scale rewrite of
> the WSGI specification.
>
>
The other track for this was definitely to go the South route and have it
run externally, but based on my previous experience with that route it is
not scalable from a people perspective.

I personally see this as something where no single third-party solution is
going to gain enough traction to be tested and tried enough unless it's
de facto recommended by Django itself, at which point it's close to being a
core module with provisional status.

I feel like we're never going to quite agree on the approach here; I've
explained my stance, you have explained yours, and I think we both have a
good idea where we stand. I agree with some of your concerns, especially
around introducing more moving parts, but modern websites already have so
many that my concerns are perpetually high.

Given your feedback, I do want to work on a local, cross-process ASGI
backend and write up a full deployment story that uses WSGI servers for
HTTP and Daphne+worker servers for WebSockets, and have it as a top example
for what larger sites should do to deploy WebSockets initially; I think
that's an important piece of communication to show that this is only as
opt-in as you want it to be.

I'm also hopeful that the introduction of chat, email and other protocol
(e.g. IoT) interface servers, further highlighting the flexibility of a
general messaging + worker system, will help move us towards a future with
fewer moving parts; ASGI and Channels were always meant to be something to
build upon, a basis for making Django more capable in different arenas.

Your point about the DEP process being circumvented was well made, too, and
I'll do my best from now on to make sure any large project I see being
attempted gets one in sooner rather than later.

That said, though, I don't know that I can really change Channels in line
with your feedback and still achieve the same goals; we've already come a
long way with it in terms of proving it in real-world scenarios and fixing
nasty bugs, and I'm still convinced the core design is sound. We'll take
it to a DEP and run it through the actual process before merge to make sure
that Django is at least on board with that opinion; I owe you that much at
least.

Andrew



Re: My Take on Django Channels

2016-05-05 Thread Andrew Godwin
On Thu, May 5, 2016 at 4:22 PM, Carl Meyer  wrote:
>
> I've no desire either to aggravate your RSI or kick you in the teeth! I
> understand the multiple competing pressures here and won't stand in the
> way of a merge into 1.10 sans DEP if that still seems like the best path
> forward to you. It's not like a merge into alpha is the end of the line
> in terms of possible design changes or updates (or even possibly
> reverts). A DEP could even happen post-merge; that would be unusual but
> perhaps still better than none at all.
>
> I have a couple more comments, more in the line of general thoughts
> about the whys and hows of DEPs.
>
> I do think that DEPs have a significant value that goes beyond just
> providing information that could be found elsewhere (e.g. in the
> channels documentation). They collect that information (or references to
> it) in one place, in a standard digestible format, and formally present
> it to the community as a requested change, with rationale and rejected
> alternatives (including a fair representation of the objections that
> have been raised and your answers to them), and present a formal
> opportunity for anyone with concerns to raise them (and give you a
> reasonable place to later say "this is precisely when you should have
> raised your concerns if you had them") and then also store that in a
> stable place for future reference when someone comes by in two years and
> can't understand why we did things the way we did.
>
> (I'm not saying this to put further pressure on, just to defend the DEP
> process against the implicit charge that it's possibly-useless make-work
> when other documentation has already been written.)
>
> There's been no clear delineation of what size features should have a
> DEP. I think channels, multiple-template-engines, and
> reworked-middleware (and migrations, for that matter) are all
> rethinkings of long-standing core aspects of how Django works, which in
> my mind makes them prime DEP candidates, whereas FTS and password
> validation both seem to me like small-to-medium peripheral features that
> I wouldn't necessarily have expected to have one.
>

I think you're entirely right, Carl - I'm just getting frustrated with
myself at this point for not realising sooner, and for trying to find ways
to avoid it - people only pay real attention to a patch when you're close to
merging and emotionally invested in it, and it's a little exasperating.

Jacob has graciously stepped in to help write one, and I am going to have a
much-needed evening off from Channels work; I haven't had a break in a
while.

Andrew



Re: My Take on Django Channels

2016-05-05 Thread Jacob Kaplan-Moss
On Thu, May 5, 2016 at 7:22 PM, Carl Meyer  wrote:

> I think channels, multiple-template-engines, and
> reworked-middleware (and migrations, for that matter) are all
> rethinkings of long-standing core aspects of how Django works, which in
> my mind makes them prime DEP candidates,
>

There seems to be pretty strong consensus on this point. I'm writing one,
please give me a bit of time to get a draft up.

Jacob



Re: My Take on Django Channels

2016-05-05 Thread Mark Lavin
Yes I agree with the value of a standardized way of communicating between 
these processes and I listed that as a highlight of Channels, though it 
quickly shifted into criticism. I think that's where we are crossing paths 
with relation to Kombu/AMQP as well. I find the messaging aspect of 
Channels far more interesting and valuable than ASGI as a larger 
specification. Messaging I do think needs to be network transparent. I just 
don't like that aspect tied into the HTTP handling. At this point I'm not 
sure how to decouple the messaging aspect from the HTTP layer since I feel 
they are very tightly bound in ASGI.

Honestly I don't think Django *needs* tightly integrated websocket support 
but I do see the value in it so we aren't at a complete impasse. I suppose 
that's why it's my general preference to see a third-party solution gain 
traction before it's included. I played with integrating Django + aiohttp a 
few months ago. Nothing serious and I wouldn't call it an alternate 
proposal. It's barely a proof of 
concept: https://github.com/mlavin/aiodjango. My general inclination is 
that (insert wild hand waving) 
django.contrib.aiohttp/django.contrib.twisted/django.contrib.tornado would 
be the way forward for Django + websockets without a full scale rewrite of 
the WSGI specification.

Not sure if I touched on all of your questions so please let me know if it 
seems like I'm skipping over something.

- Mark

On Thursday, May 5, 2016 at 6:31:05 PM UTC-4, Andrew Godwin wrote:
>
>
>
> On Thu, May 5, 2016 at 2:19 PM, Mark Lavin wrote:
>
>> Thank you for your comments and I have some brief replies.
>>
>>
>> If I'm understanding it correctly, groups are an emulated broadcast. I'm 
>> saying it would be an advantage for it to use pub/sub but it does not.
>>
>
> You are correct; the reason Redis pub/sub is not used is that the ASGI
> API allows applications to not listen continuously on channels and instead
> check in every so often, so lists are used to get some persistence; this
> could be changed, though. I do want to improve the group send function so
> it runs as Lua inside Redis rather than multi-sending from outside, however.
>  
>
>>  
>>
>>>
>>> I've always tried to be clear that it is not a Celery replacement but 
>>> instead a way to offload some non-critical task if required.
>>>
>>
>> I don't agree that this has been clear. That is my primary criticism 
>> here. I don't think this should be encouraged. Ryan's reply continues with 
>> this confusion.
>>
>
> I would love to work with you on clearing this up, then; trying to 
> communicate what the design is intended to be is one of the hardest parts 
> of this project, especially considering there are so many avenues people 
> hear about this stuff through (and the fact that I do think _some_ 
> non-critical tasks could be offloaded into channels consumers, just not the 
> sort Celery is currently used for).
>  
>
>>
>> Yes the lock-in is an exaggeration, however, given the poor 
>> support/upkeep for third-party DB backends, I doubt the community will have 
>> better luck with Channel backends not officially supported by the Django 
>> core team. I'd be happy to be wrong here.
>>
>
> Yes, that's a fair comparison. There was even an effort to try and get a 
> second one going and ready to use before merge but unfortunately it didn't 
> get anywhere yet.
>  
>
>>
>> Kombu is not to be confused with Celery. Kombu is a general purpose 
>> AMQP/messaging abstraction library. I don't think we agree on its potential 
>> role here. Perhaps it's better stated that I think Channel's minimalist API 
>> is too minimalist. I would prefer if additional AMQP-like abstractions 
>> existed such as topic routing and QoS.
>>
>
> I understand what Kombu is (though it's maintained by the Celery team from 
> what I understand, which is why I refer to them collectively). I still 
> maintain that the design of AMQP and Kombu is unsuited for what I am trying 
> to accomplish here; maybe what I am trying to accomplish is wrong, and I'm 
> happy to argue that point, but based on what I'm trying to do, AMQP and 
> similar abstractions are not a good fit - and I did write one of the 
> earlier versions of Channels on top of Celery as an experiment.
>  
>
>>
>>> ASGI is essentially meant to be an implementation of the CSP/Go style of 
>>> message-passing interprocess communication, but cross-network rather than 
>>> merely cross-thread or cross-process as I believe that network transparency 
>>> makes for a much better deployment story and the ability to build a more 
>>> resilient infrastructure.
>>>
>>
>> Again I don't agree with this argument and I don't see anything in 
>> Channels which backs up this claim. I believe this is where we likely have 
>> a fundamental disagreement. I see this network transparency as additional 
>> latency. I see the addition of the backend/broker as another moving part to 
>> break.
>>
>
> Yes, I think this is fundamentally where

Re: Add HTML5 required attribute on form widgets

2016-05-05 Thread Collin Anderson
Hi Jon,

They're regular input fields; I'm using jQuery to .hide() and .show()
form fields on the front end of my ecommerce sites, based on different
radio buttons. It's all my own jQuery code. I then have custom logic in the
backend to delete fields from forms, or to select which form to validate
against (Login vs Create account, Checkout vs Apply promo code, Use a
saved address vs Add a new address, etc.).
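The backend half of that pattern - pruning a form's fields depending on which flow is active - can be sketched roughly like this (the field names and keyword flags are invented for illustration; a real django.forms.Form keeps its fields in a dict much like the one below, and you would `del self.fields[name]` in `__init__` the same way):

```python
class CheckoutForm:
    """Sketch of conditionally deleting fields from a form in __init__.
    The string values stand in for django.forms Field instances."""

    def __init__(self, use_saved_address=False, guest_checkout=False):
        self.fields = {
            "email": "EmailField",
            "address": "CharField",
            "password": "CharField",
            "promo_code": "CharField",
        }
        if use_saved_address:
            # The address comes from the user's saved record instead
            del self.fields["address"]
        if guest_checkout:
            # Guests don't create an account, so no password field
            del self.fields["password"]
```

Validation then only runs against the fields that survived, which is why hidden-but-still-required inputs never reach the server side in this setup.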

Thanks,
Collin


On Thu, May 5, 2016 at 2:34 PM, Jon Dufresne  wrote:

> On Thu, May 5, 2016 at 11:29 AM, Collin Anderson 
> wrote:
>
>> If anyone is running into hidden required fields preventing forms from
>> submitting (like me), I've been using this jQuery code for a somewhat-hacky
>> quickfix:
>>
>> $(':hidden[required]').removeAttr('required')
>>
>>
> The changes made on master should not be adding required to hidden inputs.
> I added a check to prevent this. Are you seeing something different?
>
> Are these inputs hidden by Django or from another source?
>
> Cheers,
> Jon
>



Re: My Take on Django Channels

2016-05-05 Thread Carl Meyer
On 05/05/2016 04:37 PM, Andrew Godwin wrote:
> To be honest, I had entirely forgotten the DEP process existed until
> this thread started up; I'm not sure what to blame this on, but as a
> member of the tech board I haven't got an email about approving a DEP
> since last October, so it's been a while.

There has been more recent activity on several in-progress DEPs on this
mailing list, but it has been a while since one was accepted.

> Part of me does not want to aggravate my RSI by having to write and rush
> through a DEP in the next 10 days, but I can't deny that you are likely
> correct that it sends the right signal given that we have the process in
> place.
> 
> That said, a couple of decently-sized features (full text search,
> password validators) have landed recently without one, so I can't
> entirely feel justified dropping this from 1.10 given that it is fully
> written, has extensive documentation, a mostly-complete test suite and
> several fully-worked examples - far more context than a DEP would ever
> provide. It would feel like a bit of a kick in the teeth, to be honest.

I've no desire either to aggravate your RSI or kick you in the teeth! I
understand the multiple competing pressures here and won't stand in the
way of a merge into 1.10 sans DEP if that still seems like the best path
forward to you. It's not like a merge into alpha is the end of the line
in terms of possible design changes or updates (or even possibly
reverts). A DEP could even happen post-merge; that would be unusual but
perhaps still better than none at all.

I have a couple more comments, more in the line of general thoughts
about the whys and hows of DEPs.

I do think that DEPs have a significant value that goes beyond just
providing information that could be found elsewhere (e.g. in the
channels documentation). They collect that information (or references to
it) in one place, in a standard digestible format, and formally present
it to the community as a requested change, with rationale and rejected
alternatives (including a fair representation of the objections that
have been raised and your answers to them), and present a formal
opportunity for anyone with concerns to raise them (and give you a
reasonable place to later say "this is precisely when you should have
raised your concerns if you had them") and then also store that in a
stable place for future reference when someone comes by in two years and
can't understand why we did things the way we did.

(I'm not saying this to put further pressure on, just to defend the DEP
process against the implicit charge that it's possibly-useless make-work
when other documentation has already been written.)

There's been no clear delineation of what size features should have a
DEP. I think channels, multiple-template-engines, and
reworked-middleware (and migrations, for that matter) are all
rethinkings of long-standing core aspects of how Django works, which in
my mind makes them prime DEP candidates, whereas FTS and password
validation both seem to me like small-to-medium peripheral features that
I wouldn't necessarily have expected to have one.

Carl





Re: My Take on Django Channels

2016-05-05 Thread Andrew Godwin
On Thu, May 5, 2016 at 2:39 PM, Carl Meyer  wrote:

> Hi Andrew,
>
> On 05/05/2016 02:19 PM, Andrew Godwin wrote:
> > I will put my hand up and say that this sidestepped the DEP process, and
> > that's entirely my fault. It was not my intention; I've been working on
> > this for over two years, and only last year did I go public with my
> > semi-final design and start asking for feedback; I should probably have
> > taken it into a DEP then, but failed to.
>
> This isn't a past-tense question; it's not too late to write a DEP, and
> I personally think that a DEP should be written and approved by the
> technical board before the channels patch is merged. I actually assumed
> that one was still on its way; perhaps I missed some communication at
> some point that said it wouldn't be.
>

To be honest, I had entirely forgotten the DEP process existed until this
thread started up; I'm not sure what to blame this on, but as a member of
the tech board I haven't got an email about approving a DEP since last
October, so it's been a while.

I think my own experience merging migrations is to blame; that went very
much the way this is currently going, so I probably gravitated towards the
same approach.


>
> I'm sensitive to the fact that you've already put lots of work into this
> and time is short if you want to get it into 1.10. On the other hand,
> this is precisely why the DEP process exists: to ensure that significant
> changes to Django are carefully considered, in public, in a way that
> allows those without time to dig into all the details to absorb and
> consider the salient high-level points. I think that is precisely what
> the channels work needs (in community/process terms), and I think we'd
> be very poorly advised to push forward on merging it without an approved
> DEP.
>
> I don't think a channels DEP would need to delve into the details of
> precisely which channel backends are currently available, etc; it would
> mostly be about justifying the high-level design (and comparing it to
> rejected alternatives for solving the same problems). It would focus on
> the changes to Django itself, rather than implementation choices of the
> initially-available external components. It could probably copy
> liberally from (or just be short and heavily reference) the ASGI spec
> and/or Channels docs; that's not a problem.
>
> I'm excited about the potential of channels and ASGI, but I'm also
> suspicious of arguments that it is urgent to merge into 1.10 at all
> costs. I'm not necessarily opposed to that, if it's ready on time and
> the community discussion around a DEP seems to have reached a
> satisfactory conclusion. (I also think that the important thing is to
> make sure the changes to Django itself aren't painting us into any
> corners: as long as ASGI is optional in Django, the external support
> components don't need to be fully mature yet; especially if we can do an
> import dance to make them optional dependencies, which I think is
> preferable.)
>
> But I also think it would be far better to wait than to rush it in in
> the face of reasonable unresolved concerns from the community, and
> without an approved DEP. The argument has been that merging it "sends
> the right signal to the community," but I have some concern that rushing
> the merge could send negative signals about process consistency and
> fairness that could easily outweigh any positive signals about Django
> "having an async solution."
>

Part of me does not want to aggravate my RSI by having to write and rush
through a DEP in the next 10 days, but I can't deny that you are likely
correct that it sends the right signal given that we have the process in
place.

That said, a couple of decently-sized features (full text search, password
validators) have landed recently without one, so I can't entirely feel
justified dropping this from 1.10 given that it is fully written, has
extensive documentation, a mostly-complete test suite and several
fully-worked examples - far more context than a DEP would ever provide. It
would feel like a bit of a kick in the teeth, to be honest.

Andrew



Re: My Take on Django Channels

2016-05-05 Thread Andrew Godwin
On Thu, May 5, 2016 at 2:19 PM, Mark Lavin  wrote:

> Thank you for your comments and I have some brief replies.
>
>
> If I'm understanding it correctly, groups are an emulated broadcast. I'm
> saying it would be an advantage for it to use pub/sub but it does not.
>

You are correct; the reason Redis pub/sub is not used is because the ASGI
API allows applications to not listen continuously on channels and instead
check in every so often, so it uses lists to provide some persistence; this
could be changed, though. I do want to improve the group send function so
it runs on Lua inside Redis rather than multi-sending from outside, however.
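For future readers of the archive, the list-vs-pub/sub distinction can be sketched in a few lines of plain Python (a toy illustration of the semantics only, not the asgi_redis implementation; the class and method names here are invented):

```python
import collections
import time

class InMemoryChannelLayer:
    """Toy sketch of list-backed channel semantics: messages queue up
    until a consumer checks in, unlike pub/sub, where a consumer that
    isn't listening at send time simply misses the message."""

    def __init__(self, expiry=60):
        self.expiry = expiry
        self.queues = collections.defaultdict(collections.deque)

    def send(self, channel, message):
        # Append to the channel's list with an expiry deadline.
        self.queues[channel].append((time.time() + self.expiry, message))

    def receive(self, channels):
        # A consumer may check in at any time; queued messages survive
        # the gap (the property pub/sub lacks). Expired ones are dropped.
        for channel in channels:
            queue = self.queues[channel]
            while queue:
                deadline, message = queue.popleft()
                if deadline >= time.time():
                    return channel, message
        return None, None

layer = InMemoryChannelLayer()
layer.send("http.request", {"path": "/"})
# The consumer wasn't listening at send time, but still gets the message:
channel, message = layer.receive(["http.request"])
```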


>
>
>>
>> I've always tried to be clear that it is not a Celery replacement but
>> instead a way to offload some non-critical task if required.
>>
>
> I don't agree that this has been clear. That is my primary criticism here.
> I don't think this should be encouraged. Ryan's reply continues with this
> confusion.
>

I would love to work with you on clearing this up, then; trying to
communicate what the design is intended to be is one of the hardest parts
of this project, especially considering there are so many avenues people
hear about this stuff through (and the fact that I do think _some_
non-critical tasks could be offloaded into channels consumers, just not the
sort Celery is currently used for).


>
> Yes the lock-in is an exaggeration, however, given the poor support/upkeep
> for third-party DB backends, I doubt the community will have better luck
> with Channel backends not officially supported by the Django core team. I'd
> be happy to be wrong here.
>

Yes, that's a fair comparison. There was even an effort to try and get a
second one going and ready to use before merge but unfortunately it didn't
get anywhere yet.


>
> Kombu is not to be confused with Celery. Kombu is a general purpose
> AMQP/messaging abstraction library. I don't think we agree on its potential
> role here. Perhaps it's better stated that I think Channel's minimalist API
> is too minimalist. I would prefer if additional AMQP-like abstractions
> existed such as topic routing and QoS.
>

I understand what Kombu is (though it's maintained by the Celery team from
what I understand, which is why I refer to them collectively). I still
maintain that the design of AMQP and Kombu is unsuited for what I am trying
to accomplish here; maybe what I am trying to accomplish is wrong, and I'm
happy to argue that point, but based on what I'm trying to do, AMQP and
similar abstractions are not a good fit - and I did write one of the
earlier versions of Channels on top of Celery as an experiment.


>
>> ASGI is essentially meant to be an implementation of the CSP/Go style of
>> message-passing interprocess communication, but cross-network rather than
>> merely cross-thread or cross-process as I believe that network transparency
>> makes for a much better deployment story and the ability to build a more
>> resilient infrastructure.
>>
>
> Again I don't agree with this argument and I don't see anything in
> Channels which backs up this claim. I believe this is where we likely have
> a fundamental disagreement. I see this network transparency as additional
> latency. I see the addition of the backend/broker as another moving part to
> break.
>

Yes, I think this is fundamentally where we disagree, and most of the other
points stem from this.

The only solutions for in-process multithreading in Python that are anywhere
near effective are reactor-based or greenlet-based async frameworks -
asyncio, Twisted, gevent, etc. I don't think, given the state and
trend of modern CPU and memory limitations, that we are anywhere near
having one process on a single core able to handle a randomly-loadbalanced
portion of modern site load; any one big calculation or bad request is
enough to bring that core down. In my opinion and experience, any single
thing you loadbalance to has to be capable of handling multiple large
requests at once, a situation we happily have today with the architecture
of things like uwsgi and gunicorn with worker threads/processes.

Based on that already-proven model of worker threads, I then extended it
out to be truly multi-process (the first version of Channels had
machine-only interprocess communication for transport), and finally given
the engineering challenges involved in building a good local-only
interprocess layer that works successfully - a situation that ended up
using Redis as the local broker anyway rather than playing unstable games
with shared memory, files or similar - it seemed that taking it across a
network and letting small clusters of machines coordinate made sense,
especially in modern cloud hosting environments where any single machine is
very subject to bad-neighbour issues.

You are right that it is yet another moving part, though. Would you have
less objection if ASGI was merely a cross-process communication interface
and just worked on a local machine using shared memory or the

Re: My Take on Django Channels

2016-05-05 Thread Carl Meyer
Hi Andrew,

On 05/05/2016 02:19 PM, Andrew Godwin wrote:
> I will put my hand up and say that this sidestepped the DEP process, and
> that's entirely my fault. It was not my intention; I've been working on
> this for over two years, and only last year did I go public with my
> semi-final design and start asking for feedback; I should probably have
> taken it into a DEP then, but failed to.

This isn't a past-tense question; it's not too late to write a DEP, and
I personally think that a DEP should be written and approved by the
technical board before the channels patch is merged. I actually assumed
that one was still on its way; perhaps I missed some communication at
some point that said it wouldn't be.

I'm sensitive to the fact that you've already put lots of work into this
and time is short if you want to get it into 1.10. On the other hand,
this is precisely why the DEP process exists: to ensure that significant
changes to Django are carefully considered, in public, in a way that
allows those without time to dig into all the details to absorb and
consider the salient high-level points. I think that is precisely what
the channels work needs (in community/process terms), and I think we'd
be very poorly advised to push forward on merging it without an approved
DEP.

I don't think a channels DEP would need to delve into the details of
precisely which channel backends are currently available, etc; it would
mostly be about justifying the high-level design (and comparing it to
rejected alternatives for solving the same problems). It would focus on
the changes to Django itself, rather than implementation choices of the
initially-available external components. It could probably copy
liberally from (or just be short and heavily reference) the ASGI spec
and/or Channels docs; that's not a problem.

I'm excited about the potential of channels and ASGI, but I'm also
suspicious of arguments that it is urgent to merge into 1.10 at all
costs. I'm not necessarily opposed to that, if it's ready on time and
the community discussion around a DEP seems to have reached a
satisfactory conclusion. (I also think that the important thing is to
make sure the changes to Django itself aren't painting us into any
corners: as long as ASGI is optional in Django, the external support
components don't need to be fully mature yet; especially if we can do an
import dance to make them optional dependencies, which I think is
preferable.)

But I also think it would be far better to wait than to rush it in, in
the face of reasonable unresolved concerns from the community, and
without an approved DEP. The argument has been that merging it "sends
the right signal to the community," but I have some concern that rushing
the merge could send negative signals about process consistency and
fairness that could easily outweigh any positive signals about Django
"having an async solution."


Carl





Re: My Take on Django Channels

2016-05-05 Thread Mark Lavin
Thank you for your comments and I have some brief replies.

On Thursday, May 5, 2016 at 4:20:06 PM UTC-4, Andrew Godwin wrote:
>
>
>
> On Thu, May 5, 2016 at 12:34 PM, Mark Lavin  > wrote:
>
> The main gains are (in my opinion):
>  - The same server process can serve both HTTP and WebSockets without path 
> prefixing (auto-negotiation based on the Upgrade header); without this you 
> need an extra web layer in front to route requests to the right backend 
> server
>  - HTTP long-polling is supported via the same mechanism (like WebSockets, 
> it does not fit inside the WSGI paradigm in a performant way)
>  - You get to run less processes overall
>

As noted I don't see serving them both as an advantage. Also given that 
daphne is single-thread/eventloop based, it's likely that a frontend proxy 
would already be needed to balance multiple processes or handle SSL 
termination. I don't think this reduces the number of processes. As stated 
I think it's the same.
 

>
> Firstly, nothing in channels uses pub/sub - channels deliver to a single 
> reader of a queue, and thus cannot be built on a broadcast solution like 
> pub/sub.  
>
 
>

If I'm understanding it correctly, groups are an emulated broadcast. I'm 
saying it would be an advantage for it to use pub/sub but it does not.
 

>
> I've always tried to be clear that it is not a Celery replacement but 
> instead a way to offload some non-critical task if required.
>

I don't agree that this has been clear. That is my primary criticism here. 
I don't think this should be encouraged. Ryan's reply continues with this 
confusion.
 

>
>> So Channels is at best on par with the existing available approaches and 
>> at worst adds a bunch of latency, potentially dropped messages, and new 
>> points of failure while taking up more resources and locks everyone into 
>> using Redis. It does provide a clear message framework but in my opinion 
>> it’s too naive to be useful. Given the complexity in the space I don’t 
>> trust anything built from the ground up without having a meaningful 
>> production deployment to prove it out. It has taken Kombu many years to 
>> mature and I don’t think it can be rewritten easily.
>>
>
> a) ASGI does not lock everyone into using Redis; it just so happens that 
> is the first backend I have written. It is designed to run against other 
> suitable datastores or socket protocols and we have the money to fund such 
> an endeavour.
>
> b) Kombu solves a different problem - that of abstracting task queues - 
> and it would still be my first choice for that; I have used it for many 
> years and it would continue to be my choice for task queuing.
>

Yes, the lock-in is an exaggeration; however, given the poor support/upkeep 
for third-party DB backends, I doubt the community will have better luck 
with Channel backends not officially supported by the Django core team. I'd 
be happy to be wrong here.

Kombu is not to be confused with Celery. Kombu is a general purpose 
AMQP/messaging abstraction library. I don't think we agree on its potential 
role here. Perhaps it's better stated that I think Channel's minimalist API 
is too minimalist. I would prefer if additional AMQP-like abstractions 
existed such as topic routing and QoS.
 

>
> ASGI is essentially meant to be an implementation of the CSP/Go style of 
> message-passing interprocess communication, but cross-network rather than 
> merely cross-thread or cross-process as I believe that network transparency 
> makes for a much better deployment story and the ability to build a more 
> resilient infrastructure.
>

Again I don't agree with this argument and I don't see anything in Channels 
which backs up this claim. I believe this is where we likely have a 
fundamental disagreement. I see this network transparency as additional 
latency. I see the addition of the backend/broker as another moving part to 
break.

 
>
>> It’s hard for me to separate this work from the process by which it was 
>> created. Russ touched on my previous experience with the DEP process and I 
>> will admit that has jaded many of my interactions with the core team. 
>> Building consensus is hard and I’m posting this to help work towards the 
>> goal of community consensus. Thanks for taking the time to read this all 
>> the way through and I welcome any feedback.
>>
>
> I will put my hand up and say that this sidestepped the DEP process, and 
> that's entirely my fault. It was not my intention; I've been working on 
> this for over two years, and only last year did I go public with my 
> semi-final design and start asking for feedback; I should probably have 
> taken it into a DEP then, but failed to.
>
> The problem is likely that I kept discussing channels with various members 
> of the core team and other people I know in the Django community, and 
> always received implicit approval, which is a terrible way to go about 
> being transparent. 
>
> That said, I hope that my efforts over the last year to publicise and

Re: My Take on Django Channels

2016-05-05 Thread Mark Lavin
Andrew,

I worked very hard to edit the tone of this message and I'm sorry if you 
felt anything in here was a personal attack. That certainly was not my 
intent. My natural speaking tendency leans toward hyperbole and I think 
there may have been places which got away from me here.

Best,

Mark

On Thursday, May 5, 2016 at 4:20:06 PM UTC-4, Andrew Godwin wrote:
>
>
>
> On Thu, May 5, 2016 at 12:34 PM, Mark Lavin  > wrote:
>
>> After somewhat hijacking another thread 
>> https://groups.google.com/d/msg/django-developers/t_zuh9ucSP4/eJ4TlEDMCAAJ 
>> I thought it was best to start fresh and clearly spell out my feelings 
>> about the Channels proposal. To start, this discussion of “Django needs a 
>> websocket story” reminds me very much of the discussions about NoSQL 
>> support. There were proof of concepts made and the sky is falling arguments 
>> about how Django would fail without MongoDB support. But in the end the 
>> community concluded that `pip install pymongo` was the correct way to 
>> integrate MongoDB into a Django project. In that same way, it has been 
>> possible for quite some time to incorporate websockets into a Django 
>> project by running a separate server dedicated for handling those 
>> connections in a framework such as Twisted, Tornado, Aiohttp, etc and 
>> establishing a clear means by which the two servers communicate with one 
>> another as needed by the application. Now this is quite vague and ad-hoc 
>> but it does work. To me this is the measuring stick by which to judge 
>> Channels. In what ways is it better or worse than running a separate server 
>> process for long-lived vs short-lived HTTP connections?
>>
>
> The main gains are (in my opinion):
>  - The same server process can serve both HTTP and WebSockets without path 
> prefixing (auto-negotiation based on the Upgrade header); without this you 
> need an extra web layer in front to route requests to the right backend 
> server
>  - HTTP long-polling is supported via the same mechanism (like WebSockets, 
> it does not fit inside the WSGI paradigm in a performant way)
>  - You get to run less processes overall
>
> That said, I don't see everyone running over to use Daphne in production, 
> which is why it's entirely reasonable to run two servers; one for HTTP and 
> one for WebSockets. Channels fully supports this, whether you run the HTTP 
> servers as self-contained WSGI servers or make them forward onto the ASGI 
> layer via the adapter.
>  
>
>>
>> At the application development level, Channels has the advantage of a 
>> clearly defined interprocess communication which would otherwise need to be 
>> written. However, The Channel API is built more around a simple queue/list 
>> rather than a full messaging layer. The choices of backends are currently 
>> limited to in-memory (not suitable for production), the ORM DB (not 
>> suitable for production), and Redis. While Redis PUB/SUB is nice for 
>> fanout/broadcast messaging, it isn’t a proper message queue. It also 
>> doesn’t support TLS out of the box. For groups/broadcast the Redis Channel 
>> backend also doesn’t use PUB/SUB but instead emulates that feature. It 
>> likely can’t use PUB/SUB due to the choice of sharding. This seemingly 
>> ignores robust existing solutions like Kombu, which is designed around AMQP 
>> concepts. Kombu supports far more transports than the Channel backends 
>> while emulating the same features, such as groups/fanout, and more such as 
>> topic exchanges, QoS, message acknowledgement, compression, and additional 
>> serialization formats.
>>
>
> Firstly, nothing in channels uses pub/sub - channels deliver to a single 
> reader of a queue, and thus cannot be built on a broadcast solution like 
> pub/sub.
>
> asgi_redis, the backend you're discussing, instead uses Redis lists 
> containing the names of expiring Redis string keys with data encoded using 
> msgpack, using LPOP or BLPOP to wait on the queue and get messages. It has 
> built-in sharding support based on consistent hashing (and with separate 
> handling for messages to and from workers).
>
> AMQP (or similar "full message queues") doesn't work with Channels for two 
> main reasons:
>
>  a) Running protocols through a queue like this requires incredibly low 
> latency; the Redis solution is on the order of milliseconds, which is a 
> speed I have personally not seen an AMQP queue reach
>
>  b) The return channels for messages require delivery to a specific 
> process, which is a very difficult routing story given the AMQP design 
> structure. There's some solutions, but at the end of the day you need to 
> find a way to route dynamically-generated channel names to their correct 
> interface servers where the channel names change with each client.
>
> There was some work to try and get a fourth, AMQP-based backend for 
> channels a little while back, but it proved difficult as AMQP servers are 
> much more oriented around not losing tasks and going a bit slower, while 
> Channels is (and must be) designed the opposite way, closer almost to a 
> socket protocol.

Re: My Take on Django Channels

2016-05-05 Thread Ryan Hiebert
Thank you, Mark, for starting this discussion. I, too, found myself simply 
accepting that channels was the right way to go, despite having the same 
questions you do. I realize this shouldn't be, so I've chimed in on some of 
your comments.

> On May 5, 2016, at 2:34 PM, Mark Lavin  wrote:
> 
> [snip]
> 
> The Channel API is built more around a simple queue/list rather than a full 
> messaging layer. [snip] Kombu supports  [snip].

The API was purposefully limited, because channels shouldn't need all those 
capabilities. All this is spelled out in the documentation, which I know you 
already understand because you've mentioned it elsewhere. I think that the 
choice to use a more limited API makes sense, though that doesn't necessarily 
mean that it is the right choice.
> 
> [snip description of architecture]

First off, the concerns you mention make a lot of sense to me, and I've been 
thinking along the same lines.

I've been considering an alternative to Daphne that only used 
channels for websockets, but used WSGI for everything else. Or some alternative 
split where some requests would be ASGI and some WSGI. I've tested a bit the 
latency overhead that using channels adds (on my local machine even), and it's 
not insignificant. I agree that finding a solution that doesn't so drastically 
slow down the requests that we've already worked hard to optimize is important. 
I'm not yet sure the right way to do that.

As far as scaling, it is apparent to me that it will be very important to have 
the workers split out, in a similar way to how we have different celery 
instances processing different queues. This allows us to scale those queues 
separately. While it doesn't appear to exist in the current implementation, the 
channel names are obviously suited to such a split, and I'd expect channels to 
grow the feature of selecting which channels a worker should be processing 
(forgive me if I've just missed this capability, Andrew).
> 
> [[ comments on how this makes deployment harder ]]

ASGI is definitely more complex than WSGI. It's this complexity that gives it 
power. However, to the best of my knowledge, there's not a push to be dropping 
WSGI. If you're doing a simple request/response site, then you don't need the 
complexity, and you probably should be using WSGI. However, if you need it, 
having ASGI standardized in Django will help the community build on the power 
that it brings.
> 
> Channels claims to have a better zero-downtime deployment story. However, in 
> practice I’m not convinced that will be true. [snip]

I've been concerned about this as well. On Heroku my web dynos don't go down, 
because the new ones are booted up while the old ones are running, and then a 
switch is flipped to have the router use the new dynos. Worker dynos, however, 
do get shut down. Daphne won't be enough to keep my site functioning. This is 
another reason I was thinking of a hybrid WSGI/ASGI server.
> 
> There is an idea floating around of using Channels for background jobs/Celery 
> replacement. It is not/should not be. [snip reasons]

It's not a Celery replacement. However, this simple interface may be good 
enough for many things. Anything that doesn't use Celery's `acks_late` is a 
candidate, because in those cases even Celery doesn't guarantee delivery, and 
ASGI is a simpler interface than the powerful, glorious behemoth that is Celery.

There's an idea that something like Celery could be built on top of it. That 
may or may not be a good idea, since Celery uses native protocol features of 
AMQP to make things work well, and those may not be available or easy to 
replicate accurately with ASGI. I'll be sticking with Celery for all of those 
workloads, personally, at least for the foreseeable future.
> 
> [snip] locks everyone into using Redis.

Thankfully, I know you're wrong about this. Channel layers can be built for 
other things, but Redis is a natural fit, so that's what he's written. I expect 
we'll see other channel layers for queues like AMQP before too long.
> 
> I see literally no advantage to pushing all HTTP requests and responses 
> through Redis.

It seems like a bad idea to push _all_ HTTP requests through Redis given the 
latency it adds, but it can still be a good idea for long-running requests, 
because it separates the HTTP interface from the long-running code. This 
can be good, if used carefully.

> What this does enable is that you can continue to write synchronous code. To 
> me that’s based around some idea that async code is too hard for the average 
> Django dev to write or understand. Or that nothing can be done to make parts 
> of Django play nicer with existing async frameworks which I also don’t 
> believe is true. Python 3.4 makes writing async Python pretty elegant and 
> async/await in 3.5 makes that even better.

Async code is annoying, at best. It can be done, and it's getting much more 
approachable with async/await, etc. But even when you've done all th

Re: My Take on Django Channels

2016-05-05 Thread Andrew Godwin
On Thu, May 5, 2016 at 12:34 PM, Mark Lavin  wrote:

> After somewhat hijacking another thread
> https://groups.google.com/d/msg/django-developers/t_zuh9ucSP4/eJ4TlEDMCAAJ
> I thought it was best to start fresh and clearly spell out my feelings
> about the Channels proposal. To start, this discussion of “Django needs a
> websocket story” reminds me very much of the discussions about NoSQL
> support. There were proof of concepts made and the sky is falling arguments
> about how Django would fail without MongoDB support. But in the end the
> community concluded that `pip install pymongo` was the correct way to
> integrate MongoDB into a Django project. In that same way, it has been
> possible for quite some time to incorporate websockets into a Django
> project by running a separate server dedicated for handling those
> connections in a framework such as Twisted, Tornado, Aiohttp, etc and
> establishing a clear means by which the two servers communicate with one
> another as needed by the application. Now this is quite vague and ad-hoc
> but it does work. To me this is the measuring stick by which to judge
> Channels. In what ways is it better or worse than running a separate server
> process for long-lived vs short-lived HTTP connections?
>

The main gains are (in my opinion):
 - The same server process can serve both HTTP and WebSockets without path
prefixing (auto-negotiation based on the Upgrade header); without this you
need an extra web layer in front to route requests to the right backend
server
 - HTTP long-polling is supported via the same mechanism (like WebSockets,
it does not fit inside the WSGI paradigm in a performant way)
 - You get to run less processes overall

That said, I don't see everyone running over to use Daphne in production,
which is why it's entirely reasonable to run two servers; one for HTTP and
one for WebSockets. Channels fully supports this, whether you run the HTTP
servers as self-contained WSGI servers or make them forward onto the ASGI
layer via the adapter.
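The Upgrade-header auto-negotiation in the first bullet can be sketched roughly like this (an illustrative dispatch function, not Daphne's actual code; the function name is invented):

```python
def pick_protocol(headers):
    """Toy sketch: a single server can accept both plain HTTP and
    WebSockets on one port by inspecting the Upgrade header of the
    initial HTTP request, instead of needing a path-prefixing proxy
    in front to split the traffic."""
    lowered = {k.lower(): v for k, v in headers.items()}
    upgrade = lowered.get("upgrade", "")
    return "websocket" if upgrade.lower() == "websocket" else "http"

# A normal request stays on the HTTP handler...
assert pick_protocol({"Host": "example.com"}) == "http"
# ...while a WebSocket handshake is routed to the WebSocket handler.
assert pick_protocol({"Connection": "Upgrade", "Upgrade": "websocket"}) == "websocket"
```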


>
> At the application development level, Channels has the advantage of a
> clearly defined interprocess communication which would otherwise need to be
> written. However, The Channel API is built more around a simple queue/list
> rather than a full messaging layer. The choices of backends are currently
> limited to in-memory (not suitable for production), the ORM DB (not
> suitable for production), and Redis. While Redis PUB/SUB is nice for
> fanout/broadcast messaging, it isn’t a proper message queue. It also
> doesn’t support TLS out of the box. For groups/broadcast the Redis Channel
> backend also doesn’t use PUB/SUB but instead emulates that feature. It
> likely can’t use PUB/SUB due to the choice of sharding. This seemingly
> ignores robust existing solutions like Kombu, which is designed around AMQP
> concepts. Kombu supports far more transports than the Channel backends
> while emulating the same features, such as groups/fanout, and more such as
> topic exchanges, QoS, message acknowledgement, compression, and additional
> serialization formats.
>

Firstly, nothing in channels uses pub/sub - channels deliver to a single
reader of a queue, and thus cannot be built on a broadcast solution like
pub/sub.

asgi_redis, the backend you're discussing, instead uses Redis lists
containing the names of expiring Redis string keys with data encoded using
msgpack, using LPOP or BLPOP to wait on the queue and get messages. It has
built-in sharding support based on consistent hashing (and with separate
handling for messages to and from workers).
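The consistent-hash sharding mentioned here can be illustrated with a toy hash ring (a generic sketch of the technique, not asgi_redis's actual code; class and shard names are invented):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map channel names to shards so each name lands on a stable shard,
    and adding/removing a shard only moves a fraction of the keys."""

    def __init__(self, shards, points_per_shard=16):
        # Place several virtual points per shard around the ring.
        self.ring = []  # sorted list of (hash_value, shard)
        for shard in shards:
            for i in range(points_per_shard):
                bisect.insort(self.ring, (self._hash(f"{shard}:{i}"), shard))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def shard_for(self, channel):
        # Walk clockwise to the next point on the ring (wrapping around).
        index = bisect.bisect(self.ring, (self._hash(channel), "")) % len(self.ring)
        return self.ring[index][1]

ring = ConsistentHashRing(["redis://shard-0", "redis://shard-1"])
shard = ring.shard_for("http.request")
assert shard == ring.shard_for("http.request")  # mapping is stable
```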

AMQP (or similar "full message queues") doesn't work with Channels for two
main reasons:

 a) Running protocols through a queue like this requires incredibly low
latency; the Redis solution is on the order of milliseconds, which is a
speed I have personally not seen an AMQP queue reach

 b) The return channels for messages require delivery to a specific
process, which is a very difficult routing story given the AMQP design
structure. There's some solutions, but at the end of the day you need to
find a way to route dynamically-generated channel names to their correct
interface servers where the channel names change with each client.

There was some work to try and get a fourth, AMQP-based backend for
channels a little while back, but it proved difficult as AMQP servers are
much more oriented around not losing tasks and going a bit slower, while
Channels is (and must be) designed the opposite way, closer almost to a
socket protocol.
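The routing problem in (b) can be made concrete with a toy sketch, assuming reply channels of the `base!suffix` form ASGI uses for process-specific channels; the registry structure and function name here are invented purely for illustration:

```python
def route_reply_channel(channel_name, interface_servers):
    """Toy router: a reply channel like 'http.response!a1b2c3' must reach
    the one interface server holding that client's socket. Here we assume
    (for illustration only) each server has registered the suffixes it owns."""
    base, sep, client_suffix = channel_name.partition("!")
    if not sep:
        raise ValueError("not a process-specific reply channel")
    for server in interface_servers:
        if client_suffix in server["clients"]:
            return server["name"]
    raise LookupError(f"no interface server owns {client_suffix!r}")

servers = [
    {"name": "daphne-1", "clients": {"a1b2c3"}},
    {"name": "daphne-2", "clients": {"d4e5f6"}},
]
# The suffix changes with every client, so the broker must route dynamically:
assert route_reply_channel("http.response!d4e5f6", servers) == "daphne-2"
```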



>
> Architecturally, both of these approaches require running two processes.
> The current solution would run a WSGI server for short lived connections
> and an async server for long lived connections. Channels runs a front-end
> interface server, daphne, and the back-end worker servers. Which is more
> scalable? That’s hard to say. They both scale the same way: add more
> processes.
>

I'd like to point out again that you can still run two servers with
Ch

My Take on Django Channels

2016-05-05 Thread Mark Lavin


After somewhat hijacking another thread 
https://groups.google.com/d/msg/django-developers/t_zuh9ucSP4/eJ4TlEDMCAAJ 
I thought it was best to start fresh and clearly spell out my feelings 
about the Channels proposal. To start, this discussion of “Django needs a 
websocket story” reminds me very much of the discussions about NoSQL 
support. There were proof of concepts made and the sky is falling arguments 
about how Django would fail without MongoDB support. But in the end the 
community concluded that `pip install pymongo` was the correct way to 
integrate MongoDB into a Django project. In that same way, it has been 
possible for quite some time to incorporate websockets into a Django 
project by running a separate server dedicated for handling those 
connections in a framework such as Twisted, Tornado, Aiohttp, etc and 
establishing a clear means by which the two servers communicate with one 
another as needed by the application. Now this is quite vague and ad-hoc 
but it does work. To me this is the measuring stick by which to judge 
Channels. In what ways is it better or worse than running a separate server 
process for long-lived vs short-lived HTTP connections?

At the application development level, Channels has the advantage of a 
clearly defined interprocess communication which would otherwise need to be 
written. However, the Channel API is built more around a simple queue/list 
rather than a full messaging layer. The choices of backends are currently 
limited to in-memory (not suitable for production), the ORM DB (not 
suitable for production), and Redis. While Redis PUB/SUB is nice for 
fanout/broadcast messaging, it isn’t a proper message queue. It also 
doesn’t support TLS out of the box. For groups/broadcast the Redis Channel 
backend also doesn’t use PUB/SUB but instead emulates that feature. It 
likely can’t use PUB/SUB due to the choice of sharding. This seemingly 
ignores robust existing solutions like Kombu, which is designed around AMQP 
concepts. Kombu supports far more transports than the Channel backends 
while emulating the same features, such as groups/fanout, and more such as 
topic exchanges, QoS, message acknowledgement, compression, and additional 
serialization formats.

Architecturally, both of these approaches require running two processes. 
The current solution would run a WSGI server for short lived connections 
and an async server for long lived connections. Channels runs a front-end 
interface server, daphne, and the back-end worker servers. Which is more 
scalable? That’s hard to say. They both scale the same way: add more 
processes. It’s my experience that handling long-lived vs short-lived HTTP 
connections have different scaling needs so it is helpful to be able to 
scale them independently as one might do without Channels. That distinction 
can’t be made with Channels since all HTTP connections are handled by the 
interface servers. Channels has an explicit requirement of a backend/broker 
server which requires its own resources. While not required in the separate 
server setup, it’s likely that there is some kind of message broker between 
the servers so at best we’ll call this a wash in terms of resources. 
However, the same is not true for latency. Channels will handle the same 
short-lived HTTP connections by serializing the request, putting it into 
the backend, deserializing the request, processing it in the worker, 
serializing the response, putting it into the backend, deserializing the 
response, and sending it to the client. This is a fair bit of extra work 
for no real gain, since there is no concept of priority or backpressure. 
This latency also exists for the websocket message handling. While Channels 
may try to claim that it’s more resilient/fault tolerant because of this 
messaging layer, it claims “at most once” delivery, which means that a 
message might never be delivered. I don’t think that claim has much merit. 
As noted in previous discussions, sending all HTTP requests unencrypted 
through the Channel backend (such as Redis) raises a number of potential 
security/regulatory issues which have yet to be addressed.
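The extra round-trip steps described above can be counted out in a toy
model (pure Python with in-process queues; this is a sketch of the data
flow only, not the real ASGI wire format):

```python
import json
import queue

request_channel = queue.Queue()
response_channel = queue.Queue()

def interface_server(raw_request):
    # Steps 1-2: serialize the request and put it on the bus.
    request_channel.put(json.dumps(raw_request))

def worker():
    # Steps 3-4: deserialize the request and process it in the worker.
    request = json.loads(request_channel.get())
    response = {"status": 200, "content": "Hello " + request["path"]}
    # Steps 5-6: serialize the response and put it back on the bus.
    response_channel.put(json.dumps(response))

def interface_collect():
    # Step 7: deserialize the response before writing it to the client.
    return json.loads(response_channel.get())

interface_server({"method": "GET", "path": "/"})
worker()
print(interface_collect())  # {'status': 200, 'content': 'Hello /'}
```

Four (de)serialization passes per request that a plain WSGI server never
performs.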

One key difference to me is that pushing Channels as the new Django 
standard makes Django’s default deployment story much more complicated. 
Currently this complication is the exception not the rule. Deployment is a 
frequent complaint, not just from people new to Django. Deployment of 
Python apps is a pain and this requires running two of them even if you 
aren’t using websockets. To me that is a huge step in the wrong direction 
for Django in terms of ease of deployment and required system resources.

Channels claims to have a better zero-downtime deployment story. However, 
in practice I’m not convinced that will be true. A form of graceful reload 
is supported by the most popular WSGI servers, so it isn’t really better 
than what we currently have. The Channel docs note that you only need to 
restart the workers when deploy

Re: Add HTML5 required attribute on form widgets

2016-05-05 Thread Jon Dufresne
On Thu, May 5, 2016 at 11:29 AM, Collin Anderson 
wrote:

> If anyone is running into hidden required fields preventing forms from
> submitting (like me), I've been using this jQuery code for a somewhat-hacky
> quickfix:
>
> $(':hidden[required]').removeAttr('required')
>
>
The changes made on master should not be adding required to hidden inputs.
I added a check to prevent this. Are you experiencing something different?

Are these inputs hidden by Django or from another source?

Cheers,
Jon

-- 
You received this message because you are subscribed to the Google Groups 
"Django developers  (Contributions to Django itself)" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-developers+unsubscr...@googlegroups.com.
To post to this group, send email to django-developers@googlegroups.com.
Visit this group at https://groups.google.com/group/django-developers.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-developers/CADhq2b44N4VxvtoB-QYykapRoQtCXHbrNpOzxbay0KzmpSjHqQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Add HTML5 required attribute on form widgets

2016-05-05 Thread Collin Anderson
If anyone is running into hidden required fields preventing forms from 
submitting (like me), I've been using this jQuery code for a somewhat-hacky 
quickfix:

$(':hidden[required]').removeAttr('required')

On Saturday, April 2, 2016 at 12:27:28 PM UTC-4, Jon Dufresne wrote:
>
> On Wed, Mar 30, 2016 at 7:01 AM, Alex Riina wrote:
>
>> What's the plan for formsets with extra?
>>
>> I could see the required only getting applied to the first min forms but 
>> I'm not sure there is an actual workable case there. It seems like it will 
>> get too messy with adding and deleting at the same time.
>>
>> If can_delete is false and extra is 0, it seems like the required 
>> attribute could at least be used. Because of this, I think it should 
>> probably be an initialization argument, default to false, or be overridden 
>> when constructing forms in formsets.
>>
>> https://github.com/gregmuellegger/django-floppyforms/issues/75
>>
>>
> Thanks for highlighting this.
>
> I'll investigate implementing the suggestion "If can_delete is false and 
> extra is 0, it seems like the required attribute could at least be used. 
> Because of this, I think it should probably be an initialization argument, 
> default to false, or be overridden when constructing forms in formsets." 
> Thanks.
>
> I think with this concern, this feature can't be solved entirely by 
> template-based widgets, as something other than Field.required is necessary 
> for the formset case.
>
> Cheers,
> Jon
>
>



Re: failure to load fixtures during unit tests

2016-05-05 Thread Rich Rauenzahn

Thanks, Tim.

Unfortunately I can't move past Django 1.7 yet -- dependencies.  I've been 
marching my way up one revision at a time hopefully up to 1.9 as a way to 
keep the scope of what breaks under control as I move through each major 
revision and stabilize my project.  Then I attack replacing dependencies.

I really think I've found a bug here ... which I hope to suggest a patch 
for and submit, hence the post to the developers channel, but I can go back 
to the users group for now... My recent experience with that list doesn't 
bode well, however, and I don't have high hopes of anyone there being able 
to respond at the internals level I may need to track down the issue.  I've 
almost rewritten my tests to just load raw sql, but if there is a bug here 
I'd like to help find it rather than work around/ignore it.

As I step through the code, it really looks like the _save_table() method 
in Model is trying to insert a row even though the object has already been 
restored/inserted.  At the moment, I'm reproducing it with the auth.User 
Model.

I'm getting closer to seeing what is happening 

I have a user, rich, which expects to be pk=1 per the fixture.  

> /opt/perfcat/virtualenv-2.7.11/lib/python2.7/site-packages/django/db/models/base.py(686)_save_table()
685 import ipdb; ipdb.set_trace()
--> 686 if not updated:
687 if meta.order_with_respect_to:

ipdb> self.id
1
ipdb> self.__class__.objects.all()
[<User: rich>]
ipdb> self.__class__.objects.all()[0].id
5
ipdb> self.username
u'rich'
ipdb> 


But in this particular run I'm currently tracing, rich is already in the db 
(as the only entry) as pk=5 (via fixture loading process).   For one, this 
tells me the sequence generators aren't always resetting between fixture 
loads/tests.

So I think the code is trying to reassign it to pk=1.  

We did drop into the update code,

ipdb> pk_set and not force_insert
True

But updated is False

ipdb> updated
False

So now it tries to drop into an insert, but it is going to get an 
IntegrityError because username has to be unique.

Not sure what this means, yet, but my current step through looks like this:

ipdb> 
IntegrityError: Integrit...sts.\n',)
> /opt/perfcat/virtualenv-2.7.11/lib/python2.7/site-packages/django/db/models/base.py(700)_save_table()
699 update_pk = bool(meta.has_auto_field and not pk_set)
--> 700 result = self._do_insert(cls._base_manager, using, 
fields, update_pk, raw)
701 if update_pk:

ipdb> update_pk
False
ipdb> meta.has_auto_field
True
ipdb> pk_set
True
ipdb> 

...if we don't need to update the pk, and it is set... why are we 
inserting it?

Walking through a second time with this knowledge ... and stepping into 
_do_update(),

I end up with filtered = base_qs.filter(pk=pk_val) being equal to [] 
because the entry in the db has a pk=5, and it is filtering for pk=1

So "return filtered._update(values) > 0" returns False, because nothing was 
updated since the pks didn't match.
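The failure mode being traced here -- an UPDATE that matches zero rows,
followed by an INSERT that violates a unique constraint -- can be
reproduced in miniature. This is only a sketch of the _save_table()
fall-through logic, not the real ORM code:

```python
class FakeTable:
    """Toy table with a pk index and a unique constraint on username."""
    def __init__(self):
        self.rows = {}  # pk -> row dict

    def update(self, pk, row):
        # Mirrors filtered._update(values): returns the number of rows matched.
        if pk in self.rows:
            self.rows[pk] = row
            return 1
        return 0

    def insert(self, pk, row):
        # Unique constraint on username, as in auth.User.
        if any(r["username"] == row["username"] for r in self.rows.values()):
            raise ValueError("IntegrityError: username already exists")
        self.rows[pk] = row

def save_table(table, pk, row):
    # Simplified Model._save_table(): try an UPDATE; if it matched
    # nothing, fall through to an INSERT.
    updated = table.update(pk, row) > 0
    if not updated:
        table.insert(pk, row)

table = FakeTable()
table.insert(5, {"username": "rich"})  # 'rich' was loaded as pk=5
try:
    save_table(table, 1, {"username": "rich"})  # fixture expects pk=1
except ValueError as exc:
    print(exc)  # IntegrityError: username already exists
```

The UPDATE filtered on pk=1 touches nothing because the row sits at pk=5,
so the fallback INSERT trips the unique constraint -- the same shape as the
ipdb trace above.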

Where I am stuck now is in understanding how fixture loading manages the 
pks... 

Rich

On Wednesday, May 4, 2016 at 4:22:33 PM UTC-7, Tim Graham wrote:
>
> Hi Rich, django-users is the appropriate place to ask "is it a bug?" type 
> questions. We try not to use this mailing list as a second level support 
> channel, otherwise it'd get really noisy. Thanks for understanding.
>
> By the way, Django 1.7 is no longer supported. Please make sure you can 
> reproduce the issue on Django master so we don't spend time debugging 
> issues that have since been fixed.
>
> On Wednesday, May 4, 2016 at 7:13:42 PM UTC-4, Rich Rauenzahn wrote:
>>
>>
>> I'm in the middle of trying to track down a problem with loading fixtures 
>> during unit tests -- I'm hesitant to call it a bug in Django 1.7, but the 
>> inconsistent behavior is really stumping me.
>>
>> Essentially I've made a fixture via
>>
>>manage dumpdata --indent=3 -e sessions -e admin -e contenttypes -e 
>> auth.Permission > test-fixtures.json
>>
>> If I add that fixture to my TestCase, it sometimes works if I run each 
>> test individually (using Django Nose)  -- 
>>
>>manage test --failfast test_it:TestClass.test_detail
>>manage test --failfast test_it:TestClass.test_list
>>
>> But if I run them together, 
>>
>>manage test --failfast test_it:TestClass
>>
>> I get errors about duplicate/unique problems.  Essentially a row is 
>> attempted to be added twice. 
>>
>> IntegrityError: Problem installing fixture 'test-fixtures.json': 
>> Could not load app.Branch(pk=1): duplicate key value violates unique 
>> constraint "app_branch_name_49810fc21046d2e2_uniq"
>> DETAIL:  Key (name)=(mock) already exists.
>>
>> (I've also posted this earlier today on django-users, where I also 
>> included some postgres output).  The tests within the TestCase (or 
>> TransactionTestCase) can be

Re: Django Integration

2016-05-05 Thread Andrew Godwin
On Thu, May 5, 2016 at 12:55 AM, Anssi Kääriäinen 
wrote:

> On Thursday, May 5, 2016, Russell Keith-Magee 
> wrote:
>
>> I will admit that I haven’t been paying *close* attention to Andrew’s
>> work - I’m aware of the broad strokes, and I’ve skimmed some of the design
>> discussions, but I haven’t been keeping close tabs on things. From that
>> (admittedly weak) position, the only counterargument that I’ve heard are
>> performance concerns.
>>
>
> I haven't brought this up before, but security is something we should
> discuss pre-merge.
>
> What I'm mainly worried about is malicious clients intentionally trying to
> choke the channels layer. I guess the approaches for DoS attack would fall
> under these categories:
>   1. Try to generate large responses and read those responses slowly.
>

This would likely lead either to the response packets expiring in Redis
after one minute, or to Redis running out of memory as it overfills with
packets and doing whatever its configured OOM response is (which I will
suggest be "expire things early"). asgi_redis could likely be improved so
that the channel lists don't get overfull in this situation.


>   2. Fire a large request, don't read the response.
>

Same as above. ASGI has no backpressure on channels per se, so you can't
tell if the response is being read at all.


>   3. Try to cause exceptions in various parts of the stack - if the worker
> never writes to the response channel, what will happen?
>

Daphne will time out the request 120 seconds after request start with a
503 Service Unavailable in the default configuration, but I'm tempted to
drop that to 60 and reset the clock every time a response chunk turns up,
and have a second absolute timeout that handles the slow-reader DoS case.
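That two-timeout scheme (an idle timeout reset on every response chunk,
plus an absolute cap a slow reader can never extend) could look roughly
like the following; the numbers and class name are assumptions for
illustration, not Daphne's actual implementation:

```python
import time

class ResponseTimeout:
    """Idle timeout that resets on every response chunk, plus an
    absolute deadline that a slow reader can never extend."""
    def __init__(self, idle=60, absolute=300, clock=time.monotonic):
        self.clock = clock
        self.idle = idle
        self.absolute_deadline = clock() + absolute
        self.idle_deadline = clock() + idle

    def chunk_received(self):
        # Each chunk buys another idle window...
        self.idle_deadline = self.clock() + self.idle

    def expired(self):
        # ...but never past the absolute deadline.
        now = self.clock()
        return now > self.idle_deadline or now > self.absolute_deadline

# Drive it with a fake clock to show both timeouts.
fake_now = [0.0]
t = ResponseTimeout(idle=60, absolute=300, clock=lambda: fake_now[0])
fake_now[0] = 59.0
t.chunk_received()           # chunk arrives just in time, idle window resets
fake_now[0] = 118.0
print(t.expired())           # False: still inside the refreshed idle window
fake_now[0] = 301.0
t.chunk_received()           # chunks can no longer help...
print(t.expired())           # True: absolute cap hit
```

A slow reader that trickles one chunk every 59 seconds keeps the idle
timer happy but still gets cut off at the absolute deadline.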


>
> There are always DoS vectors, but checking there aren't easy ones should
> be done. The main concern with channels is potential resource leak.
>
> I accidentally found some presentations that seem relevant to this
> discussion. I recently watched some presentations about high availability
> at Braintree. There are two presentations available, one from 2013 by Paul
> Gross, where he explains their approach to HA, and one from 2015 by Lionel
> Barrow, explaining what changed. Both are very interesting and highly
> recommended.
>
> The 2013 presentation introduces one key piece of HA at Braintree, called
> Broxy. Broxy basically serves HTTP the same way as Daphne - it writes
> requests to Redis, and waits for the response, again through Redis.
>
> The 2015 presentation explains what changed. They removed Broxy because
> it turned out to be conceptually complex and fragile. It might be their
> implementation. But there is certain level of both complexity and possible
> fragility about the design. On the other hand their story pretty much
> verifies that the design does scale.
>

Do you have a link to the presentation about them removing it? I've tried
to solve the problems I had last time I wrote a reverse proxy like this by
making the thing drop requests and responses whenever a problem comes up
(previous ones I've worked with suffered from a bad recovery after high
traffic as they tried to get through the queue)

The main problem with the design that I can potentially foresee is the lack
of backpressure, which was raised before. While it keeps the design simple,
it also means that anyone writing into a channel has no idea about the
current state of it; the problem is, however, that what constitutes "full"
varies by channel (e.g. for http.request it could be 1000 packets, for
http.response.clientid it could be 10), and so you start needing
per-channel, per-project configuration.

I'm tempted to add this as a configurable option on the channel layer
(defaulting every channel to, say, 500), though, and extend the ASGI spec
just slightly to allow send() to return a ChannelFull exception, which
clients can treat how they like (daphne might drop the request, django
might just wait while writing a response and give up after X seconds).
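As a rough sketch of that proposal (per-channel capacities with a default
of 500, and send() raising ChannelFull): the class and method names below
are hypothetical, since the spec extension had not been written at this
point:

```python
from collections import defaultdict, deque

class ChannelFull(Exception):
    """Raised by send() when a channel has reached its configured capacity."""

class CapacityLayer:
    """Toy channel layer with per-channel capacity limits."""
    def __init__(self, default_capacity=500, per_channel=None):
        # Per-channel overrides, e.g. small buffers for response channels.
        self.capacity = per_channel or {}
        self.default_capacity = default_capacity
        self.queues = defaultdict(deque)

    def send(self, channel, message):
        limit = self.capacity.get(channel, self.default_capacity)
        if len(self.queues[channel]) >= limit:
            raise ChannelFull(channel)
        self.queues[channel].append(message)

layer = CapacityLayer(per_channel={"http.response.clientid": 10})
for i in range(10):
    layer.send("http.response.clientid", {"chunk": i})
try:
    layer.send("http.response.clientid", {"chunk": 10})
except ChannelFull:
    print("dropping request")  # e.g. what an interface server might choose
```

Each sender then decides its own policy on ChannelFull: an interface
server might drop the request outright, while an application worker might
retry for a while before giving up.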


>
> All in all I'd feel a lot more confident if there were large production
> sites using the channels system, so that we wouldn't have to theorize
> about the excellence of the approach. Are there already production
> deployments out
> there?
>
>
Jacob mentioned he'd been running it on something, but I don't know what
exactly, and I only hear mention, not details, of people trying it in
production. Personally, it's only running on my own sites, which are
anything but high traffic.

I had discussed with several people before about getting Channels running
as a parallel to their site and redirecting some load to it via a hidden
iframe to do a slightly better load test; maybe now is the time to start
that process?

Andrew


Re: Django Integration

2016-05-05 Thread Anssi Kääriäinen
On Thursday, May 5, 2016, Russell Keith-Magee 
wrote:

> I will admit that I haven’t been paying *close* attention to Andrew’s work
> - I’m aware of the broad strokes, and I’ve skimmed some of the design
> discussions, but I haven’t been keeping close tabs on things. From that
> (admittedly weak) position, the only counterargument that I’ve heard are
> performance concerns.
>

I haven't brought this up before, but security is something we should
discuss pre-merge.

What I'm mainly worried about is malicious clients intentionally trying to
choke the channels layer. I guess the approaches for DoS attack would fall
under these categories:
  1. Try to generate large responses and read those responses slowly.
  2. Fire a large request, don't read the response.
  3. Try to cause exceptions in various parts of the stack - if the worker
never writes to the response channel, what will happen?

There are always DoS vectors, but checking there aren't easy ones should be
done. The main concern with channels is potential resource leak.

I accidentally found some presentations that seem relevant to this
discussion. I recently watched some presentations about high availability
at Braintree. There are two presentations available, one from 2013 by Paul
Gross, where he explains their approach to HA, and one from 2015 by Lionel
Barrow, explaining what changed. Both are very interesting and highly
recommended.

The 2013 presentation introduces one key piece of HA at Braintree, called
Broxy. Broxy basically serves HTTP the same way as Daphne - it writes
requests to Redis, and waits for the response, again through Redis.

The 2015 presentation explains what changed. They removed Broxy because
it turned out to be conceptually complex and fragile. It might be their
implementation. But there is certain level of both complexity and possible
fragility about the design. On the other hand their story pretty much
verifies that the design does scale.

All in all I'd feel a lot more confident if there were large production
sites using the channels system, so that we wouldn't have to theorize about
the excellence of the approach. Are there already production deployments out
there?

 - Anssi



Re: Django Integration

2016-05-05 Thread Aymeric Augustin
FWIW I’m in the same boat as Russell:

- limited familiarity with channels: I read the docs cover-to-cover but never 
ran the code
- sufficient trust in their design: I heard Andrew talk about it and I thought 
it made sense
- reasonable confidence that it won’t introduce regressions, including 
performance regressions (at least with the in-memory backend)

Regarding risks for the 1.10 release, I don’t expect channels to be as 
problematic as migrations, if only because they’re entirely determined by the 
current version of the code. In contrast, migrations could be affected by all 
previous versions of the code, adding a whole new dimension of complexity.

Regarding the process, I’m familiar with the situation :-( When I worked on 
transactions and app-loading, I argued heavily and merged large 
backwards-incompatible patches at interim steps, just to keep things 
manageable, under the assumption that I’d manage to figure out a consistent 
design by the next release. That turned out OK but it was less than satisfying.

I tried to improve by getting funding for the “multiple template engines”. A 
secret goal of that project was to establish a standard of public 
accountability for funded projects. I received positive feedback on the method 
— detailed DEP, weekly updates — and I’d love it if others adopted it. The 
topic was nowhere near as ambitious as 
channels, though.

Finally I’m interested in hearing more about the “things [Mark doesn’t] like 
about channels”. Channels has been mostly a solo effort by Andrew. However, 
until now, public discussion has usually reinforced my trust in his design. 
We'll see if that trend holds ;-)

-- 
Aymeric.


> On 05 May 2016, at 07:53, Mark Lavin  wrote:
> 
> Thank you Russ. I'll reconsider expressing my full thoughts on Channels more 
> likely in another thread. For now I do think it's worth addressing this issue 
> of benchmarks/performance which keeps being brought up. The argument is that 
> since this is optional we don't need to see the benchmarks because there 
> won't be any regressions, which is true. However, if it is also being said 
> that this is so fundamentally important to Django and everyone will use it so 
> it cannot live as an external project and must land in 1.10 then I don't 
> think that argument can be made without ensuring there are no huge 
> regressions moving an existing application from WSGI to ASGI. If nothing else 
> those benchmarks seem important for Django users to make an informed choice 
> about WSGI vs ASGI for their deployment. How can we not care about how this 
> "fundamental change to Django" might impact performance or say that isn't a 
> requirement to even measure, regardless of outcome, before its inclusion?
> 
> - Mark
> 
> On Wednesday, May 4, 2016 at 10:00:15 PM UTC-4, Russell Keith-Magee wrote:
> Hi Mark,
> 
> On Thu, May 5, 2016 at 8:41 AM, Mark Lavin wrote:
> Major features have never been perfect, no, but they have in the past 
> typically gone through two paths to prove out their design/API/usefulness. 
> One is as an established and mature third-party app such as messages, 
> staticfiles, and django-secure. More recently the other has been through the 
> DEP process: multiple templates (Jinja) and query expressions. Channels has 
> done neither.
> 
> Sorry if it seems that I've raised these issues late but I don't feel like 
> there has been a good place for this discussion since the DEP process was 
> circumvented. Most of the development for this has been in Andrew's space. I 
> don't feel welcome to raise a dissenting opinion as a mere lowly member of 
> the Django community.
> 
> If that’s your perception, then we as a community clearly have a problem that 
> needs to be addressed. 
> 
> You’ve been around the Django community since (AFAICT) 2009. You’re a 
> technical director at a well known and well respected Django consultancy. 
> You’ve given talks at DjangoCons. You’ve co-written a book about Django for 
> O’Reilly. If you’re not someone who is in a position to give an informed 
> opinion on issues with Channels, then I don’t know who is. If you feel like 
> you’re on the outer and your opinion is not welcome, then that’s *our* 
> failure, not yours.
> 
> I can’t argue with the fact that the DEP process has been circumvented here. 
> I also acknowledge that this would be doubly frustrating given your 
> difficulties shepherding the content negotiation DEP. I don’t think I can 
> give a good answer for why this has been done, other than enthusiasm and 
> momentum overriding a not-entirely-well-established process.
> 
> This thread (and email in general) probably isn’t the best place to flesh out 
> the solution to these process issues, but they definitely need to be 
> resolved. Discussion at PyCon and DjangoCon US is definitely called for - 
> I’ll be at both, and I’d defini

Re: Rearding session_key validation

2016-05-05 Thread Aymeric Augustin
Hello Samarjeet,

“Can I do this?” and “How can I do this?” questions also go to django-users.

May I kindly suggest writing to the django-users mailing list until someone 
tells you an issue you’re facing is likely a bug in Django that hasn’t been 
reported yet?

Then you can create a ticket on https://code.djangoproject.com/ or a thread on 
the django-developers mailing list, whichever seems more appropriate.

There are thousands of subscribers to the django-developers list, which is 
dedicated to the development of Django itself, as opposed to development 
with Django. I hope you can understand that not all of them are interested 
in following your first steps with Django. Thanks!

-- 
Aymeric.

> On 05 May 2016, at 08:18, Samarjeet Singh  wrote:
> 
> Right now what Django is doing is:
> 
> 1. It checks whether the session id is correct by checking the user 
> associated with the session id.
> 2. If the session id is not present, it sends the user as anonymous, and 
> because of that it redirects them to the login page.
> 3. But my concern is: if I want to log this event in the audit log of my 
> application using Django, to capture the false session id request, is it 
> possible to write a middleware for this?
> 
> 
> regards 
> samarjeet singh
> 
