*I'm not sure I understand you. Where are you trying to call it again? From 
IPython?*

I am calling it from Django; the code is inside the view.

from django.http import HttpResponse
from .tasks import add  # assuming the add task lives in the app's tasks.py

def test(request):
    try:
        add.delay(2, 2)
    except add.OperationalError as exc:
        print('error')
    return HttpResponse('working')

Point 1 - Stop the RabbitMQ server manually from the terminal (sudo service 
rabbitmq-server stop).
Point 2 - Reload the view in the browser; it immediately throws a connection 
refused error, which I can catch easily with the try/except block in the 
example above.
Point 3 - Reload the page again and it hangs there; it never sends back an 
HTTP response because the call is still waiting for the broker.
Point 4 - You can see Celery in the background trying to reconnect to the 
broker every 5-10 seconds.
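
(Side note: what I plan to experiment with next is capping the publish retries 
at the app level. The setting names below come from the Celery 4 configuration 
docs (broker_connection_timeout, task_publish_retry, task_publish_retry_policy); 
the project name, broker URL and the chosen values are only my guesses, and I 
have not verified that this stops the hang in Point 3.)

# celery.py - sketch of bounding publish retries at the app level
from celery import Celery

app = Celery('proj', broker='amqp://localhost//')

app.conf.update(
    broker_connection_timeout=4,   # seconds to wait while opening a broker connection
    task_publish_retry=True,       # retry publishing when the connection is lost
    task_publish_retry_policy={
        'max_retries': 3,          # stop after 3 attempts instead of waiting forever
        'interval_start': 0,       # first retry immediately
        'interval_step': 0.5,      # add 0.5s between attempts
        'interval_max': 2,         # never wait more than 2s between attempts
    },
)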


*From what I understood, your problem is assuming that if a Celery task 
fails, it won't be retried :)*

It should be retried, but I can't make a user wait for it: the signup process 
is very quick, and we cannot hold up a user just because our broker connection 
is lost. I simply want this: if the connection is lost, try for about 10 
seconds; if it reconnects, good; otherwise just move on and don't get stuck.

I even tried add.apply_async((2, 2), retry=False), but it still doesn't work 
and the page does not give back the HTTP response.
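
What I am aiming for is roughly the per-call version of the same idea, using 
the retry_policy option from the connection-error-handling section of the 
calling guide linked below (the numbers are just picked to keep the wait short, 
I have not confirmed that they avoid the hang on the second reload; imports are 
the same as in the view above):

def test(request):
    try:
        # cap the publish retries for this call so the view never blocks for long
        add.apply_async((2, 2), retry=True, retry_policy={
            'max_retries': 3,
            'interval_start': 0,
            'interval_step': 0.5,
            'interval_max': 0.5,
        })
    except add.OperationalError:
        print('error')  # broker unreachable; carry on instead of blocking signup
    return HttpResponse('working')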

I think I have covered everything you wanted to know, @Matemática A3K.


On Wednesday, January 3, 2018 at 2:08:24 PM UTC+5:30, Matemática A3K wrote:
>
>
>
> On Tue, Jan 2, 2018 at 11:14 PM, Mukul Mantosh <mukulma...@gmail.com> wrote:
>
>> I am not using a result backend. My question is that when the broker 
>> connection is lost, it throws a connection refused exception, which I can 
>> normally catch with the code given below.
>>
>> try:
>>     add.delay(2, 2)
>> except add.OperationalError as exc:
>>     print('error')
>>
>> Reference 1: 
>> http://docs.celeryproject.org/en/latest/userguide/calling.html#connection-error-handling
>> Reference 2: https://github.com/celery/celery/issues/3933
>>
>>
>> This try/except block only works the first time. The next time I call 
>> add.delay(2, 2), the code just sits waiting to execute because Celery keeps 
>> retrying to establish the connection with the broker.
>>
>>
> I'm not sure I understand you. Where are you trying to call it again? From 
> IPython?
>  
>
>> I simply don't want this. For example: there is a website where a user 
>> signs up as a new user and we have to send a verification email through 
>> Celery; suddenly the connection gets lost, and then the code sits in a 
>> waiting state because Celery keeps retrying to establish a connection with 
>> the lost broker.
>>
>>
> I agree that this may be better suited to a Celery mailing list, as "low 
> level" Celery is probably not the main expertise of most of the people 
> reading here. As Jason said, from what I understand, 
> http://docs.celeryproject.org/en/latest/userguide/calling.html#calling-retry 
> should be sufficient for the "normal" cases.
>
> Keep in mind that the exception will be raised at the Celery level, not in 
> Django, because it's async: 
> http://docs.celeryproject.org/en/latest/userguide/calling.html#linking-callbacks-errbacks.
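> For example, something along the lines of the errback sketch on that page 
> (error_handler is just an illustrative name):
>
> @app.task
> def error_handler(request, exc, traceback):
>     # runs if add fails; gets the failed task's request, exception and traceback
>     print('Task {0} raised: {1!r}'.format(request.id, exc))
>
> add.apply_async((2, 2), link_error=error_handler.s())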
>  
> Django delegates the task to Celery, and Celery takes care of executing it. 
> If you need a perfect guarantee of execution, do it synchronously: send the 
> verification mail inside the Django view, where you can show the user that 
> the confirmation mail hasn't been sent.
>  
>
>> How can we solve this problem?
>>
>>
> From what I understood, your problem is assuming that if a Celery task 
> fails, it won't be retried :)
>  
>
>>
>>
>> On Wednesday, January 3, 2018 at 12:45:10 AM UTC+5:30, Jason wrote:
>>>
>>> With a broker connection loss, the only thing that will happen is your 
>>> workers won't pick up new tasks.
>>>
>>> If you're posting to a result backend like redis and lose the 
>>> connection, then an exception will be raised in the logs and the task will 
>>> shut down.
>>>
>>> Remember tasks are independent processes and you can tell each worker 
>>> how many tasks to execute before its process is killed.
>>>
>
>
