Versions:

   - Ruby 2.3
   - Capistrano 3.6.1
   - Rake 11.3.0 / Rails 4.2.7.1
   
Platform:

   - Working on OSX
   - Deploying to Ubuntu/CentOS


Hello,


We are currently running into a weird behaviour with Capistrano that
started appearing once our database backup exceeded 4GB in size.

I've attached the rake task that generates the backup to show how it
works; essentially it runs a pg_dump.
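To give a rough idea without opening the attachment, the task boils down to
something like the following (a simplified sketch only; the task name, role,
paths and database name are placeholders, the real code is in backup.rake):

    # Simplified sketch of our backup task -- see the attached backup.rake for the real thing.
    # Task name, role, paths and database name below are placeholders.
    namespace :db do
      desc "Create a pg_dump backup on the database server"
      task :backup do
        on roles(:db) do
          timestamp = Time.now.strftime("%Y%m%d%H%M%S")
          dump_path = "/var/backups/app_#{timestamp}.dump"

          # This is the long-running part once the database grows past a few GB.
          execute :pg_dump, "-Fc", "-d", "app_production", "-f", dump_path
        end
      end
    end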


The problem we encounter is this: before doing the deploy, we ask whether a
database backup is needed.

If the answer is yes, we invoke the task defined in backup.rake, wired up roughly as shown below.
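(Simplified; the actual prompt text and task names differ.)

    # Ask once before the deploy whether a backup should be made.
    namespace :deploy do
      desc "Optionally back up the database before deploying"
      task :maybe_backup do
        ask(:make_backup, "no")
        invoke "db:backup" if fetch(:make_backup).to_s.downcase.start_with?("y")
      end
    end

    before "deploy:starting", "deploy:maybe_backup"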

Now, when the backup is larger than 4GB, Capistrano seems to lose the
connection with the server.

We do not get any feedback from Capistrano on whether the task succeeded,
failed, or is still running.

So after roughly one hour we kill the task locally with Ctrl+C and run the
deploy again without the backup.


The result is that the deploy then succeeds, and it turns out the backup from
the first attempt was actually made.

So we know the backup task itself works; it's just that Capistrano doesn't
report anything back to us.


Are there any settings we could look into?

Something like a TTL or timeout, or a way to have the task report back the
progress/output of pg_dump?
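To make the question concrete, this is the kind of thing we imagine might
help, but we have not verified it (option names are taken from the Net::SSH
and Airbrussh documentation):

    # Untested ideas -- keep the SSH session alive while pg_dump runs...
    set :ssh_options, {
      keepalive: true,         # Net::SSH-level keepalive packets
      keepalive_interval: 30   # seconds between packets
    }

    # ...and/or stream the remote command output so we can see pg_dump is still running.
    set :format, :airbrussh
    set :format_options, command_output: true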


Kind regards,

Arne De Herdt


Attachment: backup.rake
Description: Binary data
