Hey

So after a long while I've been able to pick up my work on the pipeline 
plugin.  Anyway, I've started hooking it up to our staging environment.  
One step runs on our master node, and the other step runs on a 
dynamically provisioned EC2 instance.  I'm archiving the entire workspace 
and copying it to the other node.  The workspace folder is about 1.8 GB.
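For context, here's roughly what the pipeline is doing (the node labels and the build/deploy scripts are placeholders, not our actual names):

```groovy
// Rough sketch, scripted Pipeline syntax.
node('master') {
    sh './build.sh'                          // placeholder build step
    // Capture the entire workspace (~1.8 GB) for transfer to the next node.
    stash name: 'workspace', includes: '**'
}
node('ec2-agent') {                          // dynamically provisioned EC2 instance
    unstash 'workspace'                      // this is the slow step
    sh './deploy.sh'                         // placeholder deploy step
}
```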

How exactly does archiving work?  Are you zipping, copying over and 
extracting?  Or are you syncing file by file?

This unarchiving step is pretty slow: it takes more than 10 minutes to 
run.

John

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/b00bb9bd-061d-4bc7-9a25-7f290ca9cc9e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
