OK, I was able to reproduce this. I believe it happens when booting a VM with a sheepdog volume fails (because the sheep daemon was down). Here's the output:
node 1:

# sheep -f /data/sheep/
sheep: jrnl_recover(2221) Openning the directory /data/sheep//journal/00000009/.
sheep: set_addr(1595) addr = 172.16.1.1, port = 7000
sheep: main(144) Sheepdog daemon (version 0.2.3) started
sheep: get_cluster_status(403) sheepdog is waiting with older epoch, 10 9 172.16.1.2:7000

node 2:

# sheep -f /data/sheep/
sheep: jrnl_recover(2221) Openning the directory /data/sheep//journal/00000010/.
sheep: set_addr(1595) addr = 172.16.1.2, port = 7000
sheep: main(144) Sheepdog daemon (version 0.2.3) started
sheep: send_join_request(1048) 33624236 22579
sheep: update_cluster_info(568) failed to join sheepdog, 65

# collie cluster info -a 172.16.1.1
Waiting for other nodes joining
Ctime                Epoch  Nodes
2011-06-15 11:50:16  9      [172.16.1.1:7000, 172.16.1.2:7000]

# collie cluster info -a 172.16.1.2
The node had failed to join sheepdog
Ctime                Epoch  Nodes
2011-06-15 11:50:16  10     [172.16.1.2:7000]
--
sheepdog mailing list
[email protected]
http://lists.wpkg.org/mailman/listinfo/sheepdog
