On 10/09/12 21:54 +0000, Chip Burke wrote:
> Well, after a few days of testing it seems luci/ricci is still flakey.

First of all, there seems to be more than the "XML entities" problem
involved here.  This is because luci, unlike ccs and ccs_sync ("cman_tool
version" uses ccs_sync under the hood), should not suffer from that one.
So this is probably yet another, separate issue...

> If I update things via luci, the cluster.conf on the local machine with
> luci updates, but nothing pushes out to other nodes. If I then manually run
> ccs_sync on the node with the new configuration, things push out to the
> other nodes fine. While this is wonky, at least it is consistent and
> repeatable so I can do things in this manner until fixes are in. Though,
> certainly let me know if you want any more logs or whatever from me.
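
Just for the record, that interim routine boils down to roughly the
following (a sketch only, run as root on the node where luci wrote the
new cluster.conf, assuming the default /etc/cluster/cluster.conf
location):

---
# propagate the locally updated cluster.conf to the other cluster nodes
ccs_sync
---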

To get a better error message into luci.log, as something to start with,
could you please apply a workaround for that "no translator" issue?

The recipe, based on the actual patch fixing the mentioned bug, is as
follows (run as root on the host with luci installed):

---

pushd $(rpm --eval %python_sitearch)
cat <<EOF | patch -p1 --fuzz=3 -b -z .std
diff --git a/luci/lib/ricci_helpers.py b/luci/lib/ricci_helpers.py
--- a/luci/lib/ricci_helpers.py
+++ b/luci/lib/ricci_helpers.py
@@ -7,6 +7,9 @@
 
 import threading
 
+import pylons
+from pylons.i18n.translation import _get_translator
+
 from luci.lib.helpers import ugettext as _
 
 from luci.model import DBSession
@@ -29,6 +33,14 @@ class PWorker(threading.Thread):
         self.cluster_members_only = cluster_members_only
 
     def run(self):
+        # see http://comments.gmane.org/gmane.comp.web.turbogears/46896
+        # this is stolen from the pylons test setup;
+        # it will make sure the gettext-stuff is working, that is
+        # we inject translator object to this private thread similarly
+        # as it is done by the framework in per-request threads
+        translator = _get_translator(None)
+        pylons.translator._push_object(translator)
+
         while True:
             self.mutex.acquire()
             if len(self.triples) == 0:
@@ -67,6 +79,8 @@ class PWorker(threading.Thread):
             self.ret[triple[0][0]] = r
             self.mutex.release()
 
+        pylons.translator._pop_object()
+
 def send_batch_parallel(triples, max_threads, cluster_members_only=False):
     mutex = threading.RLock()
     threads = list()
EOF
popd

---
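
To double-check that both hunks landed (a quick sanity check only; the
grep just looks for the newly added _get_translator lines):

---
grep -n _get_translator \
  $(rpm --eval %python_sitearch)/luci/lib/ricci_helpers.py
---

If those lines show up, restart luci afterwards so that the patched code
actually gets loaded.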

If luci.log still does not provide any better information, please run this
sed one-liner, which bumps the logger_luci level from INFO to DEBUG in
luci.ini (luci has to have been started at least once so that the file is
assuredly in place; a luci restart is needed afterwards):

---
sed -i.std "/logger_luci/bone;b;:one s|INFO|DEBUG|;ttwo;n;bone;:two n;btwo" \
  /var/lib/luci/etc/luci.ini
---
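
To confirm the edit took effect and to pick it up (the grep simply mirrors
the pattern the sed keys on; "service luci restart" assumes the stock
RHEL 6 init script):

---
grep -A 3 logger_luci /var/lib/luci/etc/luci.ini
service luci restart
---

The level in that section should now read DEBUG instead of INFO.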


Thanks,
Jan

