Didn't someone say there was a setting that was a stickler when it came to 
hostnames? What was that? 
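If it's the one I'm thinking of, RabbitMQ is the usual suspect: its node name is derived from the machine's short hostname, so inconsistent name resolution between boxes breaks remote connections. A quick sanity check (a sketch only, assuming nothing about the exact setup):

```shell
# RabbitMQ names its node rabbit@<short hostname>, so every machine in the
# cluster must resolve that short name consistently.
short=$(hostname -s)
echo "This node's RabbitMQ name would be: rabbit@${short}"
# From each scanner, the broker VM's short name should resolve to 10.10.1.16
# (check with: getent hosts <broker-shortname>).
```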

It amazes me how overcomplicated this whole cluster mess is, imho. All the time 
we have spent testing and guessing could be saved by Andrew simply taking 15 
minutes out of his day to give us the right instructions.

- Mark

On Feb 25, 2013, at 4:57 PM, Raymond Norton <[email protected]> wrote:

> I set up another vm with postgres, baruwa, memcached and rabbit. Pointed both 
> scanners to it, but still have the same type of problems.
> 
> I am getting a new error that I have not gotten before from memcached: error: 
> [Errno 32] Broken pipe
> 
> I am sure the problem is just in the way we have things set up.
> 
> This is what I have:
> 
> Scanners A & B
> macro.conf > 10.10.1.16
> Production.ini (everything except for smtp server) > 10.10.1.16
> Sphinx.conf: Listens on 127.0.0.1, psql=10.10.1.16
> MailScanner.conf > 10.10.1.16
> BS.pm > 10.10.1.16
> 
> Database/Memcached/Rabbit/Baruwa/ VM:
> Production.ini (default, except that I changed the broker to 10.10.1.16 while 
> troubleshooting). Works either way.
> Sphinx.conf: Listens on 10.10.1.16
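> For comparison, the scanners' broker settings would look something like the 
> sketch below (hypothetical key names and values; check them against the 
> production.ini shipped with your Baruwa version):

```ini
; Point celery at the broker VM instead of localhost (assumed key names)
broker.host = 10.10.1.16
broker.port = 5672
broker.user = baruwa
broker.vhost = baruwa
```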
> 
> 
> Also manually created clusters with rabbitmq between the 3 vms, but that 
> didn't fix anything.
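> For reference, manually joining a scanner to the broker node usually follows 
> this sequence (a sketch only, assuming RabbitMQ 3.x, that the broker VM's 
> short hostname is "brokerhost", and that /var/lib/rabbitmq/.erlang.cookie 
> already matches on all three machines):

```shell
# Run on each scanner, not on the broker VM
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@brokerhost   # "brokerhost" is hypothetical
rabbitmqctl start_app
rabbitmqctl cluster_status                   # all three nodes should be listed
```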
> 
> Getting more web app errors from memcached than I did before. I still get the 
> stats of the local node I am on, but cannot get them from the remote box.
> 
> Restarting rabbitmq-server seems to bring things back to life if things stall.
> 
> Tried a second vhost for rabbitmq the other day, configuring each scanner 
> to use a different one. Didn't seem to change anything.
> 
> Waiting for someone who knows more than I do to chime in :)
> 
> 
> On 02/24/2013 01:20 AM, Mark Chaney wrote:
>> 
>> Made any progress? I haven't really looked at it in the last 24 hours. Had to 
>> take a break from it. 
>> 
>> On 2013-02-22 13:14, Raymond Norton wrote: 
>>> I asked the question because I am concluding that information is only 
>>> available on the localhost of my scanners, which explains the way node 
>>> stats are working for me. I'm presuming the connection to whatever 
>>> produces those stats is where my misconfiguration is. 
>>> 
>>> 
>>> 
>>> 
>>> On 02/22/2013 01:08 PM, Raymond Norton wrote: 
>>>> The celery log 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On 02/22/2013 12:00 PM, Mark Chaney wrote: 
>>>>> Where are you seeing that entry? 
>>>>> 
>>>>> On 2013-02-22 11:52, Raymond Norton wrote: 
>>>>>> Where does celery pull this info from: 
>>>>>> 
>>>>>> Task get-system-status[2dc1142e-b526-4f2c-816e-0cb798362d83] 
>>>>>> succeeded in 0.169036865234s: {'load': (0.6, 0.51, 0.42), 'uptime': 
>>>>>> '18... 
>>>>>> 
>>>>>> 
>>>>>> On 02/22/2013 11:14 AM, Mark Chaney wrote: 
>>>>>>> Are you talking about /var/log/rabbitmq/[email protected] on Server A? 
>>>>>>> If so, its correctly showing connections from Server B. But it seems to 
>>>>>>> be very basic logging. 
>>>>>>> 
>>>>>>> On 2013-02-22 11:11, Raymond Norton wrote: 
>>>>>>>> Does rabbitmq log provide any hints? 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> On 02/22/2013 11:08 AM, Mark Chaney wrote: 
>>>>>>>>> Hmm, I must have missed a step when I reinstalled everything after 
>>>>>>>>> I switched to Baruwa Enterprise (I wanted upgrades/fixes to be easier 
>>>>>>>>> to apply): I am no longer getting status to work on Server B. Not 
>>>>>>>>> only does Server A not show the correct status for Server B, Server B 
>>>>>>>>> does not show the correct status of itself, either. Though I just 
>>>>>>>>> realized that I get an error when I try to release a message stored 
>>>>>>>>> on Server A while doing it from Server B. Need to look into that 
>>>>>>>>> first, I guess. 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> On 2013-02-22 09:31, Raymond Norton wrote: 
>>>>>>>>>> Does anyone know what I might be missing or have configured wrong 
>>>>>>>>>> here? 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> I set server A & B to use memcached, rabbitmq, and postgres of 
>>>>>>>>>> server A. 
>>>>>>>>>> 
>>>>>>>>>> Server A: I added it as a node to itself. Checked status, and celery 
>>>>>>>>>> shows the request was properly processed. 
>>>>>>>>>> 
>>>>>>>>>> Server B: I added it as a node to itself. Checked status, and all comes 
>>>>>>>>>> back fine in the Server A celery log, so we know rabbitmq is working. 
>>>>>>>>>> 
>>>>>>>>>> Added Server B as a node on Server A, but it comes up faulty and 
>>>>>>>>>> nothing is logged in celery. Same the other way around (adding 
>>>>>>>>>> Server A as a node to B). 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> However, I can release messages from either server, if I am on the 
>>>>>>>>>> local web interface that the message passed through. 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> It seems like it's the way rabbitmq is called from the local box vs. 
>>>>>>>>>> a remote connection. Either that, or I am missing a config change on 
>>>>>>>>>> Server B.
>>>>>>>>> 
>>>>>>>>> _______________________________________________ 
>>>>>>>>> Keep Baruwa FREE - http://pledgie.com/campaigns/12056
>>>>>>> 
>> 
>> 
> 
> -- 
> Raymond Norton
> LCTN
> 952.955.7766
