My apologies, it's something on the system, not in the code. That'll teach
me to post before checking source. I'll post the fix once I find it for
anyone else who might come across it.
I had already found and used the ACTIVEMQ_USER variable in the init script,
and it's working properly.
Thanks for confirming that tightening down those permissions won't break
anything.
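For reference, a minimal sketch of that setting. ACTIVEMQ_USER is the variable from the init script mentioned above; the file path and account name are assumptions that vary by distro and install method:

```shell
# Hypothetical excerpt from the init configuration (e.g. /etc/default/activemq;
# the exact path depends on your distro/package). ACTIVEMQ_USER is the
# variable the init script honors; 'activemq' is an assumed account name.
ACTIVEMQ_USER="activemq"
```

After restarting the service, something like `ps -o user= -C java` should show the broker JVM running as that account.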
I can't help but think this is at least a bug, if not a security issue. In
the interest of least privilege, sh
You can change the perms to be more restrictive so long as the user ID
of the broker's Java process can still access them.
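As a concrete illustration of that advice, here is a sketch that simulates the reported default permissions in a throwaway directory and then tightens them. The file names mimic KahaDB's journal and index, but everything runs in a temp dir so it is safe to try anywhere:

```shell
# Simulate a KahaDB data directory in a temp location.
DATA_DIR="$(mktemp -d)/data"
mkdir -p "$DATA_DIR/kahadb"
touch "$DATA_DIR/kahadb/db-1.log" "$DATA_DIR/kahadb/db.data"

# The world-writable defaults reported in the thread: 777 dir, 666 files.
chmod 777 "$DATA_DIR/kahadb"
chmod 666 "$DATA_DIR"/kahadb/*

# Tighten to owner-only. The user running the broker's Java process must
# own these files (on a real install: chown -R <broker-user> first, as root).
chmod 700 "$DATA_DIR/kahadb"
chmod 600 "$DATA_DIR"/kahadb/*

stat -c '%a' "$DATA_DIR/kahadb"           # directory mode, now 700
stat -c '%a' "$DATA_DIR/kahadb/db-1.log"  # file mode, now 600
```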
it makes total sense to have a specific user identity to run the
broker and restrict access to just that user for the data directory.
There is some support for this in the current ac
I am setting up a network of master/slave brokers. I used
https://github.com/FuseByExample/getting_started_with_activemq as a template
for the broker configuration. I ran several tests and I have a few questions. I
hope you guys don't find them unreasonable or unanswerable
On a Network of Master-Slave
It really makes me nervous knowing that anyone with any filesystem access to
my ActiveMQ machine can delete, overwrite, or corrupt my KahaDB files.
While we as users should do our best to secure our servers, I don't see why
666 perms are needed on the db files and 777 perms on the parent directory.
There is an important caveat when using static:failover: the failover
feature should only be used to pick one URL from the list of
candidates; it should not be used to fail over the transport. To
enforce this you must use ?maxReconnectAttempts=0
This is important, because only the static discovery me
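A hedged sketch of what that caveat looks like in a broker's XML configuration. The connector name, host names, and ports are hypothetical, and the nesting follows the static:(failover:(...)) pattern used in the getting_started_with_activemq template:

```xml
<!-- Sketch: the nested failover URI only selects one of the master/slave
     pair; maxReconnectAttempts=0 keeps the failover transport from
     retrying, so the static discovery layer stays in control. Host names
     and ports are assumptions. -->
<networkConnectors>
  <networkConnector name="bridge-to-pairA"
      uri="static:(failover:(tcp://masterA:61616,tcp://slaveA:61616)?maxReconnectAttempts=0)"/>
</networkConnectors>
```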