Issue with Upgrade from 4.7.1 to 4.11

2018-06-02 Thread Osay Osman Yuuni
Hi all,
I'm having an issue with my ACS upgrade.
After reading through the documentation carefully and performing the
prerequisites, I upgraded my ACS installation from 4.7.0 to 4.7.1 and then
to 4.11.  However, after the upgrade ACS won't start.  On CentOS 7, using
systemctl, I get the following error:

cloudstack-management.service - CloudStack Management Server
   Loaded: loaded (/usr/lib/systemd/system/cloudstack-management.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2018-05-30 16:48:22 SAST; 15s ago
  Process: 3199 ExecStart=/usr/bin/jsvc -home ${JAVA_HOME} -user ${CLOUDSTACK_USER} -cp ${JARS}:${CLASSPATH} -errfile ${LOGDIR}/${NAME}.err -cwd ${LOGDIR} -pidfile ${CLOUDSTACK_PID} ${JAVA_OPTS} ${BOOTSTRAP_CLASS} (code=exited, status=1/FAILURE)
  Process: 3190 ExecStartPre=/bin/bash -c /bin/systemctl set-environment JARS=$(ls /usr/share/cloudstack-management/lib/*.jar | tr '\n' ':' | sed s'/.$//') (code=exited, status=0/SUCCESS)
  Process: 3183 ExecStartPre=/bin/bash -c /bin/systemctl set-environment JAVA_HOME=$( readlink -f $( which java ) | sed s:bin/.*$:: ) (code=exited, status=0/SUCCESS)

May 30 16:48:22 cloudmgr1.afdb.org systemd[1]: Starting CloudStack Management Server...
May 30 16:48:22 cloudmgr1.afdb.org jsvc[3199]: Invalid user name '' specified
May 30 16:48:22 cloudmgr1.afdb.org systemd[1]: cloudstack-management.service: control process exited, code=exited status=1
May 30 16:48:22 cloudmgr1.afdb.org systemd[1]: Failed to start CloudStack Management Server.
May 30 16:48:22 cloudmgr1.afdb.org systemd[1]: Unit cloudstack-management.service entered failed state.
May 30 16:48:22 cloudmgr1.afdb.org systemd[1]: cloudstack-management.service failed.

From what I can see this is linked to the Java service.

I'm not sure what the invalid user name '' refers to.
There is no log file in the cloudstack management log directory, which
means it dies immediately after it is launched.
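
One way to narrow this down (a sketch only; the variable names are taken
from the ExecStart line in the status output above, and whether the 4.11
packaging still sets CLOUDSTACK_USER is an assumption) is to list every
variable that ExecStart references and then check each one is actually set:

```shell
# List the environment variables referenced by the unit's ExecStart line,
# so an unset one (here CLOUDSTACK_USER, which jsvc rejects as '') stands out.
# The sample line below stands in for the real unit file on the host.
exec_line='/usr/bin/jsvc -home ${JAVA_HOME} -user ${CLOUDSTACK_USER} -cp ${JARS}:${CLASSPATH}'
echo "$exec_line" | grep -o '\${[A-Z_]*}' | tr -d '${}' | sort -u
# prints CLASSPATH, CLOUDSTACK_USER, JARS, JAVA_HOME (one per line)
```

Each name printed can then be checked against the unit's environment
(e.g. `systemctl show cloudstack-management -p Environment`) or the
distribution's defaults file; an empty CLOUDSTACK_USER would produce
exactly the jsvc error shown above.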

Is this something that anyone has come across?  Any help?

Kind regards,

-- 
*Osay Osman YUUNI*  | Techno-Geek


*Old Kent Drive | Midstream Estate*
Office: +27 12 003 6900 | Ext: 8402 | E-Mail: o yu...@gmail.com
Mobile: +27 78 090 5501 | Fax: +27866737198
Web: http://www.yuuniqueenterprises.com


Re: related to CLOUDSTACK-10310 Fix KVM reboot on storage issue need workaround

2018-06-02 Thread hanumant borwandkar
Hi,

We are using NFS as primary storage shared across 8 compute hosts.

I will share the agent logs of the affected compute hosts.

Is it possible to downgrade from 4.9 to 4.8?

Regards,
Hanumant

On 17-May-2018 11:17 am, "ilya musayev" wrote:

> We maybe missing a bit of context here - are you using NFS as shared
> storage in the cluster?
>
> If so - are you certain you aren’t losing connectivity to NFS primary
> storage?
>
> Please upload the agent.log to pastebin or similar site and share the link.
>
>
>
> On Tue, May 15, 2018 at 3:13 AM hanumant borwandkar <
> hanumant.borwand...@gmail.com> wrote:
>
> > Hi,
> >
> > We are using CloudStack in-house with the versions
> >
> > cloudstack-common-4.9.2.0-1.el7.centos.x86_64
> > cloudstack-agent-4.9.2.0-1.el7.centos.x86_64
> >
> > But unfortunately, every few days a compute host gets rebooted by the
> > cloudstack-agent and all VMs running on that host are affected.
> >
> > It seems that I'm facing the issue described in CLOUDSTACK-10310 (Fix
> > KVM reboot on storage issue).
> >
> > I tried to modify
> > /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh
> > as per GitHub, as follows:
> >
> >   /usr/bin/logger -t heartbeat "kvmheartbeat.sh stopped cloudstack-agent
> >   because it was unable to write the heartbeat to the storage."
> >   sync &
> >   sleep 5
> >   #echo b > /proc/sysrq-trigger
> >   service cloudstack-agent stop
> >
> > But there is no improvement; the compute host still gets rebooted.
> >
> > Can someone provide a workaround or a fix for this issue?
> >
> > Regards,
> > Hanumant Borwandkar
> >
>
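
For reference, the failure branch of kvmheartbeat.sh with the change quoted
in the thread would look roughly like this (a sketch: the wrapping function
name and structure are hypothetical, only the body mirrors the quoted
modification):

```shell
# Hypothetical wrapper around the modified failure path of kvmheartbeat.sh.
# Only the body reflects the change quoted in the thread; the function name
# and surrounding structure are illustrative.
on_heartbeat_write_failure() {
  /usr/bin/logger -t heartbeat "kvmheartbeat.sh stopped cloudstack-agent because it was unable to write the heartbeat to the storage."
  sync &
  sleep 5
  # Original behaviour: force an immediate host reset via sysrq.
  #echo b > /proc/sysrq-trigger
  # Modified behaviour: stop the agent instead of rebooting the host.
  service cloudstack-agent stop
}
```

If the host still reboots after this change, something other than this
branch is likely triggering the reset (for example an older copy of the
script still in use by the running agent), which is why the agent.log
requested above matters.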