Also, I have seen instances where memory usage jumps when unqualified
searches are run concurrently by different users; unqualified searches can be
disabled.  The same can happen if the Email Engine is processing a huge
volume of emails with large attachments.




On Wed, Feb 25, 2009 at 1:45 PM, Walters, Mark <mark_walt...@bmc.com> wrote:

>  By default the maximum memory arserver can access on 32-bit Windows is
> 2GB.  If it tries to grow beyond this then it will fail.  This is an OS
> limitation that can be changed to 3GB by the addition of the /3GB switch to
> the appropriate line in the boot.ini file.  See
> http://www.microsoft.com/whdc/system/platform/server/PAE/PAEmem.mspx and
> many of the other pages returned by a Google search for “windows 3gb boot.ini”.
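>
> For illustration only (the ARC path and OS description below are just
> placeholders – edit the existing OS line in your own boot.ini rather than
> copying this one), the switch is appended to the end of the line, e.g.:
>
> multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB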
>
>
>
> The arserver is compiled with the large address aware flag that enables it
> to make use of the additional 1GB of RAM provided by this switch.
>
>
>
> However, I’d be interested to understand why your arserver process is
> getting so large that it is reaching the 2GB limit.  How much memory does
> arserver.exe consume after startup – at the point that users can login?  How
> many concurrent users?  The initial size of the process is largely
> determined by the amount of forms and workflow that you have on the system
> as these are all read in to the server to create the cache.  If you have  a
> full ITSM system with multiple language packs the initial size could be in
> excess of 700MB.  Once it is up and running the server will increase in size
> as it allocates memory to handle its day-to-day work – processing query
> results and so on.  One of the advantages of the Windows platform is that
> once the server releases the memory it is returned to the OS and the
> footprint should shrink again.  If the maximum process size (2 or 3 GB
> depending on the flag above) minus the current size of arserver is LESS
> than the startup size, a recache operation is likely to fail.
>
>
>
> Things that you could do:
>
>
>
> ·         Enable the /3GB option
>
> ·         If your startup size is very large, look to remove unused views,
> forms, and workflow from the system
>
> ·         Set Large-Result-Logging-Threshold: 100000 in ar.cfg and enable
> thread logging on the secondary servers – this will show you if you have
> users running queries that return large datasets and consume memory (see
> the example ar.cfg lines after the log excerpt below).
>
> ·         Set Copy-Cache-Logging: T as well – this will record the recache
> operations in the thread log.  You want to make sure that you see the
> FreeServerCache entry that indicates that the server has released the
> original copy of the cache.  If you have long-running API calls it is
> possible for the server to end up with more than two copies of the cache –
> if this is a large cache you can very quickly hit the memory limit.
>
> E.g. this is bad – multiple copies – you want to see a Begin, End and Free
> before the next Begin.
>
> CopyCache Begin: rpcCallProc=10002 user="Remedy Application Service" tid=5
> rpcId=0
>
> CopyCache End
>
> CopyCache Begin: rpcCallProc=10002 user="Remedy Application Service" tid=5
> rpcId=0
>
> CopyCache End
>
> FreeServerCache: rpcCallProc=10018 user="Remedy Application Service" tid=5
> rpcId=1178442632
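>
> For reference, a minimal sketch of the two ar.cfg lines mentioned above –
> they are simply added alongside your existing options; the rest of ar.cfg
> stays as it is:
>
> Large-Result-Logging-Threshold: 100000
>
> Copy-Cache-Logging: T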
>
>
>
> Incidentally, if you are using 64-bit Windows I believe the maximum
> size of a large address aware enabled 32-bit application is 4GB by default -
> http://msdn.microsoft.com/en-us/library/ms791558.aspx
>
>
>
> Mark Walters
>
>
>
> The opinions, statements, and/or suggested courses of action expressed in
> this E-mail do not necessarily reflect those of BMC Software, Inc.  My
> voluntary participation in this forum is not intended to convey a role as a
> spokesperson, liaison or support representative for BMC Software, Inc.
>
>
>
>
>
> *From:* Action Request System discussion list(ARSList) [mailto:
> arsl...@arslist.org] *On Behalf Of *Anthony K R
> *Sent:* 25 February 2009 07:17
>
> *To:* arslist@ARSLIST.ORG
> *Subject:* Re: ARS 7.1 server group issue
>
>
>
> Joe,
>
>
>
> The chunk setting should not cause a malloc error. There is no timeout issue
> either.
>
>
>
> Today I saw the memory consumption report when the recache was triggered on
> the secondary servers. It crosses 2GB before the malloc error – is this a
> memory limitation of the OS or of the arserver process?
>
>
>
>
>
> Regards,
>
> Anthony
>
>
>
>
>
>
>
> *From:* Action Request System discussion list(ARSList) [mailto:
> arsl...@arslist.org] *On Behalf Of *Joe DeSouza
> *Sent:* Wednesday, February 25, 2009 7:50 AM
> *To:* arslist@ARSLIST.ORG
> *Subject:* Re: ARS 7.1 server group issue
>
>
>
>
> It's a known issue that ARS on Windows connected to a remote Oracle
> database takes forever to recache, and that it takes forever to restart
> after the services have been stopped.  This is because of the way that data
> is read in chunks of 100 rows.  It is as designed, and Remedy has nothing to
> do with the design – it's more about how the Oracle client communicates
> with remote Oracle databases when the client is on Windows.
>
>
>
> I didn't experience the kinds of problems you are talking about on UNIX ARS
> Servers connected to remote Oracle databases.
>
>
>
> So I guessed your configuration from the symptoms you described.
> Unfortunately, you've got to live with it unless you decide to move to UNIX.
>
>
>
> Joe
>
>
>  ------------------------------
>
> *From:* Lyle Taylor <tayl...@ldschurch.org>
> *To:* arslist@ARSLIST.ORG
> *Sent:* Tuesday, February 24, 2009 6:02:40 PM
> *Subject:* Re: ARS 7.1 server group issue
>
> Correct……
>
>
>
> *From:* Action Request System discussion list(ARSList) [mailto:
> arsl...@arslist.org] *On Behalf Of *Joe DeSouza
> *Sent:* Tuesday, February 24, 2009 3:20 PM
> *To:* arslist@ARSLIST.ORG
> *Subject:* Re: ARS 7.1 server group issue
>
>
>
>
> Your AR Servers are probably on Windows and connect to Oracle set up as
> a remote database?
>
>
>
> Joe
>
>
>  ------------------------------
>
> *From:* Lyle Taylor <tayl...@ldschurch.org>
> *To:* arslist@ARSLIST.ORG
> *Sent:* Tuesday, February 24, 2009 4:27:56 PM
> *Subject:* Re: ARS 7.1 server group issue
>
>
> I see server groups as being more useful for load balancing and
> redundancy.  While you can indeed have users on the other systems while you
> perform the updates, the other servers become nearly unusable while the
> cache updates, especially for anything other than very minor changes.  I've
> simply had fewer issues if I bring down the other servers during the changes
> and then bring them back up again after.  In my experience, that actually
> provides a better user experience, because knowing that the system is down
> for a short time is easier to deal with than extremely slow performance
> during a cache update.
>
>
>
> Lyle
>
>
>
> *From:*
>
