Thanks for the input. I've placed my replies inline; please have a look.

On Tue, Jan 29, 2013 at 7:29 PM, Steven Carr <sjc...@gmail.com> wrote:

> Regarding system IDs... you couldn't give each system its own system ID,
> otherwise they would function as distinct systems (each system wouldn't be
> able to act on tickets generated by the other, etc.). Shared systems would
> have to share the same system ID.
>

bogdan: Thanks for this info. I was under the impression that OTRS uses the
SystemID only when deciding which emails to process from an assigned inbox
via PostMaster.
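To make my assumption concrete, here is a hedged sketch of how I pictured that mechanism working. The subject format `[Ticket#<number>]` and the idea that date-based ticket number generators embed the SystemID after the date are my assumptions, not verified OTRS internals:

```python
import re

# Assumed OTRS-style follow-up detection: a ticket number in the email
# subject, e.g. "[Ticket#2013012910000015] Re: printer broken".
# Assumption: date-based generators format the number as
# YYYYMMDD + SystemID + counter, so a system only claims its own tickets.
TICKET_RE = re.compile(r"\[Ticket#(\d+)\]")

def is_followup_for_system(subject: str, system_id: str) -> bool:
    """Return True if the subject carries a ticket number that a system
    with the given SystemID would consider its own (sketch only)."""
    m = TICKET_RE.search(subject)
    if not m:
        return False  # no ticket number -> would be treated as a new ticket
    number = m.group(1)
    # Skip the 8-digit date prefix, then compare the SystemID segment.
    return number[8:8 + len(system_id)] == system_id

print(is_followup_for_system("[Ticket#2013012910000015] Re: printer", "10"))
```

Under that (assumed) scheme, two systems sharing one inbox but configured with different SystemIDs would each ignore the other's follow-ups, which matches what you describe.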


>
> And OTRS sessions can be stored either in the DB (by default) or the
> filesystem, so not sure why that would be an issue?
>
>
bogdan: Let me clarify what I mean. I assumed OTRS *requires* "sticky
user sessions" (i.e., all HTTP requests/responses from a given user session
need to pass through the same app server). This is required by web apps
that hold user session data in memory or on local disk between HTTP
requests. I assumed OTRS requires "sticky user sessions" because I saw a
lot of generated files in /var subdirs.

Can I conclude that OTRS actually stores *all* user-session-specific data in
the database between requests? My impression was that it stores just a list
of sessions so that an admin can see them in the admin UI.
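If session state really does live entirely in the shared database, stickiness becomes unnecessary, which is the point I'm trying to pin down. A minimal illustrative sketch (not OTRS code; table layout and names are invented):

```python
import sqlite3

# Sketch: when all session state lives in a shared database, any app
# server can handle any request, so the load balancer needs no sticky
# sessions. The schema here is hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (token TEXT PRIMARY KEY, user TEXT, data TEXT)")

def handle_request(server_name: str, token: str):
    """Simulates one app server: fetch the session from the shared store."""
    row = db.execute(
        "SELECT user, data FROM sessions WHERE token = ?", (token,)
    ).fetchone()
    return server_name, row

# Login handled by "server A", follow-up request served by "server B":
db.execute("INSERT INTO sessions VALUES ('abc123', 'bogdan', 'queue=Raw')")
print(handle_request("app-a", "abc123"))
print(handle_request("app-b", "abc123"))
```

Both simulated servers see the same session row, so it would not matter which backend the balancer picks, provided OTRS never caches session state locally between requests.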

> With the "does last DB write always win", if the write was committed then
> yes, it's committed to the database. I don't really understand what your
> question is; that is how a database works. Whether or not the last commit
> was successfully synchronised to other database servers will entirely
> depend on your database clustering configuration.
>

bogdan: I'll explain what I meant. Many apps supporting heavy concurrent
usage implement entity versioning. This means that when the app reads a
document from the DB (such as a ticket), it also reads its version. The user
changes the entity in the web UI, and when the user saves the change the app
first checks the version of the entity to ensure it is still the same as
when it was initially read. If the versions differ, the app either throws
an error or does something more elegant to reconcile the differences.
What's important is that in this way the app protects against silently
losing changes. I was aiming to find out whether OTRS does entity versioning
when I asked "Does last db write always win?", and I asked mainly because I
saw many tables storing a change time for each row. The change time could be
used for entity versioning.
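To be precise about the pattern I mean, here is a minimal sketch of entity versioning (optimistic locking) with an integer version column. This is illustrative only; I don't know whether OTRS does anything like it, and the schema is invented:

```python
import sqlite3

# Sketch of optimistic locking: the UPDATE only succeeds if the row still
# has the version the client read, and bumps the version on success.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ticket (id INTEGER PRIMARY KEY, title TEXT, version INTEGER)")
db.execute("INSERT INTO ticket VALUES (1, 'Printer broken', 1)")

def save_ticket(ticket_id: int, new_title: str, version_read: int) -> bool:
    """Write only if nobody changed the row since we read it."""
    cur = db.execute(
        "UPDATE ticket SET title = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_title, ticket_id, version_read),
    )
    db.commit()
    return cur.rowcount == 1  # False -> someone else committed first

# Two users read version 1; only the first save wins, the second is told:
print(save_ticket(1, "Printer is broken", 1))  # True
print(save_ticket(1, "Printer jammed", 1))     # False: stale version
```

A `change_time` column could serve the same role as the version counter, as long as every write checks it in the `WHERE` clause the same way.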


>
> I don't see why you couldn't have multiple front-end "web" servers and a
> clustered database in the backend, and I'm not aware of any hard and fast
> rules that dictate you can't store articles in the DB once you reach a
> certain size. Yes, you are possibly going to run into some performance
> issues (maybe, depending on how tuned your DB is) and it will be quite a
> chunk of data to back up. Generally for the front end you will need at
> least 4 servers (2 load balancers sharing an additional virtual IP and the
> 2 web servers you want to load balance); queries hit the virtual IP and the
> LB node proxies the connection to one of the backend web servers.
>
>
bogdan: I think it's self-evident that it's not practical to store
attachments in the DB when the result is a DB that's 90% attachments and 10%
everything else. I would like to store this DB on fast storage, and it's
hard to obtain budget for high-capacity SSDs when most of the data on them
is dead weight. Backup management would become a pain, some queries would be
heavily impacted, and all DB operations (moving to QA/dev environments,
etc.) would be unreasonably slowed down by such a large DB. OTRS's official
docs also acknowledge this practical matter.
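The split I'm arguing for keeps only small metadata rows in the DB while the attachment bytes live on shared storage. A rough sketch of that idea (the directory layout and field names here are hypothetical, not OTRS's actual `/var/article` scheme):

```python
import hashlib
import os
import tempfile

# Stands in for a shared /var/article mount; hypothetical layout.
ARTICLE_ROOT = tempfile.mkdtemp()

def store_attachment(article_id: int, filename: str, payload: bytes) -> dict:
    """Write the bytes to shared storage; return only the small metadata
    record that would go into the database."""
    subdir = os.path.join(ARTICLE_ROOT, str(article_id))
    os.makedirs(subdir, exist_ok=True)
    path = os.path.join(subdir, filename)
    with open(path, "wb") as fh:
        fh.write(payload)
    return {
        "article_id": article_id,
        "filename": filename,
        "size": len(payload),                       # bytes on disk, not in DB
        "sha1": hashlib.sha1(payload).hexdigest(),  # integrity check
        "path": path,
    }

meta = store_attachment(42, "invoice.pdf", b"%PDF-1.4 ...")
print(meta["size"])
```

With this split, daily DB backups stay small and fast, and the bulk storage can sit on cheaper capacity tiers while the DB stays on SSDs.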


> Steve
>
>
>
>
> On 29 January 2013 16:58, Bogdan Iosif <bogdan.io...@gmail.com> wrote:
>
>> Hi,
>>
>> While I appreciate the general advice, please note I'm not trying to
>> reinvent anything. Instead, I want to prepare for natural problems that
>> OTRS will run into when reaching a size that requires load balancing.
>>
>> For example, articles can't be stored in the database for installations
>> where article size exceeds a couple of GB. In my case, the initial OTRS
>> installation has 25 GB of attachments. Storing those in the DB would hurt
>> query performance and lead me to having daily backups of unmanageable size.
>> So shared storage is a must. The problem is what needs to be shared and
>> what needs to *never* be shared.
>>
>> I don't know about the code in Linux-HA, but the load balancers I've worked
>> with require some minimal support from applications behind them when the
>> app requires sticky app sessions (not HTTP sessions; OTRS keeps a user
>> session alive across multiple HTTP sessions).
>>
>> Do you have a load-balanced OTRS installation?
>>
>> /bogdan
>>
>>
>>
>> On Tue, Jan 29, 2013 at 6:34 PM, David Boyes <dbo...@sinenomine.net> wrote:
>>
>>> Thoughts:
>>>
>>> Rather than invent an application-specific solution, look at Linux-HA (
>>> www.linux-ha.org). They’ve solved most of these problems in a neatly
>>> packaged way.
>>>
>>> There’s existing code to handle session affinity and most of the request
>>> distribution process.
>>>
>>> If you store everything in the database (including attachments), you can
>>> easily separate the application logic from the database server; that
>>> introduces a bit more database management, but it easily allows multiple
>>> otrs instances to use the same data safely. It also lets you take advantage
>>> of the clustering features in the dbms software. You also eliminate any
>>> need for shared storage.
>>>
>>> If you use shared storage, you MUST use a cluster-aware filesystem like
>>> GFS2 or OCFS. NFS won’t work reliably.
>>>
>>> *From:* otrs-boun...@otrs.org [mailto:otrs-boun...@otrs.org] *On Behalf
>>> Of *Bogdan Iosif
>>> *Sent:* Tuesday, January 29, 2013 5:21 AM
>>> *To:* OTRS User Mailing List
>>> *Subject:* [otrs] NLB (load balancing) OTRS
>>>
>>> Hi,
>>>
>>> Can anyone help with some obvious issues around setting up a load
>>> balanced OTRS?
>>>
>>> - Does the last DB write always win?
>>>   I imagine there's no built-in protection against it.
>>>
>>> - Are HTTP sticky sessions required and, if so, how can they be
>>> configured?
>>>   I imagine OTRS needs some built-in support to allow identification of
>>> user sessions in the balancer so that it maps them to the same app server.
>>>
>>> - To what extent is shared storage required?
>>>
>>>   An older mailing list message proposes sharing the whole /var dir
>>> through storage that supports file locks (mainly to safely use
>>> TicketCounter.log, but this could be worked around by setting up different
>>> SystemIDs (via SysConfig) on each app server). While sharing storage is
>>> required for /var/article (when attachments are stored on disk), I don't
>>> know if it's required or even safe for the other subfolders in /var
>>> (especially /tmp).
>>>
>>> Thanks,
>>>
>>> Bogdan
>>>
>>> ---------------------------------------------------------------------
>>> OTRS mailing list: otrs - Webpage: http://otrs.org/
>>> Archive: http://lists.otrs.org/pipermail/otrs
>>> To unsubscribe: http://lists.otrs.org/cgi-bin/listinfo/otrs
>>>