David,

> I think we're starting from different assumptions.

You may be right. I understand your rationale; let me elaborate on
mine once more :-)

I don't expect that any of the filesystems would or should change their
behavior, or that Linux would change its caching policies. My point was
that if you can't afford to lose the data, you had better write it
synchronously or programmatically flush the buffers from time to time,
so that you always have consistent data to recover from.
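
Just to illustrate what I mean, here is a minimal sketch (the file name
and record below are made-up placeholders; O_SYNC and fsync()/fdatasync()
are the standard Linux interfaces for forcing data out of the page cache):

    /* Sketch: write critical data synchronously, or buffer it and
     * flush at a known consistency point, instead of relying on the
     * page cache being written back "eventually". */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *rec = "critical record\n";

        /* Option 1: O_SYNC - every write() returns only after the
         * data has reached the storage device. */
        int fd = open("journal.dat",
                      O_WRONLY | O_CREAT | O_APPEND | O_SYNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, rec, strlen(rec)) < 0)
            perror("write");

        /* Option 2: write buffered as usual and call fsync() (or
         * fdatasync() if metadata may lag) at a consistency point. */
        if (fsync(fd) < 0)
            perror("fsync");

        close(fd);
        return 0;
    }

With O_SYNC already set the fsync() is of course redundant; it is only
there to show both variants.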

I guess in saying so I'm guided by thoughts similar to those that cause
you to suggest using a file backup program from within Linux.

It is not necessarily about backups exclusively. XRC and PPRC aren't
just snapshot technologies; they are meant to continuously replicate all
data stored locally to a remote storage subsystem. In the case of PPRC
with GDPS/HyperSwap this allows you, for example, to swap storage
subsystems, locally or remotely, without disrupting your workload. While
more expensive, that is certainly more convenient than falling back to a
backup for which you have no well-defined recovery scheme identifying
what you've lost since it was taken ...

Regards,
Ingo

Linux on 390 Port <LINUX-390@VM.MARIST.EDU> wrote on 04.05.2005 18:39:27:

> > If there is transactional data, the transaction monitor and/or the
> > database would typically worry about the data being written to a
> > persistent data store, so that in case of an outage you have
> > roll-back or roll-forward capabilities.
>
> I think we're starting from different assumptions. I start from the
> assumption that the majority of Linux users are storing data on normal
> filesystems rather than in database applications.
>
> Technically one could argue that a journaling filesystem may play this
> role, but I've seen too many cases where a Linux system does not recover
> from a snapshot-based backup.
>
> > If there is "on the fly" generated data that nobody worries about
> > being synced to disk at a particular point in time, a Linux crash
> > would certainly cause data loss for data not yet flushed to disk.
> > If anyone worries about such possible data loss, you had better talk
> > to your application provider and articulate your concern about its
> > data hardening policy for critical data.
>
> I doubt we'll get changes in ext2/ext3/reiserfs/xfs/etc. All of them
> use in-memory buffering extensively for performance reasons on the
> Intel side, and we're not going to change that on 390.
>
> > If the customer uses GDPS/XRC for data replication of the Linux
> > environment (VM doesn't time stamp its data, though it honours Linux
> > writing timestamps), or alternatively GDPS/PPRC with or without
> > HyperSwap for both Linux and z/VM, they probably get as close as
> > possible to minimizing any critical data loss. And in the case of
> > transactional data you typically also have full data consistency
> > assurance.
>
> For transactional applications, I might agree. However, looking at the
> dozen or so running Linux instances I have easy access to here, I have
> approximately 100-200M of unwritten data in storage, with no guarantee
> that it has been flushed to disk. The only way to be *certain* that
> that data is committed to a recoverable backup is to run a client
> inside those Linux machines, so that the backup application is aware of
> the *actual* state of the data and represents it correctly in the
> backup.
>
> Another good reason for IBM SSD to publish the control interfaces...8-)
>
> > Therewith, depending on your data loss objective in a disaster
> > scenario, either a backup like the one TSM usually delivers might be
> > good enough, or a more sophisticated concurrent data replication DR
> > setup like the one delivered by GDPS/XRC or GDPS/PPRC is mandatory -
> > provided you have the opportunity to integrate Linux into the z/OS BR
> > setup that GDPS provides support for.
>
> I'm not saying that GDPS is a bad idea -- if your applications can take
> advantage of it, great. I simply maintain that it does not produce
> reliable backups for a large class of users.
>
> >
> > Best regards,
> > Ingo
> >
> > --
> > Ingo Adlung,
> > zSeries Linux and Virtualization Architecture
> > mail: [EMAIL PROTECTED] - phone: +49-7031-16-4263
> >
> > Linux on 390 Port <LINUX-390@VM.MARIST.EDU> wrote on
> > 04.05.2005 16:36:30:
> >
> > > > I was thinking to do a "Sync and Drop" process on the Linux
> > > > volumes; the LPAR would remain up during this process. Has
> > > > anyone else already looked at this?
> > >
> > > You will not get a usable backup. This process does not take into
> > > account the data cached in memory.
> > >
> > > > Any ideas what the change would be if z/VM was in place?
> > >
> > > You still can't use Sync and Drop reliably. You need a backup
> > > solution that is aware of what's going on inside the Linux system,
> > > e.g. something with a Linux client like Bacula or TSM.
> > >
> > >
>

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
