Re: Increasing the DISKPOOL
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Jones, Eric J

> We are running TSM 5.2.2 server on AIX 5.2. We have always had the same
> size diskpool from the day it was built. We now have the need to
> increase the diskpool. How do we increase the current diskpool (110GB to
> 220GB)? We have additional space on the drive to do the increase.

Run HELP DEFINE VOLUME from the command line.

--
Mark Stapleton ([EMAIL PROTECTED])
Senior TSM consultant
Re: Increasing the DISKPOOL
Jones, Eric J wrote:
> Good Afternoon.
> We are running TSM 5.2.2 server on AIX 5.2. We have always had the same
> size diskpool from the day it was built. We now have the need to
> increase the diskpool. How do we increase the current diskpool (110GB to
> 220GB)? We have additional space on the drive to do the increase.

You should do something like this:

def vol stgpool-name path-to-new-volume formatsize=11

--
-- Skylar Thompson ([EMAIL PROTECTED])
-- Genome Sciences Department, System Administrator
-- Foege Building S048, (206)-685-7354
-- University of Washington School of Medicine
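Spelled out a bit more (the pool name DISKPOOL is from the thread, but the volume path and size below are hypothetical stand-ins): to go from 110GB to 220GB you can add a second 110GB volume to the pool and then verify, e.g. from an administrative client session:

```
* Check the pool and its current volumes first
query stgpool DISKPOOL f=d
query volume stgpool=DISKPOOL

* Add a 110 GB volume; FORMATSIZE is given in megabytes
define volume DISKPOOL /tsm/diskpool/vol02.dsm formatsize=112640

* Confirm the new capacity
query stgpool DISKPOOL
```

Defining the volume starts a background format process; QUERY PROCESS will show its progress.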
Increasing the DISKPOOL
Good Afternoon.

We are running TSM 5.2.2 server on AIX 5.2. We have always had the same size diskpool from the day it was built. We now have the need to increase the diskpool. How do we increase the current diskpool (110GB to 220GB)? We have additional space on the drive to do the increase.

Thanks,
Eric Jones
PLATFORM AND SERVER SOLUTIONS
1701 North Street, Building 976
Endicott, NY 13760 MD 0800

Never let the fear of striking out prevent you from playing the game.
Re: Collocation groups and offsite reclamation
>> On Thu, 14 Dec 2006 10:01:26 -0500, Thomas Denier <[EMAIL PROTECTED]> said:

> There were just the two tape pools mentioned in my original posting.
> Copy pool volumes were sent to an offsite vault. Primary pool
> volumes remained onsite.

I believe what TSM does is order the primary mounts by which volumes have the most data to be moved in the current reclamation process.

- Allen S. Rout
Re: ADSM on z/OS Mainframe Tape Handling
You have to update volsers that have a status of empty to readwrite in order for them to return to the storage pool. Try running the following command in your daily processing to send the empty carts to the scratch pool. Fill in the x's with the storage pool name of the carts.

UPDATE VOL * ACCESS=READWRITE WHERESTGPOOL='' WHERESTATUS='EMPTY'

Shannon Bach
Madison Gas & Electric Co

Werner Nussbaumer <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager"
12/14/2006 09:44 AM
Please respond to "ADSM: Dist Stor Manager"
To: ADSM-L@VM.MARIST.EDU
Subject: ADSM on z/OS Mainframe Tape Handling

Hi list,

We have an installed TSM Storage Manager Server on a z/OS mainframe. We have defined a tape pool with a maximum of 200 tapes which can be taken from a scratch pool. If TSM takes a scratch tape, it is automatically put on RMM with the owner "ADSM". The problem is that these tapes, once they are empty, are never returned to the scratch pool. What must be done on the TSM server so that it empties the tapes? TSM has 124 tapes used in the defined TSM tape pool. However, in RMM there are 277 tapes defined as master for TSM.

1) What must be done so that the tapes which are no longer in the TSM tape pool but are still in RMM are returned to the scratch pool?

2) In the Integrated Solutions Console under "Servers" -> "Libraries for All Servers" -> "Device Classes for ADSM" -> "TAPEPOOL Properties (ADSM)" there are the options:
- "File retention period": is it counted from the creation date or from the date the tape became empty?
- "Tape expiration date (ddd)"
What is the meaning of these two parameters?

Thanks for any help.

Regards,
Werner Nussbaumer
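As a concrete sketch, with a hypothetical pool name (OFFTAPE) filled in for the x's, plus a check before releasing anything:

```
* See which volumes are empty but still owned by the pool
query volume stgpool=OFFTAPE status=empty

* Release them so they can return to scratch
UPDATE VOLUME * ACCESS=READWRITE WHERESTGPOOL=OFFTAPE WHERESTATUS=EMPTY
```

Running the query first lets you confirm how many carts the update will touch before it goes into your daily processing.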
Re: ADSM on z/OS Mainfram
Top of message >>--> 12-14-06 08:03 S.SHEPPARD (SHS) RE: ADSM on z/OS Mainfram

You need to specify the RMM deletion exit in your TSM server options file:

DELetionexit EDGTVEXT

The server will have to be restarted.

Sam Sheppard
San Diego Data Processing Corp
(858)-581-9668

Date: Thu, 14 Dec 2006 16:17:04 +0100
From: "Werner Nussbaumer" <[EMAIL PROTECTED]>
Subject: [ADSM-L] ADSM on z/OS Mainframe Tape Handling
To: ADSM-L@VM.MARIST.EDU
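For reference, a sketch of what the relevant entry in the z/OS server options file looks like (comment wording is mine; restart the server after adding it):

```
* dsmserv options: invoke the DFSMSrmm volume deletion exit so that
* volumes TSM deletes are handed back to RMM for return to scratch
DELETIONEXIT EDGTVEXT
```

Without this exit, TSM deletes the volume from its own database but RMM never hears about it, which is exactly the 124-vs-277 tape discrepancy described in the original posting.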
Performance with move data and LTO3
Hi,

I wonder what transfer rates (move data from drive to drive) I am supposed to see with LTO3. I have two TSM servers, one 32-bit Win2k3 and one 64-bit 2.6.9-11.ELsmp, with an SL500 and FC LTO3 drives. Similar HW (HP DL585) except one server has HP drives and the other has IBM drives. Drives are on separate PCI busses.

I used a dataset of 50GB with large files, same file type on both systems. Only scratch tapes and no expiration on the datasets. No other tape activity on the systems during the tests. I tested disk->mt0->mt1->mt2->mt3->mt1->mt0->disk.

From disk to tape (migration) I get a throughput of 74-76MB/s with IBM drives.
From tape to tape (move data) I get a throughput of 30-46MB/s with HP drives and 39-59MB/s with IBM drives.
From disk to tape (move data) with IBM drives I get a throughput of 44MB/s.

Apparently write speed seems OK but read speed is an issue?! Or is this normal?

Thanks
Henrik
ADSM on z/OS Mainframe Tape Handling
Hi list,

We have an installed TSM Storage Manager Server on a z/OS mainframe. We have defined a tape pool with a maximum of 200 tapes which can be taken from a scratch pool. If TSM takes a scratch tape, it is automatically put on RMM with the owner "ADSM". The problem is that these tapes, once they are empty, are never returned to the scratch pool. What must be done on the TSM server so that it empties the tapes? TSM has 124 tapes used in the defined TSM tape pool. However, in RMM there are 277 tapes defined as master for TSM.

1) What must be done so that the tapes which are no longer in the TSM tape pool but are still in RMM are returned to the scratch pool?

2) In the Integrated Solutions Console under "Servers" -> "Libraries for All Servers" -> "Device Classes for ADSM" -> "TAPEPOOL Properties (ADSM)" there are the options:
- "File retention period": is it counted from the creation date or from the date the tape became empty?
- "Tape expiration date (ddd)"
What is the meaning of these two parameters?

Thanks for any help.

Regards,
Werner Nussbaumer
Re: Collocation groups and offsite reclamation
Allen S. Rout wrote:
>>> On Wed, 13 Dec 2006 13:55:51 -0500, Thomas Denier
>
>> My site once had a primary tape pool and a copy tape pool used for
>> some of our larger clients, with both of these pools collocated by
>> node. Offsite reclamation was a painful experience.
>
> Can you describe your config in slightly greater detail? You say
> 'offsite reclamation', but I'm not clear where the offsite happened;
> if the copy pool was offsite, that's one thing. If the offsite pool
> was yet-another pool, that's different.

There were just the two tape pools mentioned in my original posting. Copy pool volumes were sent to an offsite vault. Primary pool volumes remained onsite.
Re: Collocation groups and offsite reclamation
>> On Wed, 13 Dec 2006 13:55:51 -0500, Thomas Denier <[EMAIL PROTECTED]> said:

> My site once had a primary tape pool and a copy tape pool used for
> some of our larger clients, with both of these pools collocated by
> node. Offsite reclamation was a painful experience.

Can you describe your config in slightly greater detail? You say 'offsite reclamation', but I'm not clear where the offsite happened; if the copy pool was offsite, that's one thing. If the offsite pool was yet-another pool, that's different.

- Allen S. Rout
Re: ANR999D shmcomm.c(1826) error
Thanks Richard!

On further investigation: the TSM server's client, which backs up to our other TSM server, was backing up its storage pool volumes which I had just created. I have since excluded those filesystems from its backup and the memory has freed up.

Joni Moyer
Highmark Storage Systems, Senior Systems Programmer
Phone Number: (717)302-9966
Fax: (717) 302-9826
[EMAIL PROTECTED]

"Richard Sims" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager"
12/14/2006 08:10 AM
Please respond to "ADSM: Dist Stor Manager"
To: ADSM-L@vm.marist.edu
Subject: Re: ANR999D shmcomm.c(1826) error

Sounds like a client that is co-resident in the TSM server system, using the Shared Memory communications method, where the client is at a very different level than the server, so the server doesn't understand what the client is saying; else there may be problems in the shared memory area. I would start by looking for system changes which may have been effected, which precipitated the problem condition, such as the TSM client being boosted to a high 5.3 level relative to the 5.2 server.

Richard Sims
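For anyone else who hits this loop, the exclusion can be expressed in the client options; the mount points below are hypothetical stand-ins for wherever your storage pool volumes actually live:

```
* dsm.sys client stanza (AIX): keep the server's own storage pool
* volumes out of the co-resident client's backup
EXCLUDE.FS /tsmstg/diskpool
EXCLUDE.FS /tsmstg/filepool
```

EXCLUDE.FS drops the whole filesystem from incremental processing, which is cleaner here than per-file EXCLUDE patterns.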
Re: dsmserv.lock and adsmserv.lock empty
On Dec 14, 2006, at 8:38 AM, Bernaldo de Quiros, Iban 1 wrote:

> Hi Richard,
> It is a TSM server that has two instances.

Ah, the previously undisclosed vital information! :-) I gather that you mean two TSM server instances in one computing system instance.

It sounds like you are encountering collisions due to configuration values. Make sure you are following the instructions in your Solaris 5.2 Quick Start manual, topic "Running Multiple Servers on a Single Machine": rigorous procedures are necessary when doing this. As noted in ADSM QuickFacts topic "Servers, multiple, on one machine", there is IBM Technote 1052631 which summarizes the needed details.

You can use 'lsof' to verify what files each server process is going after.

Richard Sims
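A quick sketch of that lsof check (the PID placeholders are hypothetical; substitute the ones ps reports):

```
# Find the two dsmserv processes; the [d] trick keeps grep itself
# out of the match
ps -ef | grep '[d]smserv'

# For each PID, list the files it holds open; the two instances
# should show disjoint database, log, and lock files
lsof -p <pid1>
lsof -p <pid2>
```

If both processes show the same dsmserv.lock or database paths, the instances are colliding on one configuration directory.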
Re: dsmserv.lock and adsmserv.lock empty
Hi Richard,

It is a TSM server that has two instances. Looking at the installation path we found the files dsmserv.lock and adsmserv.lock, but the size is 0 and the content is empty while TSM is running. The same is happening with the second instance.

One curious thing: if we try to launch another TSM from a path where another TSM already exists (and is up), it won't let you, because the files adsmserv.lock and dsmserv.lock exist -> but the content is empty and the file size is 0. The FS has enough free space.

On another note, what do you think about stopping TSM with a kill -9 script or a "dsmadmc -se= -id=admin -pa=admin halt" script -> with this script someone could find the admin password... :-(((

Any comments will be appreciated!!

Thanks!!
Ibán

-----Original Message-----
From: ADSM: Dist Stor Manager on behalf of Richard Sims
Sent: Thu 14/12/2006 13:34
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] dsmserv.lock and adsmserv.lock empty

Your posting doesn't say where you looked for the lock file. In Solaris, it is documented as being in /opt/tivoli/tsm/server/bin/, as dsmserv.lock. Note that a zero-length file can be a consequence of a full file system at the time of attempted file creation, so check that. Also check the mtime timestamp on your lock files relative to the time that your server started, to corroborate.

Richard Sims
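On the password-in-script worry: one common workaround (a sketch; the server stanza name, admin id, and file path here are hypothetical) is to keep the credentials in a root-only file rather than hard-coded in the script itself:

```
# One-time setup: password file readable by root only
umask 077
printf '%s\n' 'the-admin-password' > /usr/local/etc/tsm_halt.pw
chmod 600 /usr/local/etc/tsm_halt.pw

# Stop script: no password literal stored in the script
dsmadmc -se=tsm1 -id=admin -pa="$(cat /usr/local/etc/tsm_halt.pw)" halt
```

Note the expanded value still appears briefly in the dsmadmc process arguments, so a dedicated admin ID with only the privilege needed to halt is also worth considering.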
Re: ANR999D shmcomm.c(1826) error
Sounds like a client that is co-resident in the TSM server system, using the Shared Memory communications method, where the client is at a very different level than the server, so the server doesn't understand what the client is saying; else there may be problems in the shared memory area.

I would start by looking for system changes which may have been effected, which precipitated the problem condition, such as the TSM client being boosted to a high 5.3 level relative to the 5.2 server.

Richard Sims
ANR999D shmcomm.c(1826) error
Hello All,

Has anyone ever seen this type of error in the activity log? And if so, would you happen to know what the issue is? I've looked up this message on IBM as well as Google and I just don't seem to be finding it anywhere. Any help is appreciated. Thanks! This is on a TSM 5.2.7.1 AIX 5.3 server.

12/14/06 03:30:42 ANRD shmcomm.c(1826): ThreadId<173> shm_init: Invalid data received on socket...2155479299
                  Callchain of previous message:
                  0x000100017e90 outDiagf <- 0x0001005b40dc shm_init <- 0x0001005b4f5c SessionThread <- 0x000180c4 StartThread <- 0x093ff2f8 _pthread_body <-
12/14/06 03:30:44 ANR0440W Protocol error on session 354112 for node () - invalid verb header received. (SESSION: 354112)
12/14/06 03:30:47 ANR0440W Protocol error on session 354116 for node () - invalid verb header received. (SESSION: 354116)

Joni Moyer
Highmark Storage Systems, Senior Systems Programmer
Phone Number: (717)302-9966
Fax: (717) 302-9826
[EMAIL PROTECTED]
Re: dsmserv.lock and adsmserv.lock empty
Your posting doesn't say where you looked for the lock file. In Solaris, it is documented as being in /opt/tivoli/tsm/server/bin/, as dsmserv.lock. Note that a zero-length file can be a consequence of a full file system at the time of attempted file creation, so check that. Also check the mtime timestamp on your lock files relative to the time that your server started, to corroborate. Richard Sims
AW: [ADSM-L] AW: [ADSM-L] Fw: JBB Question(s)
Sorry, but I don't know. We are also interested where to get dbviewb. We've got our copy in case of a Service Request ...

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Larry Peifer
Sent: Wednesday, December 13, 2006 19:46
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] AW: [ADSM-L] Fw: JBB Question(s)

Where can I find the tool dbviewb for JBB 5.3? I'd like to check our journal states with it.

Otto Chvosta <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager"
12/13/2006 07:13 AM
Please respond to "ADSM: Dist Stor Manager"
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] AW: [ADSM-L] Fw: JBB Question(s)

Hi again,

First, thanks for all of your advice in this case.

In my department the preferred server operating system is AIX, but unfortunately I have no influence on the choice of OS at our client servers :-(

I also heard about the new disk caching mechanism announced for TSM 5.4 and hope that it is a possible solution for this problem ... But first I'd like to find a solution in 5.3 ...

We did a successful incremental with 'MEMORYEFFICIENTBACKUP YES' last night (processing time ~18 hours). MemEff seems to be the right way to avoid ANS1030E, but dbviewb still reports an invalid journal state. So my primary problem is to get the journal state valid.
dsmserv.lock and adsmserv.lock empty
Hi all,

I am trying to create some stop scripts for my TSM servers. I have found in the documentation, and from other people, that by taking a look at the files dsmserv.lock and adsmserv.lock we can find the actual process ID to kill -9 to stop TSM. But my lock files are empty, with no content and 0 size...

Does anyone know anything about this subject? Any help will be appreciated!!

Details: TSM Server 5.2.6.3 on Solaris 9

Thanks in advance,
Ibán.
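For what it's worth, a stop script can be written to tolerate an empty lock file; this is only a sketch (the lock-file path is the documented Solaris default, and the ps fallback is an assumption, not documented behavior):

```shell
#!/bin/sh
# Sketch of a PID lookup for a TSM stop script. If dsmserv.lock has
# content, its first field is taken as the PID; if the lock file is
# empty (as reported in this thread), fall back to the process table.
find_dsmserv_pid() {
    lockfile="$1"
    if [ -s "$lockfile" ]; then
        # Lock file holds the PID on its first line
        awk 'NR==1 { print $1; exit }' "$lockfile"
    else
        # Empty lock file: scan the process table instead; the [d]
        # pattern keeps this pipeline out of its own match
        ps -ef | awk '/[d]smserv/ { print $2; exit }'
    fi
}

# Example use (path per the Solaris documentation; prefer a clean
# HALT from dsmadmc, and keep kill -9 as a last resort):
# pid=$(find_dsmserv_pid /opt/tivoli/tsm/server/bin/dsmserv.lock)
# [ -n "$pid" ] && kill "$pid"
```

With two instances on one box, point the function at each instance's own lock file so you never kill the wrong server.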
AW: [ADSM-L] AW: [ADSM-L] Fw: JBB Question(s)
Hi again,

Yes, i did! That's the reason why i sent the output of dbviewb and q fi: to show that the journal was activated before the incremental forever (with option memoryefficientbackup yes) started/finished. Any suggestion why there is no validation after that?

TIA, Otto

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Laurent Bendavid
Sent: Wednesday, December 13, 2006 20:06
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] AW: [ADSM-L] Fw: JBB Question(s)

Otto Chvosta wrote:
> Hi again,
>
> First, thanks for all of your advice in this case.
>
> In my department the preferred server operating system is AIX but
> unfortunately I have no influence on the choice of OS at our client
> servers :-(
>
> I also heard about the new disk caching mechanism announced for TSM
> 5.4 and hope that it is a possible solution for this problem ...
>
> But first I'd like to find a solution in 5.3 ...
>
> We did a successful incremental with 'MEMORYEFFICIENTBACKUP YES' last
> night (processing time ~18 hours). MemEff seems to be the right way
> to avoid ANS1030E, but dbviewb still reports an invalid journal state.
> So my primary problem is to get the journal state valid.
> > Journal Database Information:
> >
> > Database File                 q:\tsm_journal_db\tsmQ__.jdb.jbbdb
> > Database File Disk Size       674 Bytes
> > Journal File System           Q:
> > Journal Activation Date       Tue Dec 12 16:45:39 2006
> > Journal Validation Date       (not set)
> > Maximum Journal Size          8191 PB (9223372036854775807 Bytes)
> > Journal Type                  Change Journal
> > Journal State                 Not Valid
> > Valid for Server              (not set)
> > Valid for Node                (not set)
> > Number of DB Entries          0
> >
> > Filespace:
> > Filespace Name:                    \\fileserver\q$
> > Hexadecimal Filespace Name:        5c5c6673316c756265635c7124
> > FSID:                              8
> > Platform:                          WinNT
> > Filespace Type:                    NTFS
> > Is Filespace Unicode?:             Yes
> > Capacity (MB):                     1,907,538.3
> > Pct Util:                          35.8
> > Last Backup Start Date/Time:       12/12/2006 16:55:08
> > Days Since Last Backup Started:    1
> > Last Backup Completion Date/Time:  12/13/2006 11:25:35
> > Days Since Last Backup Completed:  <1
> >
> > Any suggestion?
> >
> > Thanks in advance
> >
> > Otto
> >
> > -----Original Message-----
> > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Pete Tanenhaus
> > Sent: Tuesday, December 12, 2006 21:09
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: [ADSM-L] Fw: JBB Question(s)
> >
> > Actually this isn't really true.
> >
> > The process address space on any 32-bit operating system (Windows or
> > not) is limited to 4 gigabytes for obvious reasons, and the amount of
> > usable virtual memory depends on the specific platform (for Windows it
> > is 2 gig, on some Unix platforms it is as little as 1 gig).
> >
> > Since there is a finite amount of addressable virtual memory, a TSM
> > incremental backup is limited in the number of files which can be
> > processed.
> >
> > TSM 5.4 is introducing a new variation of incremental backup which
> > utilizes disk caching, which should allow backing up file systems of
> > any size at the expense of being somewhat slower and requiring disk
> > space for the cache file.
> >
> > This new backup method should allow the initial backup to complete,
> > which is required to enable journal backup.
> >
> > Hope this helps.
> > Pete Tanenhaus
> > Tivoli Storage Manager Client Development
> > email: [EMAIL PROTECTED]
> > tieline: 320.8778, external: 607.754.4213
> >
> > "Those who refuse to challenge authority are condemned to conform to it"
> >
> > ---------- Forwarded by Pete Tanenhaus/San Jose/IBM on 12/12/2006 03:02 PM ----------
> > Please respond to "ADSM: Dist Stor Manager"
> > Sent by: "ADSM: Dist Stor Manager"
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: JBB Question(s)
> >
> > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Otto Chvosta
> >
> >> After that dbviewb reports that the journal state is 'not valid'. So
> >> we tried a further incremental backup (scheduled) to get a valid state
> >> of the journal database.
> >> This incremental was stopped with
> >>
> >> ANS1999E Incremental processing of '\\fileserver\q$' stopped.
> >> ANS1030E The operating system refused a TSM request for memory
> >> allocation.
> >>
> >> We tried it again and again ... same result :-(((
> >
> > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Schaub, Steve
> >
> >> Add this to the dsm.opt file and run the incremental again:
> >> * ==================================================== *
> >> * Reduce memory usage by processing a directory at a time (slower) *
> >> * ==================================================== *
> >> memoryefficientbackup yes
> >
> > Lar
Re: lanfree / storage agent idea or whateva :-)
On the TSM server you define the paths of system A, the storage-agent system. System B will then send the backup data via the LAN to the storage agent on system A (as specified with "LANFREETCPServeraddress"). From system A the data will be sent over the SAN to a tape drive.

This mechanism with a remote storage agent is nice when you have a fast internal/virtual network, like in IBM's P570s. You only need one system with an HBA to tape, while the other LPARs in the system use the internal network to send the data to the system with the HBA. I played with it and got a reasonable performance on a P570 when using a virtual internal network with a high MTU size.

Jeroen

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of goc
Sent: Wednesday, December 13, 2006 2:48 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] lanfree / storage agent idea or whateva :-)

hi all,

i've been thinking ... is one storage agent enough to serve all lanfree clients we wish to back up?

for example: on server A, which is the STA, i have dsmsta running with the following configuration:

[EMAIL PROTECTED]:/opt/tivoli/tsm/StorageAgent/bin # more dev*
set staname STA_2
set stapassword password
set stahladdress sta_2.domain.hr
set stalladdress 1511
define server MAIN_TSM serverpassword=password hladdress=main_tsm.domain.hr lladdress=1500

[EMAIL PROTECTED]:/opt/tivoli/tsm/StorageAgent/bin # more dsmsta.opt
SERVERNAME MAIN_TSM
DEVCONFIG devconfig.out
COMMmethod TCPIP
TCPPort 1511

this server A sees the LTO drives as local drives /dev/rmt*, hard linked as per the manual

on the client (server B) in dsm.sys i have:
...
ENABLELANFREE yes
COMMmethod TCPip
LANFREETCPPort 1511
LANFREECOMMMETHOD TCPip
LANFREETCPServeraddress sta_2.domain.hr
TCPPort 1500
TCPServeraddress main_tsm.domain.hr
...
now my question is: when i define the /dev/rmt drives, must i do it on the STA server (A), which sees the drives physically, hard-linked to the /dev directory exactly as seen by the MAIN_TSM server, OR can i place dummy /dev/rmt* entries on the client server, since it's using information from the STA server which sees the drives (rmt's) for real?

how do you manage it? i got a feeling after some implementation that an STA must be installed and configured on every server which has a BA client ready to backup/archive on it ... (this is maybe even true). it would be nice if 1 STA were enough and all lanfree BA clients could use it ...

i think i need a vacation, i talk too much. thanks.

my new record :-)
---
Total number of objects inspected:    253
Total number of objects backed up:    253
Total number of objects updated:      0
Total number of objects rebound:      0
Total number of objects deleted:      0
Total number of objects expired:      0
Total number of objects failed:       0
Total number of bytes transferred:    2.96 TB
LanFree data bytes:                   2.96 TB
Data transfer time:                   3,948.46 sec
Network data transfer rate:           805,348.31 KB/sec
Aggregate data transfer rate:         110,039.74 KB/sec
Objects compressed by:                0%
Elapsed processing time:              08:01:37

TSM 5.3.4 AIX 5.2 LTO3 bla bla ...