Hi Tyler:

Thanks for the documentation!  Do you feel like adding that to our Wiki so that 
other users will be able to browse/add to it?

Mike, where do you suppose this kind of material should go?  The Documentation
Wiki or the Trac Wiki?  We should probably decide soon to avoid confusion.

Cheers,

Bernard


-----Original Message-----
From: [EMAIL PROTECTED] on behalf of Tyler Cruickshank
Sent: Mon 14/08/2006 15:30
To: oscar-users@lists.sourceforge.net
Subject: Re: [Oscar-users] Oscar-users Digest, Vol 3, Issue 12
 
Filesystems/NFS:
 
Thanks for the helpful discussion, all.  I set up NFS to mount the directory
that I need.  If I weren't having other problems with the model, I could
report on how NFS is working out in my situation.  Bernard, I will have a
look at the article that you suggested.  Resources like that are always
helpful.
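
For anyone setting this up themselves, here is a bare-bones sketch of the
NFS export/mount steps.  The /shared/model path is just an example, not my
actual directory:

  # on the server node (Redrock): add a line like this to /etc/exports,
  # then re-export
  /shared/model  10.0.0.0/255.255.255.0(rw,sync)
  exportfs -ra

  # on each compute node: mount it (or add the equivalent entry to /etc/fstab)
  mount -t nfs Redrock:/shared/model /shared/model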

Speaking of resources, I bought O'Reilly's High Performance Linux Clusters
(2005) this weekend.  It has a significant section on OSCAR.  In general,
the book provides basic overview information with some specifics.  So far,
it has been very helpful to me.  I feel as though I have a better grasp of
what a cluster really is (all the various configurations) and what OSCAR
really is.  Germane to this email thread are its "overview" sections on NFS
and PVFS.  While I have not read the entire book, I can say that I wish I
had read it earlier.

MPICH2:

I had a specific request for details on how I built MPICH2 with the PGI
compilers for the cluster, so I have pasted my installation "notes" file at
the bottom of this message (I attached a file, but the message bounced).
The notes follow the MPICH2 installation guide and also include my other
random install notes.  So far I have tested the build using the MPICH2
installation guide tests, but I have not yet tested it with my intended
application.  When I do, I will try to write up a clean, clear "guide".
 
Basically, I got the compiler options through communication with PGI.  They
have not released an official document on it yet, but they sent me what
they use.  I built and compiled MPICH2 on my server node, then tarred up
the directory and used the C3 tool 'cpush' to move it over to the nodes.
This was easy and worked quickly.  On the nodes, I untarred it in the same
location as on the server node (using ssh to work on the nodes).  I then
used the MPICH2 docs to start the daemon on the server and the nodes, and
followed the very detailed MPICH2 guide to complete and test the setup.

More details are included in my notes file at the bottom of this message.
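
In outline, the build-and-push flow was roughly the following (the full
configure line with all the PGI flags is in Step 5 of the notes; the tar-up
command is my reconstruction, since I didn't record it exactly):

  # on the server node: configure, build, and install MPICH2 with the PGI compilers
  cd /temp/build-mpich2
  /temp/mpich2-1.0.4/configure -prefix=/opt/mpich2-pgi-1.0.4   # plus the PGI env vars/flags
  make
  make install

  # tar up the install tree, push it out with C3's cpush, and untar it in
  # the same location on each node
  cd /opt
  tar -cf mpich2.tar mpich2-pgi-1.0.4
  cpush mpich2.tar /opt
  # then, on each node:
  tar -xf /opt/mpich2.tar -C /opt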
  
I will still try to write up a better set of "how-tos" once I have the
whole thing (including my model) working.
    
-ty

#---------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------

MPICH2 Installation Notes.  Command-line input and resultant output are
included in this file.
 
#----------------------------------------------------
August 1, 2006 16:34
#----------------------------------------------------
 
Downloaded MPICH2 to: /temp/mpich2-1.0.4
Created installation directory: /opt/mpich2-pgi-1.0.4
Created build directory: /temp/build-mpich2
 
Used the PGI documentation in combination with the July 30, 2006 MPICH2
installation documentation to come up with the following command, which
executes the configure script:
-------------------------------------------------
 
Step 5: -------------------------------------------------
 
[EMAIL PROTECTED] /temp/build-mpich2]$ env CFLAGS="-fast -fpic" CXXFLAGS="-fast -fpic" \
FFLAGS="-fast -fpic" F90FLAGS="-fast -fpic" LDFLAGS="-fast -fpic" OPTFLAGS="-fast -fpic" \
CC="pgcc" CXX="pgCC" F90="pgf90" FC="pgf77" CPP="pgCC -E" \
/temp/mpich2-1.0.4/configure -prefix=/opt/mpich2-pgi-1.0.4 |& tee configure-8-8-06-8:15.log
Where \ indicates command-line continuation.
**The above represents the latest from PGI for configuring MPICH2.  See the
PGI email to Tyler.
 
Step 6. Now make: -------------------------------------------------
 
[EMAIL PROTECTED] /temp/build-mpich2/lib]$ make |& tee make-8-8-06-08:25.log
 
Step 7. Now make install:
-------------------------------------------------
 
[EMAIL PROTECTED] /temp/build-mpich2]$ make install |& tee install-8-8-06-08:36.log
 
To remove installation:
-------------------------------------------------
 
/opt/mpich2-pgi-1.0.4/sbin/mpeuninstall may be used to remove the
installation
 
Step 8. Add to PATH: -------------------------------------------------
 
I did this in my .cshrc.
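
Roughly, the line in ~/.cshrc looks something like this (adjust the prefix
if you installed elsewhere):

  set path = ( /opt/mpich2-pgi-1.0.4/bin $path )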
 
Step 9. Set up MPD: -------------------------------------------------
 
[EMAIL PROTECTED] ~]$ cd /home/tcruicks
[EMAIL PROTECTED] ~]$ touch .mpd.conf
[EMAIL PROTECTED] ~]$ chmod 600 .mpd.conf
[EMAIL PROTECTED] ~]$ vi .mpd.conf
    Where .mpd.conf contains a single line:  secretword=secretsofoscar
*Note, /home is mounted on the nodes so this file is automatically
present on each node.
 
Step 9b.  See Step cpush below:
---------------------------------------------
 
Step 10. Bring MPD Ring Up:
-------------------------------------------------
 
[EMAIL PROTECTED] ~]$ mpd &
[1] 8634
[EMAIL PROTECTED] ~]$ mpdtrace
Redrock
[EMAIL PROTECTED] ~]$ mpdallexit
[1]  + Done                          mpd
 
Step 11. Test MPD Ring:
-------------------------------------------------
 
[EMAIL PROTECTED] ~]$ mpd &
[1] 8656
[EMAIL PROTECTED] ~]$ mpiexec -n 1 /bin/hostname
Redrock
[EMAIL PROTECTED] ~]$ mpdallexit
[1]  + Done                          mpd
 
Step 12. Communicate with Nodes?:
-------------------------------------------------
 
First, I created the file mpd.hosts in my home directory.  The file
contains 2 lines:
aqoscarnode1
aqoscarnode2
 
[EMAIL PROTECTED] ~]$ ssh aqoscarnode1 date
Warning: Permanently added 'aqoscarnode1,10.0.0.10' (RSA) to the list
of known hosts.
Warning: No xauth data; using fake authentication data for X11
forwarding.
Tue Aug  8 08:48:56 MDT 2006
[EMAIL PROTECTED] ~]$ ssh aqoscarnode2 date
Warning: Permanently added 'aqoscarnode2,10.0.0.11' (RSA) to the list
of known hosts.
Warning: No xauth data; using fake authentication data for X11
forwarding.
Tue Aug  8 08:51:26 MDT 2006
 
Step 13. Start daemons on nodes:
-------------------------------------------------
 
[EMAIL PROTECTED] ~]$ mpdboot -n 1 -f mpd.hosts
[C:[EMAIL PROTECTED]:~]>mpdtrace
Redrock
So, the nodes didn't show.  I had to do a workaround.
[C:[EMAIL PROTECTED]:~]>mpd &
[C:[EMAIL PROTECTED]:~]>mpdtrace -l
Redrock_43028 (10.0.0.2)
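
Looking back, I suspect the underlying issue is that mpdboot's -n argument
is the total number of mpds to start, including the local one, so with the
head node plus the two nodes listed in mpd.hosts, something like the
following would probably have brought the whole ring up in one step:

  mpdboot -n 3 -f mpd.hosts
  mpdtrace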
 
Step cpush (C3): -------------------------------------------
I ssh'ed into the nodes and tried to run the mpd command.  Well, the nodes
don't have the new mpich2 directory.  I need to use cpush to sync up the
nodes with the server.  First, I tarred the mpich2 directory and then used
cpush to get it to the nodes:
[EMAIL PROTECTED] /opt]$ cpush mpich2.tar /opt
building file list ... done
mpich2.tar
building file list ... done
mpich2.tar
 
sent 1815481 bytes  received 40 bytes  1210347.33 bytes/sec
total size is 7823360  speedup is 4.31
 
sent 1815481 bytes  received 40 bytes  1210347.33 bytes/sec
total size is 7823360  speedup is 4.31
 
[EMAIL PROTECTED] /opt]$ ssh aqoscarnode1
 
Now, I'll need to untar them over there (on both nodes).
[EMAIL PROTECTED] /opt]$ ssh aqoscarnode2
Last login: Mon Jul 10 11:33:52 2006
[EMAIL PROTECTED] ~]# cd /opt
[EMAIL PROTECTED] opt]# ls
c3-4  env-switcher  lam-7.0.6  lam-switcher-modulefile-7.0.6  modules 
mpich-ch_p4-gcc-1.2.7  mpich2.tar  pbs  pvm3
[EMAIL PROTECTED] opt]# tar -xvf mpich2.tar -C /opt
Done.
 
[EMAIL PROTECTED] temp]# cpush build-mpich2.tar /temp
building file list ... done
build-mpich2.tar
building file list ... done
build-mpich2.tar
 
sent 2805879 bytes  received 40 bytes  1122367.60 bytes/sec
total size is 15790080  speedup is 5.63
 
sent 2805879 bytes  received 40 bytes  801691.14 bytes/sec
total size is 15790080  speedup is 5.63
[EMAIL PROTECTED] temp]# tar -xvf build-mpich2.tar -C /temp
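
As an aside, C3's cexec could probably have done the untarring on every
node in one shot, instead of ssh'ing into each node, along the lines of:

  cexec tar -xf /opt/mpich2.tar -C /opt
  cexec tar -xf /temp/build-mpich2.tar -C /temp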
 
[C:[EMAIL PROTECTED]:~]>mpdboot -n 1 -f mpd.hosts
[C:[EMAIL PROTECTED]:~]>mpdtrace
aqoscarnode1
 
[C:[EMAIL PROTECTED]:~]>mpdboot -n 1 -f mpd.hosts
[C:[EMAIL PROTECTED]:~]>mpdtrace
aqoscarnode2
 
Step 14. --------------------------------------------
 
[C:[EMAIL PROTECTED]:~]>mpdtrace
Redrock
aqoscarnode2
aqoscarnode1
[C:[EMAIL PROTECTED]:~]>mpdringtest
time for 1 loops = 0.00134992599487 seconds
 
Step 15. -------------------------------------------------
 
[C:[EMAIL PROTECTED]:~]>mpiexec -l -n 30 hostname
2: aqoscarnode1.aqoscardomain
5: aqoscarnode1.aqoscardomain
1: aqoscarnode2.aqoscardomain
3: Redrock
8: aqoscarnode1.aqoscardomain
6: Redrock
12: Redrock
11: aqoscarnode1.aqoscardomain
4: aqoscarnode2.aqoscardomain
9: Redrock
18: Redrock
24: Redrock
0: Redrock
29: aqoscarnode1.aqoscardomain
14: aqoscarnode1.aqoscardomain
17: aqoscarnode1.aqoscardomain
7: aqoscarnode2.aqoscardomain
15: Redrock
23: aqoscarnode1.aqoscardomain
26: aqoscarnode1.aqoscardomain
25: aqoscarnode2.aqoscardomain
20: aqoscarnode1.aqoscardomain
16: aqoscarnode2.aqoscardomain
10: aqoscarnode2.aqoscardomain
19: aqoscarnode2.aqoscardomain
27: Redrock
21: Redrock
22: aqoscarnode2.aqoscardomain
13: aqoscarnode2.aqoscardomain
28: aqoscarnode2.aqoscardomain
 
Step 16. -------------------------------------------------
 
[C:[EMAIL PROTECTED]:~]>mpiexec -n 5 /temp/build-mpich2/examples/cpi
/temp/build-mpich2/examples/cpi: error while loading shared libraries:
libpgc.so: cannot open shared object file: No such file or directory
/temp/build-mpich2/examples/cpi: error while loading shared libraries:
libpgc.so: cannot open shared object file: No such file or directory
/temp/build-mpich2/examples/cpi: error while loading shared libraries:
libpgc.so: cannot open shared object file: No such file or directory
 
I think we need the PGI portability package on all machines?  It can be
downloaded from PGI for free.  Installing that now.  OK, installed the
portability package (tar file on Redrock at /temp).  I used cpush to move
the file over, then untarred it and moved the 3 resultant directories to
/usr/pgi (which I created).  I then cd'd into /usr/pgi and did the
following on both nodes:
cp ./lib-linux86-g225/*so ./
Before trying again, I added the LD_LIBRARY_PATH to my .cshrc file.
Now, try again:
 
[C:[EMAIL PROTECTED]:~]>mpiexec -n 5 /temp/build-mpich2/examples/cpi
Process 0 of 5 is on Redrock
Process 2 of 5 is on aqoscarnode1.aqoscardomain
Process 3 of 5 is on Redrock
Process 1 of 5 is on aqoscarnode2.aqoscardomain
Process 4 of 5 is on aqoscarnode2.aqoscardomain
pi is approximately 3.1415926544231230, Error is 0.0000000008333298
wall clock time = 0.043239
 
It worked.  ******The portability package needs to be on all the nodes.******
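
For reference, the .cshrc addition was along these lines (guarded in case
LD_LIBRARY_PATH is not already set; adjust the path if your PGI runtime
libraries live somewhere other than /usr/pgi):

  if ($?LD_LIBRARY_PATH) then
      setenv LD_LIBRARY_PATH /usr/pgi:${LD_LIBRARY_PATH}
  else
      setenv LD_LIBRARY_PATH /usr/pgi
  endif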
>>> <[EMAIL PROTECTED]> 8/11/2006 2:11 AM >>>


Today's Topics:

   1. MPICH2 Follow-Up (Tyler Cruickshank)
   2. Re: MPICH2 Follow-Up (Steven Blackburn)
   3. Re: MPICH2 Follow-Up (Michael Edwards)
   4. Re: MPICH2 Follow-Up (Bernard Li)


----------------------------------------------------------------------

Message: 1
Date: Thu, 10 Aug 2006 14:36:35 -0600
From: "Tyler Cruickshank" <[EMAIL PROTECTED]>
Subject: [Oscar-users] MPICH2 Follow-Up
To: <oscar-users@lists.sourceforge.net>
Message-ID: <[EMAIL PROTECTED]>
Content-Type: text/plain; charset="us-ascii"

Hello.

I have 2 items:

1) I believe that I have successfully built, installed, and pushed MPICH2
using the PGI compilers.  Once I am sure that it is working, I'll write it
up and send it on.

2) I have a question that illustrates my depth of understanding of clusters
(lack of depth).  I am trying to run a model where the compute nodes need
access to the same input/output directories and executables (perhaps this
is always the case?).  Right now, when the nodes try to do a job, they
can't access the executable that lives on the server.  How do I set the
nodes up so that they are able to access the server node directories?  I
can imagine using cpush in some way, or fully mounting the file systems?

Thanks for listening/reading.

-Tyler

------------------------------

Message: 2
Date: Thu, 10 Aug 2006 22:03:18 +0100 (BST)
From: Steven Blackburn <[EMAIL PROTECTED]>
Subject: Re: [Oscar-users] MPICH2 Follow-Up
To: oscar-users@lists.sourceforge.net
Message-ID: <[EMAIL PROTECTED]>
Content-Type: text/plain; charset=iso-8859-1

I am a novice with clusters.... but I was planning to
solve the same problem as you in one of two ways:

a) The home directories are automatically shared by
Oscar, so a user could log on to the head node, ssh to
a client node and see her directories (including any
executables she has built there). I get the impression
this is the model used if your cluster is used by lots
of people (e.g. in a commercial setting). After all, a
normal user can probably only write to their home
directory and /tmp.

b) Parallel file systems exist, such as PVFS, which
could be used to distribute a volume across several
nodes. I was considering installing PVFS on all four
nodes of my 'toy' cluster. The way I was hoping to
install this was going to end up with a file system
each node could access locally but which would be
spread across (and shared by) all nodes in the system.

Because the Oscar PVFS package is not currently
maintained, I went for using the shared home dirs. If
I get a bit more comfortable with the cluster, I might
give the package a go and see if I can fix whatever
might be broken in it.

Remember that with either option, the I/O would be
across the network, so file access might be
inefficient (i.e. reading the same file over and
over). I was thinking of copying any such files to
/tmp but, as you say, cpush might be useful here. Is
there a programmatic interface to cpush, or just
exec()?

But I am only a novice at this and could have got
entirely the wrong idea...

Steve.


------------------------------

Message: 3
Date: Thu, 10 Aug 2006 22:57:31 -0500
From: "Michael Edwards" <[EMAIL PROTECTED]>
Subject: Re: [Oscar-users] MPICH2 Follow-Up
To: oscar-users@lists.sourceforge.net
Message-ID:
    <[EMAIL PROTECTED]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Many people, in my experience, either run off their NFS-mounted home
directories or copy any needed files to some local directory (/tmp is
popular).  The first way is easy, but I have had occasional problems when
the clocks on the nodes get out of sync, because then the files on a given
node will not necessarily get updated if the copy on one of the other
nodes changes.  This shouldn't be an issue since OSCAR uses ntp to keep
the clocks in sync (the system that had this problem had ntp turned off
for some reason), but I guess it depends a bit on how often you are
hitting your files.  NFS also isn't really designed for high-bandwidth
work, I don't think.

Copying your files to the local drive is a nice solution if the files are
not extremely large (what that means exactly depends a lot on your
network).  Then you get the file-transfer overhead out of the way at the
beginning, and you are always sure of what files you have, because you put
them there yourself.  This also avoids the file-locking and write-timing
issues that can creep into code written by lazy MPI programmers like me :)

If you have very big data files, or very high I/O bandwidth for some
reason, it becomes a very difficult problem.  Very large clusters are
tricky too.
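
A rough sketch of that staging approach with the C3 tools (the file and
program names below are made up, just to show the shape of it):

  # push the input data to local /tmp on every node before the run
  cpush /home/me/input.dat /tmp
  # then point the job at the local copy
  mpiexec -n 30 /home/me/mymodel /tmp/input.dat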


------------------------------

Message: 4
Date: Fri, 11 Aug 2006 01:10:41 -0700
From: "Bernard Li" <[EMAIL PROTECTED]>
Subject: Re: [Oscar-users] MPICH2 Follow-Up
To: <oscar-users@lists.sourceforge.net>
Message-ID:
    <[EMAIL PROTECTED]>
Content-Type: text/plain; charset="iso-8859-1"

Hi guys:

I will add to this thread by pointing you all to this article available
at ClusterMonkey:

http://www.clustermonkey.net//content/view/142/32/

Cheers,

Bernard





End of Oscar-users Digest, Vol 3, Issue 12
******************************************

