Not directly. SAS depends on read/write/modify access to a conventional file system. Hadoop does not by default expose such a thing.
There are two well-understood solutions:

- The first is to copy data products back and forth between the Hadoop cluster and the individual SAS workstations.
- The other is to use vendor software such as MapR, which provides uniform access to all clustered files via NFS as well as the normal HDFS API.

It may also be possible to use a commercial NFS-based filer that is mounted on every node of your cluster and then use a local: prefix on your output paths. This can be very dangerous, since a decent-size cluster can probably take out most filers if a large number of nodes all write significant amounts of data at the same time.

On Wed, Nov 23, 2011 at 1:43 PM, Saurabh Agrawal <[email protected]> wrote:

> Can a SAS based system be integrated with a Hadoop platform?
>
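The copy-based approach described above can be sketched with the standard Hadoop file system shell; the paths, filenames, and the `/mnt/sas_staging` mount point below are hypothetical placeholders:

```shell
# Pull a job's output out of HDFS into a local directory that SAS can
# read as an ordinary file system
$ hadoop fs -copyToLocal /data/output/part-r-00000 /mnt/sas_staging/results.txt

# ... SAS reads and writes files under /mnt/sas_staging as usual ...

# Push the SAS-produced data set back into HDFS for downstream jobs
$ hadoop fs -copyFromLocal /mnt/sas_staging/scored.txt /data/input/scored.txt
```

This keeps SAS entirely outside the cluster, at the cost of an explicit transfer step (and a second copy of the data) in each direction.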
