I used memcached as the basis for a Node.js app package, and I do find the 
host_port under the publisher link:

http://host:port/proxy/application_1419345382641_0014/ws/v1/slider/publisher/slider/servers

{"description":"Servers","updated":0,"entries":{"host_port":"c6404.ambari.apache.org:59737"},"empty":false}

On Dec 29, 2014, at 4:43 PM, Yong Feng <fengyong...@gmail.com> wrote:

> Thanks Ted, and sorry for the late response.
> 
> The information shown by the above link is actually the info of the
> Slider AppMaster, not the entry point of the application created by Slider.
> 
> I will use my environment and the hbase application as an example
> (https://github.com/luckyfengyong/vagrant-hadoop contains the vagrant
> scripts I used to create my env).
> 
> http://node6:59866/ws/v1/slider/registry/users/root/services/org-apache-slider/hbase
> is the equivalent link in my env according to your suggestion. I append
> what I get from that link at the end of this mail. It is almost the same
> as the AppMaster UI of Slider. What I want is the info on how to access
> the hbase cluster. I cannot get such info from either the command "slider
> status" or any of the other links in the AppMaster UI of Slider.
> 
> I did some investigation and found that "hbase.master.info.port" in
> hbase-site.xml is defined as follows in the appConfig-default.json which I
> used to create the hbase cluster:
> 
> "site.hbase-site.hbase.master.info.port": "${HBASE_MASTER.ALLOCATED_PORT}",
> 
> Besides, in the metainfo.xml of the hbase slider package, it is exported
> as org.apache.slider.monitor as follows. I also saw other exported
> elements in metainfo.xml. However, I cannot find them in the output of
> any slider GUI or CLI, and I cannot find any related doc explaining them.
> I am going to create a slider package for an HPC application, so any help
> on this will be much appreciated.
> 
> <export>
>   <name>org.apache.slider.monitor</name>
>   <value>http://${HBASE_MASTER_HOST}:${site.hbase-site.hbase.master.info.port}/master-status</value>
> </export>
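For illustration, the ${...} tokens in such an export value get filled in with the allocated host and port. A rough sketch of that substitution — this is my guess at the mechanics, not Slider's actual code, and the host node6 and port 60010 below are made-up example values:

```python
import re

def resolve_export(template: str, values: dict) -> str:
    """Substitute ${...} tokens in a metainfo.xml export value.
    A guess at the substitution Slider performs; the variable names
    are the ones from the export element above."""
    return re.sub(r"\$\{([^}]+)\}", lambda m: values[m.group(1)], template)

value = ("http://${HBASE_MASTER_HOST}:"
         "${site.hbase-site.hbase.master.info.port}/master-status")
resolved = resolve_export(value, {
    "HBASE_MASTER_HOST": "node6",                       # example host
    "site.hbase-site.hbase.master.info.port": "60010",  # example port
})
print(resolved)  # http://node6:60010/master-status
```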
> 
> P.S.
> 
> Info retrieved from
> http://node6:59866/ws/v1/slider/registry/users/root/services/org-apache-slider/hbase
> 
> {"nodes":["components"],"service":{"type":"JSONServiceRecord","description":"Slider
> Application Master","external":[{"api":"http://
> ","addressType":"uri","protocolType":"webui","addresses":[{"uri":"
> http://node6:59866
> "}]},{"api":"classpath:org.apache.slider.management","addressType":"uri","protocolType":"REST","addresses":[{"uri":"
> http://node6:59866/ws/v1/slider/mgmt
> "}]},{"api":"classpath:org.apache.slider.publisher","addressType":"uri","protocolType":"REST","addresses":[{"uri":"
> http://node6:59866/ws/v1/slider/publisher
> "}]},{"api":"classpath:org.apache.slider.registry","addressType":"uri","protocolType":"REST","addresses":[{"uri":"
> http://node6:59866/ws/v1/slider/registry
> "}]},{"api":"classpath:org.apache.slider.publisher.configurations","addressType":"uri","protocolType":"REST","addresses":[{"uri":"
> http://node6:59866/ws/v1/slider/publisher/slider
> "}]},{"api":"classpath:org.apache.slider.publisher.exports","addressType":"uri","protocolType":"REST","addresses":[{"uri":"
> http://node6:59866/ws/v1/slider/publisher/exports
> "}]}],"internal":[{"api":"classpath:org.apache.slider.agents.secure","addressType":"uri","protocolType":"REST","addresses":[{"uri":"
> https://node6:59404/ws/v1/slider/agents
> "}]},{"api":"classpath:org.apache.slider.agents.oneway","addressType":"uri","protocolType":"REST","addresses":[{"uri":"
> https://node6:37466/ws/v1/slider/agents
> "}]}],"yarn:persistence":"application","yarn:id":"application_1419653295411_0004"}}
> 
> Thanks,
> 
> Yong
> 
> On Sat, Dec 27, 2014 at 8:30 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> 
>> When you go to AppMaster UI (slider 0.60), you should see an entry
>> for org.apache.slider.registry
>> e.g.
>> http://hor15n22.gq1.ygridcore.net:44221/ws/v1/slider/registry
>> which has two nodes: services and users.
>> 
>> You can navigate to:
>> 
>> http://hor11n14.gq1.ygridcore.net:8088/proxy/application_1419019592694_0002/ws/v1/slider/registry/users/
>> <username>/services/org-apache-slider/<appname>
>> 
>> Do you find the information there?
>> 
>> Cheers
>> 
>> On Sat, Dec 27, 2014 at 4:52 PM, Yong Feng <fengyong...@gmail.com> wrote:
>> 
>>> Hi Team,
>>> 
>>> Could anyone shed some light on it?
>>> 
>>> Thanks,
>>> 
>>> Yong
>>> 
>>> On Wed, Dec 24, 2014 at 3:09 PM, Yong Feng <fengyong...@gmail.com>
>> wrote:
>>> 
>>>> Happy Christmas, slider team.
>>>> 
>>>> I am using this mail thread for a similar question about querying the
>>>> exported port of the slider sample cluster jmemcached. After I deployed
>>>> jmemcached on slider, I could not find the entry point of the cluster
>>>> via the command "slider status". I had to go to the host on which
>>>> jmemcached is running and run "ps" to find the allocated port.
>>>> 
>>>> Generally speaking, how does a slider user learn the entry point of a
>>>> deployed cluster? OpenStack Heat and Google's Kubernetes both let users
>>>> query the entry point of their stack/service. As a similar app
>>>> orchestrator, how does slider solve the problem of "service discovery"?
>>>> 
>>>> Thanks,
>>>> 
>>>> Yong
>>>> 
>>>> 
>>>> 
>>>> On Tue, Dec 23, 2014 at 9:06 AM, 杨浩 <yangha...@gmail.com> wrote:
>>>> 
>>>>> I think it would be a convenient way. The underlying idea is to get
>>>>> the result of a slider shell command via a REST API; we just don't
>>>>> want to get the result by executing shell commands from Java.
>>>>> 
>>>>> 2014-12-23 19:39 GMT+08:00 Jon Maron <jma...@hortonworks.com>:
>>>>> 
>>>>>> Are you suggesting that the client interact with the REST API to
>>>>>> retrieve results (instead of the current RPC mechanism)? That is
>>>>>> part of the plan.
>>>>>> 
>>>>>>> On Dec 23, 2014, at 1:45 AM, 杨浩 <yangha...@gmail.com> wrote:
>>>>>>> 
>>>>>>> I think one way to do this is to expose a REST API that returns
>>>>>>> the result of a slider shell command.
>>>>>>> 
>>>>>>> 2014-12-23 14:22 GMT+08:00 Gour Saha <gs...@hortonworks.com>:
>>>>>>> 
>>>>>>>> Do you mean REST API?
>>>>>>>> 
>>>>>>>> Significant work is going on to expose a REST API in slider for
>>>>>>>> the next major release. We still don't know the best way to expose
>>>>>>>> a REST API for retrieving the AM host:port (via the YARN REST API,
>>>>>>>> maybe), since the REST endpoint itself will be served on the Slider
>>>>>>>> AM host:port, but we will surely come up with an elegant solution.
>>>>>>>> Suggestions are welcome!!
>>>>>>>> 
>>>>>>>> Check the uber jira for more details -
>>>>>>>> https://issues.apache.org/jira/browse/SLIDER-151
>>>>>>>> 
>>>>>>>> -Gour
>>>>>>>> 
>>>>>>>>> On Mon, Dec 22, 2014 at 1:50 AM, 杨浩 <yangha...@gmail.com>
>> wrote:
>>>>>>>>> 
>>>>>>>>> Hi, I've gotten the AM port through the shell command "slider
>>>>>>>>> list <applicationName> --state RUNNING", but after arguing with
>>>>>>>>> my boss, we think it's too ugly a way to use in a production
>>>>>>>>> environment.
>>>>>>>>> 
>>>>>>>>> Can we get the AM host:port through a Java API?
>>>>>>>>> 2014-12-16 9:07 GMT+08:00 Gour Saha <gs...@hortonworks.com>:
>>>>>>>>> 
>>>>>>>>>> Once the app is up and running, can you hit the following url
>>>>>>>>>> and copy-paste what you see?
>>>>>>>>>> 
>>>>>>>>>> http://yang:8088/proxy/<application_id>/ws/v1/slider/publisher/slider
>>>>>>>>>> 
>>>>>>>>>> where <application_id> is the value of the property
>>>>>>>>>> "info.am.app.id" in the status output above.
>>>>>>>>>> 
>>>>>>>>>> -Gour
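That proxied URL can be composed mechanically. A small sketch, assuming the ResourceManager web address (yang:8088 here) is known from the cluster config rather than printed by slider itself:

```python
def publisher_url(rm_address: str, app_id: str) -> str:
    """Build the RM-proxied publisher URL from the value of
    info.am.app.id in the 'slider status' output. rm_address is the
    ResourceManager web address (host:port), which is assumed known."""
    return ("http://%s/proxy/%s/ws/v1/slider/publisher/slider"
            % (rm_address, app_id))

print(publisher_url("yang:8088", "application_1418350976699_0004"))
# http://yang:8088/proxy/application_1418350976699_0004/ws/v1/slider/publisher/slider
```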
>>>>>>>>>> 
>>>>>>>>>>> On Thu, Dec 11, 2014 at 8:23 PM, 杨浩 <yangha...@gmail.com>
>>> wrote:
>>>>>>>>>>> 
>>>>>>>>>>> yang@yang:/usr/local/slider$ slider status memcached1
>>>>>>>>>>> 2014-12-12 12:22:58,305 [main] INFO  client.RMProxy -
>> Connecting
>>>>> to
>>>>>>>>>>> ResourceManager at yang/127.0.0.1:8032
>>>>>>>>>>> 2014-12-12 12:22:58,597 [main] INFO  client.SliderClient - {
>>>>>>>>>>> "version" : "1.0",
>>>>>>>>>>> "name" : "memcached1",
>>>>>>>>>>> "type" : "agent",
>>>>>>>>>>> "state" : 3,
>>>>>>>>>>> "createTime" : 1418357615354,
>>>>>>>>>>> "updateTime" : 1418357615603,
>>>>>>>>>>> "originConfigurationPath" :
>>>>>>>>>>> 
>>> "hdfs://yang:8020/user/yang/.slider/cluster/memcached1/snapshot",
>>>>>>>>>>> "generatedConfigurationPath" :
>>>>>>>>>>> 
>>> "hdfs://yang:8020/user/yang/.slider/cluster/memcached1/generated",
>>>>>>>>>>> "dataPath" :
>>>>>>>>>>> 
>>> "hdfs://yang:8020/user/yang/.slider/cluster/memcached1/database",
>>>>>>>>>>> "options" : {
>>>>>>>>>>>   "slider.am.restart.supported" : "true",
>>>>>>>>>>>   "site.global.security_enabled" : "false",
>>>>>>>>>>>   "internal.application.home" : null,
>>>>>>>>>>>   "internal.queue" : "default",
>>>>>>>>>>>   "application.name" : "memcached1",
>>>>>>>>>>>   "slider.cluster.directory.permissions" : "0770",
>>>>>>>>>>>   "site.global.slider.allowed.ports" : "48000, 49000,
>>>>> 50001-50010",
>>>>>>>>>>>   "internal.tmp.dir" :
>>>>>>>>>>> "hdfs://yang:8020/user/yang/.slider/cluster/memcached1/tmp",
>>>>>>>>>>>   "java_home" : "/opt/soft/jdk",
>>>>>>>>>>>   "internal.snapshot.conf.path" :
>>>>>>>>>>> 
>>> "hdfs://yang:8020/user/yang/.slider/cluster/memcached1/snapshot",
>>>>>>>>>>>   "env.MALLOC_ARENA_MAX" : "4",
>>>>>>>>>>>   "zookeeper.path" :
>> "/services/slider/users/yang/memcached1",
>>>>>>>>>>>   "internal.container.failure.shortlife" : "60000",
>>>>>>>>>>>   "internal.application.image.path" : null,
>>>>>>>>>>>   "internal.generated.conf.path" :
>>>>>>>>>>> 
>>> "hdfs://yang:8020/user/yang/.slider/cluster/memcached1/generated",
>>>>>>>>>>>   "site.fs.default.name" : "hdfs://yang:8020",
>>>>>>>>>>>   "site.global.additional_cp" : "/usr/lib/hadoop/lib/*",
>>>>>>>>>>>   "zookeeper.hosts" : "127.0.0.1",
>>>>>>>>>>>   "internal.provider.name" : "agent",
>>>>>>>>>>>   "internal.data.dir.path" :
>>>>>>>>>>> 
>>> "hdfs://yang:8020/user/yang/.slider/cluster/memcached1/database",
>>>>>>>>>>>   "site.fs.defaultFS" : "hdfs://yang:8020",
>>>>>>>>>>>   "site.global.memory_val" : "200M",
>>>>>>>>>>>   "slider.data.directory.permissions" : "0770",
>>>>>>>>>>>   "site.global.listen_port" :
>>>>>>>>>>> "${MEMCACHED.ALLOCATED_PORT}{PER_CONTAINER}",
>>>>>>>>>>>   "zookeeper.quorum" : "127.0.0.1:2181",
>>>>>>>>>>>   "site.global.xmx_val" : "256m",
>>>>>>>>>>>   "internal.am.tmp.dir" :
>>>>>>>> 
>>>>> "hdfs://yang:8020/user/yang/.slider/cluster/memcached1/tmp/appmaster",
>>>>>>>>>>>   "application.def" :
>>>>>>>>> ".slider/package/MEMCACHED/jmemcached-1.0.0.zip",
>>>>>>>>>>>   "internal.container.failure.threshold" : "5",
>>>>>>>>>>>   "site.global.xms_val" : "128m"
>>>>>>>>>>> },
>>>>>>>>>>> "info" : {
>>>>>>>>>>>   "info.am.agent.status.url" : "https://yang:60422/";,
>>>>>>>>>>>   "yarn.memory" : "2048",
>>>>>>>>>>>   "info.am.app.id" : "application_1418350976699_0004",
>>>>>>>>>>>   "info.am.agent.status.port" : "60422",
>>>>>>>>>>>   "info.am.agent.ops.url" : "https://yang:47879/";,
>>>>>>>>>>>   "yarn.vcores" : "32",
>>>>>>>>>>>   "info.am.container.id" :
>>>>>>>> "container_1418350976699_0004_03_000001",
>>>>>>>>>>>   "info.am.attempt.id" :
>>> "appattempt_1418350976699_0004_000003",
>>>>>>>>>>>   "info.am.rpc.port" : "48000",
>>>>>>>>>>>   "info.am.web.port" : "49000",
>>>>>>>>>>>   "info.am.web.url" : "http://yang:49000/";,
>>>>>>>>>>>   "info.am.hostname" : "yang",
>>>>>>>>>>>   "info.am.agent.ops.port" : "47879",
>>>>>>>>>>>   "status.application.build.info" : "Slider
>>>>> Core-0.60.0-incubating
>>>>>>>>>> Built
>>>>>>>>>>> against commit# 9e03554f99 on Java 1.6.0_31 by yang",
>>>>>>>>>>>   "status.hadoop.build.info" : "2.6.0",
>>>>>>>>>>>   "status.hadoop.deployed.info" : "branch-2.6.0
>>>>>>>>>>> @18e43357c8f927c0695f1e9522859d6a",
>>>>>>>>>>>   "live.time" : "12 Dec 2014 04:13:35 GMT",
>>>>>>>>>>>   "live.time.millis" : "1418357615354",
>>>>>>>>>>>   "create.time" : "12 Dec 2014 04:13:35 GMT",
>>>>>>>>>>>   "create.time.millis" : "1418357615354",
>>>>>>>>>>>   "containers.at.am-restart" : "0",
>>>>>>>>>>>   "status.time" : "12 Dec 2014 04:22:58 GMT",
>>>>>>>>>>>   "status.time.millis" : "1418358178437"
>>>>>>>>>>> },
>>>>>>>>>>> "statistics" : {
>>>>>>>>>>>   "MEMCACHED" : {
>>>>>>>>>>>     "containers.start.started" : 1,
>>>>>>>>>>>     "containers.live" : 1,
>>>>>>>>>>>     "containers.start.failed" : 0,
>>>>>>>>>>>     "containers.active.requests" : 0,
>>>>>>>>>>>     "containers.failed" : 0,
>>>>>>>>>>>     "containers.completed" : 0,
>>>>>>>>>>>     "containers.desired" : 1,
>>>>>>>>>>>     "containers.requested" : 1
>>>>>>>>>>>   },
>>>>>>>>>>>   "slider-appmaster" : {
>>>>>>>>>>>     "containers.unknown.completed" : 1,
>>>>>>>>>>>     "containers.start.started" : 1,
>>>>>>>>>>>     "containers.live" : 2,
>>>>>>>>>>>     "containers.start.failed" : 0,
>>>>>>>>>>>     "containers.failed" : 0,
>>>>>>>>>>>     "containers.completed" : 0,
>>>>>>>>>>>     "containers.surplus" : 0
>>>>>>>>>>>   }
>>>>>>>>>>> },
>>>>>>>>>>> "instances" : {
>>>>>>>>>>>   "MEMCACHED" : [ "container_1418350976699_0004_03_000002" ],
>>>>>>>>>>>   "slider-appmaster" : [
>>>>> "container_1418350976699_0004_03_000001" ]
>>>>>>>>>>> },
>>>>>>>>>>> "roles" : {
>>>>>>>>>>>   "MEMCACHED" : {
>>>>>>>>>>>     "yarn.memory" : "256",
>>>>>>>>>>>     "yarn.role.priority" : "1",
>>>>>>>>>>>     "role.requested.instances" : "0",
>>>>>>>>>>>     "role.failed.starting.instances" : "0",
>>>>>>>>>>>     "role.actual.instances" : "1",
>>>>>>>>>>>     "yarn.component.instances" : "1",
>>>>>>>>>>>     "role.releasing.instances" : "0",
>>>>>>>>>>>     "role.failed.instances" : "0"
>>>>>>>>>>>   },
>>>>>>>>>>>   "slider-appmaster" : {
>>>>>>>>>>>     "yarn.memory" : "1024",
>>>>>>>>>>>     "role.requested.instances" : "0",
>>>>>>>>>>>     "role.failed.starting.instances" : "0",
>>>>>>>>>>>     "role.actual.instances" : "1",
>>>>>>>>>>>     "yarn.vcores" : "1",
>>>>>>>>>>>     "yarn.component.instances" : "1",
>>>>>>>>>>>     "role.releasing.instances" : "0",
>>>>>>>>>>>     "role.failed.instances" : "0"
>>>>>>>>>>>   }
>>>>>>>>>>> },
>>>>>>>>>>> "clientProperties" : { },
>>>>>>>>>>> "status" : {
>>>>>>>>>>>   "live" : {
>>>>>>>>>>>     "MEMCACHED" : {
>>>>>>>>>>>       "container_1418350976699_0004_03_000002" : {
>>>>>>>>>>>         "name" : "container_1418350976699_0004_03_000002",
>>>>>>>>>>>         "role" : "MEMCACHED",
>>>>>>>>>>>         "roleId" : 1,
>>>>>>>>>>>         "createTime" : 1418357617294,
>>>>>>>>>>>         "startTime" : 1418357617328,
>>>>>>>>>>>         "released" : false,
>>>>>>>>>>>         "host" : "localhost",
>>>>>>>>>>>         "state" : 3,
>>>>>>>>>>>         "exitCode" : 0,
>>>>>>>>>>>         "command" : "python
>>>>>>>> ./infra/agent/slider-agent/agent/main.py
>>>>>>>>>>> --label container_1418350976699_0004_03_000002___MEMCACHED
>>>>>>>> --zk-quorum
>>>>>>>>>>> 127.0.0.1:2181 --zk-reg-path
>>>>>>>>>>> /registry/users/yang/services/org-apache-slider/memcached1 >
>>>>>>>>>>> <LOG_DIR>/slider-agent.out 2>&1 ; ",
>>>>>>>>>>>         "diagnostics" : "",
>>>>>>>>>>>         "environment" : [ "AGENT_WORK_ROOT=\"$PWD\"",
>>>>>>>>>>> "HADOOP_USER_NAME=\"yang\"", "AGENT_LOG_ROOT=\"<LOG_DIR>\"",
>>>>>>>>>>> "PYTHONPATH=\"./infra/agent/slider-agent/\"",
>>>>>>>>>>> "SLIDER_PASSPHRASE=\"aa178fGHttfGC7Cnss3DPbLzYDEmqJuDcCUNwAW2YUfyPNQMZN\""
>>>>>>>>>>> ]
>>>>>>>>>>>       }
>>>>>>>>>>>     },
>>>>>>>>>>>     "slider-appmaster" : {
>>>>>>>>>>>       "container_1418350976699_0004_03_000001" : {
>>>>>>>>>>>         "name" : "container_1418350976699_0004_03_000001",
>>>>>>>>>>>         "role" : "slider-appmaster",
>>>>>>>>>>>         "roleId" : 0,
>>>>>>>>>>>         "createTime" : 0,
>>>>>>>>>>>         "startTime" : 0,
>>>>>>>>>>>         "released" : false,
>>>>>>>>>>>         "host" : "yang",
>>>>>>>>>>>         "state" : 3,
>>>>>>>>>>>         "exitCode" : 0,
>>>>>>>>>>>         "command" : "",
>>>>>>>>>>>         "diagnostics" : ""
>>>>>>>>>>>       }
>>>>>>>>>>>     }
>>>>>>>>>>>   }
>>>>>>>>>>> }
>>>>>>>>>>> }
>>>>>>>>>>> 2014-12-12 12:22:58,598 [main] INFO  util.ExitUtil - Exiting with status 0
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> 2014-12-11 1:01 GMT+08:00 Gour Saha <gs...@hortonworks.com>:
>>>>>>>>>>>> 
>>>>>>>>>>>> What do you get when you call "slider status <application>"?
>>>>>>>>>>>> 
>>>>>>>>>>>> -Gour
>>>>>>>>>>>> 
>>>>>>>>>>>>> On Wed, Dec 10, 2014 at 1:02 AM, 杨浩 <yangha...@gmail.com>
>>>>> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Hi, I have installed jmemcached successfully, but how can I
>>>>>>>>>>>>> use it? That is, how do I get the port of memcached?
>>>>>>>>>>>> 
>>>>>>>>>>>> --
>>>>>>>>>>>> CONFIDENTIALITY NOTICE
>>>>>>>>>>>> NOTICE: This message is intended for the use of the
>> individual
>>> or
>>>>>>>>>> entity
>>>>>>>>>>> to
>>>>>>>>>>>> which it is addressed and may contain information that is
>>>>>>>>> confidential,
>>>>>>>>>>>> privileged and exempt from disclosure under applicable law.
>> If
>>>>> the
>>>>>>>>>> reader
>>>>>>>>>>>> of this message is not the intended recipient, you are hereby
>>>>>>>>> notified
>>>>>>>>>>> that
>>>>>>>>>>>> any printing, copying, dissemination, distribution,
>> disclosure
>>> or
>>>>>>>>>>>> forwarding of this communication is strictly prohibited. If
>> you
>>>>>>>> have
>>>>>>>>>>>> received this communication in error, please contact the
>> sender
>>>>>>>>>>> immediately
>>>>>>>>>>>> and delete it from your system. Thank You.
>>>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>>> 
>>> 
>> 


