And actually, if you set "whirr.store-cluster-in-etc-hosts=true" in your
properties file, Whirr should set up /etc/hosts on the instances for you.
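(For reference, a minimal recipe using that flag might look like the following; the cluster name and instance templates here are illustrative, not taken from your setup:)

```
# Illustrative Whirr recipe -- cluster name and templates are examples only
whirr.cluster-name=hadoop
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,2 hadoop-datanode+hadoop-tasktracker
# Write every cluster member's name/IP pair into /etc/hosts on each instance
whirr.store-cluster-in-etc-hosts=true
```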

A.

On Tue, May 21, 2013 at 1:09 PM, Andrei Savu <[email protected]> wrote:

> Yes, you should be able to make that work.
>
> -- Andrei Savu
>
> On Tue, May 21, 2013 at 11:04 PM, Sebastien Goasguen <[email protected]> wrote:
>
>>
>> On May 21, 2013, at 4:00 PM, Andrei Savu <[email protected]> wrote:
>>
>> You need sane DNS settings (forward and reverse for each machine) to make
>> this work.
>>
>>
>> Can I try to hack configure_hostname.sh in:
>>
>> services/cdh/target/classes/functions
>>
>> Adding some entry in /etc/hosts
>>
>> Will that be enough?
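A sketch of what such a hack could generate, assuming (hypothetically) that ~/.whirr/hadoop/instances holds whitespace-separated lines of the form `<name> <roles> <public-ip> <private-ip>` -- check the real file and adjust the field numbers before using this:

```shell
#!/bin/sh
# Sketch: turn a Whirr instances file into /etc/hosts entries.
# The input format below is an assumption -- verify it against your
# actual ~/.whirr/hadoop/instances file.

INSTANCES=${1:-instances.sample}

# Sample input standing in for ~/.whirr/hadoop/instances
cat > instances.sample <<'EOF'
hadoop-3d5 namenode,jobtracker 203.0.113.10 10.1.1.10
hadoop-9a2 datanode,tasktracker 203.0.113.11 10.1.1.11
EOF

# Emit "private-ip hostname" pairs; on each node these lines could be
# appended to /etc/hosts (e.g. piped through 'sudo tee -a /etc/hosts').
awk '{ print $4, $1 }' "$INSTANCES"
```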
>>
>>
>> -- Andrei Savu
>>
>> On Tue, May 21, 2013 at 10:57 PM, Sebastien Goasguen <[email protected]> wrote:
>>
>>>
>>> On May 21, 2013, at 3:48 PM, Andrew Bayer <[email protected]>
>>> wrote:
>>>
>>> Yeah, DNS is a giant pain. If at all possible, you need to get the
>>> hostnames resolvable from wherever you're spinning the instances up, as
>>> well as on the instances themselves. The DNS that CloudStack's DHCP assigns
>>> should do the trick for that.
>>>
>>>
>>> argh…
>>>
>>> These instances have public IPs but not DNS entries.
>>>
>>> @andrei: hadoop-3d5 and the other names are set as the names of the
>>> instances. They are only used for the local 'hostname', so no, they are
>>> not resolvable.
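A quick way to confirm that from any machine (a sketch; the IP below is a placeholder, substitute one of your instances' addresses):

```shell
# Forward lookup: does the instance's hostname resolve anywhere?
getent hosts hadoop-3d5 || echo "no forward record for hadoop-3d5"

# Reverse lookup for a node address (203.0.113.10 is a placeholder)
getent hosts 203.0.113.10 || echo "no reverse record for 203.0.113.10"
```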
>>>
>>>
>>>
>>>
>>> A.
>>>
>>> On Tue, May 21, 2013 at 12:46 PM, Sebastien Goasguen <[email protected]> wrote:
>>>
>>>> Hi,
>>>>
>>>> I installed whirr 0.8.1, I am using it against a CloudStack endpoint.
>>>> Instances get launched and I am trying to setup cdh.
>>>>
>>>> I believe I am hitting a DNS issue, as I see lots of errors of this
>>>> type:
>>>>
>>>> 13/05/21 21:21:28 WARN net.DNS: Unable to determine local hostname
>>>> -falling back to "localhost"
>>>> java.net.UnknownHostException: hadoop-3d5: hadoop-3d5
>>>>
>>>> If I log in to the name node and try to use hadoop I get things like:
>>>>
>>>> $ hadoop fs -mkdir /toto
>>>> -mkdir: java.net.UnknownHostException: hadoop-3d5
>>>>
>>>> my hadoop-site.xml looks like:
>>>>
>>>> <?xml version="1.0"?>
>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>> <configuration>
>>>>  <property>
>>>>    <name>dfs.client.use.legacy.blockreader</name>
>>>>    <value>true</value>
>>>>  </property>
>>>>  <property>
>>>>    <name>fs.default.name</name>
>>>>    <value>hdfs://hadoop-3d5:8020/</value>
>>>>  </property>
>>>>  <property>
>>>>    <name>mapred.job.tracker</name>
>>>>    <value>hadoop-3d5:8021</value>
>>>>  </property>
>>>>  <property>
>>>>    <name>hadoop.job.ugi</name>
>>>>    <value>root,root</value>
>>>>  </property>
>>>>  <property>
>>>>    <name>hadoop.rpc.socket.factory.class.default</name>
>>>>    <value>org.apache.hadoop.net.SocksSocketFactory</value>
>>>>  </property>
>>>>  <property>
>>>>    <name>hadoop.socks.server</name>
>>>>    <value>localhost:6666</value>
>>>>  </property>
>>>> </configuration>
>>>>
>>>> my ~/.whirr/hadoop/instances file has all the right IP addresses, but I
>>>> don't think the security group rules got created.
>>>>
>>>> Any thoughts?
>>>>
>>>> thanks,
>>>>
>>>> -sebastien
>>>>
>>>>
>>>
>>>
>>
>>
>
