When a server is started, it sends a batch_create request to every other
server in the filesystem.  The batch_create request asks the receiving
server to send back a list of unused data handles (owned by that particular
server).  For each handle in the list, the receiving server sets an
attribute in its local database to indicate that the handle is in use.  You
may also see a batch_create request after your servers have been running
for a while, since a server will request another batch of handles whenever
its current supply runs low.  This entire process is a performance
enhancement: it allows a file's data handles to be assigned by the metadata
server without contacting the data servers, which reduces the time it takes
to create a file.
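
If it helps to picture the idea, here is a minimal sketch in C of that
pattern (this is not the actual PVFS2 code; all names and numbers below are
made up for illustration): the metadata server keeps a local pool of handles
that were reserved ahead of time from a data server, and only goes back over
the network for another batch when the pool runs low.

/* Hypothetical sketch of batch handle pre-allocation (not PVFS2 source). */
#include <stdint.h>
#include <stdio.h>

#define POOL_CAPACITY    64
#define REFILL_THRESHOLD  8

struct handle_pool {
    uint64_t handles[POOL_CAPACITY];
    int      count;
};

/* Stand-in for the batch_create round trip: ask a data server to reserve
 * up to n unused handles (which it would mark "in use" in its local DB). */
static int request_handle_batch(struct handle_pool *pool, int n)
{
    static uint64_t next = 4611686018427387000ULL; /* arbitrary demo range */
    int i;

    for (i = 0; i < n && pool->count < POOL_CAPACITY; i++)
        pool->handles[pool->count++] = next++;
    return 0;
}

/* Assign a data handle for a new file without contacting the data server,
 * refilling the local pool first if it has gotten low. */
static uint64_t assign_data_handle(struct handle_pool *pool)
{
    if (pool->count <= REFILL_THRESHOLD)
        request_handle_batch(pool, POOL_CAPACITY - pool->count);
    return pool->handles[--pool->count];
}

int main(void)
{
    struct handle_pool pool = { .count = 0 };

    printf("new file gets data handle %llu\n",
           (unsigned long long)assign_data_handle(&pool));
    return 0;
}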

With all of that said, it seems that one of your servers is having trouble
accessing its database or communicating with another server.  I'm not
exactly sure which without further research.  Think about the description
above and see if you can pinpoint which server is causing the trouble.

Becky

On Sun, Feb 19, 2012 at 10:48 PM, Dimos Stamatakis <dimsta...@gmail.com> wrote:

> Hello again!
> I want to help you figure out what is going wrong by telling you that the
> client blocks on a pvfs2-ls (it does not say "connection refused"
> immediately). It can also ping the new elastic IP normally!
>
> I redirected the metadata server output to a file, and when I checked it I
> didn't find anything wrong... It did all the gets and puts that happen
> every time the DB is created. Here is the tail of the output:
>
> get (handle: 4611686018427387903)()(key_sz:8) -> (511)(4)
> put (handle: 4611686018427387903)()(key_sz:8) -> (512)(4)
> [1329709072:419164][4413/140213703595776] TROVE:DBPF:Berkeley DB:
> bulk_msg: Send buffer after copy due to PERM
> [1329709072:419173][4413/140213703595776] TROVE:DBPF:Berkeley DB:
> send_bulk: Send 160 (0xa0) bulk buffer bytes
> [1329709072:419183][4413/140213703595776] TROVE:DBPF:Berkeley DB:
> //pvfs2-storage-space/27c41225/ rep_send_message: msgv = 7 logv 19 gen = 1
> eid -1, type bulk_log, LSN [1][217660]  perm
> [1329709072:419193][4413/140213703595776] TROVE:DBPF:Berkeley DB:
> rep_send_function returned: -30975
>
> How can I find out why this metadata server refuses to serve the client
> requests?
>
> Thanks again,
> Dimos.
>
>
>
> On Mon, Feb 20, 2012 at 3:49 AM, Dimos Stamatakis <dimsta...@gmail.com> wrote:
>
>> I forgot to tell you that I use ec2-associate-address commands to tell
>> the new master to grab the elastic IP address. If I do not use replication
>> and I use the normal IP addresses, it works fine!
>> Is there a way to get high availability on Amazon EC2 without using
>> elastic IPs??
>>
>> Many thanks,
>> Dimos.
>>
>>
>>
>> On Mon, Feb 20, 2012 at 2:17 AM, Dimos Stamatakis <dimsta...@gmail.com> wrote:
>>
>>> Hello!
>>> I have successfully run a PVFS installation on a Eucalyptus cloud, but
>>> when I moved to Amazon EC2, I get a very strange error.
>>> When I run a metadata server it says:
>>>
>>> [S 02/20 00:04] PVFS2 Server ready.
>>>
>>> and then it says:
>>>
>>> [E 02/20 00:04] batch_create request got: No space left on device
>>> ....... And this error repeats ......
>>>
>>> I checked all of my devices and there is plenty of space, so I don't
>>> think the problem is really a lack of space...
>>> Can you explain that?
>>> What is the batch_create function? And where is it trying to write?
>>>
>>> Here is the output of the df -h on the data node:
>>>
>>> Filesystem            Size  Used Avail Use% Mounted on
>>> /dev/sda1             9.9G  2.7G  6.8G  29% /
>>> tmpfs                 308M     0  308M   0% /lib/init/rw
>>> udev                   10M  108K  9.9M   2% /dev
>>> tmpfs                 308M  4.0K  308M   1% /dev/shm
>>>
>>> and on the meta data node:
>>>
>>> Filesystem            Size  Used Avail Use% Mounted on
>>> /dev/sda1             9.9G  2.0G  7.5G  21% /
>>> tmpfs                 308M     0  308M   0% /lib/init/rw
>>> udev                   10M  108K  9.9M   2% /dev
>>> tmpfs                 308M  4.0K  308M   1% /dev/shm
>>>
>>> Many thanks,
>>> Dimos.
>>>
>>
>>
>


-- 
Becky Ligon
OrangeFS Support and Development
Omnibond Systems
Anderson, South Carolina
_______________________________________________
Pvfs2-developers mailing list
Pvfs2-developers@beowulf-underground.org
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-developers
