Hi, dear VNet users,
I have successfully tested VNet and would like to thank its authors.
Based on my experiment (see the attached image), I have some suggestions.
1) In my large geo-referenced world (Lantau Island of Hong Kong, built
from true 3-D terrain data), the avatars are very small (about 1.5-2.0
meters tall). With multiple users it is easy to lose track of the other
avatars. So, if I want to see or follow somebody, could I use the
coordinates of his/her avatar to jump to his/her location? (It is
practically impossible to find an avatar by eye, because the world is
too big and the avatars are too small.) The coordinates of an avatar
could be transferred through the text-based chat. Users should be
provided with multiple channels (approaches) for communication; a rough
sketch of this idea follows this paragraph.
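To make the idea concrete, here is a rough sketch of how coordinates
shared over the text chat could drive a jump of the local viewpoint.
It is not taken from the VNet source; the "goto x y z" chat command
and all class and method names are only my own illustration.

    // Hypothetical sketch (not VNet code): parse coordinates pasted into
    // the text chat and jump the local viewpoint near that position.
    public class JumpToAvatar {

        /** Parses a chat line such as "goto 812.5 43.0 -1290.2". */
        public static float[] parseCoordinates(String chatLine) {
            String[] parts = chatLine.trim().split("\\s+");
            if (parts.length != 4 || !parts[0].equalsIgnoreCase("goto")) {
                return null; // not a jump command
            }
            try {
                return new float[] {
                    Float.parseFloat(parts[1]),
                    Float.parseFloat(parts[2]),
                    Float.parseFloat(parts[3])
                };
            } catch (NumberFormatException e) {
                return null; // malformed coordinates
            }
        }

        /** Places the viewer a few meters behind and above the target avatar. */
        public static float[] viewpointFor(float[] avatarPos) {
            return new float[] {
                avatarPos[0],          // same x
                avatarPos[1] + 2.0f,   // 2 m above, so a 1.5-2 m avatar stays in view
                avatarPos[2] + 5.0f    // 5 m back along z
            };
        }

        public static void main(String[] args) {
            float[] target = parseCoordinates("goto 812.5 43.0 -1290.2");
            if (target != null) {
                float[] view = viewpointFor(target);
                System.out.printf("Jump viewpoint to (%.1f, %.1f, %.1f)%n",
                                  view[0], view[1], view[2]);
            }
        }
    }

The fixed offset is just an example; any offset that keeps the target
avatar in front of the viewer would do.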
2) ActiveWorlds provides a third-person viewpoint, so I can see my own
avatar moving in response to my real-time operations. In VNet, when my
own avatar is displayed, I found that it is not linked to my operations
(its position does not change as I move). In my opinion, it would be
more natural for my avatar to move together with my operations while
remaining visible to the operator (myself); a sketch of what I mean
follows below.
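To illustrate, a third-person view could simply keep the camera a fixed
offset behind the avatar that is driven by the operator's navigation.
This is only a sketch under my own assumptions (the names and the VRML
heading convention are mine, not from the VNet code):

    // Hypothetical sketch (not VNet code): keep a third-person camera a
    // fixed distance behind and above the local avatar, so the operator's
    // own movements are visible on his/her own avatar.
    public class ThirdPersonCamera {

        static final float BACK = 4.0f;   // meters behind the avatar
        static final float UP   = 1.8f;   // meters above the avatar's feet

        /**
         * Camera position for an avatar at (x, y, z) facing 'heading'
         * radians about the vertical axis (0 = looking down -z, the
         * usual VRML convention).
         */
        public static float[] cameraPosition(float x, float y, float z,
                                             float heading) {
            return new float[] {
                x + BACK * (float) Math.sin(heading),
                y + UP,
                z + BACK * (float) Math.cos(heading)
            };
        }

        public static void main(String[] args) {
            // Avatar at the origin looking down -z: camera ends up 4 m behind it.
            float[] cam = cameraPosition(0f, 0f, 0f, 0f);
            System.out.printf("Camera at (%.1f, %.1f, %.1f)%n",
                              cam[0], cam[1], cam[2]);
        }
    }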
Thanks
Jianhua
