Because of that, the hypergrid option helps a lot to increase the number of avatars in a meeting or in collaborative projects. On the other hand, perhaps the solution lies in this other direction: <http://www.maxping.org/technology/platforms/open-source/solipsis-p2p-shows-great-promise-for-the-metaverse.aspx>.
Albert

2009/7/30 Toni Alatalo <[email protected]>

> Anu Mishra wrote:
> > Does this limitation of 20 client avatars apply across all regions in
> > realxtend OR is this 20 per region?
>
> Per region. The UDP connections from the clients are to the region
> server, and it sends the updates of the movements of the other avatars
> etc. to everyone, and runs the physics etc. AFAIK there is no limit to
> how much a grid can handle, given you can cope with the assets,
> inventories etc. A trick that IIRC is also used on the Linden servers
> when organizing events is to have the central place at the intersection
> of four regions, so that if people are spread evenly around it the load
> is shared by 4 servers. I don't know how stable region crossings are in
> Opensim nowadays, perhaps stable enough for that technique to work.
>
> It's not a limitation that has been programmed in as some sort of hard
> limit, but an empirical observation of how the server currently
> performs, e.g. the point where IBM now says that (in their version
> shipped in that sametime3d thing) Opensim runs in a stable fashion. I
> think it's also tested weekly in the Opensim office hours at Wright
> Plaza, at least if there are enough participants to make it crash
> eventually :)
>
> > I noticed that QWAQ (Croquet based) also has a limitation of 40
> > clients. Is there a limitation on Second Life as well - given that
> > IBM hosts events/conferences in SL (their registration closes after
> > some time, which does indicate a limitation)?
>
> There has to be some limit in SL, with an architecture where a single
> server handles everything in one place and communicates with all the
> clients there, but I don't know what it would be currently. And I guess
> any architecture will have *some* limit, but I hope with a clever one
> it will be 1000-10000 (but I'm not surprised if that is way too
> optimistic.. it also boils down to how the space is partitioned and
> what it means to share space etc.; systems like BigWorld or MXP are
> much more dynamic about it than the rigid SL architecture).
>
> For example, in MXP AFAIK the clients don't know about regions at all,
> they just have a single 'bubble', which is their sphere of perception,
> and the server tells the clients the info they need. IIRC the size of
> the bubble can also be scaled as needed. Perhaps it might work e.g. so
> that when you are in an event with 10000 avatars, you are not
> communicating with all 10000 individuals at once anyway, but perhaps
> only see the ones within 3 meters of you (and that is perhaps 20 :)
> plus some performers on stage (ok so 5 more, let's say we can cope
> with 25) .. the huge crowd is shown to you perhaps using some mass
> rendering system like those sometimes used for large vegetation in
> games, or perhaps even image/video plates, like matte paintings in 3d
> film making (those 'paintings' are often 3d renders too nowadays, but
> rendered separately as the background, so the whole scene doesn't have
> to be rendered every frame for the movie; here the same idea, but to
> save load on the server). There is no fixed region size, the servers
> can partition the space how they want, and none of that shows in the
> network protocol or in the client code, so it can perhaps be optimized
> in clever ways for dense mass events vs. sparse large worlds etc.
> ~Toni
>
> > On 7/29/09, Toni Alatalo <[email protected]> wrote:
> >
> > On Jul 28, 2009, at 7:27 PM, bulma wrote:
> >
> > Hi,
> >
> > the guys who did that work last year are still on holiday (for July),
> > so I'll respond instead.
> >
> > > I have just seen this wonderful video
> > > (http://www.youtube.com/v/AKg6zf9oPHM&hl=en&fs=1) and read that
> > > with realXtend 0.4 optimization it is possible to have over 300
> > > bots on LAN servers. Does that mean it is also possible to have
> > > over 300 clients connected?
> >
> > No, it does not mean that.
> >
> > The big difference is that those bots run on the server. In that
> > sense they are similar to other objects that live on the server, like
> > having hundreds of prims that move, and not similar to connected
> > clients at all.
> >
> > For server-side bots, like for prims etc., there is no per-object
> > connection to the server from somewhere else, and the server doesn't
> > have to send updates of what happens in the world to them, as they
> > just access the server's memory for what they need to know about the
> > world.
> >
> > The achievement in that video was optimizing how avatar updates are
> > sent over the network, so that a connected client can get information
> > about the movements and animations of hundreds of characters. It was
> > not about enabling hundreds of connected clients that the server
> > could handle. It's useful work for making a rich environment with
> > many bots, like a swarm of fish or birds, or people walking the
> > streets or cars driving on the roads.
> >
> > To allow more participants, I think there is still much room in
> > Opensim for optimizing how updates are sent to clients, to push up
> > the current limit of 20 (or a bit more?) that they say works well
> > now. Luckily we are not alone in doing that; it seems that e.g. the
> > folks at IBM and Intel are also pretty constantly profiling the code
> > to find and fix bottlenecks etc.
> >
> > > I'm curious to know what are the characteristics of the server
> > > that was used for this test.
> >
> > AFAIK a normal, powerful modern desktop PC; IIRC it was described in
> > the article on realxtend.org in which the work was published.
> >
> > > Cheers, bulma
> >
> > ~Toni
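To illustrate why the 300 bots in that video are so much cheaper than 300 connected clients, here is a minimal Python sketch. The classes and names are made up for illustration; this is not OpenSim or realXtend code. A server-side bot reads world state straight out of the server process's memory, while each connected client needs the server to serialize and push every relevant update over its own connection:

    class World:
        """Shared in-memory state that lives inside the server process."""
        def __init__(self):
            self.positions = {}      # avatar/object id -> (x, y, z)

    class ServerSideBot:
        """Runs in the server process; no connection, no update messages."""
        def __init__(self, world):
            self.world = world
        def think(self):
            return self.world.positions    # plain memory access

    class ConnectedClient:
        """Remote viewer: every relevant update must be sent to it."""
        def __init__(self, send):
            self.send = send               # e.g. a socket's send function
        def notify(self, avatar_id, pos):
            self.send(("move", avatar_id, pos))

    def broadcast_moves(world, clients):
        # Work grows with (moving avatars) x (connected clients), which is
        # why the per-region client limit bites long before a few hundred
        # server-side bots do.
        for avatar_id, pos in world.positions.items():
            for client in clients:
                client.notify(avatar_id, pos)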
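And a rough sketch of the 'bubble' interest-management idea from Toni's mail above, again in Python and again with made-up names rather than MXP or realXtend API: the server only sends a client updates for avatars inside its sphere of perception, plus a small set (the performers on stage) that should always be visible:

    import math

    def select_visible(client_pos, avatars, bubble_radius=3.0, always_visible=()):
        # avatars: dict mapping avatar id -> (x, y, z) position
        # always_visible: ids sent regardless of distance (performers etc.)
        visible = set(always_visible)
        for avatar_id, pos in avatars.items():
            if math.dist(client_pos, pos) <= bubble_radius:
                visible.add(avatar_id)
        return visible

    # In a 10000-avatar event, a client in the crowd only receives updates
    # for its handful of neighbours plus the performers, not the whole crowd.
    crowd = {"avatar%d" % i: (float(i), 0.0, 0.0) for i in range(10000)}
    performers = ("performer1", "performer2", "performer3", "performer4", "performer5")
    print(len(select_visible((2.0, 0.0, 0.0), crowd, 3.0, performers)))

The bubble radius could be scaled per client as described, so the same mechanism would work for a dense mass event and for a sparse large world.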
