Hi Manju,
On Sat, 2005-01-08 at 01:05, Manjunath Sripadarao wrote:
>
> I am using SortLastWindow and SepiaComposer. Once I can get it working
> satisfactorily maybe I can post some screenshots. I already have a
> very large polygon model loaded, but I am trying to understand more
> and try out some new things.
OK. How large is very large? These things are pretty relative nowadays.
;)
> The idea seems nice, but currently if there is a model with a large
> number of polygons, that model is not split. But only multiple models
> are rendered on different servers. I want to know if there is any way I
> can implement this (cutting a model up and distributing the task of
> rendering it across different nodes). Any links to algorithms would be
> nice too.
>
> I see that splitDrawables in OSGSortLastWindow.cpp takes a 'bool cut'
> as argument, but currently it is not being used and is always false.
> Does this indicate the ability to implement breaking large polygon
> models into pieces ?
Ah, ok. Right now the Cluster code doesn't do that automatically. In
general we try to keep the scenegraph intact, as the application might
have a need for a certain structure that we can't know.
But if the app knows what it wants, there is a SplitGraphOp that splits
large geometries until they only contain a certain maximum number of
polygons. You can either call it explicitly or add it to the loader to
have it run automatically.
> The other reason I can think of is that say we have 2 quads and they
> need to be overlayed with a fairly large texture, in this case the
> user may want to split according to which polygon takes which texture and
> divide the data among nodes by texture.
That is trickier. The amount of texture could be added as an additional
factor in the SortLastWindow load balancing. Right now it's just
polygon-based.
> But I am still trying to learn OpenSG and till now I haven't been able
> to send any useful data to any node. I read in another post on the
> mailing-list that one could use proxy groups to load data from local
> disks, I tried implementing this, but I am unable to get very far.
>
> Here is primarily what I did, I added 2 nodes to the scene,
>
> node0 = OSG::Node::create();
> node1 = OSG::Node::create();
>
> ProxyGroupPtr p0 = OSG::ProxyGroup::create();
> ProxyGroupPtr p1 = OSG::ProxyGroup::create();
>
> p0->setUrl("cube.osb");
> p1->setUrl("sphere.osb");
>
> then I set the core of these nodes to ProxyGroup.
>
> node0->setCore(p0);
> node1->setCore(p1);
>
> But I am not really sure what to do on the server side? The server
> just gives some error messages saying node is null and matrix near ~=
> far ~= 0 or some such message.
You don't have to do anything on the server side, that happens
automatically when you render (the first frame will be slow). But you
need to give the ProxyNode the bounding volume (field "volume") of the
underlying objects (otherwise culling doesn't work, and you get the near
~= far message). For load balancing to work you would also have to give
the ProxyGroup some ideas about the complexity of the underlying graph
(indices, triangles, positions, geometries fields). Marcus, do you have a
tool to calculate these things from a given file?
> I tried getting the nodes on the display side using ract, something like
>
> for (int i = 0; i < ract->getNNodes(); i++) {
>     NodePtr node = ract->getNode(i);
>     std::string file(node->getCore()->getUrl()); // this may be wrong,
>                                                  // typing from memory
>     node = SceneFileHandler::the().read(file.c_str(), 0);
>     ...
>
> then I try to load the file, but where am I supposed to do this, in
> the main of testClusterServer.cpp after server->start() ?
SEP (Somebody Else's Problem). ;) You don't have to worry about this.
> I am lost, as I was unable to find any examples of ProxyGroup. What is
> it used for?
> The thing is I do not want to send the file to a different node, I
> want to send only the filename and then load the relevant file locally
> (from the disk).
That's what the ProxyGroup is supposed to do. The constraint is that the
actual loading is done on the servers, so the file has to be accessible
from the running server (i.e. in your example cube.osb and sphere.osb
have to be in the directory each server is run from).
> I have to talk to my technical director or project manager on the time
> frame thing, I would be interested to work with you on this.
Ok, good.
> > How big are your datasets though? Conceptually this assumes that the
> > dataset fits into the client's main memory, which might be a problem for
> > large datasets. We don't have out-of-core tools for volume splitting or
> > anything like that yet.
>
> Currently I have a 512x512 MB dataset that I got from the internet.
> I am also trying to get larger data samples, but I have to see about that.
I assume you mean 512^3? That's not too bad, really. Can you give me a
link to that?
> Yes I know, I was wondering if it would be possible to shift the
> channel to irc.freenode.net? Normally I idle/chat on some graphics
> software channels on there, and I wouldn't mind adding opensg to the
> list. Also maybe there might be other people on there who might be
> interested, how does that sound? :-)
I don't mind moving it. We chose GalaxyNet initially because it has a
server in Singapore, but I'm not sure how relevant that really is any
more, especially given the stability problems it used to have.
Yours
Dirk
_______________________________________________
Opensg-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/opensg-users