On Wed, 22 Oct 2003, Christopher Browne wrote:
No, tables wouldn't be the right way to do it.
But it's going to be troubled, in any case, because of the
ever-popular mixtures of:
a) Often weird declarations of what character sets are in use;
I gotta admit that I haven't spent too much
On Thu, 23 Oct 2003, Christopher Kings-Lynne wrote:
4. Extend the contrib/ltree gist-based tree indexing scheme to work on
xml and hence the operations in no.3 above are really fast...
but then, the plain xml data is still stored in a database column, if I
understand correctly?
--
Gregor
On Wed, 22 Oct 2003, Josh Berkus wrote:
Reinvent the wheel? Well, yes.
The first thing ... the VERY first thing, absolutely ... that you need to do
is invent a theory of XML databases.
Well, I have. It doesn't cover all parts in detail yet, because I've
started with a simple IO layer (simple
Storing the XML text has problems - you have to parse it every time you
want something - that has to cause a huge performance hit.
I use XML a lot for all sorts of purposes, but it is appropriate for
data transfer rather than data storage, IMNSHO.
cheers
andrew
Christopher Kings-Lynne wrote:
On Thu, 23 Oct 2003, Andrew Dunstan wrote:
Storing the XML text has problems - you have to parse it every time you
want something - that has to cause a huge performance hit.
You couldn't have said better what I meant.
I store the xml already parsed. You can navigate right along. To the
parent,
4. Extend the contrib/ltree gist-based tree indexing scheme to work on
xml and hence the operations in no.3 above are really fast...
but then, the plain xml data is still stored in a database column, if I
understand correctly?
Yep - which to me seems to be the most useful way to store it :)
Chris
*nod* I have tried this several times - it just doesn't work well,
because the maps are too different.
You could do something like this:
. a table for each element type, fields being the attributes, plus
the node id.
. a table to tie everything together (parent_id, child_id,
On Thu, 23 Oct 2003, Christopher Kings-Lynne wrote:
You couldn't have said better what I meant.
I store the xml already parsed. You can navigate right along. To the
parent, the previous, the next element or the first or last child.
Which is the whole point of indexing it...
not quite.
Gregor,
Well, I have. It doesn't cover all parts in detail yet, because I've
started with a simple IO layer (simple page locking, no concurrent
transactions) and worked on the page layout and parsing algorithms from
there on. Querying on that format will follow thereafter. And concurrency
On Thu, 23 Oct 2003, Josh Berkus wrote:
Um, I/O and Page layout are not theory. They are implementation issues.
yes or no, depending on your point of view.
Theory would answer things like What are the mathematical operations I can
use to define compliance or non-compliance with the DTD for a
Gregor,
I'm developing a native XML database (C++) (which is supposed to become
open source one day) and I'm wondering whether I could use GiST for its
indexes. Is GiST still alive?
Don't know, sorry.
Would PostgreSQL fit that requirement? And are you interested in having a
fast,
[EMAIL PROTECTED] (Josh Berkus) writes:
Gregor,
I'm developing a native XML database (C++) (which is supposed to become
open source one day) and I'm wondering whether I could use GiST for its
indexes. Is GiST still alive?
Don't know, sorry.
Would PostgreSQL fit that requirement? And
Christopher Browne wrote:
But I think back to the XML generator I wrote for GnuCash; it has the
notion of building up a hierarchy of entities and attributes, each of
which is visible as an identifyable object of some sort. Mapping that
onto a set of PostgreSQL relations wouldn't work terribly
On Wed, 22 Oct 2003, Christopher Browne wrote:
It leaves open the question of what is the appropriate way of
expressing XML entities and attributes and CDATA in database form.
snip
Thanks for your advice, but that's not what I had in mind. The original
idea to have a native xml database was that
On Wed, 22 Oct 2003, Andrew Dunstan wrote:
But why put yourself to such bother? I have never found a good reason to
do this sort of thing.
I think there is a huge potential for XML databases once there are good
ones and people start using them more extensively.
But for having real fast xml
Do this:
1. Create a new type called 'xml', based on text.
2. The xmlin function for that type will validate what you are
entering is XML
3. Create new functions to implement XPath, SAX, etc. on the xml type.
4. Extend the contrib/ltree gist-based tree indexing scheme to work on
xml and
In the last exciting episode, [EMAIL PROTECTED] (Gregor Zeitlinger) wrote:
On Wed, 22 Oct 2003, Andrew Dunstan wrote:
But why put yourself to such bother? I have never found a good reason to
do this sort of thing.
I think there is a huge potential for XML databases once there are good
ones
Gregor,
Thanks for your advice, but that's not what I had in mind. The original
idea to have a native xml database was that it doesn't work too well in a
relational database.
I was just wondering whether I have to reinvent the wheel of database
technology when it comes to transaction
Hi,
I'm developing a native XML database (C++) (which is supposed to become
open source one day) and I'm wondering whether I could use GiST for its
indexes. Is GiST still alive?
Also, I'm looking for a database that I could use for my XML database.
Right now, I'm using a custom IO layer. Right