Re: [GENERAL] Controlling maximal on-disk size of a table

2004-11-17 Thread David Helgason
What you're looking for sounds like an extended (destructive) version
of what autovacuum does.

So you might try to look at the statistics tables the way autovacuum does.
I don't know exactly how it does that, but it seems that this way you could
incrementally maintain approximate information about what is happening in
the table.
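
For example (just a sketch; it assumes the statistics collector is enabled and
uses the 7.4-era views, so exact column names may differ on your version):

-- rough activity counts, similar to what autovacuum watches
SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
WHERE relname = 'log';

-- on-disk size in 8 KB pages, as of the last VACUUM/ANALYZE
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'log';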

Good luck,
d.
--
David Helgason,
Business Development et al.,
Over the Edge I/S (http://otee.dk)
Direct line +45 2620 0663
Main line +45 3264 5049
On 16. nov 2004, at 13:21, Nils Rennebarth wrote:
I have a table that is essentially a log where new entries are 
streaming in continually and from time to time I throw old entries 
away to
keep the table from growing.

I understand that in addition to issuing a
  DELETE FROM log WHERE date < xxx
I also need to issue a
  VACUUM log
so that new entries will reuse the space of deleted entries.
Now I want to reserve a certain amount of disk storage to hold the log 
table. So I first let the table (plus its index, its toast table and 
toast index)  grow until it is about to reach the maximum size. Then a 
daemon continually deletes old entries and vacuums the table so the 
on-disk usage stays more or less constant from now on, at least this 
is the idea.

Of course I would like to keep as much history as possible, given the 
available space. Also the log may sometimes be quiet and sometimes 
quite busy, also the size of the text entries may vary quite a bit.

Now to make a good guess about when to issue the next delete, I need 
to estimate how much of the on-disk usage is accounted for by deleted 
entries.

I can of course count the number of entries, estimate the bytes needed
for storage by averaging the length of the text column and adding the
size of the fixed columns, and compare that to the on-disk size to
conclude how much space is still available. As for the index, I assume
it has a fixed size per row.
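
Something along these lines, say (a sketch only; the 'msg' column name and the
40-byte fixed overhead per row are invented for the example):

-- rough size of the live rows
SELECT count(*) * (40 + avg(length(msg))) AS live_bytes FROM log;

-- compare against the on-disk size recorded at the last VACUUM
SELECT relpages * 8192 AS disk_bytes FROM pg_class WHERE relname = 'log';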

But these queries are expensive because the log may easily contain 
millions of entries with an on disk size in the range of a few GB, and 
must be repeated quite often to prevent sudden bursts of new entries 
from overflowing the log.

Is there a better way to get at the current free space inside of a 
table/index?





Re: [GENERAL] Transaction rollback - newbie

2004-11-09 Thread David Helgason
Anytime data changes in Postgres, the old rows are still on the disk.
This is true regardless of whether the transaction commits or rolls back.

Read in the docs about vacuuming, which is a process that cleans this 
up.
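
A quick illustration (the table name is made up):

BEGIN;
UPDATE accounts SET balance = 0;   -- writes new row versions
ROLLBACK;                          -- the aborted versions stay on disk as dead rows
VACUUM VERBOSE accounts;           -- reports and reclaims those dead row versions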

Regards,
d.
--
David Helgason,
Business Development et al.,
Over the Edge I/S (http://otee.dk)
Direct line +45 2620 0663
Main line +45 3264 5049
On 10. nov 2004, at 03:38, Doug McNaught wrote:
A. Mous [EMAIL PROTECTED] writes:
Hi,
I've got clients connected to pgsql via ODBC.  If they lose their 
connection
abruptly, all un-committed transactions are automatically rolled-back 
(I'm
assuming) but is there anything left behind that needs to be cleaned 
up on
the server side with regards to the uncommitted transaction(s)?
No.  When the connection goes away, the backend will log an error,
roll back the transaction and exit cleanly.
-Doug




Re: [GENERAL] PQSQL - binary data question

2004-10-31 Thread David Helgason
On 29. okt 2004, at 14:09, Bastian Voigt wrote:
I just found this old posting on google. Now my question is how can I 
read an
integer value from the PGresult using the binary format? Can someone 
plz
gimme a code example? (please mail to me, because I have not 
subscribed to
the list)
Easy peasy:
/* (in C++; the C equivalent would actually be simpler) */

// To submit the query
bool PgConn::SendPrepared(const string& name, const vector<const char*>& values,
                          const vector<int>& lengths, const vector<int>& isBinary) {
	if (values.size() != lengths.size() || values.size() != isBinary.size())
		return Error("PgConn::SendPrepared: All parameter arrays must have same size");

//	for (int i = 0; i != values.size(); i++)
//		printf("Query parameter %d, length %d, binary %d: '%s'\n",
//		       i, lengths[i], (int)isBinary[i], values[i]);

	if (! PQsendQueryPrepared(m_Conn, name.c_str(), values.size(),
	                          &values.front(), &lengths.front(), &isBinary.front(),
	                          1 /* want binary result */))
		return Error();

	return Success();
}

/* ... then after reading the PGresult */
static int NetworkIntFromBuffer(const char* buff) {
	// Make a network-byte-ordered integer from the fetched data
	const int* network = reinterpret_cast<const int*>(buff);
	// Convert to host (local) byte order and return
	int host = ntohl(*network);
	return host;
}

int PgColumn::GetInt(int row) {
	if (IsNull(row) || row >= Rows() || GetLength(row) != 4)
		return 0;
	return NetworkIntFromBuffer(PQgetvalue(m_Res, row, m_Col));
}
Thanks a bunch! Now here's the old posting:
On Monday 27 October 2003 09:15, Tomasz Myrta wrote:
On 2003-10-27 00:08, creid wrote:
Problem:  Assigning a COUNT(*) result into an integer variable in my 
C
program consistently fails except when I assign the same result to a 
char
variable.  I can only assume that the internal data type the COUNT
function uses is integer.

Can anyone help put me in the proper mindset so I may deal with this
seemingly simple issue?
I need the integer result to help me satisfy a dynamic memory
requirement... the COUNT(*) result will tell me how many rows of data I
need to malloc, and I cannot perform a math operation on a char variable.
All libpq results are strings.
some_int_value=atoi(PQgetvalue(...))
Not true anymore with protocol v3, which added the binary format. Text 
format
is still the default.

Anyway, why do you need count(*)? When you retrieve your rows, you can
always check how many there are using PQntuples(...) and then malloc your
memory tables.

Regards,
Tomasz Myrta

--
David Helgason,
Business Development et al.,
Over the Edge I/S (http://otee.dk)
Direct line +45 2620 0663
Main line +45 3264 5049


Re: [GENERAL] table size/record limit

2004-10-21 Thread David Helgason
On 21. okt 2004, at 01:30, Dennis Gearon wrote:
I am designing something that may be the size of yahoo, google, ebay, 
etc.
Grrr. Geek wet-dream.
Just ONE many to many table could possibly have the following 
characteristics:

   3,600,000,000 records
   each record is 9 fields of INT4/DATE
I don't do this myself (my data is only 3 gig, and most of that is in 
blobs), but people have repeatedly reported such sizes on this list.

Check
http://archives.postgresql.org/pgsql-admin/2001-01/msg00188.php
... but the best you can do is just to try it out. With a few commands
in the 'psql' query tool you can easily populate a ridiculously large
database (insert into foo select * from foo a few times).

In a few hours you'll have a feel for it.
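
Something like this, for instance (the column layout is invented to roughly
match the 9 INT4/DATE fields mentioned above):

CREATE TABLE foo (a int4, b int4, c int4, d int4, e int4, f int4, g int4, h int4, dt date);
INSERT INTO foo VALUES (1, 2, 3, 4, 5, 6, 7, 8, current_date);
-- each run doubles the row count; around 32 runs gets past 3.6 billion rows
INSERT INTO foo SELECT * FROM foo;
SELECT count(*) FROM foo;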
Other tables will have about 5 million records of about the same size.
There are lots of scenarios here to lessen this.
What you'll have to worry about most is the access pattern, and update 
frequency.

There's a lot of info out there. You may need any of the following:
  clustering, the 'slony' project seems to be popular around here.
  concurrency of updating
  connection pooling, maybe via Apache or some java-thingey
  securing yourself from hardware errors
This list is a goldmine of discussions. Search the archives for 
discussions and pointers. Search interfaces at

http://archives.postgresql.org/pgsql-general/
http://archives.postgresql.org/pgsql-admin/
 or download the list archive mbox files into your mail-program and 
use that (which is what I do).

d.
--
David Helgason,
Business Development et al.,
Over the Edge I/S (http://otee.dk)
Direct line +45 2620 0663
Main line +45 3264 5049



Re: [GENERAL] Undefined symbols: _poll

2004-10-01 Thread David Helgason
On 30. sep 2004, at 21:35, Joe Lester wrote:
I've just moved my Postgres client project from Mac 10.2.8 to 10.3.5.
It's an Objective-C program that links to libpq.a (the header file is 
libpq-fe.h). However, when I try to compile now I get the following 
linking error:

ld: Undefined symbols: _poll
Does this sound familiar to anyone? My guess is that _poll is 
referenced somewhere in libpq.a. Anyone have any ideas on how to get 
the linking happening on Mac OS 10.3.5?
We had the same problem. In our case the trick was to compile libpq on 
OS X 10.3. The poll function was added in 10.3 (it's based on the 
select function, but at least it exists).

Reading the description of your problem it sounds different.
d.
--
David Helgason,
Business Development et al.,
Over the Edge I/S (http://otee.dk)
Direct line +45 2620 0663
Main line +45 3264 5049


Re: Null comparisons (was Re: [GENERAL] checksum)

2004-09-27 Thread David Helgason
On 27. sep 2004, at 22:08, Dean Gibson (DB Administrator) wrote:
Greg Stark wrote on 2004-09-27 08:17:
Stephan Szabo [EMAIL PROTECTED] writes:
 On Sun, 26 Sep 2004 20:16:52 +0200, David Helgason [EMAIL PROTECTED] 
wrote:
 On a similar note, I've found myself wanting an extended '=' 
operator
 meaning
 (a = b or (a is null and b is null))

 The original does appear to be equivalent to not(a is distinct 
from b),
 although I'm not sure that's necessarily easier to use than the 
above.

I often do things like coalesce(a,0) = coalesce(b,0).
(Or whatever value you know won't appear)
Even simpler:  COALESCE( a = b, a IS NULL AND b IS NULL )
I'm not quite sure what is being accomplished here... My original
expression wasn't that bad, just clunky. I'd prefer a === b or (a
samevalue b), but the above just complicates matters. Also, a 'set'
command outside the expression goes completely against the idea that
certain fields have 'null' as a legal, comparable value while others
do not.
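
For what it's worth, a quick way to eyeball the equivalence of the variants
above (throwaway table, made-up values):

CREATE TEMP TABLE t (a int, b int);
INSERT INTO t VALUES (1, 1);
INSERT INTO t VALUES (1, 2);
INSERT INTO t VALUES (NULL, 1);
INSERT INTO t VALUES (NULL, NULL);

SELECT a, b,
       (a = b OR (a IS NULL AND b IS NULL))      AS spelled_out,
       NOT (a IS DISTINCT FROM b)                AS via_is_distinct,
       COALESCE(a = b, a IS NULL AND b IS NULL)  AS via_coalesce
FROM t;

All three columns agree on every row.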

Anyway, idle speculation :)
d.


Re: [GENERAL] nonblocking libpq large object access?

2004-02-15 Thread David Helgason
On 15. feb 2004, at 18:18, Tom Lane wrote:
A workaround you could think about is invoking the LO functions via
ordinary SELECT commands, ignoring libpq's LO API altogether.  This
would have been rather painful in pre-7.4 releases since you'd have
to be willing to deal with quoting and dequoting bytea values, but
as of 7.4 you could use PQsendQueryParams() and specify binary format
for the bytea inputs and results.
I think I'll go down that road provided it's not a performance killer.
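
(For reference: the lo_* functions are also exposed as plain SQL-callable
functions on the server, which is what makes that road possible. A rough
sketch of such a session; the large object's oid 12345, the descriptor value 0,
and the INV_READ constant 262144 are all just illustrative assumptions:)

BEGIN;
SELECT lo_open(12345, 262144);   -- open an existing large object for reading
SELECT loread(0, 8192);          -- read up to 8192 bytes; returns a bytea
SELECT lo_close(0);
COMMIT;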

I'll also be considering using a custom perl-socket-server system I
implemented for kicks (it's put up at
http://uti.is/david/320_maintserver_procs.sql, BSD license). It does
gzip of the stream for free, which is a win.

d.



[GENERAL] nonblocking libpq large object access?

2004-02-14 Thread David Helgason
I was just wondering whether this was either:
- supported, or
- doable, either
  - using the normal poll-read-done? loop, or
  - with a settable timeout
It would be awfully nice.

David Helgason,
Over the Edge Entertainments


Re: [GENERAL] newbie question on database structure

2004-01-19 Thread David Helgason
On 15. jan 2004, at 18:51, JustSomeGuy wrote:

I want to design a data base that can hold the following structure

struct {
   unsigned short a;
   unsigned short b;
   unsigned int len;
   unsigned char *data;
} myObject;
a and b describe the type of object, len is the number of bytes that
are required to hold the object.
and of course data points to the object
Of course the size of the objects is not a constant so this is where I 
get confused...
How do I define a database to hold these in postgres?
You want a data-type which can hold a variable amount of data.

create table myObject (
a integer,
b integer,
len integer,
data bytea
);
Now, I assume that your unsigned short is a 2-byte integer. However 
postgresql doesn't have an unsigned short (afaik), but it has a signed 
version. Unless you're going to be storing *a lot* of these records, 
you'll probably be fine with just using a 4-byte integer. You might of 
course consider dropping the unsigned, or doing your own conversion 
(to/from signed).
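
If the unsigned-short range matters, CHECK constraints can at least catch
out-of-range values (my own suggestion, not something from the original
question):

ALTER TABLE myObject ADD CONSTRAINT a_is_ushort CHECK (a >= 0 AND a <= 65535);
ALTER TABLE myObject ADD CONSTRAINT b_is_ushort CHECK (b >= 0 AND b <= 65535);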

The bytea (byte-array) type will store up to 1 or 2 gigabytes of data
(I never remember which one). The only caveat is that when you are
inserting you may have to escape the null character, backslash and
single-quote. See the docs for how
(http://www.postgresql.org/docs/current/static/datatype-binary.html).
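
For example (pre-8.0 escape syntax; the byte values are just an illustration):

-- a 5-byte value: 'a', 'b', a zero byte, a single-quote byte, 'd'
INSERT INTO myObject (a, b, len, data) VALUES (1, 2, 5, 'ab\\000\\047d');
SELECT length(data) FROM myObject;   -- reports 5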

If you are using libpq (the C interface), it has functions that will 
help you... check the docs for libpq for that too.

If you are actually going to store multi-megabyte data buffers in 
there, there are considerations of memory allocation (which may get 
pretty extreme if you try to transfer huge buffers at once). Consider 
using the Large Object interface instead.

Hope this helps.

David Helgason
Over the Edge Entertainments




Re: [GENERAL] YAGT (yet another GUID thread)

2004-01-16 Thread David Helgason
On 16. jan 2004, at 12:18, David Garamond wrote:

David Helgason wrote:
I'm switching right away. The notation doesn't really do anything for 
me, but that's fine. I've been using bit(128), but always suspected 
that of being unoptimal (for no particular reason).
I think bit(128) is quite efficient (OCTET_LENGTH() function shows me 
it's using 16 bytes).
Since I'm storing several big piles of data for each GUID, it's not
really an issue whether the storage is 16, 20, 22, 24, or 26 bytes, but
thanks for the extensive guide. I've not gone over to using a GUID as
PK+FK for the tables, and even if that should become interesting (for
uniqueness across several databases, for example), I would prefer a dual
PK of (host-id, serial), where host-ids would be preassigned bit(8)
values or some such.

I was mostly wondering about index efficiency and such. A bit of 
testing confirms that this seems to be just fine. Not that I'm 
surprised.
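
The kind of test I mean, roughly (table and data are made up):

CREATE TABLE guids (id bit(128) PRIMARY KEY, payload text);
-- ...populate with a few hundred thousand generated ids, then:
SELECT octet_length(id) FROM guids LIMIT 1;   -- 16 bytes per value, as noted above
EXPLAIN ANALYZE
  SELECT payload FROM guids WHERE id = (SELECT id FROM guids LIMIT 1);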

4. Ease of incremental searching. Suppose we're creating a GUI app to
let the user type in an item by its ID. VARCHAR(22) is a winner here since
it allows users to type in normal characters on the keyboard and still
lets Pg use an index for searching using WHERE col LIKE '...%'.

However, most sane database designs would use another unique code for
most entities that need to be typed in. 128 bits (22 characters as
base64) is just too long anyway.
In my case, only applications ever specify the GUIDs, so this is a 
non-issue.

5. The ease of migrating to future real GUID datatype. I think using 
INET/CIDR will be easiest, as I can just use some simple combination 
of builtin Pg string function. But this is a very minor issue since if 
we're using a real GUID in the future, we most probably can't use 
our old GUID anymore, due to different creation algorithm.
I'm already using 'real' GUIDs, which in my case means that the
database never generates them (since I don't have a
generate_real_guid() function in the database, and don't need one).

Neither GUID project on gborg (mentioned in another thread) seems to be
Mac OS X compatible, which is my current platform (but I want to stay
more-or-less free of platform dependence just yet).

However, using INET/CIDR prevents me from using LIKE or ~. So I guess it's
back to BYTEA for me.
Seems useless to me, since the GUIDs are practically random, so LIKEness
has no relevance.

So in short, for GUID I now tend to use BYTEA or INET/CIDR. Storing as 
base192/base64 feels a little wasteful for me, since I can use 
ENCODE(...) to display binary data as base64 anyway. I find BIT(n) 
awkward to work with/not properly supported in most languages.
I think I'll be staying with BIT(128) here, since none of the others
really makes anything easier. The INET type might have been a choice, but
it seems to have too much 'magic' in its output routines for me to feel
comfortable around it.

However, the client app uses a hex representation internally (don't tell
me it's silly, it's already coded, due to intricacies of our project
management), and my PL/pgSQL hex2bit() and bit2hex() functions are damn
slow, so I'll be converting them to C any day not-so-soon (tried
PL/Perl too, but even its simpler implementation was 5x slower still!?)

d.

David Helgason
Over the Edge Entertainments


Re: [GENERAL] I got bit by that darn GEQO setting again...

2004-01-16 Thread David Helgason
Mike Mascari [EMAIL PROTECTED] writes:
But just as a quick notice to those upgrading from 7.3 to 7.4 with 
fully
normalized databases requiring > 11 joins, the GEQO setting can be a
killer...
Uh ... dare I ask whether you think it's too high?  Or too low?
Just a data point: With a fresh 7.4 and

geqo=on
geqo_threshold=11
geqo_generations=0
geqo_effort=1
geqo_pool_size=0
geqo_selection_bias=2
a 12-table join was taking a whole second to plan until I manually
connected the tables (now it's on the order of a few ms).

I figure geqo had kicked in at that point.
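
If "manually connecting the tables" means rewriting the comma-separated FROM
list as explicit JOIN clauses (my reading, not spelled out above), that
largely pins the join order down for the planner, subject to
join_collapse_limit in 7.4. Schematically, with made-up tables:

-- the planner (or GEQO) must search the join order across all the relations:
SELECT * FROM a, b, c, d
WHERE b.a_id = a.id AND c.b_id = b.id AND d.c_id = c.id;

-- explicit join syntax mostly dictates the order, so planning stays cheap:
SELECT *
FROM a
  JOIN b ON b.a_id = a.id
  JOIN c ON c.b_id = b.id
  JOIN d ON d.c_id = c.id;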



d.



[GENERAL] Bytea/Large Objects/external files best practices

2004-01-15 Thread David Helgason
I think those best practices threads are a treat to follow (might even 
consider archiving some of them in a sort of best-practices faq), so 
here's one more.

In coding a game asset server I want to keep a large number of file
revisions of varying sizes (1Kb-50Mb) inside the database.

Naturally I want to avoid having to allocate whole buffers of 50Mb too 
often.

I am also throwing around rsync-style deltas (with librsync) that I
run through functions to generate new objects. The librsync functions
have an elegant buffer-oriented approach to reading & writing data, so I
need a way to read through my objects inside the database, again
without having them allocated in RAM too often.

I tried browsing around, but couldn't find any good discussion of this 
(or just plain good advice), but of course that may be due to 
google-impairment on my part.

I see several options:

1) Bytea column. Seems the cleanest solution, but
	*) it includes lots of escaping-unescaping with associated allocation 
of huge buffers.
	*) I seem to remember reading in a discussion in the [hackers] list 
that the TOAST column substring operation had been optimized so that 
partial fetches might be viable.
	*) Incremental inserts (using a series of update file set data = data 
|| moredata where id=666) probably won't scale nicely.
	*) However it's fully ACID,
	*) and recommended by the docs, if that's a good reason.
	*) TOAST compression. Can't quite figure out if this happens, but in
many cases it would save a lot of disk-space & -access.
	*) passing them as arguments to the librsync functions may be a hassle.
	open questions: I wonder what the timeframe is for this to be viable. It
seems that two things are being worked on that would fix this: the
wire protocol, to avoid having to escape all communications, and methods
for fast incremental update. Both seem to be at least a year away (Pg
7.5, and someone even suggested jumping to 8.0 with the wire-protocol
changes).

2) Bytea column with many rows per file.
	*) Also some escaping-unescaping, but with small buffers
	*) Simple and fast to do partial inserts/selects.
	*) Full ACID (and with a couple of triggers, also easy to keep 
completely correct)
	*) Butt ugly solution, but what we used to do with ye olde versions of 
MSQL.
	*) Compression, but probably lesser space savings with smaller chunks.
	open questions: What would the chunk-size be? I would imagine 5-20Kb 
per row.

3) Large Objects.
	*) Requires special interface to talk to, not that its so bad.
	*) Need a trigger to garbage collect them (simple to do though)
	*) Fully ACID.
	*) I've actually implemented the librsync functions using this
already, and it seems to work nicely with large files.

4) External files. I really don't like the idea of this.
	*) As high performance as anything.
	*) Need an alternate mechanism to access the server, with security 
implications etc.
	*) Complications trying to maintain more-or-less full ACIDity
	open questions: What tricks to use to keep this ACID-ish? I won't have 
any updating of content once it's committed, so just keeping the files 
read-only most of the time would do most of the trick. Also, I imagine

I'm going with 3) for now, but even tempted by 2) when looking this 
over.
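
For reference, option 2) might look roughly like this (names and the 8Kb
chunk size are made up):

CREATE TABLE file_chunk (
    file_id  integer NOT NULL,      -- references the file/revision row
    chunk_no integer NOT NULL,      -- 0, 1, 2, ... in file order
    data     bytea   NOT NULL,      -- at most ~8Kb per row
    PRIMARY KEY (file_id, chunk_no)
);

-- append one more chunk to a file
INSERT INTO file_chunk (file_id, chunk_no, data) VALUES (42, 17, '...escaped bytes...');

-- fetch a byte range without pulling the whole file into RAM at once
SELECT data FROM file_chunk
WHERE file_id = 42 AND chunk_no BETWEEN 10 AND 19
ORDER BY chunk_no;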

At one point I was playing around with 4), and made a plperlu server to 
run inside the database to ease this. The security was implemented by 
the client receiving a ticket from the database when requesting to 
create a file, and then having to send this ticket before being allowed 
to send any data to the perl-server. Not an un-clever system, I think, 
that I'd be happy to share.

I wonder what other people are doing and if anyone has other arguments.



David Helgason,
Over the Edge Entertainments


[GENERAL] array faults?

2004-01-09 Thread David Helgason
Using arrays I came across some strangenesses. Probably this is well 
known, but I couldn't find mentions of it.

I am under the impression that this should be an error. Am I wrong?

EXAMPLE 1:
maint=# select ('{{1,2,3},{4,5,6}}'::int[])[1][1:3];
   int4
---
 {{1,2,3}}
(1 row)
Shouldn't this have been just {1,2,3} ?

EXAMPLE 2:
maint=# select ('{{1,2,3},{4,5,6}}'::int[])[2][1:3];
   int4
---
 {{1,2,3},{4,5,6}}
(1 row)
Shouldn't this have been just {4,5,6} (or maybe {{4,5,6}} accepting the 
result of example 1) ?

I am trying to do something like:
	maint=# select 1 = any ('{{1,2,3},{4,5,6}}'::int[])[2][1:3];
But that is not working, for obvious reasons. This makes arrays pretty
broken for me.

Am I missing anything obvious?

Regards,

David Helgason
Over the Edge Entertainments


[GENERAL] SPI question (or not): trying to read from Large Objects from within a function

2004-01-06 Thread David Helgason
What:
I'm having trouble finding out how to find the current PGconn
connection inside a C function. Looking through the documentation
didn't turn this up. Could anyone suggest where to look? I didn't even
see anything similar in the SPI_* documentation. Perhaps I am
totally misled here?

Why:
I am writing a wrapper around librsync, allowing differential updating
of large amounts of data (librsync wraps the clever algorithm of
rsync).

The first function I'm wrapping is the one which generates a signature
(a hash of each block of data, with some specified block-size) from a
LO. Its signature would be:

	create function rsync_signature(oid /* of a Large Object */) returns bytea

But I can't figure out how to get the current PGconn to be able to run 
the lo_* functions.

On another note, would it be possible to avoid using Large Objects and
use TOAST columns instead? I.e. is it possible to quickly read/write
partial toast columns? (I know it's not possible for clients, but on the
server side?)

There may be more questions later, but I'll try to pay back by
submitting the final implementation to contrib/ (if anyone is
interested). It'll allow for really fast incremental updates of
columns, which I'll use to make storing of huge blobs less of a pain
(although it depends on the client also speaking rsync-ese, but that'll
be included with the package).

Regards,

d.
--
David Helgason
Over the Edge Entertainments




Re: [GENERAL] SPI question (or not): trying to read from Large Objects from within a function

2004-01-06 Thread David Helgason
Thank you very much,

I figured I needed to open my own using SPI_connect(). I had assumed
that there was something like a
'the-connection-this-function-is-being-run-through'.

Now I'm having problems with

size_t inBufSize = 8192;
char* inBuffer = (char*)palloc(inBufSize);
int bytes_read = DatumGetInt32(DirectFunctionCall3(loread, Int32GetDatum(fd),
                               CStringGetDatum(inBuffer), UInt32GetDatum(inBufSize)));

which returns an extremely large number in bytes_read (consistently 
46235672), regardless of the contents of inBufSize.

I tried using lo_lseek(fd, 0, SEEK_END) on this fd already, which
correctly returned the size of the Large Object, so it's not a question
of an invalid descriptor. Also, that seek didn't affect the result at
all. I guess it's wrong usage of the DatumGet*() / *GetDatum()
functions, but I can't see where.

Any suggestions?

d.

On 7. jan 2004, at 05:40, Tom Lane wrote:

David Helgason [EMAIL PROTECTED] writes:
I'm having trouble finding out how to find the current PGconn
connection inside a C function.
What makes you think that *the* current PGconn is a valid concept?
libpq has always supported multiple active connections.
			regards, tom lane






Re: [GENERAL] SPI question (or not): trying to read from Large Objects from within a function

2004-01-06 Thread David Helgason
Sorry for spamming... I'm getting the hang of this, and figured this
one out myself :)

These internal functions (loread/lowrite) have quite different
signatures from their C equivalents (as opposed to lo_lseek). Found out
from the sources that I was using them very incorrectly. But I discovered
a lo_read with a signature different from the one documented in the Large
Object client interface: one which doesn't take a connection parameter
at all. This really simplifies my code, which can now be:

size_t inBufSize = 8192;
char* inBuffer = (char*)palloc(inBufSize);
/* previously (wrong usage of the Datum/fmgr interface):
int bytes_read = DatumGetInt32(DirectFunctionCall3(loread, Int32GetDatum(fd),
                               CStringGetDatum(inBuffer), UInt32GetDatum(inBufSize)));
*/
int bytes_read = lo_read(fd, inBuffer, inBufSize);

and all is well... just too bad there aren't similarly simple versions 
of the other lo_{lseek,open,...}.

Thanks for the audience, and keep up the good work!

d.

On 7. jan 2004, at 06:22, David Helgason wrote:

Thank you very much,

I figured I needed to open my own using SPI_connect(). I had assumed 
that there was sth like a 
the-connection-this-functions-is-begin-run-through.

Now I'm having problems with

which returns an extremely large number in bytes_read (consistently 
46235672), regardless of the contents of inBufSize.

I tried using lo_lseek(fd, 0, SEEK_END) on this fd already, which
correctly returned the size of the Large Object, so it's not a
question of an invalid descriptor. Also, that seek didn't affect the
result at all. I guess it's wrong usage of the DatumGet*() /
*GetDatum() functions, but I can't see where.

Any suggestions?

d.

On 7. jan 2004, at 05:40, Tom Lane wrote:

David Helgason [EMAIL PROTECTED] writes:
I'm having trouble finding out how to find the current PGconn
connection inside a C function.
What makes you think that *the* current PGconn is a valid concept?
libpq has always supported multiple active connections.
			regards, tom lane






