[SQL] how to store more than 3 MB of character data in Postgres Varchar field

2005-06-06 Thread Vadivel Subramaniam




Hi,

We have a requirement to store around 3 MB of data in a Postgres
database.

We had gone through the postgres website
http://www.postgresql.org/docs/7.4/interactive/datatype-character.html#DATATYPE-CHARACTER-SPECIAL-TABLE


The above link says "varchar" can store up to 1 GB of data. But when I try
to insert using the ODBC APIs, the varchar field takes only 65484
characters. I could insert more characters using the INSERT SQL command,
but even in that case, when we fetch the data it reads only 65540
characters; the remaining data is not read from the varchar field.

We have tried 2 ways:

1. The table is created without any limit specified for varchar:
   create table details (name varchar, data varchar);

2. create table utilntmlscripts (name character varying, data character
   varying(10485770));
   ERROR:  length for type 'varchar' cannot exceed 10485760
   It does not allow more than 10 MB of size during table creation.


Does anything need to be configured to make the varchar field support up
to 1 GB? Even if it does not support 1 GB, can we store and retrieve at
least 10 MB of character data in a varchar field?

Please send the details of how this can be achieved.

Postgres version 7.4.2, accessed using the UnixODBC driver from a C++
application.

thanks,
Vadivel.





Re: [SQL] How do write a query...

2005-06-06 Thread KÖPFERL Robert
You could have a look at the OFFSET and LIMIT modifiers.

An untested example:

select ((select max("AValue") from mytable group by "Num") - "AValue") as
difference from mytable order by "AValue" desc offset 1

This says: give me an inversely ordered AValue list, omitting the first
(biggest) entry, and subtract each value from the biggest value of its group.

|-Original Message-
|From: Alain Reymond [mailto:[EMAIL PROTECTED]
|Sent: Wednesday, 01 June 2005 18:01
|To: pgsql-sql@postgresql.org
|Subject: [SQL] How do write a query...
|
|
|Hello,
|
|I have the following problem :
|
|I have a table like
|Id   Num   Date         AValue
|1    10    01/01/2005   50
|2    10    31/05/2005   60
|3    25    02/02/2005   55
|4    25    15/03/2005   43
|5    25    28/05/2005   62
|etc..
|
|Id is unique, Num is an identification number with duplicates possible,
|date is a ... date and Avalue... a value!
|
|If we have
|Id    Num    Date    AValue
|Id1   Num1   Date1   AValue1
|Id2   Num1   Date2   AValue2
|
|The table is ordered on Num+Date.
|What I would like to calculate is (AValue2-AValue1) for a given Num
|(here num1).
|
|In this case, I would have to calculate
|60-50 for Num 10
|and
|43-55, 62-43 for Num 25.
|
|Do you have any idea if it can be done simply with a request...
|
|I thank you
|
|Regards.
|
|Alain Reymond



Re: [SQL] Returning a Cross Tab record set from a function

2005-06-06 Thread KÖPFERL Robert


|-Original Message-
|From: Marc Wrubleski [mailto:[EMAIL PROTECTED]
|Sent: Wednesday, 01 June 2005 16:15
|To: pgsql-sql@postgresql.org
|Subject: [SQL] Returning a Cross Tab record set from a function
|
[...]
|
|It seems I can do this from any higher level language, but it drives me
|crazy that I can't perform this operation as a function inside of
|Postgres... 

Yes, it seems very much like this can't be done in plain SQL; it needs a
higher-level language.
Actually, Postgres provides such higher-level languages. Try installing
PL/Python, PL/Perl, or PL/Tcl. From there you can query all the metadata of
the tables in question, so that an adequate SQL string can be generated.

If you build from source, you will find these languages in the source
distribution.
|
|Thanks for any thoughts you might have...
|
|-- 
|Marc Wrubleski 

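As a hedged, untested aside: for the fixed-column case, the contrib/tablefunc
module that ships with Postgres provides a crosstab() function, so a
higher-level language is only needed when the output columns themselves are
dynamic. All table and column names below are made up:

SELECT *
FROM crosstab('SELECT student, subject, score FROM grades ORDER BY 1,2')
  AS ct(student text, maths int, physics int);

The column definition list after AS must still be spelled out by hand, which
is exactly the part that dynamic SQL in PL/Python or PL/Perl can generate.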


Re: [SQL] how to store more than 3 MB of character data in Postgres

2005-06-06 Thread Richard Huxton

Vadivel Subramaniam wrote:


Hi,

We have a requirement to store around 3 MB of data
in a Postgres database.

We had gone through the postgres website 
http://www.postgresql.org/docs/7.4/interactive/datatype-character.html#DATATYPE-CHARACTER-SPECIAL-TABLE


The above link says "varchar" can store up to 1 GB of data.


I don't think it does; it says the "longest possible character string",
which covers type "text".


But when I try to insert using the ODBC APIs, the varchar field takes only
65484 characters. I could insert more characters using the INSERT SQL
command, but even in that case, when we fetch the data it reads only 65540
characters; the remaining data is not read from the varchar field.


So, your problem is with the ODBC interface rather than actually 
inserting the data.



We have tried 2 ways:

1. The table is created without any limit specified for varchar:
   create table details (name varchar, data varchar);

2. create table utilntmlscripts (name character varying, data character
   varying(10485770));
   ERROR:  length for type 'varchar' cannot exceed 10485760
   It does not allow more than 10 MB of size during table creation.


Does anything need to be configured to make the varchar field support up
to 1 GB? Even if it does not support 1 GB, can we store and retrieve at
least 10 MB of character data in a varchar field?


Check your ODBC settings and try type "text" instead of "varchar". I 
think the problem is with ODBC restrictions rather than anything else. 
You should be able to get type "text" to map to a "Memo" type or 
similar, which should hold more than 64k IIRC.


Failing that, try internal type "bytea" and large-object on the ODBC end.

--
  Richard Huxton
  Archonet Ltd
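A hedged sketch of switching the earlier example's varchar column to "text"
(untested; 7.4 has no ALTER COLUMN ... TYPE, so the column has to be rebuilt):

ALTER TABLE details ADD COLUMN data_text text;
UPDATE details SET data_text = data;
ALTER TABLE details DROP COLUMN data;
ALTER TABLE details RENAME COLUMN data_text TO data;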



Re: [SQL] how to store more than 3 MB of character data in Postgres

2005-06-06 Thread Richard Huxton

Vadivel Subramaniam wrote:


I assume it could not be a problem with ODBC. I am able to store 2.5 MB of
data into Oracle using the same ODBC APIs.


Well, it certainly isn't to do with PG itself:

$ cat bigtest.pl
#!/usr/bin/perl
print "CREATE TABLE foo (a int, b varchar);\n";
print "INSERT INTO foo VALUES (1, '".('a' x 500)."');\n";

$ perl bigtest.pl | psql -Urichardh richardh
CREATE TABLE
INSERT 3634376 1
$ psql -Urichardh richardh

richardh=> SELECT a,length(b) FROM foo;
 a | length
---+---------
 1 | 5000000
(1 row)



The only difference is that Oracle supports the CLOB datatype, which is
mapped to LONG VARCHAR in ODBC.



The input parameter during insert is bound using the ODBC API below:
  retStatus = SQLBindParameter(dStmtHandle, 1, SQL_PARAM_INPUT,
SQL_C_CHAR, SQL_LONGVARCHAR, dLen, 1, pData, 0,  &dLen);

The same API is used for Postgres. Here varchar/text in the Postgres
schema is mapped to LONGVARCHAR in ODBC. We cannot go for bytea etc.,
since the data is in character format.

I tried using "text" in the Postgres schema as well; the result is the same
(it only stores around 64K). Could you please provide an example of how
large character data (more than 1 MB) can be inserted into text/varchar
(in Postgres) using ODBC?


Hmm - looking back at your original email, I see you're using the 
UnixODBC driver. Might be worth trying the PG version:

 http://gborg.postgresql.org/project/psqlodbc/genpage.php?downloads

The first step, though, should be to check whether there are any
configuration settings that apply to longvarchar, because it sounds like
the ODBC driver should be mapping to that.


I can't help you with finding those settings, since I have only ever 
used the "official" PG driver on Windows.


--
  Richard Huxton
  Archonet Ltd



[SQL] What is faster?

2005-06-06 Thread ON.KG
Hi All!

What is faster - SELECTing data from one large table (200 000 - 300 000
records), or SELECTing from a few small tables (for example, 2 tables with
150 000 records each)?

Thank You




Re: [SQL] how to store more than 3 MB of character data in Postgres Varchar field

2005-06-06 Thread Tom Lane
Vadivel Subramaniam <[EMAIL PROTECTED]> writes:
> 2. create table utilntmlscripts (name character varying, data character
> varying(10485770));
>   ERROR:  length for type 'varchar' cannot exceed 10485760
>   It's not allowing more than 10 MB of size during table creation.

Use type "text", or equivalently varchar with no length specification.

The 10-MB limit on what you can write as a length spec is somewhat
arbitrary, but certainly an enforced length spec larger than that
would be a completely arbitrary number as well ...

regards, tom lane
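A minimal sketch of that advice (untested; names are from the original post):

CREATE TABLE details (
    name text,
    data text    -- no length spec: values can grow to the general ~1 GB limit
);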



Re: [SQL] What is faster?

2005-06-06 Thread Richard Huxton

ON.KG wrote:

Hi All!

What is faster - SELECTing data from one large table (200 000 - 300 000
records), or SELECTing from a few small tables (for example, 2 tables with
150 000 records each)?


It depends. Are you selecting all records? One record? A few records? If 
one or a few, do you have a suitable index on the table(s)? Is the table 
clustered? Expected to be cached in RAM?


Do you have a specific problem, or reason to believe you may encounter one?

--
  Richard Huxton
  Archonet Ltd



Re: [SQL] What is faster?

2005-06-06 Thread ON.KG
Hi All!

>> What is faster - SELECTing data from one large table (200 000 - 300 000
>> records), or SELECTing from a few small tables (for example, 2 tables with
>> 150 000 records each)?

RH> It depends. Are you selecting all records? One record? A few records? If
RH> one or a few, do you have a suitable index on the table(s)? Is the table
RH> clustered? Expected to be cached in RAM?

RH> Do you have a specific problem, or reason to believe you may encounter one?

It's a real problem.

For example, I have two large tables.
The structure of the tables is the same - each has two fields: id, ip.

Now I'm doing two selections, one from each table, in one transaction.
Each of them selects only one record,
with a selection clause like WHERE ip = 'xxx.xxx.xxx.xxx',
so it checks for the existence of the IP in each table.

The tables are clustered.

About caching in RAM - I'm a novice in PostgreSQL - how does it work?

Now I need to make this as fast as possible,
and I have an idea to just merge the two tables into one - will that help me?

Thank You




Re: [SQL] ARRAY() returning NULL instead of ARRAY[] resp. {}

2005-06-06 Thread Markus Bertheau ☭
On Tue, 24/05/2005 at 00:06 -0400, Tom Lane wrote:
> Joe Conway <[EMAIL PROTECTED]> writes:
> > Markus Bertheau wrote:
> >> why does SELECT ARRAY(SELECT 1 WHERE FALSE) return NULL instead of
> >> ARRAY[] resp. '{}'?
> 
> > Why would you expect an empty array instead of a NULL?
> 
> I think he's got a good point, actually.  We document the ARRAY-with-
> parens-around-a-SELECT syntax as
> 
>   The resulting one-dimensional array will have an element for
>   each row in the subquery result, with an element type matching
>   that of the subquery's output column.
> 
> To me, that implies that a subquery result of no rows generates a
> one-dimensional array of no elements, not a null array.

By analogy, array_upper('{}'::TEXT[], 1) should return 0 instead of
NULL.

Markus

-- 
Markus Bertheau ☭ <[EMAIL PROTECTED]>




Re: [SQL] ARRAY() returning NULL instead of ARRAY[] resp. {}

2005-06-06 Thread Tom Lane
Markus Bertheau ☭ <[EMAIL PROTECTED]> writes:
> By analogy, array_upper('{}'::TEXT[], 1) should return 0 instead of
> NULL.

No, that doesn't follow ... we've traditionally considered '{}' to
denote a zero-dimensional array.  A 1-D array of no elements is
'[1:0]={}', just as Joe shows ... or at least it would be except
for an overenthusiastic error check:

regression=# select '[1:0]={}' :: int[];
ERROR:  upper bound cannot be less than lower bound

I think this should be a legal boundary case.  In general, it should be
possible to form zero-size arrays of any number of dimensions.

regards, tom lane



Re: [SQL] ARRAY() returning NULL instead of ARRAY[] resp. {}

2005-06-06 Thread Joe Conway

Tom Lane wrote:

Markus Bertheau ☭ <[EMAIL PROTECTED]> writes:


By analogy, array_upper('{}'::TEXT[], 1) should return 0 instead of
NULL.


No, that doesn't follow ... we've traditionally considered '{}' to
denote a zero-dimensional array.  A 1-D array of no elements is
'[1:0]={}', just as Joe shows ... or at least it would be except
for an overenthusiastic error check:

regression=# select '[1:0]={}' :: int[];
ERROR:  upper bound cannot be less than lower bound

I think this should be a legal boundary case.  In general, it should be
possible to form zero-size arrays of any number of dimensions.



I've been playing with exactly this over the weekend. Of course, as 
usual, the devil is in the details. For instance, using the above 
notation, how would I specify a zero-element 1D array starting at a 
lower bound index of 0? The answer following the above pattern would be:


  select '[0:-1]={}'::int[];

You could not use '[0:0]={}'::int[], because that would be a one-element 
array. I propose the following instead:


regression=# select '[1:]={}' :: int[];
 int4
--
 {}
(1 row)

regression=# select array_dims('[1:]={}' :: int[]);
 array_dims

 [1:]
(1 row)

In other words, a missing upper bound indicates zero elements.

Now the next question: what does a 2D zero-element array look like? I
think this makes sense:


regression=# select '[1:2][1:]={{},{}}'::int[];
 int4
--
 {}
(1 row)

Except (I think) array_out() should probably output something closer to 
the input literal. Any thoughts on this?


Joe



Re: [SQL] What is faster?

2005-06-06 Thread Richard Huxton

ON.KG wrote:

Hi All!


What is faster - SELECTing data from one large table (200 000 - 300 000
records), or SELECTing from a few small tables (for example, 2 tables with
150 000 records each)?


RH> It depends. Are you selecting all records? One record? A few records? If
RH> one or a few, do you have a suitable index on the table(s)? Is the table
RH> clustered? Expected to be cached in RAM?

RH> Do you have a specific problem, or reason to believe you may encounter one?

It's a real problem.

For example, I have two large tables.
The structure of the tables is the same - each has two fields: id, ip.

Now I'm doing two selections, one from each table, in one transaction.
Each of them selects only one record,
with a selection clause like WHERE ip = 'xxx.xxx.xxx.xxx',
so it checks for the existence of the IP in each table.

The tables are clustered.


OK - so the tables aren't updated frequently, I assume. Do you have an 
index on "ip"?



About caching in RAM - I'm a novice in PostgreSQL - how does it work?


The operating-system will keep frequently used disk-blocks in memory. 
You don't have to do anything. Have you done any performance tuning in 
your postgresql.conf file? If not, try reading:

  http://www.powerpostgresql.com/PerfList


Now I need to make this as fast as possible,
and I have an idea to just merge the two tables into one - will that help me?


If they hold the same information, they probably shouldn't have been 
split in the first place.


--
  Richard Huxton
  Archonet Ltd
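If the tables do get merged, a hedged, untested sketch of what that could
look like (the names and the use of the inet type are assumptions):

CREATE TABLE ips (
    id   integer,
    ip   inet,
    type char(1)   -- distinguishes the two kinds of IPs
);
CREATE INDEX ips_ip_idx ON ips (ip);

-- one indexed lookup instead of two separate queries:
SELECT type FROM ips WHERE ip = '10.0.0.1';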



Re: [SQL] How do write a query...

2005-06-06 Thread Bruno Wolff III
On Wed, Jun 01, 2005 at 18:00:49 +0200,
  Alain Reymond <[EMAIL PROTECTED]> wrote:
> Hello,
> 
> I have the following problem :
> 
> I have a table like
> Id   Num   Date         AValue
> 1    10    01/01/2005   50
> 2    10    31/05/2005   60
> 3    25    02/02/2005   55
> 4    25    15/03/2005   43
> 5    25    28/05/2005   62
> etc..
> 
> Id is unique, Num is an identification number with duplicates possible,
> date is a ... date and Avalue... a value!
> 
> If we have
> Id    Num    Date    AValue
> Id1   Num1   Date1   AValue1
> Id2   Num1   Date2   AValue2
> 
> The table is ordered on Num+Date.
> What I would like to calculate is (AValue2-AValue1) for a given Num
> (here num1).
> 
> In this case, I would have to calculate
> 60-50 for Num 10
> and
> 43-55, 62-43 for Num 25.
> 
> Do you have any idea if it can be done simply with a request...

I would suggest using a different design. Probably the easiest is to
put the start and end dates and values in one record.

Otherwise you could write a function or have your application read entries
for a particular Num value ordered by date and treat them as pairs.
If any intervals overlap then there really isn't a fix.
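For the pairing approach, a hedged, untested sketch that should work on 7.4
(the table name "readings" is made up, and it assumes dates are unique per
Num):

SELECT cur."Num", cur."Date",
       cur."AValue" - prev."AValue" AS difference
FROM readings cur, readings prev
WHERE prev."Num" = cur."Num"
  AND prev."Date" = (SELECT max(p."Date") FROM readings p
                     WHERE p."Num" = cur."Num" AND p."Date" < cur."Date")
ORDER BY cur."Num", cur."Date";

For the sample data this yields 10 for Num 10, and -12 and 19 for Num 25.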




[SQL] SQL equivalent to nested loop

2005-06-06 Thread Markus Bertheau ☭
Hi,

I basically need the SQL equivalent of the following pseudo code:

BEGIN
    FOR v IN SELECT * FROM f(4, 'foo') LOOP
        FOR w IN SELECT * FROM f(v.id, 'bar') LOOP
            RETURN NEXT w;
        END LOOP;
    END LOOP;
    RETURN;

Is that possible in SQL?

Markus

-- 
Markus Bertheau ☭ <[EMAIL PROTECTED]>




Re: [SQL] SQL equivalent to nested loop

2005-06-06 Thread KÖPFERL Robert
This is basically a join, a cross join.

i.e.

select b.w from sometable a, sometable b where ...



|-Original Message-
|From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
|Sent: Monday, 06 June 2005 18:53
|To: pgsql-sql@postgresql.org
|Subject: [SQL] SQL equivalent to nested loop
|
|
|Hi,
|
|I basically need the SQL equivalent of the following pseudo code:
|
|BEGIN
|    FOR v IN SELECT * FROM f(4, 'foo') LOOP
|        FOR w IN SELECT * FROM f(v.id, 'bar') LOOP
|            RETURN NEXT w;
|        END LOOP;
|    END LOOP;
|    RETURN;
|
|Is that possible in SQL?
|
|Markus
|
|-- 
|Markus Bertheau ☭ <[EMAIL PROTECTED]>



Re: [SQL] SQL equivalent to nested loop

2005-06-06 Thread Markus Bertheau
On Mon, 06-06-2005 at 20:08 +0200, KÖPFERL Robert wrote:
> This is basically a join, a cross join.
> 
> i.e.
> 
> select b.w from sometable a, sometable b where ...

You'd hope so. The problem is that you don't have the tables first and then
specify the join condition afterwards; in specifying the tables (= calling
the function) you already have the join condition. I can't figure out the
right syntax. What are you proposing?

SELECT * from f(4, 'foo') as b, f(b.int, 'bar')?

oocms=# select * from object_get_list_of_reference_property(311,
'Themen') as b, object_get_list_of_reference_property(b.object_id,
'Objekte');
ERROR:  ?? ? ??
FROM ?? ? ? ?? ?? ? ?? ??? ?? ?? ???

I can't get an English error message at the moment :/, but that doesn't work.

Markus

-- 
Markus Bertheau <[EMAIL PROTECTED]>


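One way that should work on 7.4 is to push the nested loop into a
set-returning PL/pgSQL function - a hedged, untested sketch ("f_result"
stands for whatever composite type f() returns, and plpgsql must be
installed):

CREATE FUNCTION f_nested(integer, text, text) RETURNS SETOF f_result AS '
DECLARE
    v RECORD;
    w f_result;
BEGIN
    FOR v IN SELECT * FROM f($1, $2) LOOP
        FOR w IN SELECT * FROM f(v.id, $3) LOOP
            RETURN NEXT w;   -- emit every row of the inner call
        END LOOP;
    END LOOP;
    RETURN;
END;
' LANGUAGE plpgsql;

-- called as: SELECT * FROM f_nested(4, 'foo', 'bar');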


[SQL] preventing deletion of a row

2005-06-06 Thread Kenneth Gonsalves
Hi,
I have a table of the type:

id serial unique,
name varchar(50),
fixed boolean default FALSE

If the value of 'fixed' is TRUE, then the row cannot be deleted. How
do I enforce this condition?
-- 
regards
kg

http://www.livejournal.com/users/lawgon
tally ho! http://avsap.sourceforge.net
ಇಂಡ್ಲಿನಕ್ಸ வாழ்க!



[SQL] datestyle syntax

2005-06-06 Thread Kenneth Gonsalves
Hi,
in the query:
alter database dbname set datestyle='DMY','European';
what is the significance of the two parameters given to datestyle? I
assumed that one is input and the other is output, but if I put:
set datestyle='DMY','DMY'
I get an error saying 'conflicting datestyles'.
-- 
regards
kg

http://www.livejournal.com/users/lawgon
tally ho! http://avsap.sourceforge.net
ಇಂಡ್ಲಿನಕ್ಸ வாழ்க!
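For what it's worth, a hedged note (untested on 7.4): the first part of the
setting usually names an output format (ISO, SQL, Postgres, or German) and
the second the day/month/year field order, with 'European' acting as a
synonym for 'DMY'. For example:

alter database dbname set datestyle = 'SQL, DMY';  -- SQL-style output, day-first input
alter database dbname set datestyle = 'ISO, DMY';  -- ISO output, day-first input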



Re: [SQL] What is faster?

2005-06-06 Thread ON.KG
RH> ON.KG wrote:
>> Hi All!
>> 
>> What is faster - SELECTing data from one large table (200 000 - 300 000
>> records), or SELECTing from a few small tables (for example, 2 tables with
>> 150 000 records each)?
>> 
>> For example, I have two large tables.
>> The structure of the tables is the same - each has two fields: id, ip.
>> 
>> Now I'm doing two selections, one from each table, in one transaction.
>> Each of them selects only one record,
>> with a selection clause like WHERE ip = 'xxx.xxx.xxx.xxx',
>> so it checks for the existence of the IP in each table.
>> 
>> The tables are clustered.

RH> OK - so the tables aren't updated frequently, I assume. Do you have an
RH> index on "ip"?

Yes

>> About caching in RAM - I'm a novice in PostgreSQL - how does it work?

RH> The operating-system will keep frequently used disk-blocks in memory.
RH> You don't have to do anything. Have you done any performance tuning in
RH> your postgresql.conf file? If not, try reading:
RH>http://www.powerpostgresql.com/PerfList

The server administrator says he did.

>> Now I need to make this as fast as possible,
>> and I have an idea to just merge the two tables into one - will that help me?

RH> If they hold the same information, they probably shouldn't have been
RH> split in the first place.

The content of the tables is not exactly the same:
one holds one kind of IPs, the other a different kind.

And there is one more addition to the question: if I merge the tables, the
new table will get an additional field - `type` char - to determine the
type of IP.

Thank You




Re: [SQL] preventing deletion of a row

2005-06-06 Thread Nick Johnson

Kenneth Gonsalves wrote:


Hi,
I have a table of the type:

id serial unique,
name varchar(50),
fixed boolean default FALSE

If the value of 'fixed' is TRUE, then the row cannot be deleted. How
do I enforce this condition?
 


Try this:
CREATE RULE tablename_delete AS ON DELETE TO tablename WHERE OLD.fixed = 
TRUE DO INSTEAD NOTHING;


-Nick Johnson
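If an error is preferable to a silent no-op, a trigger is an alternative -
a hedged, untested sketch (it assumes the plpgsql language is installed and
the table is named "mytable"):

CREATE FUNCTION forbid_fixed_delete() RETURNS trigger AS '
BEGIN
    IF OLD.fixed THEN
        RAISE EXCEPTION ''row % is fixed and cannot be deleted'', OLD.id;
    END IF;
    RETURN OLD;   -- allow the deletion for non-fixed rows
END;
' LANGUAGE plpgsql;

CREATE TRIGGER mytable_no_delete BEFORE DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE forbid_fixed_delete();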



[SQL] large object support in PostGres 7.4

2005-06-06 Thread Vadivel Subramaniam




Hi,

We have APIs in /usr/local/pgsql/include/libpq-fe.h which support large
objects in Postgres:
  extern int  lo_open(PGconn *conn, Oid lobjId, int mode);
  extern int  lo_close(PGconn *conn, int fd);
  extern int  lo_read(PGconn *conn, int fd, char *buf, size_t len);
  extern int  lo_write(PGconn *conn, int fd, char *buf, size_t len);
  extern int  lo_lseek(PGconn *conn, int fd, int offset, int whence);
  extern Oid  lo_creat(PGconn *conn, int mode);
  extern int  lo_tell(PGconn *conn, int fd);
  extern int  lo_unlink(PGconn *conn, Oid lobjId);
  extern Oid  lo_import(PGconn *conn, const char *filename);
  extern int  lo_export(PGconn *conn, Oid lobjId, const char
*filename);

My question is: do these APIs operate on character data? I.e., my table
schema is like this:

table (name varchar, script varchar).  I have to store large data (in
character form) in the script column (up to 3 MB).

As I understand it, the above-mentioned APIs work on an OID field,
i.e., table(name varchar, script oid).  Is that correct?
Please clarify this if anyone has already used these large object APIs, and
say whether they have any limitation on size.

thanks,
-Vadivel







Re: [SQL] large object support in PostGres 7.4

2005-06-06 Thread Volkan YAZICI
Hi,

On 6/7/05, Vadivel Subramaniam
<[EMAIL PROTECTED]> wrote:
> My question is: do these APIs operate on character data? I.e., my table
> schema is like this:
> 
> table (name varchar, script varchar).  I have to store large data (in
> character form) in the script column (up to 3 MB).
> 
> As I understand it, the above-mentioned APIs work on an OID field,
> i.e., table(name varchar, script oid).  Is that correct?

Yep. Large objects are stored (split across rows) in a dedicated table,
pg_largeobject. You use these large-object records by referencing their
OIDs. Furthermore, you cannot (and shouldn't) access the data in
pg_largeobject other than through the lo_... functions supplied by libpq.

> Please clarify this if anyone has already used these large object APIs, and
> say whether they have any limitation on size.

Yes, there is a limit: 2 GB per large object. For more information please see:
http://www.postgresql.org/docs/7.4/interactive/largeobjects.html

Regards.
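As a complement, a hedged, untested sketch using the server-side
large-object functions (lo_import/lo_export here take file paths on the
database server and require superuser rights; the file names are made up):

CREATE TABLE scripts (name varchar, script oid);

-- load a 3 MB file on the server into a large object, storing its OID:
INSERT INTO scripts VALUES ('big', lo_import('/tmp/bigscript.txt'));

-- write it back out to a file on the server:
SELECT lo_export(script, '/tmp/bigscript_out.txt') FROM scripts
WHERE name = 'big';

-- large objects are not removed when the row is deleted; unlink explicitly:
SELECT lo_unlink(script) FROM scripts WHERE name = 'big';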



Re: [SQL] preventing deletion of a row

2005-06-06 Thread Kenneth Gonsalves
On Tuesday 07 Jun 2005 11:29 am, Nick Johnson wrote:
> > If the value of 'fixed' is TRUE, then the row cannot be deleted.
> > How do I enforce this condition?
> >  
>
> Try this:
> CREATE RULE tablename_delete AS ON DELETE TO tablename WHERE
> OLD.fixed = TRUE DO INSTEAD NOTHING;

cool - thanks

-- 
regards
kg

http://www.livejournal.com/users/lawgon
tally ho! http://avsap.sourceforge.net
ಇಂಡ್ಲಿನಕ್ಸ வாழ்க!
