[GENERAL] SQL Aggregate Functions

2004-11-02 Thread Alex P
Hi,
I have a problem I don't really know how to solve except for writing a
function.
I have a table with prices:
SecCode | Price | PriceDate
--------+-------+-----------
A0001   | 13.10 | 2004-10-30
A0001   | 13.03 | 2004-10-29
A0001   | 13.12 | 2004-10-28
A0001   | 12.45 | 2004-10-27
A0001   | 12.65 | 2004-10-26
A0001   | 12.45 | 2004-10-25
A0002   | 10.10 | 2004-10-30
A0002   | 10.45 | 2004-10-27
A0002   | 10.65 | 2004-10-26
A0002   | 10.45 | 2004-10-25
etc.
What I would like to calculate is:
a) the price difference between the two most recent available dates
  for every security
b) a flag indicating whether the price before the latest one is actually
  from the previous day (true for A0001, false for A0002)
c) the variance of the past 30 days
Is it possible to do that within one query?
Thanks for any advice
Alex




[GENERAL] Subselect Question

2004-11-02 Thread Alex P
Hi,
when creating a query with a subselect
SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = states.name) AS max_pop
   FROM states;
then it is not possible to ORDER BY max_pop or use max_pop in a function or a CASE.
Am I doing anything wrong, or is this meant to be the case?
Thanks
Alex





[GENERAL] Postgres Versions / Releases

2004-11-02 Thread Alex P
Will there be a release 7.4.7 in the future?
When can the production release of 8 be expected?
thanks
Alex


Re: [GENERAL] Numeric type problems

2004-11-02 Thread M.A. Oude Kotte
This is a very interesting option. My biggest concern is performance: 
the project will require tables with millions of tuples. How does the 
performance of such user created types compare to using native types? Or 
are they 'built' using the same structure?

Thanks again!
Marc
Paul Tillotson wrote:
Use a numeric type if you need more precision.
template1=# create domain BIGINT_UNSIGNED numeric(20,0) check (value >= 0
and value < '18446744073709551616'::numeric(20,0));
CREATE DOMAIN
template1=# create table foobar (i BIGINT_UNSIGNED);
CREATE TABLE
template1=# insert into foobar (i) values (-1); --too small
ERROR:  value for domain bigint_unsigned violates check constraint $1
template1=# insert into foobar (i) values (0); -- works
INSERT 17159 1
template1=# insert into foobar (i) values (pow(2::numeric, 64::numeric) 
- 1); --works
INSERT 17160 1
template1=# insert into foobar (i) values (pow(2::numeric, 
64::numeric)); --too large
ERROR:  value for domain bigint_unsigned violates check constraint $1
template1=# select * from foobar;
          i
----------------------
                    0
 18446744073709551615
(2 rows)

Paul Tillotson
Hi All,
I hope this is the correct mailing list for this question. But neither 
postgresql.org nor google could help me out on this subject.
I did find one disturbing topic on the mailing list archives 
(http://archives.postgresql.org/pgsql-admin/2000-05/msg00032.php), but 
since it was quite old I'm posting my question anyway.

I'm writing a generic database layer that should support a fixed 
number of generic numeric types on a number of databases. At this 
point it supports MySQL just fine, but I'm having some trouble finding 
the right implementation details for PostgreSQL. Please take a moment 
to look at the following table. The field description speaks for 
itself pretty much I guess.

Field descr.     MySQL              PostgreSQL
==============================================================
DB_FIELD_INT8    TINYINT            SMALLINT (too big, but best match)
DB_FIELD_INT16   SMALLINT           SMALLINT
DB_FIELD_INT32   INT                INT
DB_FIELD_INT64   BIGINT             BIGINT
DB_FIELD_UINT8   TINYINT UNSIGNED   not supported natively, is it?
DB_FIELD_UINT16  SMALLINT UNSIGNED  not supported natively, is it?
DB_FIELD_UINT32  INT UNSIGNED       not supported natively, is it?
DB_FIELD_UINT64  BIGINT UNSIGNED    not supported natively, is it?
DB_FIELD_FLOAT   FLOAT              REAL
DB_FIELD_DOUBLE  DOUBLE             DOUBLE PRECISION
My problem is obviously the unsigned values, which I really need to be able
to represent properly. I know I can just use the twice-as-big signed
types and put a constraint on them (sketched below), but that only works for
UINT8, UINT16 and UINT32 (there is no 128-bit signed integer type, right?): I
really need to have proper 64-bit unsigned integer value support.
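The "twice-as-big signed type plus a constraint" idea would look something
like this (domain names purely illustrative):

CREATE DOMAIN uint8_t  AS smallint CHECK (VALUE >= 0 AND VALUE < 256);
CREATE DOMAIN uint16_t AS integer  CHECK (VALUE >= 0 AND VALUE < 65536);
CREATE DOMAIN uint32_t AS bigint   CHECK (VALUE >= 0 AND VALUE < 4294967296);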

I *could* use a BIGINT to represent 64-bit unsigned values, and just 
cast the binary data to an unsigned long long (or unsigned __int64 on 
win32), but this would leave me with the problem that I couldn't 
safely let SQL do comparisons on the value, right?

Is there any solution? I've seen someone suggesting elsewhere that one 
should use the OID type, but others said that one shouldn't. I'm 
pretty desperate. PostgreSQL would really be my database of choice for 
our current project, but I'm afraid we can't use it if I can't get 
this right...

Thanks in advance for any help!
Bye,
  Marc


--
Bye,
  Marc 'Foddex' Oude Kotte
  -=-=-=-=-=-=-=-=-=-=-=-=-
  Need a programmer?
  Go to http://www.foddex.net


Re: [GENERAL] Postgres Versions / Releases

2004-11-02 Thread Peter Eisentraut
On Tuesday, 2 November 2004 09:09, Alex P wrote:
 Will there be a  release 7.4.7 in the future?

Possibly.

 When can the production release of 8 be expected?

When it's ready.  Probably this year.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [GENERAL] Split query

2004-11-02 Thread Tino Wildenhain
Hi,

On Tue, 2004-11-02 at 07:47, Katsaros Kwn/nos wrote:
 Hi!
  
 I want to parse a query (SFW) and create new queries: one for each
 table referenced, if that possible.
 I've written a function for this in the Query struct level after
 parserewrite (not very general ofcourse :-)) but I would like to
 know  whether there is any code written for this purpose at any level.
 Are there any algorithms for this?
 Following, is there any code or at least some algorithm about merging
 the individual results returned from these queries?

The latter is easy. Consider:

SELECT ... FROM 
(SELECT ... first query here) AS table_a
JOIN 
(SELECT ... second query here) AS table_b
USING (same_column);

(or ON table_a.column = table_b.column)
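For instance, with made-up table and column names, the individual queries
simply become derived tables:

SELECT *
FROM
(SELECT order_id, customer_id FROM orders) AS table_a
JOIN
(SELECT customer_id, name FROM customers) AS table_b
USING (customer_id);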

I don't think you should think about joining the results
outside the database. You have PostgreSQL here, remember :-)

Regards
Tino 




Re: [GENERAL] Subselect Question

2004-11-02 Thread Tino Wildenhain
Hi,

On Tue, 2004-11-02 at 09:05, Alex P wrote:
 Hi,
 
 when creating a query with a subselect
 
 SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = states.name) AS 
 max_pop
 FROM states;
 
 then it is not possible to sort after max_pop or use max_pop in a function or a CASE.

Hm. Here it works.

select 1 as foo,(select 2) as bar union select 5 as foo,(select 1) as
bar order by bar;

 foo | bar
-----+-----
   5 |   1
   1 |   2


PostgreSQL 7.4.2 in this case.
You can also use the whole query as a subselect, for example:

SELECT name, max_pop FROM 
(SELECT name, (SELECT max(pop) FROM cities WHERE
cities.state=states.name) AS max_pop FROM states) as statepop;

if you want to filter with where clauses or whatever.
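With the wrapped form you can then sort on the alias as well, for example:

SELECT name, max_pop FROM
(SELECT name, (SELECT max(pop) FROM cities WHERE
cities.state=states.name) AS max_pop FROM states) AS statepop
ORDER BY max_pop;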

Regards
Tino




Re: [GENERAL] Split query

2004-11-02 Thread Katsaros Kwn/nos
Well, the second question is trivial indeed :-), even though I had
pipelining in mind when writing it...
(which is not so query-oriented but rather plan/executor-oriented).
However, I'm more interested in the first question.

Thank you very much,
Ntinos


Tino Wildenhain wrote:
 Hi,

 On Tue, 2004-11-02 at 07:47, Katsaros Kwn/nos wrote:
  Hi!
 
  I want to parse a query (SFW) and create new queries: one for each
  table referenced, if that possible.
  I've written a function for this in the Query struct level after
  parserewrite (not very general ofcourse :-)) but I would like to
  know  whether there is any code written for this purpose at any level.
  Are there any algorithms for this?
  Following, is there any code or at least some algorithm about merging
  the individual results returned from these queries?

 The latter is easy. Consider:

 SELECT ... FROM
 (SELECT ... first query here) AS table_a
 JOIN
 (SELECT ... second query here) AS table_b
 USING (same_column);

 (or ON table_a.column = table_b.column)

 I dont think you should think about joining results
 outside the database. You have Postgresql here, remember :-)

 Regards
 Tino





Re: [GENERAL] Subselect Question

2004-11-02 Thread Sim Zacks
You can't use the alias name in the ORDER BY, CASE, WHERE, etc.; you have
to repeat the entire subselect.
So you would ORDER BY (SELECT max(pop) ...),
and you would likewise use the full subselect in a CASE.
A bit of a pain, but Tom Lane explained it in a post a couple of days ago
and said the system is optimized so it actually only runs the subquery
once.
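Applied to the query above, that looks something like:

SELECT name,
       (SELECT max(pop) FROM cities WHERE cities.state = states.name) AS max_pop
FROM states
ORDER BY (SELECT max(pop) FROM cities WHERE cities.state = states.name);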


Thank You
Sim Zacks
IT Manager
CompuLab
04-829-0145 - Office
04-832-5251 - Fax



Hi,

when creating a query with a subselect

SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = states.name) AS max_pop
FROM states;

then it is not possible to sort after max_pop or use max_pop in a function or a CASE.

am I dont anything wrong or is this meant to be the case?

Thanks
Alex











Re: [GENERAL] Subselect Question

2004-11-02 Thread Richard Huxton
Alex P wrote:
Hi,
when creating a query with a subselect
SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = 
states.name) AS max_pop
   FROM states;

then it is not possible to sort after max_pop or use max_pop in a 
function or a CASE.
Here max_pop is naming the whole subselect. How about something like:
SELECT name, max_pop
FROM
 states,
 (SELECT state AS target_state, max(pop) AS max_pop FROM cities GROUP BY state) AS pops
WHERE
 states.name = pops.target_state
;
--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] Postgres Versions / Releases

2004-11-02 Thread Richard Huxton
Alex P wrote:
Will there be a  release 7.4.7 in the future?
Maybe - there have been patches for 7.3 after 7.4 was released. It
depends on whether any serious bugs are found.

When can the production release of 8 be expected?
The official answer is "when it's ready". If I were a betting man, I'd
be happy saying before the end of the year, perhaps even the end of
November.

If you are just starting to develop a new application, I'd target 
8.0beta now.

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] SQL Aggregate Functions

2004-11-02 Thread Richard Huxton
Alex P wrote:
Hi,
I have a problem I dont really know how to solve except for writing a
function.
[snip]
What I would like to calculate is:
a) the difference of the past 2 days for every security of
  available prices
Find the maximum date for a given SecCode (simple enough) and then the
maximum date that is smaller than the one you just found (assuming no 
repetition of dates for a given SecCode).

b) a flag indicating, that the price of today-1 is yesterday's
  price (true in case A0001, false for A0002)
SELECT ... (PriceDate = (CURRENT_DATE - 1)) AS is_yesterday, ...
c) the variance of the past 30 days
Use the variance() aggregate function.
Is it possible to do that within one query?
Three sub-queries and some joining, certainly. It'll be a big query,
mind, perhaps worth wrapping in a function.
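Putting those three pieces together, a rough and untested sketch (assuming
the table is called prices, PriceDate is a date column, and reading (b) as
the second most recent price being from the day immediately before the most
recent one) could look like:

SELECT s.SecCode,
       -- (a) difference between the two most recent available prices
       (SELECT p1.Price FROM prices p1
         WHERE p1.SecCode = s.SecCode
         ORDER BY p1.PriceDate DESC LIMIT 1)
     - (SELECT p2.Price FROM prices p2
         WHERE p2.SecCode = s.SecCode
         ORDER BY p2.PriceDate DESC LIMIT 1 OFFSET 1)  AS diff_last_two,
       -- (b) is the previous price dated exactly one day before the latest?
       ((SELECT p3.PriceDate FROM prices p3
          WHERE p3.SecCode = s.SecCode
          ORDER BY p3.PriceDate DESC LIMIT 1 OFFSET 1)
        = (SELECT max(p4.PriceDate) FROM prices p4
            WHERE p4.SecCode = s.SecCode) - 1)         AS prev_is_day_before,
       -- (c) variance over the last 30 calendar days
       (SELECT variance(p5.Price) FROM prices p5
         WHERE p5.SecCode = s.SecCode
           AND p5.PriceDate > CURRENT_DATE - 30)       AS var_30_days
  FROM (SELECT DISTINCT SecCode FROM prices) AS s;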

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] Reasoning behind process instead of thread based

2004-11-02 Thread Marco Colombo
[Cc: list minimized]
On Tue, 2 Nov 2004, Neil Conway wrote:
I don't see the big difference between what Marco is suggesting and user 
threads -- or to be more precise, I think user threads and event-based 
programming are just two sides of the same coin. A user thread just 
represents the state of a computation -- say, a register context and some 
stack. It is exactly that *state* that is passed to a callback function in 
the event-based model. The only difference is that with user threads the 
system manages context for you, whereas the event-based model lets the 
programmer manage it. Which model is better is difficult to say.
Well, the difference is that in a pure event-driven model, you
(the programmer) have full control over what the state is. Any thread
library offers a general-purpose thread, which may be more than
what you want/need.
Of course, very often userland threads are a good implementation of
an event-driven model. Think of GUIs.
The problem is not threads or no threads. The problem is one thread/process
per session, as opposed to a few specialized threads or one thread per
outstanding query. We can start another thread :-) on threads in
general, but it would be largely off-topic here.

Martijn van Oosterhout wrote:
1. non-blocking is nice, but lots of OSes (eg POSIX) don't support it
on disk I/O unless you use a completely different interface.
We could implement I/O via something like POSIX AIO or a pool of worker 
threads that do the actual I/O in a synchronous fashion. But yeah, either way 
it's a major change.

2. If one of your 'processes' decides to do work for half an hour (say,
a really big merge sort), you're stuck.
It would be relatively easy to insert yield points into the code to prevent 
this from occurring. However, preemptive scheduling would come in handy when 
running foreign code (e.g. user-defined functions in C).

I honestly don't think you could really do a much better job of
scheduling than the kernel.
I think we could do better than the kernel by taking advantage of 
domain-specific knowledge, I'm just not sure we could beat the kernel by 
enough to make this worth doing.

BTW, I think this thread is really interesting -- certainly more informative 
than a rehash of the usual processes vs. threads debate.
Thanks, that was the whole point.
I thought that the event-driven model was well understood; I personally
consider it an established alternative to the threads/processes one.
I'd do a bad and pointless job of further explaining it. Please let me
just throw a few URLs in...
http://www.usenix.org/events/usenix01/full_papers/chandra/chandra_html/index.html
A random quote to attract readers: :-)
  In general, thread-per-connection servers have the drawback of large
  forking and context-switching overhead. In addition, the memory usage
  due to threads' individual stack space can become huge for handling
  large number of concurrent connections. The problem is even more
  pronounced if the operating system does not support kernel-level
  threads, and the application has to use processes or user-level
  threads. It has been shown that thread-based servers do not scale well
  at high loads [7]. Hence, many servers are structured as event-based
  applications, whose performance is determined by the efficiency of event
  notification mechanisms they employ. Pure event-based servers do not
  scale to multiprocessor machines, and hence, on SMP machines, hybrid
  schemes need to be employed, where we have a multi-threaded server
  with each thread using event-handling as a mechanism for servicing
  concurrent connections. Even with a hybrid server, the performance of
  event-based mechanisms is an important issue. Since efficient event
  dispatching is at the core of both event-based and hybrid servers,
  we will focus on the former here.
http://www.kegel.com/c10k.html
This paper is very complete; it covers almost all possible techniques
for implementing event-driven servers, and it's a very interesting read
anyway.
Please note that the rationale behind it is the C10K problem, which
I _don't_ think we're facing here. There are some nice properties
of event-driven servers other than being able to handle 100K connections,
IMHO.
All this started from the priority inversion problem, a few messages ago
on this list. The problem was to 'slow down' a query.
In general, I've been thinking about a not-so-cooperative environment,
which demands some active measures to limit the resources used by a
session (other than the DBA yelling at the (mis)user). Think of
high-density web services, with hundreds of sites on the same host.
Event-driven servers easily allow taking full control over the resources
allocated to each session.
.TM.
--
  /  /   /
 /  /   /   Marco Colombo
___/  ___  /   /  Technical Manager
   /  /   /  ESI s.r.l.
 _/ _/  _/ [EMAIL PROTECTED]

Re: [GENERAL] Rows created by a stored proc prompt Access' dreaded write conflict

2004-11-02 Thread Jeff Eckermann

--- Sim Zacks [EMAIL PROTECTED] wrote:

 After the stored procedure is run, call requery on
 the form that was
 updated.
 
 We are in the middle of moving Access
 implementations to PostGreSQL.
 I'd be happy to trade war stories, if you'd like.

I hope that you do it on the list, so that the rest of
us can profit from your experience.

 
 Thank You
 Sim Zacks
 IT Manager
 CompuLab
 04-829-0145 - Office
 04-832-5251 - Fax
 


 
 Hi all,
 I am using an Access client linked to a PG 7.4
 server via ODBC.
 
 I have a stored proc on the server that inserts rows
 into a 
 table.particular table, accomplished via an INSERT
 within the body of the stored proc.  The procedure
 does not explicitly 
 commit this data, as no transactions are invoked.
 
 The problem is that Access will not modify these
 records via table or 
 form view, giving its generic Write conflict:
 another user has modified 
 this record message.  It does just fine for any
 other records in the 
 table, but it will not modify those created by the
 stored proc. It will 
 also execute an UPDATE OR DELETE query to modify
 these records  This 
 stored procedure is pretty key for us to go forward.
 
 
 Does anyone have any ideas of what's going on and
 how to fix it?  I can 
 post more details, but I wanted to see if this was a
 known problem 
 before doing so.
 
 Many thanks,
 
 Eric
 
 




 




Re: [GENERAL] Rows created by a stored proc prompt Access' dreaded write conflict

2004-11-02 Thread Sim Zacks
Maybe you need some ODBC settings reconfigured.
Here's what I have; I read a couple of these settings on various lists
and websites, and the others were the defaults. I would guess that if you
don't have Row Versioning checked, that is the problem.
Also, if you change ODBC settings you have to delete (unlink) the table
and relink it. Just going to the Linked Table Manager and refreshing
doesn't do it. Access stores the ODBC settings in each table and does
not really refresh them, so anytime you change the ODBC settings you
have to delete all the tables and relink them before the change takes
effect. I would recommend deleting one table and testing, if possible,
and when you find a setting that works, then redo all the tables.
Also, I'm using 8.0beta1, so that might also be a difference.

I'm using psqlODBC.
Page 1: The only boxes I have checked are Disable Genetic Optimizer, KSQO
and Recognize Unique Indexes. Unknown Sizes is set to Maximum.
Max Varchar and LongVarchar are 4094.
Page 2:
The ones I have checked are LF <-> CR/LF conversion, Updateable Cursors
and Row Versioning. (If you don't have Row Versioning, that might be
the problem; I'm pretty sure it's not a default.)
I tested True is -1 both on and off and it didn't make a difference;
now I have it off.
Int8 is Default and I'm not showing OID. Protocol is 7.X,6.4+.

Let us know how it goes.

Thank You
Sim Zacks
IT Manager
CompuLab
04-829-0145 - Office
04-832-5251 - Fax



Hi Sim,
Thanks for the advice.  The problem persists when I close and reopen 
any of the objects, or even the database client.  I suspect it has 
something to do with how Access determines the uniqueID of the row, but 
that's only because that seems to be the major issue with Access and 
ODBC.  Any other suggestions?

Thanks,

Eric

Sim Zacks wrote:

 After the stored procedure is run, call requery on the form that was
 updated.
 
 We are in the middle of moving Access implementations to PostGreSQL.
 I'd be happy to trade war stories, if you'd like.
 
 Thank You
 Sim Zacks
 IT Manager
 CompuLab
 04-829-0145 - Office
 04-832-5251 - Fax
 
 
 
 Hi all,
 I am using an Access client linked to a PG 7.4 server via ODBC.
 
 I have a stored proc on the server that inserts rows into a 
 table.particular table, accomplished via an INSERT
 within the body of the stored proc.  The procedure does not explicitly 
 commit this data, as no transactions are invoked.
 
 The problem is that Access will not modify these records via table or 
 form view, giving its generic Write conflict: another user has modified 
 this record message.  It does just fine for any other records in the 
 table, but it will not modify those created by the stored proc. It will 
 also execute an UPDATE OR DELETE query to modify these records  This 
 stored procedure is pretty key for us to go forward. 
 
 Does anyone have any ideas of what's going on and how to fix it?  I can 
 post more details, but I wanted to see if this was a known problem 
 before doing so.
 
 Many thanks,
 
 Eric
 
 






Re: [GENERAL] Postgres Versions / Releases

2004-11-02 Thread Bruno Wolff III
On Tue, Nov 02, 2004 at 19:09:02 +1100,
  Alex P [EMAIL PROTECTED] wrote:
 Will there be a  release 7.4.7 in the future?

You can get a snapshot of the 7.4 stable CVS branch if there is some
post-7.4.6 fix you are interested in, before there is a 7.4.7 release.



Re: [GENERAL] Calling on all SQL guru's

2004-11-02 Thread Alvaro Herrera
On Mon, Nov 01, 2004 at 05:34:21PM -0800, John Fabiani wrote:

 God bless you!  It works as expected.  But is it possible to create a SQL 
 statement using only the pg files.  This will allow it to be used with 7.3.x 
 and later.  I have been trying for a full day.  Actually, I really need to 
 understand the relationship between the pg files.  Is there a description 
 somewhere???  

Yes, see the System Catalogs section in the Internals chapter of the
documentation.

http://www.postgresql.org/docs/7.4/static/catalogs.html

-- 
Alvaro Herrera (alvherre[a]dcc.uchile.cl)
On the other flipper, one wrong move and we're Fatal Exceptions
(T.U.X.: Term Unit X  - http://www.thelinuxreview.com/TUX/)




Re: [GENERAL] Daylight Savings Time handling on persistent connections

2004-11-02 Thread Steve Crawford
On Sunday 31 October 2004 11:44 am, Tom Lane wrote:
 Randall Nortman [EMAIL PROTECTED] writes:
  Ah, I see now.  PostgreSQL is behaving a bit differently than I
  expected.  The timestamp string above is ambiguous in the
  timezone US/Eastern -- it could be EST or EDT.  I was expecting
  PostgreSQL to resolve this ambiguity based on the current time
  when the SQL statement is processed

 I think this would be a very bad thing for it to do.  It might seem
 to make sense for a timestamp representing now, but as soon as
 you consider a timestamp that isn't now it becomes a sure way to
 shoot yourself in the foot.

Would it help to add the PG locale TZ to the insert statement? For 
example the following queries return the TZ as text.

select to_char(now(),'tz');
 to_char
---------
 pst

select to_char(now()-'3 days'::interval,'tz');
 to_char
---------
 pdt

So the following might fix this particular situation:
insert into sensor_readings_numeric (...) values (...,'2004-10-31 
01:00:00 ' || to_char(now(),'tz'),...)

I realize this assumes that the data is being inserted at the time
it was taken, so a reading taken just before the DST change and inserted
just after will be incorrect, but it may work for this particular app.

Of course the better solution is to have the application generate a 
fully-qualified timestamp with time zone. Generating all the 
timestamps in UTC and explicitly specifying that in the insert is 
probably the easiest way to go. Your queries will still have your 
local-appropriate TZ:

select '2004-10-31 00:00:00+00'::timestamptz;
      timestamptz
------------------------
 2004-10-30 17:00:00-07

select '2004-11-01 00:00:00+00'::timestamptz;
      timestamptz
------------------------
 2004-10-31 16:00:00-08
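For example, with a hypothetical table (just to show the idea):

-- the client always sends UTC, marked explicitly with +00
CREATE TEMP TABLE readings_demo (taken_at timestamptz);
INSERT INTO readings_demo (taken_at) VALUES ('2004-10-31 06:00:00+00');
-- on output the value comes back in the session's local time zone
SELECT taken_at FROM readings_demo;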


Cheers,
Steve




[GENERAL] Batch processing select

2004-11-02 Thread Steve Crawford
I'm still mulling the best way to handle this situation.

I have a table that describes work to be processed. This table 
includes a description of the work as well as priority and scheduling 
information (certain records can only be handled by certain client 
processes or at particular times of the day or week). I have several 
hundred client processes to handle the work, most, but not all, of 
which can handle any of the items in the database.

When a process is free, it needs to return the results to the table 
(not an issue) but also needs to get new work assigned for processing 
(problem).

I need to select one record from the table so the client program can 
process it. This record should be the highest priority item that the 
requesting client is able to process at that particular time. Of 
course, it can't be a record that has been completed or which is 
already being handled by another process.

Performance is an issue. Each piece of work takes ~20-300 seconds to 
handle and the overall processing rate is ~10 items/second. The 
to-do table often exceeds 500,000 records.

In my earlier attempt I tried "select ... for update where {record
needs processing} limit 1", then set the status flag to in-progress.

Unfortunately, for this purpose, the second process hitting the DB will
block and then return 0 records when the first process completes,
since the status flag has changed to in-progress.

I've considered select ... for update where {record needs processing 
and tuple not locked} limit 1... but don't know of a function that 
returns the lock status of a tuple.
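For reference, the attempt described above spelled out (table and column
names invented for the sketch):

BEGIN;
-- claim the highest-priority pending item; a second client running the
-- same statement blocks on the row lock and, once the first transaction
-- commits, re-checks status = 'pending' and gets zero rows back
SELECT work_id
  FROM work_queue
 WHERE status = 'pending'
 ORDER BY priority DESC
 LIMIT 1
 FOR UPDATE;

-- the client then marks the row it was handed, e.g. work_id 1234
UPDATE work_queue
   SET status = 'in-progress'
 WHERE work_id = 1234;
COMMIT;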

Any ideas of how I can attack this problem?

Cheers,
Steve




Re: [GENERAL] Numeric type problems

2004-11-02 Thread Paul Tillotson
First, every type in postgres is user-defined, in the sense that its 
binary structure and the arithmetic and comparison operations you can 
perform on it are defined by a set of native C functions that are 
present in the database executable or loaded as shared libraries.  
Because of postgres's extensible type system, all types share a small 
performance penalty, but you can make new ones that are just as efficient.

http://www.postgresql.org/docs/7.4/static/sql-createtype.html
http://www.postgresql.org/docs/7.4/static/sql-createopclass.html
If you need a high-performance unsigned 64-bit integer, you should make
your own type, using the existing bigint type as a template, which
should be just as efficient as the built-in bigint type. This is
probably premature optimization, though.
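The SQL side of such a type is only a thin skeleton along the following
lines; all the real work (the input/output functions, plus comparison
operators and an operator class for indexing) has to be written in C.
Every name below, including the library path, is hypothetical:

-- creating the I/O functions first implicitly creates a shell type
CREATE FUNCTION uint8_in(cstring) RETURNS uint8
    AS '/path/to/uint8.so', 'uint8_in' LANGUAGE C IMMUTABLE STRICT;
CREATE FUNCTION uint8_out(uint8) RETURNS cstring
    AS '/path/to/uint8.so', 'uint8_out' LANGUAGE C IMMUTABLE STRICT;

-- the type itself: a fixed 8-byte value, passed by reference
CREATE TYPE uint8 (
    internallength = 8,
    input          = uint8_in,
    output         = uint8_out,
    alignment      = double
);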

Also note that if you're trying to make a type that will merely hold a 
MySQL BIGINT UNSIGNED, and you want low overhead, then numeric(20) 
without the check constraint will do nicely.  Since MySQL itself doesn't 
check to see if the values you are inserting are negative or too big*, 
then presumably that responsibility doesn't fall on you either.  If you 
are trying to make something that is bug-for-bug compatible with MySQL, 
then you'd better start working on the user defined type.
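That is, just (domain and table names illustrative):

-- no range checking at all, mirroring MySQL's behaviour
CREATE DOMAIN mysql_uint64 AS numeric(20,0);
CREATE TABLE foobar2 (i mysql_uint64);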

Personally, I am curious to know what sort of application you are 
writing that requires storing numbers

- larger than 2 ** 63 (otherwise you would just use signed bigint)
- but less than 2 ** 64 (as far as I know you can't do this in MySQL 
anyway, although you can in postgres)
- with exact precision (otherwise you would use floating point),
- but without any requirements for checking the validity of input (since 
MySQL won't do this for you.)
- and without any requirements for being able to do math in the database
and get a valid answer.  (why don't you store it as a string?)

Paul Tillotson
*As evidenced: 

mysql> create table foobar (i bigint unsigned);
Query OK, 0 rows affected (0.00 sec)
mysql> insert into foobar values (-3);
Query OK, 1 row affected (0.00 sec)
mysql> insert into foobar values (10 * 1);
Query OK, 1 row affected (0.00 sec)
mysql> insert into foobar values (10);
Query OK, 1 row affected (0.00 sec)
mysql> select * from foobar;
+--+
| i|
+--+
| 18446744073709551613 |
|  3875820019684212736 |
| 18446744073709551615 |
+--+
3 rows in set (0.00 sec)
mysql> update foobar set i = -i;
Query OK, 3 rows affected (0.00 sec)
Rows matched: 3  Changed: 3  Warnings: 0
mysql> select * from foobar;
+--+
| i|
+--+
|3 |
| 14570924054025338880 |
|1 |
+--+
3 rows in set (0.00 sec)
M.A. Oude Kotte wrote:
This is a very interesting option. My biggest concern is performance: 
the project will require tables with millions of tuples. How does the 
performance of such user created types compare to using native types? 
Or are they 'built' using the same structure?

Thanks again!
Marc
Paul Tillotson wrote:
Use a numeric type if you need more precision.
template1=# create domain BIGINT_UNSIGNED numeric(20,0) check (value
>= 0 and value < '18446744073709551616'::numeric(20,0));
CREATE DOMAIN
template1=# create table foobar (i BIGINT_UNSIGNED);
CREATE TABLE
template1=# insert into foobar (i) values (-1); --too small
ERROR:  value for domain bigint_unsigned violates check constraint $1
template1=# insert into foobar (i) values (0); -- works
INSERT 17159 1
template1=# insert into foobar (i) values (pow(2::numeric, 
64::numeric) - 1); --works
INSERT 17160 1
template1=# insert into foobar (i) values (pow(2::numeric, 
64::numeric)); --too large
ERROR:  value for domain bigint_unsigned violates check constraint $1
template1=# select * from foobar;
 i
--
   0
18446744073709551615
(2 rows)

Paul Tillotson




[GENERAL] unsubscribe

2004-11-02 Thread Keow Yeong Huat Joseph





unsubscribe from postgresql 
maillist