Hello list,
table diary_entry
entry_id SERIAL PK
d_entry_date_time timestamp without time zone
d_entry_company_id integer
d_entry_location_id integer
d_entry_shift_id integer
d_user_id integer
d_entry_header text
...
Get the last entries from companies and their locations?
The last, i.e. the bi
0', "-MM-DD")#
Produces:
{ts '2008-04-10 10:26:21'} || {d '1905-06-16'} = {d '1905-06-16'} | {d
'2008-04-10'} | {d '2008-04-10'} | 2008-04-10 | 2008-04-10
This is on CFMX7.
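Coming back to the "last entries from companies and their locations" question
above: one way that works well in PostgreSQL is DISTINCT ON. A minimal sketch,
assuming "last" means the newest d_entry_date_time per (company, location) pair
(join the companies / locations lookup tables onto this as needed):

SELECT DISTINCT ON (d_entry_company_id, d_entry_location_id)
       entry_id, d_entry_company_id, d_entry_location_id,
       d_entry_date_time, d_entry_header
FROM diary_entry
ORDER BY d_entry_company_id, d_entry_location_id, d_entry_date_time DESC;

DISTINCT ON keeps only the first row of each (company, location) group, and the
descending timestamp sort makes that first row the newest entry.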
Best regards,
--
Aarni Ruuhimäki
---
Burglars usually come in through your windows.
---
ion script ?
Which client (browser?) / platform produces the error ?
And just out of general interest, which cf-version and platform are you
using ? Pg version ?
I use pg 8.x's on CentOS and Fedora with CF 5 Pro Linux and CFMX7 Standard. I
also heard that CFMX7+ would install and run ok on
t_res pr
LEFT JOIN countries c ON pr.country_id = c.country_id
WHERE
group_id = 1 AND group_size > 0 AND res_start_day <= '#date1#' AND res_end_day
>= '#date1#' AND res_end_day > res_start_day
AND region_id = #form.region#
AND company_id = #form.companyt#
AND
_day = '$date1' AND res_end_day >= '$date1' [AND
region_id = $region_id] [AND company_id = $company_id] [AND product_id =
$product_id]
OR
group_id = 1 AND res_start_day >= '$date1' AND res_start_day < '$date2' AND
res_end_day >= '$date1
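A note on the bracketed [AND region_id = $region_id] parts above: the optional
filters can also be pushed into plain SQL by passing NULL when a filter is not
wanted. A sketch under that assumption (the table name product_res is guessed
from a later post, and $date1 / $region_id / $company_id / $product_id are
placeholders the application fills in):

SELECT pr.*
FROM product_res pr
WHERE pr.group_id = 1
  AND pr.group_size > 0
  AND pr.res_start_day <= '$date1'
  AND pr.res_end_day   >= '$date1'
  AND ($region_id  IS NULL OR pr.region_id  = $region_id)
  AND ($company_id IS NULL OR pr.company_id = $company_id)
  AND ($product_id IS NULL OR pr.product_id = $product_id);

With NULL passed for, say, $company_id, that condition is always true and the
filter effectively disappears.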
Thanks Frank,
Top and between posting ...
On Friday 14 March 2008 15:58, Frank Bax wrote:
> Frank Bax wrote:
> > Aarni Ruuhimäki wrote:
> >> Anyway, I have to rethink and elaborate the query. I know that it will
> >> usually be on a monthly or yearly basis, but a reser
riod_end
9. start_day = period_start, end_day = period_end
10. start_day before period_start, end_day after period_end
Hmm ...
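All of those cases seem to collapse into a single test, though: a reservation
touches the period exactly when it starts no later than the period ends and
ends no earlier than the period starts. As a sketch (column and placeholder
names as in the earlier fragments):

SELECT *
FROM product_res
WHERE res_start_day <= '$period_end'
  AND res_end_day   >= '$period_start';

That one WHERE clause covers "inside", "straddles the start", "straddles the
end", "covers the whole period" and the exact-match cases alike.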
Best regards,
--
Aarni Ruuhimäki
---
Burglars usually come in through your windows.
---
This was superfast, thank you !
On Thursday 13 March 2008 20:58, Steve Crawford wrote:
> Aarni Ruuhimäki wrote:
> > res_id 2, start_day 2008-02-10, end_day 2008-02-15, number of persons 4
> >
>
> If you use the same inclusive counting of days for res_id 2, you have 4
> p
Ok, but a reservation can be of any nationality / country:
SELECT count(country_id) FROM countries;
count
---
243
(1 row)
Country_id is also stored in the product_res table.
I would like to, or need to, get the total split into different nationalities,
like:
FI 12345
RU 9876
DE 4321
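A GROUP BY over the reservations-to-countries join should give exactly that
split. A minimal sketch, assuming countries has a short code column (the name
country_code is a guess):

SELECT c.country_code,
       count(*) AS reservations        -- or sum(pr.group_size) for persons
FROM product_res pr
JOIN countries c ON c.country_id = pr.country_id
GROUP BY c.country_code
ORDER BY count(*) DESC;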
On Saturday 23 February 2008 07:50, Tom Lane wrote:
>Hmm ... while ...
> so I'm disinclined to throw the first
> stone ...
Meanwhile,
Throw cones, not stones.
http://cfx.kymi.com/lotsacones.jpg
These things/projectiles don't hurt so much. And it's fun !
BR,
On Wednesday 14 November 2007 13:28, Richard Huxton wrote:
> Aarni Ruuhimäki wrote:
> > Hello,
> >
> > In a web app (Pg 8.2.4 + php) I have product and other tables with fields
> > like
> >
> > product_created timestamp without time zone
> > product
Hello,
In a web app (Pg 8.2.4 + php) I have product and other tables with fields like
product_created timestamp without time zone
product_created_user_id integer
product_last_mod timestamp without time zone
product_last_mod_user_id integer
The person who last modified an item can obviously be so
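For keeping product_last_mod current without trusting every UPDATE statement to
remember it, one common approach is a BEFORE UPDATE trigger; the user id still
has to come from the application. A minimal sketch, assuming the table is
simply called product:

CREATE OR REPLACE FUNCTION touch_product_last_mod() RETURNS trigger AS $body$
BEGIN
    -- always stamp the modification time; product_last_mod_user_id is
    -- still set by the application in the UPDATE itself
    NEW.product_last_mod := now();
    RETURN NEW;
END;
$body$ LANGUAGE plpgsql;

CREATE TRIGGER product_last_mod_trg
    BEFORE UPDATE ON product
    FOR EACH ROW EXECUTE PROCEDURE touch_product_last_mod();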
'an Enterprise-class Linux Distribution
derived from sources freely provided to the public by a prominent North
American Enterprise Linux vendor.'
Their latest release comes with PostgreSQL 8.1
BR,
Aarni
--
Aarni Ruuhimäki
Ahh,
Forgot about trunc() in the midst of all this ...
Thank you guys again !
Aarni
On Thursday 08 February 2007 12:06, Bart Degryse wrote:
> Use trunc instead of round.
> Also take a look at ceil and floor functions
>
> >>> Aarni Ruuhimäki <[EMAIL PROTECTED]>
start_date_time)
FROM work_times WHERE user_id = 10))/60) as mins;
mins
--
3729
(1 row)
So instead of rounding up to 3729 the result would have to be 'stripped' to
3728 ?
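With trunc() the fractional minute is just dropped, so put together it might
look like this (end_date_time / start_date_time are guessed from the fragment
above):

SELECT trunc(
         extract(epoch FROM sum(end_date_time - start_date_time)) / 60
       ) AS mins
FROM work_times
WHERE user_id = 10;

which gives 3728 whenever the summed time is 3728 minutes and change.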
Thanks,
--
Aarni Ruuhimäki
You are copying all columns, including the pk.
Try:
INSERT INTO mytable (colname_1, colname_2, colname_3)
SELECT colname_1, colname_2, colname_3
FROM mytable WHERE pk = 123;
BR,
--
Aarni Ruuhimäki
CentOS 4.4
???
Thanks,
--
Aarni Ruuhimäki
Written out explicitly, you can have even more control over the sequence.
With SERIAL the default is something like
CREATE SEQUENCE foo
INCREMENT BY 1
NO MAXVALUE
NO MINVALUE
CACHE 1;
By hand you can define e.g.
CREATE SEQUENCE foo
START n
INCREMENT BY n
MAXVALUE
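Written out in full, such a hand-made sequence tied to a column might look like
this (the numbers and names are only an example):

CREATE SEQUENCE foo_id_seq
    START 1000
    INCREMENT BY 10
    MAXVALUE 999999
    CACHE 1;

CREATE TABLE foo (
    foo_id   integer NOT NULL DEFAULT nextval('foo_id_seq'),
    foo_name text
);

-- 8.2 and later: tie the sequence to the column so it is dropped with the table
ALTER SEQUENCE foo_id_seq OWNED BY foo.foo_id;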
On Monday 28 August 2006 16:08, you wrote:
> > So this merely means that in future one can not insert empty values into
> > field of type double precision ?
>
> Right. 8.0 issues a warning and 8.1 gives an error:
>
Ok, thanks.
But NULLs will still be accepted in future versions too ?
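To spell out the difference (the table here is made up):

CREATE TABLE dp_test (val double precision);

INSERT INTO dp_test VALUES (NULL);  -- fine, and stays fine in later versions
INSERT INTO dp_test VALUES ('');    -- 8.0: warning, 8.1 and up:
                                    -- invalid input syntax for type double precision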
On Friday 25 August 2006 08:12, Aarni Ruuhimäki wrote:
> On Thursday 24 August 2006 20:29, Tom Lane wrote:
> > Aarni =?iso-8859-1?q?Ruuhim=E4ki?= <[EMAIL PROTECTED]> writes:
> > > I vaguely remember having seen a message
> > > ' ... type double precision ..
Well, I have used it for 'money type' like sums and prices but I have never
used the actual "mone
one or more fields of type
double precision and have so far upgraded ok since 7.0.x (I now use numeric
with appropriate precision and scale.)
Is there something to worry about when upgrading next time ? Start changing
these to numeric perhaps ?
Running 8.0.2 at the moment.
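If/when you do switch those columns over, the change itself has been a
one-liner per column since 8.0 (table, column and the precision/scale here are
just an example):

ALTER TABLE prices ALTER COLUMN unit_price TYPE numeric(12,2);

Existing double precision values are cast (and rounded to two decimals) in
place.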
Best regards to all,
Aarni
w to solve my problem ?
>
> Best Regards. Milen
>
>
--
Aarni Ruuhimäki
Megative Tmi
Pääsintie 26
45100 Kouvola
Finland
+358-5-3755035
+358-50-4910037
On Wednesday 15 March 2006 03:11, John DeSoi wrote:
> On Mar 14, 2006, at 2:19 AM, Aarni Ruuhimäki wrote:
> > testing=# INSERT INTO foo (foo_1, foo_2, foo_3 ...) (SELECT foo_1,
> > foo_2,
> > foo_3 ... FROM message_table WHERE foo_id = 10);
> > INSERT 717286 1
> >
1
testing=#
Is there a fast way to copy all columns except the PK to a new row within the
same table, so that the new foo_id gets its value from the sequence ?
TIA and BR,
Aarni
--
Aarni Ruuhimäki
--
This is a bugfree broadcast to you
from **Kmail**
on **Fedora Core** li
On Tuesday 20 December 2005 15:19, Michael Burke wrote:
> On December 20, 2005 08:59 am, Aarni Ruuhimäki wrote:
> > Hello List,
> >
> > I have a time stamp without time zone field, YYYY-MM-DD hh:mm:ss, in my
> > table. I want to also find something just for a particula
Hello List,
I have a timestamp without time zone field, YYYY-MM-DD hh:mm:ss, in my table.
I want to find rows just for a particular day, regardless of the time.
What is the (Pg)SQL way to do this ?
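Two usual ways: cast the column down to a date, or compare against a half-open
range (which lets an ordinary index on the column be used). A sketch, with the
table and column names made up:

-- simple, but a plain index on entry_ts cannot be used for this:
SELECT * FROM diary WHERE entry_ts::date = DATE '2005-12-20';

-- range form, index-friendly:
SELECT * FROM diary
WHERE entry_ts >= DATE '2005-12-20'
  AND entry_ts <  DATE '2005-12-20' + 1;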
TIA,
Aarni
--
--
This is a bugfree broadcast to you
from **Kmail**
on **Fedora Core
> END;
> $body$
> LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;
>
> I call the function with:
> SELECT update_messages();
>
> I'm using apache cocoon, which is why you see the variable placeholder:
> );
>
> Unfortunately, the functi
Hi,
In my experience, I think your best bet and an all-around good general
encoding to use is latin1, which copes with accents aigus & graves, umlauts,
harasoos and others.
Not so sure about the M$-import stuff though. Or asp or .net. Read the Gates
Private Licence ...
You might also want (r
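For reference, choosing latin1 happens in two places, once per database and
once per client session (names here are only an example):

CREATE DATABASE myapp ENCODING 'LATIN1';

-- per connection, tell the server what the client sends and expects back:
SET client_encoding TO 'LATIN1';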
Hi,
Could someone please give a hint on how to query the following neatly ?
Get the news items that belong to a particular account from a news table, get
the segment name from the segments table for each news item, and the read count
from a read history table that gets a news_id and timestamp inserted every time
the news
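The shape described (news rows for one account, plus the segment name, plus a
count of read-history rows per news item) is a join with an aggregate. A sketch
with all table and column names guessed from the description:

SELECT n.news_id,
       n.news_header,
       s.segment_name,
       count(rh.news_id) AS read_count
FROM news n
JOIN segments s ON s.segment_id = n.segment_id
LEFT JOIN read_history rh ON rh.news_id = n.news_id
WHERE n.account_id = 1
GROUP BY n.news_id, n.news_header, s.segment_name
ORDER BY n.news_id;

count(rh.news_id) counts only the matching read-history rows, so news items
nobody has read come back with 0 rather than dropping out of the result.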