> On 27 July 2015, at 18:20, Tom Lane wrote:
>
> Herouth Maoz writes:
>> So I’m left with the question of what caused the shutdown on July 21st.
>
> Well, you had
>
> 2015-07-21 15:37:59 IDT LOG: received fast shutdown request
>
> There is exactly on
> On 27 July 2015, at 18:01, Adrian Klaver wrote:
> Not sure what you have set up for logging, but you might want to crank it up. 13
> days between entries for a system that is in use all the time seems sort of
> light to me.
Most of the log settings are just the Debian default (except the log pref
> On 27 July 2015, at 16:55, Melvin Davidson wrote:
>
> If you are running Linux (please ALWAYS give the O/S ), then this could have
> been caused by the sys admin doing a system shutdown.
Yes, sorry about that, as I previously answered Adrian Klaver, the OS is Debian
GNU/Linux 7.
But I did
> On 27 July 2015, at 16:39, Adrian Klaver wrote:
>>
>> * Given that I did not terminate any backend connection interactively,
>> why did I get a "terminating connection due to administrator
>> command" message? Is there any situation where this message is
>> issued without the admin
Hello everybody.
In the past week, it has happened to us twice already that we got an exception
from our Java application, due to PostgreSQL "terminating connection due to
administrator command".
The problem is that I’m the administrator, and I issued no such command.
On the first opportunity
On 01/12/2014, at 19:26, Andy Colson wrote:
> On 12/1/2014 11:14 AM, Herouth Maoz wrote:
>> I am currently in the process of creating a huge archive database that
>> contains data from all of our systems, going back for almost a decade.
>>
>> Most of the tables fall
how it behaves in older
> versions…
>
> From: Herouth Maoz [mailto:hero...@unicell.co.il]
> Sent: Wednesday, September 10, 2014 6:26 PM
> To: Huang, Suya
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Decreasing performance in table partitioning
>
> Thank you.
that make
sense?
On 07/09/2014, at 19:50, Tom Lane wrote:
> Herouth Maoz writes:
>> My problem is the main loop, in which data for one month is moved from the
>> old table to the partition table.
>
>>EXECUTE FORMAT (
>>'WITH del
.
> Rename old non-partition table to something else.
> Rename new partition table to the correct name as you wanted.
>
> Drop old non-partition table if you’re satisfied with current table structure.
>
> Thanks,
> Suya
> From: pgsql-general-ow...@postgresql.org
>
Hello all.
I have created a function that partitions a large table into monthly
partitions. Since the name of the table, target schema for partitions, name of
the date field etc. are all passed as strings, the function is heavily based on
EXECUTE statements.
My problem is the main loop, in wh
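A minimal sketch of such an EXECUTE-based monthly loop (the table, schema and
column names here are made-up stand-ins for the parameters the original
function receives as strings; format() and writable CTEs require 9.1+):

```sql
DO $$
DECLARE
    m date;
BEGIN
    FOR m IN SELECT generate_series('2013-01-01'::date,
                                    '2013-12-01'::date,
                                    interval '1 month')::date
    LOOP
        EXECUTE format(
            'WITH del AS (
                 DELETE FROM %I WHERE %I >= %L AND %I < %L
                 RETURNING *
             )
             INSERT INTO %I.%I SELECT * FROM del',
            'messages', 'msg_time', m,
            'msg_time', m + interval '1 month',
            'archive', 'messages_' || to_char(m, 'YYYYMM'));
    END LOOP;
END $$;
```

format()'s %I quotes identifiers and %L quotes literals, which keeps dynamic
SQL safe when names arrive as strings.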
On 18/02/2014, at 19:02, Jeff Janes wrote:
> On Mon, Feb 17, 2014 at 8:45 AM, Herouth Maoz wrote:
> I have a production system using Postgresql 9.1.2.
>
> The system basically receives messages, puts them in a queue, and then
> several parallel modules, each in its own thread,
times per hour. The table normally contains around 2-3 million records,
and has 3 indexes.
Thank you,
Herouth
On 17/02/2014, at 18:45, Herouth Maoz wrote:
> I have a production system using Postgresql 9.1.2.
>
> The system basically receives messages, puts them in a queue, and then
I have a production system using Postgresql 9.1.2.
The system basically receives messages, puts them in a queue, and then several
parallel modules, each in its own thread, read from that queue, and perform two
inserts, then release the message to the next queue for non-database-related
processi
On 10/12/2013, at 20:55, Kevin Grittner wrote:
> Herouth Maoz wrote:
>
>> The problem starts when our partner has some glitch, under high
>> load, and fails to send back a few hundred thousand reports. In
>> that case, the table grows to a few hundred records, and they
On 10/12/2013, at 20:55, Jeff Janes wrote:
>
> On Tue, Dec 10, 2013 at 8:23 AM, Herouth Maoz wrote:
>
> Hello.
>
> I have one particular table with very specialized use. I am sending messages
> to some partner. The partner processes them asynchronously, and then returns
Hello.
I have one particular table with very specialized use. I am sending messages to
some partner. The partner processes them asynchronously, and then returns the
status report to me. The table is used to store a serialized version of the
message object, together with a few identifiers, expi
makes for maintenance spaghetti. I also don't like running automated
DDL commands. They don't play well with backups.
-----Original Message-----
From: Steve Crawford [mailto:scrawf...@pinpointresearch.com]
Sent: Mon 28/10/2013 22:31
To: Herouth Maoz; pgsql-general@postgresql.org
Subject: Re: [GENER
:
> On 2013-10-28 12:27, Herouth Maoz wrote:
>> I have a rather large and slow table in Postgresql 9.1. I'm thinking of
>> partitioning it by months, but I don't like the idea of creating and
>> dropping tables all the time.
>>
>> I'm thinking of sim
I have a rather large and slow table in Postgresql 9.1. I'm thinking of
partitioning it by months, but I don't like the idea of creating and dropping
tables all the time.
I'm thinking of simply creating 12 child tables, in which the check condition
will be, for example, date_part('month', time
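One of those 12 static children could look like this (parent and column names
are hypothetical). Note the caveat: a CHECK on date_part('month', ...) only
enables constraint exclusion for queries whose WHERE clause repeats that exact
expression, not for plain range predicates on the timestamp column:

```sql
-- Sketch: one of 12 permanent monthly children (names are examples).
CREATE TABLE mytable_m01 (
    CHECK ( date_part('month', "time") = 1 )
) INHERITS (mytable);

-- The child is excluded only for queries of the matching shape:
--   SELECT ... FROM mytable WHERE date_part('month', "time") = 1;
```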
On 18/09/2012, at 20:19, Jeff Janes wrote:
> I think the one below will show an even larger discrepancy. You are
> doing 2 casts for each comparison,
> so I think the casts overhead will dilute out the comparison.
>
> select count(distinct foo) from ( select cast(random() as varchar(14)) as foo
I think you hit the nail right on the head when you asked:
> I wonder if they have different encoding/collations.
[headdesk]Of course. One of the requirements of the upgrade was to change the
database encoding to unicode, because previously it was in an 8-bit encoding
and we couldn't handle int
s to run for an hour, and
I'm sending this hour's worth of stats.
I'm attaching the stats files in tarballs. I'm not sure what I'm supposed to
look at.
Thanks for your time,
Herouth
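The encoding/collation mismatch suspected above can be confirmed directly from
the catalogs (datcollate exists from 8.4 on):

```sql
-- Compare encoding and collation across the databases involved:
SELECT datname,
       pg_encoding_to_char(encoding) AS encoding,
       datcollate
FROM pg_database;
```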
-----Original Message-----
From: Craig Ringer [mailto:ring...@ringerc.id.au]
Sent: Mon 17/09/2
, which is basically what I needed for the time being.
I suspect it's less efficient than unpack, and I hope the function I created
won't be too slow for use inside a trigger.
Thanks,
Herouth
On 12/09/2012, at 17:47, Tom Lane wrote:
> Herouth Maoz writes:
>> I created a functio
I created a function that does some heavy string manipulation, so I needed to
use pl/perl rather than pl/pgsql.
I'm not experienced in perl, but the function works well when used as an
independent perl subroutine - it depends only on its arguments. I use the
Encode package (in postgresql config
We have tables which we archive and shorten every day. That is - the main table
that has daily inserts and updates is kept small, and there is a parallel table
with all the old data up to a year ago.
In the past we noticed that the bulk transfer from the main table to the
archive table takes a
> Subject changed to describe the problem. Reply in-line.
>
> On 09/04/2012 07:57 PM, Herouth Maoz wrote:
>
>> The issue is that when an insert or an update is fired, I can't say
>> whether all the segments of the same transaction have been written yet,
>> and i
Basically, I have several production databases with various data, and I have a
reports database that grabs all necessary data once a day.
Now, there is is a new requirement to have some of the data available in the
reports database as soon as it is inserted in the production database.
Specifica
ith my encoding (asterisks or whatever).
Thank you,
Herouth
On 21/07/2012, at 15:36, Craig Ringer wrote:
> On 07/21/2012 04:59 PM, Herouth Maoz wrote:
>> I am using Postgresql 8.3.14 on our reporting system. There are scripts that
>> collect data from many databases across the firm into
I am using Postgresql 8.3.14 on our reporting system. There are scripts that
collect data from many databases across the firm into this database. Recently I
added tables from a particular database which has encoding UTF-8. My dump
procedure says
\encoding ISO-8859-8
\copy ( SELECT ... ) to file
On 23/05/2012, at 18:54, Bartosz Dmytrak wrote:
> hi,
> my suggestion is to redesign reporting database to fit reporting specifics
> (e.g. break normal form of database, in some cases this will speed up
> reports). Then you can use some ETL tool to sync production and reporting.
> Good thing i
On 23/05/2012, at 17:20, Chris Ernst wrote:
> I would have a look at slony. It is a trigger based replication system
> that allows you to replicate only the tables you define and you can have
> different indexing on the slave. The only requirement is that each
> table you want to replicate has
Hi guys,
I'm interested in a solution that will allow our customers to run reports -
which may involve complicated queries - on data which is as up-to-date as
possible.
One thing I don't want to do is to let the reporting system connect to the
production database. I want the indexes in product
We are looking at a replication solution aimed at high availability.
So we want to use PostgreSQL 9's streaming replication/hot standby. But I seem
to be missing a very basic piece of information: suppose the primary is host1
and the secondary is host2. Suppose that when host1 fails host2 detect
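For 9.0-era streaming replication, the standby is promoted by creating the
trigger file named in its recovery.conf (paths and connection details below are
examples, not taken from the post); from 9.1 on, `pg_ctl promote` does the same:

```
# recovery.conf on host2 (the standby); values are examples
standby_mode = 'on'
primary_conninfo = 'host=host1 port=5432 user=replicator'
trigger_file = '/var/lib/postgresql/9.0/main/failover.trigger'
```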
On 29/11/2011, at 09:13, Tom Lane wrote:
> "Herouth Maoz" writes:
>> I was instructed to delete old records from one of the tables in our
>> production system. The deletion took hours and I had to stop it in
>> mid-operation and reschedule it as a night job. Bu
Hi.
I was instructed to delete old records from one of the tables in our production
system. The deletion took hours and I had to stop it in mid-operation and
reschedule it as a night job. But then I had to do the same when I got up in
the morning and it was still running.
The odd thing about i
on 06/02/11 18:16, quoting Tom Lane:
Most likely, some other session requested an exclusive lock on the
table. Autovacuum will quit to avoid blocking the other query.
That's strange. During the day, only selects are running on that
database, or at worst, temporary tables are being created
Hi there.
During the weekend I've worked for hours on recovering table bloat. Now I was
hoping that after the tables are properly trimmed, then after the next delete
operation which created dead tuples, autovacuum will go into effect and do its
job properly, and prevent the situation from recu
As a result of my recent encounter with table bloat and other tuning issues
I've been running into, I'm looking for a good resource for improving my tuning
skills.
My sysadmin ran into the following book:
PostgreSQL 9.0 High Performance, by Gregory Smith, ISBN 184951030X
http://amzn.com/1849510
On 31/01/2011, at 03:49, Craig Ringer wrote:
> For approaches to possibly fixing your problem, see:
>
> http://www.depesz.com/index.php/2010/10/17/reduce-bloat-of-table-without-longexclusive-locks/
>
> http://blog.endpoint.com/2010/09/reducing-bloat-without-locking.html
I'm not quite sure what
On 30/01/2011, at 12:27, Craig Ringer wrote:
>
> OK, so you're pre-8.4 , which means you have the max_fsm settings to play
> with. Have you seen any messages in the logs about the free space map (fsm)?
> If your install didn't have a big enough fsm to keep track of deleted tuples,
> you'd face
On 30/01/2011, at 13:03, Alban Hertroys wrote:
> On 28 Jan 2011, at 22:12, Herouth Maoz wrote:
>
>> 2. That database has a few really huge tables. I think they are not being
>> automatically vacuumed properly. In the past few days I've noticed a vacuum
>> process
On 29/01/11 13:57, quoting Craig Ringer:
On 01/29/2011 05:12 AM, Herouth Maoz wrote:
The machine has no additional room for internal disks. It is a recent
purchase and not likely to be replaced any time soon.
Newly acquired or not, it sounds like it isn't sized correctly for the
loa
Hello. We have two problems (which may actually be related...)
1. We are running at over 90% capacity of the disk at one of the servers - a
report/data warehouse system. We have run out of disk space several times. Now
we need to make some file-archived data available on the database to support
Quoting Bill Moran:
In response to Herouth Maoz :
Did I understand the original problem correctly? I thought you were saying
that _lack_ of analyzing was causing performance issues, and that running
vacuum analyze was taking too long and causing the interval between
analyze runs to be too
First, I'd like to thank Bill and Alvaro as well as you for your replies.
Quoting Tom Lane:
Hmm. Given the churn rate on the table, I'm having a very hard time
believing that you don't need to vacuum it pretty dang often. Maybe the
direction you need to be moving is to persuade autovac to vacu
Hi all.
We had a crisis this week that was resolved by tuning pg_autovacuum for a
particular table. The table is supposed to contain a small number of items at
any given point in time (typically around 10,000-30,000). The items are
inserted when we send out a message, and are selected, then del
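On 8.3, per-table autovacuum tuning of this kind went through the pg_autovacuum
system catalog (the table name and threshold values below are illustrative;
from 8.4 on the same thing is spelled ALTER TABLE ... SET (autovacuum_...)):

```sql
-- Example: force frequent vacuums of a small, high-churn queue table.
-- A value of -1 means "use the global default" for that column.
INSERT INTO pg_autovacuum
    (vacrelid, enabled,
     vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor,
     vac_cost_delay, vac_cost_limit,
     freeze_min_age, freeze_max_age)
VALUES ('queue'::regclass, true, 500, 0.0, 500, 0.0, -1, -1, -1, -1);
```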
Quoting Scott Marlowe:
On Sat, Mar 20, 2010 at 11:44 AM, Herouth Maoz wrote:
The server version is 8.3.1. Migration to a higher version might be
difficult as far as policies go, if there isn't a supported debian package
for it, but if you can point out a version where this has been fi
quoth Greg Smith:
Herouth Maoz wrote:
Aren't socket writes supposed to have time outs of some sort? Stupid
policies notwithstanding, processes on the client side can disappear
for any number of reasons - bugs, power failures, whatever - and this
is not something that is supposed to ca
On Mar 17, 2010, at 14:56 , Craig Ringer wrote:
> On 17/03/2010 8:43 PM, Herouth Maoz wrote:
>>
>> On Mar 17, 2010, at 13:34 , Craig Ringer wrote:
>>
>>> On 17/03/2010 6:32 PM, Herouth Maoz wrote:
>>>>
>>>> On Mar 3, 2010, at 18:01 , Jo
On Mar 17, 2010, at 13:34 , Craig Ringer wrote:
> On 17/03/2010 6:32 PM, Herouth Maoz wrote:
>>
>> On Mar 3, 2010, at 18:01 , Josh Kupershmidt wrote:
>>
>>> Though next time you see a query which doesn't respond to
>>> pg_cancel_backend(), try gather
On Mar 3, 2010, at 18:01 , Josh Kupershmidt wrote:
> Though next time you see a query which doesn't respond to
> pg_cancel_backend(), try gathering information about the query and what the
> backend is doing; either you're doing something unusual (e.g. an app is
> restarting the query automati
On Mar 3, 2010, at 18:01 , Josh Kupershmidt wrote:
>
> On Wed, Mar 3, 2010 at 8:31 AM, Herouth Maoz wrote:
>
> First, the easy part - regarding allowing/disallowing queries. Is it possible
> to GRANT or REVOKE access to tables based on the originating IP?
>
> I
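GRANT/REVOKE is purely role-based and knows nothing about client addresses;
IP-based restrictions live in pg_hba.conf, where the first matching line wins
(database, role and subnet below are examples):

```
# pg_hba.conf sketch: the "reports" role may connect to "mydb"
# only from one subnet; from everywhere else it is rejected.
# TYPE  DATABASE  USER     ADDRESS           METHOD
host    mydb      reports  192.168.1.0/24    md5
host    mydb      reports  0.0.0.0/0         reject
```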
whole server, and without causing any harm to the database or memory
corruption? Something I can call from within SQL? I run the nightly script from
a linux user which is not "postgres", so I'd prefer a way that doesn't require
using "kill".
Thank you,
Herouth Maoz
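A cancellation that needs no shell access or "kill" can be issued from SQL, as
a superuser, with pg_cancel_backend(). The PID and the one-hour cutoff below
are examples; procpid and current_query are the pre-9.2 column spellings:

```sql
-- Cancel one backend's current query:
SELECT pg_cancel_backend(12345);

-- Or cancel every query that has been running for over an hour:
SELECT pg_cancel_backend(procpid)
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
  AND now() - query_start > interval '1 hour';
```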
Greg Stark wrote:
On Mon, Jan 25, 2010 at 11:37 AM, Herouth Maoz wrote:
The tcp_keepalive setting would only come into play if the remote
machine crashed or was disconnected from the network.
That's the situation I'm having, so it's OK. Crystal, being a Windows
applica
Greg Stark wrote:
On Mon, Jan 25, 2010 at 8:15 AM, Scott Marlowe wrote:
Is there a parameter to set in the configuration or some other means to
shorten the time before an abandoned backend's query is cancelled?
You can shorten the tcp_keepalive settings so that dead connections
get
Scott Marlowe wrote:
You can shorten the tcp_keepalive settings so that dead connections
get detected faster.
Thanks, I'll ask my sysadmin to do that.
Might be, but not very likely. I and many others run pgsql in
production environments where it handles thousands of updates /
inserts per
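The keepalive knobs under discussion are server-side settings; a sketch with
example values (0 would mean "use the operating system's default"):

```
# postgresql.conf: detect dead client connections sooner
tcp_keepalives_idle = 60       # seconds of idleness before the first probe
tcp_keepalives_interval = 10   # seconds between probes
tcp_keepalives_count = 5       # unanswered probes before the kernel
                               # declares the connection dead
```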
Hi Everybody.
I have two questions.
1. We have a system that is accessed by Crystal reports which is in turned
controlled by another (3rd party) system. Now, when a report takes too long or
the user cancels it, it doesn't send a cancel request to Postgres. It just
kills the Crystal process tha
Alban Hertroys wrote:
> On Feb 9, 2009, at 2:07 PM, Grzegorz Jaśkiewicz wrote:
>
>> On Mon, Feb 9, 2009 at 12:50 PM, Herouth Maoz
>> wrote:
>>> I hope someone can clue me in based on the results of explain analyze.
>>
>> Did you have a chance to run vmstat
Grzegorz Jaśkiewicz wrote:
> On Mon, Feb 9, 2009 at 12:50 PM, Herouth Maoz wrote:
>
>> I hope someone can clue me in based on the results of explain analyze.
>>
>
> Did you have a chance to run vmstat on it, and post it here ? Maybe -
> if db resides on the sam
Filip Rembiałkowski wrote:
>
> 2009/1/21 Herouth Maoz <mailto:hero...@unicell.co.il>>
>
> Hello.
>
> I have a daily process that synchronizes our reports database from
> our production databases. In the past few days, it happened a
> couple o
Grzegorz Jaśkiewicz wrote:
> On Wed, Jan 21, 2009 at 12:55 PM, Herouth Maoz wrote:
>
>> Well, if it executes the query it's a problem. I might be able to do so
>> during the weekend, when I can play with the scripts and get away with
>> failures, but of course the
Filip Rembiałkowski wrote:
>
> 1. which postgres version?
8.3.1
> 2. can you post results of EXPLAIN ANALYZE (please note it actually
> executes the query)?
>
Well, if it executes the query it's a problem. I might be able to do so
during the weekend, when I can play with the scripts and get away
Marc Mamin wrote:
> Hello,
>
> - did you vacuum your tables recently ?
>
> - What I miss in your query is a check for the rows that do not need
> to be udated:
>
> AND NOT (service = b.service
>AND status = b.status
> AND has_notification = gateway_id NOT IN (4,
Hello.
I have a daily process that synchronizes our reports database from our
production databases. In the past few days, it happened a couple of
times that an update query took around 7-8 hours to complete, which
seems a bit excessive. This is the query:
UPDATE rb
SET service = b.service
Adrian Klaver wrote:
> On Sunday 21 December 2008 1:49:18 am Herouth Maoz wrote:
>
>> Adrian Klaver wrote:
>>
>>>
>>>
>>> Are you sure the problem is not in "$datefield" = "*" . That the script
>>> that formats the
Adrian Klaver wrote:
>
>
> Are you sure the problem is not in "$datefield" = "*" . That the script that
> formats the data file is not correctly adding "*" to the right file. Seems
> almost like sometimes the second CMD is being run against the table that the
> first CMD should be run on. In ot
I have a strange situation that occurs every now and again.
We have a reports system that gathers all the data from our various
production systems during the night, where we can run heavy reports on
it without loading the production databases.
I have two shell scripts that do this nightly transfe
ngth of the other and then compares, selects
and whatnot.
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herouth/personal/
CREATE SEQUENCE new_seq INCREMENT 100 START 100;
UPDATE questions
SET the_order = nextval( 'new_seq' )
WHERE questions.the_order = temp_numbers.the_order;
DROP SEQUENCE new_seq;
DROP TABLE temp_numbers;
The idea is to do the renumbering in batch, and have a small penalty in
"real time"
o a temp table that only has the
non-default columns. Then INSERT ... SELECT ... from that temp table to
your "real" table.
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
At 20:13 +0200 on 12/10/1999, Duncan Kinder wrote:
> pg:2345:respawn:/bin/su - Postgres -c
> "/usr/local/pgsql/bin/postmaster -D/usr/local/pgsql/data >>
> /usr/local/pgsql/server.log 2>&1"
> I would like to know how to edit this language so that Postgres will
> automatically start with the -i fl
ostgreSQL.
If you think this niche is important, maybe you should convince the rest of
us here (I never needed to use a stored procedure so far, and I don't
remember many people using them five years ago when I was in an Oracle
environment). Or you could prioritize it with money...
Herouth
-
, not
char_ops.
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
g deletes but
not cascading updates.
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
.
The bottom line of all this is that if you want to use passwords, you have
to have a frontend-backend agent/driver/module which is compatible with the
new protocol. If you mentioned Postgres95, it's probably an old, old agent.
Herouth
--
Herouth Maoz, Internet developer.
Ope
n't find anybody on the NT listening to that port.
That means no ident server is running on the NT.
* Connection is therefore refused because user could not be
authenticated.
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
ident server on
the NT (is there such a beast?).
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
;[EMAIL PROTECTED]%'
UNION
SELECT .
WHERE
AND lower(SOTRUD.EMAIL) LIKE '[EMAIL PROTECTED]%'
...
etc.
Also try UNION ALL.
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
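A rewrite along those lines might look like this (table, column and addresses
are placeholders; the archive redacted the real ones). UNION ALL skips the
duplicate-eliminating sort that plain UNION performs, which is why it is worth
trying when the branches cannot overlap:

```sql
SELECT clientid FROM sotrud
 WHERE lower(email) LIKE 'alice@example.com%'
UNION ALL
SELECT clientid FROM sotrud
 WHERE lower(email) LIKE 'bob@example.com%';
```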
At 11:56 +0300 on 05/09/1999, Alois Maier wrote:
>I know that I can set the transaction level with the SET TRANSACTION
>LEVEL statement. How can I get the transaction level from SQL ?
Normally, the counterpart of SET is SHOW. Did you try SHOW TRANSACTION ISOLATION LEVEL?
Herouth
--
Herout
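For the record, SET TRANSACTION only affects the current transaction, so the
pair has to run inside one block:

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SHOW TRANSACTION ISOLATION LEVEL;   -- reports "serializable"
COMMIT;
```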
you to use case insensitive comparison and give up the 'lower':
where CLIENTS.CLIENTID=SOTRUD.CLIENTID
and CLIENTS.PRINADL=PRINADLEG.PRINADL
and CLIENTS.FLG_MY
and not CLIENTS.ARH
and SORTUD.EMAIL ~*
'ruslanmr@hotmail\\.com|matukin@hotmail\\.com|knirti@kaluga\\.ru|avk@vniicom\
7; testing < stam.txt
The following exports the same table:
psql -qc 'COPY test5 TO stdout' testing > stam.txt
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
publisher:
Morgan Kaufmann Publishers, Inc
340 Pine St 6th Floor
San Francisco CA 94104-33205
USA
415-392-2665
[EMAIL PROTECTED]
http://www.mkp.com
The original poster of this recommendation was Terry Harple, and it was on
the (now defunct) QUESTIONS list.
Herouth
--
Herouth Maoz, Internet develop
At 14:30 +0300 on 10/08/1999, Safa Pilavcı wrote:
> Please help
Char16 has little support. Any possibility of changing the definition to
char(16)?
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
g SPI, install it on the backend, and use
it for the comparison.
2) Create the new locale, or at least the LC_CTYPE part of the locale,
on the unix you are using.
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
ckup script, plus a non-standard interface
for writing into them and reading from them, and they are not deleted when
you drop the row referring to them, then you may as well use files, and
store only the path in Postgres for easy lookup.
Herouth
--
Herouth Maoz, Internet developer.
Open Uni
s that in your case, you will simply find that the difference
results from some different error message. Perhaps your system would say
"out of range" instead of "Math result not representable". Check the diff.
If this is true, then you have nothing to worry about.
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
as "BACKSLASH_BEHAVIOR", which will be
either "literal" or "escape". It can default to the current behavior
(namely "escape") so that current code won't fail, but will enable people
to write sane standard code.
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
rom an RPM perhaps? Common PostgreSQL RPMs were
somehow separated into three packages, though for the life of me I can't
understand why the data package is needed. You should be able to create the
default database using initdb - unless they didn't RPM the initdb
executable?
Herouth
--
He
next_tuple;
if ( tuple.id = last_id )
print( "," + tuple.child )
else
print( + tuple.person + + tuple.child )
end if
last_id = tuple.id;
end while
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
g into psql as that person, and create the tables.
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
ble (check to see if there are extra spaces there,
though). You can give names that contain spaces in unix, it's no problem.
And then I'd try the drop again.
I hope any of these suggestions helps. Just make sure you have a backup
copy of the directories somewhere safe.
Herouth
--
Herout
At 22:26 +0300 on 10/05/1999, Jonny Hinojosa wrote:
> The last 2 entries have been corrupted. How do I (can I) correct these
> entries ??
Have you tried logging into psql (template1) as postgres and updating the
pg_database table?
Herouth
--
Herouth Maoz, Internet developer
lds? In that case, the 6.3.2 psql is the culprit, and since your issue
is with upgrading to 6.4.2, you need not worry.
If, on the other hand, the dump makes the copy with single quotes doubled
or backslashed, you will have to use some sed or perl script to remove
that, because they will not be
customer
testing-> );
 count
-------
     5
(1 row)
And this is the exact number of distinct names in the table.
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
NULL value unless it truely stands for "no value here".
4) Insert into your real table using a SELECT statement. The INSERT
clause should include only the names of "external source" fields.
This will cause the internal ones to be filled from the default
source.
This method allows also the use of functions and stuff when populating the
table.
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
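Steps 3-4 above can be sketched like this (table and column names invented for
illustration; COPY's CSV option is modern syntax, not available in the
releases this thread dates from):

```sql
-- Load only the externally-supplied columns into a staging table...
CREATE TEMP TABLE staging (name text, email text);
COPY staging FROM '/tmp/data.csv' WITH CSV;

-- ...then let the omitted columns (id, created_at, ...) take their
-- defaults when copying into the real table:
INSERT INTO real_table (name, email)
SELECT name, email FROM staging;
```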
the
bottleneck is wider. You know, like you would treat any shared object in an
inter-process environment?
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
you make the call to currval - the correct value is
already available to you.
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
ession. That is, it won't work if you use it before the insertion (because
the sequence didn't give you a number yet). It will also give you the
correct number even if between the INSERT and the SELECT, another process
or another connection also made an insert.
Herouth
--
Herouth Maoz
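A minimal illustration (names invented): currval() reports the value that
nextval() last returned in this session, so it is race-free even with
concurrent inserters:

```sql
INSERT INTO orders (id, item)
VALUES (nextval('orders_id_seq'), 'book');

SELECT currval('orders_id_seq');  -- the id used just above, regardless
                                  -- of what other sessions inserted
```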
At 22:48 +0200 on 11/3/99, Ralf Weidemann wrote:
>
> how could I do an automatic daily check
> to delete some expired data ? I mean
> can I have a cron functionality in post-
> gresql ?
You don't need to have cron functionality in postgresql when you have cron
functionality in cron. :)
What yo
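The cron-side answer being hinted at is simply a crontab entry invoking psql
(schedule, database name and SQL below are examples):

```
# Run every day at 03:00: purge expired rows.
0 3 * * *  psql -d mydb -c "DELETE FROM sessions WHERE expires < now()"
```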
insert updated data back?
(Assuming you don't have a separate update for each line).
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma