Guy Rouillier wrote:
Andrus wrote:
The last change to this project was 3 years ago,
so I think it is dead.
I'm writing an application in C#.
I expected that I could write stored procedures in C# too, using
something like mod_mono in Apache.
So it seems the most reasonable way is to learn
I'm redoing a SQL schema, and looking for some input.
First I had 2 tables :
Table_A
id
name
a
b
c
Table_B
id
name
x
y
z
Magnus Hagander [EMAIL PROTECTED] writes:
Both MS SQL Server and IBM DB2 (on Windows) support .net stored
procedures in C#, VB, or any other .net hosted language.
Awhile back I read an article claiming that .NET could only host one
language, or at least only languages that differed merely in
Tom,
Awhile back I read an article claiming that .NET could only host one
language, or at least only languages that differed merely in trivial
syntactic details --- its execution engine isn't flexible enough for
anything truly interesting.
Jim Hugunin (creator of Jython, which is Python on
On Tue, Apr 03, 2007 at 04:00:17AM -0400, Tom Lane wrote:
Magnus Hagander [EMAIL PROTECTED] writes:
Both MS SQL Server and IBM DB2 (on Windows) support .net stored
procedures in C#, VB, or any other .net hosted language.
Awhile back I read an article claiming that .NET could only host one
Hey all
I am possibly looking to use PostgreSQL in a project I am working on for a very
large client. The upshot of this is the throughput of data will be pretty
massive, around 20,000 new rows in one of the tables per day. We also have to
keep this data online for a set period so after 5 or 6 weeks
On Tue, Apr 03, 2007 at 09:28:28AM +0100, Tim Perrett wrote:
Hey all
I am possibly looking to use PostgreSQL in a project I am working on for a very
large client. The upshot of this is the throughput of data will be pretty
massive, around 20,000 new rows in one of the tables per day. We also
Tim Perrett wrote:
Hey all
I am possibly looking to use PostgreSQL in a project I am working on for a very
large client. The upshot of this is the throughput of data will be pretty
massive, around 20,000 new rows in one of the tables per day. We also have to
keep this data online for a set
I am possibly looking to use PostgreSQL in a project I am working on for a very
large client. The upshot of this is the throughput of data will be
pretty massive, around 20,000 new rows in one of the tables per day.
We also have to keep this data online for a set period so after 5 or 6
weeks it
No idea??
Thorsten Kraus wrote:
Hi,
I designed a Java web application. The persistence layer is a
PostgreSQL database. The application needs user authentication.
I think it's a good choice to implement this authentication mechanism
via PostgreSQL login roles. So I can create several
Thorsten Kraus wrote:
No idea??
You'd need an authenticated user to call that stored procedure in the
first place. It is kind of a chicken-and-egg problem.
Usually people create a user for the webapp. This user makes the first
connection to the database.
After that you probably could define a
Thorsten Kraus wrote:
Hi,
I designed a Java web application. The persistence layer is a PostgreSQL
database. The application needs user authentication.
I think it's a good choice to implement this authentication mechanism
via PostgreSQL login roles. So I can create several database login
On Mon, Apr 02, 2007 at 11:53:50AM -0500, Anders Nilsson wrote:
The situation:
A loop that inserts thousands of values into a table.
In hopes of optimizing the bunches of inserts, I prepared
a statement like the
Sorry, but that won't work. ECPG only simulates statement preparation.
Hello,
Does anyone know why it's so slow to restore a backup on Windows 2000 or
Windows 2003 for archives bigger than 80 MB?
I do the same thing using other Windows versions or Linux, and it's far
faster than this. What could it be?
I'm using PostgreSQL 8.1 or a lower version.
Regards,
On Sunday 01 April 2007 9:09 am, jlowery wrote:
I'm having a bit of a problem getting plpython's prepare to work
properly:
CREATE OR REPLACE FUNCTION batch_item_reversal(b batch_item)
RETURNS varchar AS
$BODY$
if b['reversal_flag'] == 'Y':
    sql = plpy.prepare(
        SELECT
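For reference, plpy.prepare in PL/Python takes the query text plus a list of parameter type names, and plpy.execute runs the saved plan with the actual values. A minimal sketch of that pattern (the table, column, and function names here are invented for illustration, not taken from the original post):

```sql
CREATE OR REPLACE FUNCTION reversal_demo(flag text) RETURNS varchar AS
$BODY$
if flag == 'Y':
    # prepare() takes the SQL text and a list of parameter types;
    # parameters are referenced as $1, $2, ... inside the query
    plan = plpy.prepare("SELECT note FROM batch_items WHERE flag = $1",
                        ["text"])
    rows = plpy.execute(plan, [flag])
    if rows:
        return rows[0]["note"]
return None
$BODY$ LANGUAGE plpythonu;
```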
Hi,
thanks for your answer. I can't use the username/password in my DSN
because I don't connect directly via JDBC to the database. I use
Hibernate for all database actions. The username and password have to be
stored in the Hibernate configuration file...
Bye,
Thorsten
Lutz Broedel wrote:
In response to Thorsten Kraus [EMAIL PROTECTED]:
Hi,
thanks for your answer. I can't use the username/password in my DSN
because I don't connect directly via JDBC to the database. I use
Hibernate for all database actions. The username and password have to be
stored in the Hibernate
[EMAIL PROTECTED] wrote:
Original Message
Subject: Re: [GENERAL] SQLConnect failure
From: Bill Moran [EMAIL PROTECTED]
Date: Mon, April 02, 2007 2:54 pm
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
In response to [EMAIL PROTECTED]:
We have code that has
I have a similar situation. Here's what I do.
I have a stand-alone comment table:
Comments
id
timestamp
text
Then I have individual product tables to tie a table to a comment:
Table_A_Comment
id
id_ref_a references tableA
id_comment references
I need to do like 1000 inserts periodically from a web app. Is it better to
do 1000 inserts or 1 insert with the all 1000 rows? Is using copy command
faster than inserts?
thanks
On 4/2/07, Chris [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
Hi
I am trying to insert multiple values into a
I am trying to insert multiple values into a table like this.
INSERT INTO tab_name (col1, col2) VALUES (val1, val2), (val3, val4)
...
My production server runs in 8.1.5.
...
What to do?
Upgrade to 8.2. :)
Seriously, you should upgrade to
On Tue, 03.04.2007 at 07:19:15 -0700, [EMAIL PROTECTED] wrote:
I need to do like 1000 inserts periodically from a web app. Is it better to do
1000 inserts or 1 insert with the all 1000 rows? Is using copy command faster
than inserts?
You can do the massive Inserts within one
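For illustration, the bulk-loading styles discussed in this thread might look like this (table and column names are made up; the multi-row VALUES form needs 8.2 or later, as noted elsewhere in this digest):

```sql
-- One INSERT carrying many rows (PostgreSQL 8.2+): one statement,
-- one round trip, one commit
INSERT INTO tab_name (col1, col2) VALUES
    (1, 'val1'),
    (2, 'val2'),
    (3, 'val3');

-- COPY is usually faster still for large batches:
COPY tab_name (col1, col2) FROM STDIN;

-- On 8.1, grouping individual INSERTs in one transaction avoids
-- a commit (and disk flush) per row:
BEGIN;
INSERT INTO tab_name (col1, col2) VALUES (1, 'val1');
INSERT INTO tab_name (col1, col2) VALUES (2, 'val2');
COMMIT;
```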
Hello everyone !
I have this query :
annonces=> EXPLAIN ANALYZE SELECT * FROM annonces AS a WHERE
detect_time > CURRENT_TIMESTAMP - '7 DAY'::INTERVAL
AND detect_time = '2006-10-30 16:17:45.064793'
AND vente
AND surface IS NOT NULL AND price IS NOT NULL
AND type_id IN
Hello,
Does anyone know why it's so slow to restore a backup on Windows 2000 or
Windows 2003 for archives bigger than 80 MB?
I do the same thing using other Windows versions or Linux, and it's far
faster than this. What could it be?
I'm using PostgreSQL 8.1 or a lower version.
Regards,
You could originally connect to the database as some kind of power user.
Check the password against the pg_shadow view (you would need to md5 your
password somehow) and then do a SET SESSION AUTHORIZATION (or SET ROLE) to
change your permissions. Not sure how secure this would be but it's the
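For what it's worth, with md5 authentication pg_shadow stores the hash as the literal string 'md5' followed by md5 of the password concatenated with the username, so the check sketched above could look something like this (the role name and password here are invented):

```sql
-- Compare a candidate password against the stored hash:
-- pg_shadow.passwd holds 'md5' || md5(password || username)
SELECT passwd = 'md5' || md5('secret' || 'someuser') AS password_ok
FROM pg_shadow
WHERE usename = 'someuser';

-- If it matches, take on that user's permissions:
SET SESSION AUTHORIZATION someuser;
```

Note that reading pg_shadow requires superuser privileges, which is part of why the security of this scheme is questionable.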
There are a few others.
http://freshmeat.net/projects/postgresql_autodoc
http://dbmstools.sourceforge.net/
http://sqlfairy.sourceforge.net/
are some of the ones with explicit postgresql support I've played
with in the past. I've had some luck using the ODBC or
JDBC based ones too.
I've been working for the past few weeks on porting a closed source
BitTorrent tracker to use PostgreSQL instead of MySQL for storing
statistical data, but I've run in to a rather large snag. The tracker in
question buffers its updates to the database, then makes them all at
once, sending
On Apr 3, 2007, at 9:56 AM, Jaime Silvela wrote:
I have a similar situation. Here's what I do.
I have a stand-alone comment table:
Comments
id
timestamp
text
Then I have individual product tables to tie a table to a comment:
Table_A_Comment
id
On Mon, 2007-04-02 at 22:24, Steve Gerhardt wrote:
I've been working for the past few weeks on porting a closed source
BitTorrent tracker to use PostgreSQL instead of MySQL for storing
statistical data, but I've run in to a rather large snag. The tracker in
question buffers its updates to the
This would be a possible way. Now the question is which md5 algorithm
implementation PostgreSQL uses...
Bye,
Thorsten
Ben Trewern wrote:
You could originally connect to the database as some kind of power user.
Check the password against the pg_shadow view (you would need to md5 your
On Apr 3, 2007, at 11:44 AM, Scott Marlowe wrote:
I can't help but think that the way this application writes data is
optimized for MySQL's transactionless table type, where lots of
simultaneous input streams writing at the same time to the same table
would be death.
Can you step back and
I've written a web application where users can upload spreadsheets,
instead of having to key in forms. The spreadsheets get parsed and
INSERTED into a table, and with the INSERT gets added an identifier so
that I can always trace back what a particular row in the table
corresponds to.
I'd like
Listmail [EMAIL PROTECTED] writes:
It bitmapscans about half the table...
Which PG version is this exactly? We've fooled with the
choose_bitmap_and heuristics quite a bit ...
regards, tom lane
I can't help but think that the way this application writes data is
optimized for MySQL's transactionless table type, where lots of
simultaneous input streams writing at the same time to the same table
would be death.
Can you step back and work on how the app writes out data, so that it
opens
On Tue, 03 Apr 2007 19:23:31 +0200, Tom Lane [EMAIL PROTECTED] wrote:
Listmail [EMAIL PROTECTED] writes:
It bitmapscans about half the table...
Which PG version is this exactly? We've fooled with the
choose_bitmap_and heuristics quite a bit ...
regards, tom
Jaime Silvela wrote:
I've written a web application where users can upload spreadsheets,
instead of having to key in forms. The spreadsheets get parsed and
INSERTED into a table, and with the INSERT gets added an identifier so
that I can always trace back what a particular row in the table
Steve Gerhardt [EMAIL PROTECTED] writes:
# EXPLAIN ANALYZE UPDATE peers2...etc etc
QUERY PLAN
-
Merge Join (cost=262518.76..271950.65 rows=14933 width=153) (actual
time=8477.422..9216.893 rows=26917 loops=1)
Merge
David--
Mono is .NET on SUSE.
Here's the main site... beware, this is rather complicated to install and
configure but once I
you can run .NET Framework as a SUSE Binary Image then allow the GAC to pull
in assemblies
This link will get you started
http://www.mono-project.com/VMware_Image
I
Are there any implications with possibly doing this? Will PG handle it?
Are there real-world systems using PG that have a massive amount of data
in them?
It's not how much data you have, it's how you query it.
You can have a table with 1000 rows and be dead slow if said rows are
I designed a Java web application. The persistence layer is a
PostgreSQL database. The application needs user authentication.
I think it's a good choice to implement this authentication mechanism
via PostgreSQL login roles. So I can create several database login
roles and set the database
Brian, that's not what I meant.
Parsing of the uploaded file is just for the purpose of extracting the
components of each spreadsheet row and constructing the INSERTs.
Actually, whenever I copy from a file, either using COPY or with a
custom importer, I put the data into a staging table, so
Listmail [EMAIL PROTECTED] writes:
On Tue, 03 Apr 2007 19:23:31 +0200, Tom Lane [EMAIL PROTECTED] wrote:
Listmail [EMAIL PROTECTED] writes:
It bitmapscans about half the table...
Which PG version is this exactly? We've fooled with the
choose_bitmap_and heuristics quite a bit ...
Jaime Silvela wrote:
Brian, that's not what I meant.
Parsing of the uploaded file is just for the purpose of extracting the
components of each spreadsheet row and constructing the INSERTs.
Actually, whenever I copy from a file, either using COPY or with a
custom importer, I put the data into a
Hmmm [ studies query a bit more... ] I think the reason why that index
is so expensive to use is exposed here:
Index Cond: ((detect_time > (now() - '7
days'::interval)) AND (detect_time = '2006-10-30
16:17:45.064793'::timestamp without time zone))
Evidently detect_time is
That's sort of what I have already, and my problem is that the
portfolio_id field does not exist in the CSV files. I'd like to be able
to assign a portfolio_id, for the current file's entries. Another person
in the list suggested dynamically adding a column with the portfolio_id
to the file,
Hi,
I'm on 8.0.10 and there is a query I cannot quite get adequately fast.
Should it take 2.5s to sort these 442 rows? Are my settings bad? Is
my query stupid?
Would appreciate any tips.
Best regards,
Marcus
apa=> explain analyze
apa-> select
apa->     ai.objectid as ai_objectid
apa-> from
Hi
I've just stumbled across pgsnmpd. It works quite well,
though I haven't yet found a web-based monitoring
software that works well with pgsnmpd. The problem is
that pgsnmpd exports a bunch of values _per_ database.
(The output of snmpwalk looks something like
PGSQL-MIB::pgsqlDbDatabase.1.1.3
Can anyone see why autovacuum or autoanalyze are not working?
proj02u20411=# select version();
version
PostgreSQL 8.2.3 on i686-pc-mingw32,
compiled by GCC gcc.exe (GCC) 3.4.2 (mingw-special)
(1 row)
proj02u20411=# explain analyze select * from
Richard Broersma Jr wrote:
Can anyone see why autovacuum or autoanalyze are not working?
Known bug, fixed in the 8.2.4-to-be code.
--
Alvaro Herrera    http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
--- Alvaro Herrera [EMAIL PROTECTED] wrote:
Known bug, fixed in the 8.2.4-to-be code.
Okay. Thanks for the information.
Regards,
Richard Broersma Jr.
Tim,
massive, around 20,000 new rows in one of the tables per day.
As an example...
I'm doing about 4000 inserts spread across about 1800 tables per minute.
Pisses it in with fsync off and the PC ( IBM x3650 1 CPU, 1 Gig memory ) on a
UPS.
Allan
I've just stumbled across pgsnmpd. It works quite well,
though I haven't yet found a web-based monitoring
software that works well with pgsnmpd. The problem is
that pgsnmpd exports a bunch of values _per_ database.
(The output of snmpwalk looks something like
PGSQL-MIB::pgsqlDbDatabase.1.1.3 =
I've got an MS Access front end reporting system that has previously
used MS SQL server which I am moving to Postgres.
The front end has several hundred if not thousand inbuilt/hard-coded
queries, most of which aren't working for the following reasons:
1.) Access uses double quotes ("") as text
Philip Hallstrom wrote:
I've just stumbled across pgsnmpd. It works quite well,
though I haven't yet found a web-based monitoring
software that works well with pgsnmpd. The problem is
that pgsnmpd exports a bunch of values _per_ database.
(The output of snmpwalk looks something like
Paul Lambert [EMAIL PROTECTED] writes:
Is there any way to change the text qualifier in PG
No. I suppose you could hack the Postgres lexer but you'd break
pretty much absolutely everything other than your Access code.
or the case sensitivity?
That could be attacked in a few ways, depending
Tom Lane wrote:
Paul Lambert [EMAIL PROTECTED] writes:
Is there any way to change the text qualifier in PG
No. I suppose you could hack the Postgres lexer but you'd break
pretty much absolutely everything other than your Access code.
or the case sensitivity?
That could be attacked in a
Hello all!
This is my first post! I am interested in finding out what queries have
been made against a particular database in postgres. The version of Postgres
is 8.0 running on Mandrake 10. The queries are made by client computers
over the network. What steps must I take to accomplish such a
Paul Lambert wrote:
I've got an MS Access front end reporting system that has previously
used MS SQL server which I am moving to Postgres.
The front end has several hundred if not thousand inbuilt/hard-coded
queries, most of which aren't working for the following reasons:
1.) Access uses
Paul Lambert wrote:
Tom Lane wrote:
Paul Lambert [EMAIL PROTECTED] writes:
Is there any way to change the text qualifier in PG
No. I suppose you could hack the Postgres lexer but you'd break
pretty much absolutely everything other than your Access code.
or the case sensitivity?
That
nextval() and sequences are not what I'm looking for. I want to
assign the same id to all the rows imported from the same file.
Let's say user A is working on portfolio_id 3, and decides to
upload a spreadsheet with new values. I want to be able to import
the spreadsheet into the staging
4wheels wrote:
Hello all!
This is my first post! I am interested in finding out what queries have
been made against a particular database in postgres. The version of Postgres
is 8.0 running on Mandrake 10. The queries are made by client computers
over the network. What steps must I take to
Paul Lambert wrote:
I've got an MS Access front end reporting system that has previously
used MS SQL server which I am moving to Postgres.
Are you using PassThrough queries? It is not clear
The front end has several hundred if not thousand inbuilt/hard-coded
queries, most of which
Joshua D. Drake wrote:
You could preface all your queries with something like:
select * from foo where lower(bar) = lower('qualifier');
But that seems a bit silly.
And also it would prevent the optimizer from using any indexes on
bar. Not a good idea.
Eddy
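One standard answer here (not spelled out in the thread above, so take it as a suggestion): PostgreSQL supports indexes on expressions, which lets the optimizer use an index for exactly the rewritten lower() form. The table and column names follow the example in the thread:

```sql
-- An expression index on the lower-cased column:
CREATE INDEX foo_bar_lower_idx ON foo (lower(bar));

-- The planner can now consider this index for queries of the form:
SELECT * FROM foo WHERE lower(bar) = lower('qualifier');
```

This does not remove the need to rewrite the queries, but it avoids the sequential scan that a bare lower() comparison would otherwise force.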
On Tue, 03 Apr 2007 12:45:54 -0400, Jaime Silvela [EMAIL PROTECTED] wrote:
I'd like to be able to do something like
COPY mytable (field-1, .. field-n, id = my_id) FROM file;
How do you get my_id? Can you get it in a trigger? Triggers still fire
with copy so if you can get a trigger to fill
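Since BEFORE INSERT triggers do fire for COPY, one way to sketch the trigger idea (all names here are invented, and note that very old plpgsql versions could trip over cached plans when a temp table is re-created between sessions):

```sql
-- Staging table with the extra column the CSV lacks:
CREATE TABLE staging (
    col1         text,
    col2         numeric,
    portfolio_id integer
);

-- A one-row temp table carries the id for this session:
CREATE TEMP TABLE upload_context (portfolio_id integer);

-- The trigger copies the session's id into every inserted row:
CREATE FUNCTION fill_portfolio_id() RETURNS trigger AS $$
BEGIN
    SELECT portfolio_id INTO NEW.portfolio_id FROM upload_context;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER staging_fill_id BEFORE INSERT ON staging
    FOR EACH ROW EXECUTE PROCEDURE fill_portfolio_id();

-- Then, per upload:
INSERT INTO upload_context VALUES (3);
COPY staging (col1, col2) FROM '/path/to/file.csv';
```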
On Tue, 03 Apr 2007 18:24:00 -0700, Joshua D. Drake [EMAIL PROTECTED] wrote:
Paul Lambert wrote:
Tom Lane wrote:
Paul Lambert [EMAIL PROTECTED] writes:
or the case sensitivity?
That could be attacked in a few ways, depending on whether you want
all text comparisons to be
Joshua D. Drake wrote:
You could preface all your queries with something like:
select * from foo where lower(bar) = lower('qualifier');
But that seems a bit silly.
Joshua D. Drake
I'm trying to avoid having to alter all of my queries, per the OP I've
got several hundred if not thousands
Paul Lambert wrote:
I've got an MS Access front end reporting system that has previously
used MS SQL server which I am moving to Postgres.
The front end has several hundred if not thousand inbuilt/hard-coded
queries, most of which aren't working for the following reasons:
1.) Access uses
Paul Lambert [EMAIL PROTECTED] writes:
Tom Lane wrote:
That could be attacked in a few ways, depending on whether you want
all text comparisons to be case-insensitive or only some (and if so
which some).
I don't have any case sensitive data - so if sensitivity could be
completely disabled
Edward Macnaghten wrote:
Joshua D. Drake wrote:
You could preface all your queries with something like:
select * from foo where lower(bar) = lower('qualifier');
But that seems a bit silly.
And also it would prevent the optimizer from using any indexes on
bar. Not a good idea.
You could
Marcus Engene [EMAIL PROTECTED] writes:
Should it take 2.5s to sort these 442 rows?
Limit (cost=54.40..54.43 rows=12 width=8) (actual
time=2650.254..2651.093 rows=442 loops=1)
-> Sort (cost=54.40..54.43 rows=12 width=8) (actual
time=2650.251..2650.515 rows=442 loops=1)
Sort
Hi again,
I was thinking, in my slow query it seems the sorting is the villain.
Doing a simple qsort test I notice that:
[EMAIL PROTECTED] /cygdrive/c/pond/dev/tt
$ time ./a.exe 430
real    0m0.051s
user    0m0.030s
sys     0m0.000s
[EMAIL PROTECTED] /cygdrive/c/pond/dev/tt
$ time ./a.exe
Tom Lane wrote:
Marcus Engene [EMAIL PROTECTED] writes:
Should it take 2.5s to sort these 442 rows?
Limit (cost=54.40..54.43 rows=12 width=8) (actual
time=2650.254..2651.093 rows=442 loops=1)
-> Sort (cost=54.40..54.43 rows=12 width=8) (actual
time=2650.251..2650.515
Klint Gore [EMAIL PROTECTED] writes:
Is there any way to create operators to point like to ilike? There
doesn't seem to be a like or ilike in pg_operator (not in 7.4 anyway).
Actually it's the other way 'round: if you look into gram.y you'll see
that LIKE is expanded as the operator ~~ and
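To complete the picture: the underlying operator names are ~~ for LIKE and ~~* for ILIKE (with !~~ and !~~* for the NOT forms), so the equivalences look like this:

```sql
-- These pairs are equivalent:
SELECT 'Foo' LIKE 'F%';     -- same as: SELECT 'Foo' ~~ 'F%';
SELECT 'Foo' ILIKE 'f%';    -- same as: SELECT 'Foo' ~~* 'f%';
```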
2.) The LIKE operator in SQL Server is case insensitive; in PG it is case
sensitive. The ILIKE operator is not recognised by Access, which tries
to turn it into a string, making my test (like ilike 'blah')
Has anyone had any experience with moving an access program from SQL
server to PG?
Paul,
we have a contrib module, mchar, which does what you need. We developed it
when porting one accounting software package, very popular in Russia, from MS SQL.
It's available from http://v8.1c.ru/overview/postgres_patches_notes.htm,
in Russian. I don't remember about the license, though.
Oleg
On Wed, 4