On Tue, Mar 20, 2012 at 11:54 PM, Geek Matter geekmat...@yahoo.com wrote:
Do any other large sites use PostgreSQL? I need to make the right decision, because my
decision will affect a business that involves $
The people who run the .info and .org domains... Lots more. Google
is your friend.
--
On Wed, Mar 21, 2012 at 12:43 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Tue, Mar 20, 2012 at 11:54 PM, Geek Matter geekmat...@yahoo.com wrote:
Do any other large sites use PostgreSQL? I need to make the right decision, because my
decision will affect a business that involves $
The people
scott,
thanks for the quick response. You mean all the .info and .org domains are
using PostgreSQL?
My background is SQL Server, which has powerful graphical tools for data
modeling, replication, etc.
How about PostgreSQL? Does it have free graphical tools for modeling and
replication?
On Wed, Mar 21, 2012 at 12:04 AM, Geek Matter geekmat...@yahoo.com wrote:
scott,
thanks for the quick response. You mean all the .info and .org domains
are using PostgreSQL?
I am pretty sure he means the top level domains (registration, root
DNS server updates, etc) are all run off
On Wednesday, March 21, 2012 at 00:04 -0700, Geek Matter wrote:
How about PostgreSQL? Does it have free graphical tools for modeling and
replication?
In your original post, you wrote: "for our large web apps"; if that
really is the case, I suggest you invest some time in learning the
command
On Linux, you can also do a
cp -rpuv source destination
or script it, something like this in good old bash:
# collect the matching filenames into copy.txt
ls directory | grep 'extension' > copy.txt
list=$(cat /full/path/copy.txt)
location=/target/directory
scp $list user@host:$location
or rsync:
rsync -avz --progress -e ssh source/ user@host:destination/
@all,
thanks so much for your valuable feedback and responses. I will study all of
your feedback in depth; I appreciate it.
The PostgreSQL community is really great :)
From: Vincent Veyron vv.li...@wanadoo.fr
To: Geek Matter geekmat...@yahoo.com
Cc:
Geek Matter wrote:
Skype. And pgsql has some great replication solutions that actually
work
Do any other large sites use PostgreSQL? I need to make the right decision,
because my decision will affect
a business that involves $
http://archives.postgresql.org/pgsql-advocacy/2002-08/msg5.php
On Thursday, March 1, 2012 at 14:01 -0800, Lonni J Friedman wrote:
I've got a 3 node cluster (1 master/2 slaves) running 9.0.x with
streaming replication. I'm in the planning stages of upgrading to
9.1.x, and am looking into the most efficient way to do the upgrade
with the goal of minimizing
I am just wondering: why is MySQL more popular than PostgreSQL?
From: Vincent Veyron vv.li...@wanadoo.fr
To: Geek Matter geekmat...@yahoo.com
Cc: pgsql-general@postgresql.org pgsql-general@postgresql.org
Sent: Wednesday, March 21, 2012 3:48 PM
Subject: Re:
On Wed, Mar 21, 2012 at 2:46 AM, Geek Matter geekmat...@yahoo.com wrote:
I am just wondering: why is MySQL more popular than PostgreSQL?
Pretty blonde girls doing marketing
On Wednesday, March 21, 2012 at 02:49 -0600, Abel Abraham Camarillo Ojeda
wrote:
On Wed, Mar 21, 2012 at 2:46 AM, Geek Matter geekmat...@yahoo.com wrote:
I am just wondering: why is MySQL more popular than PostgreSQL?
Pretty blonde girls doing marketing
Not sure about that, but it would not
2012/3/21 Vincent Veyron vv.li...@wanadoo.fr:
On Wednesday, March 21, 2012 at 02:49 -0600, Abel Abraham Camarillo Ojeda
wrote:
On Wed, Mar 21, 2012 at 2:46 AM, Geek Matter geekmat...@yahoo.com wrote:
I am just wondering: why is MySQL more popular than PostgreSQL?
Pretty blonde girls doing
On Wed, Mar 21, 2012 at 7:46 PM, Geek Matter geekmat...@yahoo.com wrote:
I am just wondering: why is MySQL more popular than PostgreSQL?
PHP and MySQL go together nicely. Both of them allow, even encourage,
sloppiness of various sorts, and it's really easy to make yourself a
dynamic web page
[Forwarded to the list]
Essentially they were bigger earlier, and a little bit louder, and
created an initial following, which in turn got more articles and books
published and more products written for them, and therefore they gained
the mind share or momentum or buzz.
What's with you guys? You're the second one in 7 minutes to email me
instead of the list (just teasing ;-)
Regarding your question: I'm surprised you asked, or was it in jest?
Anyway, a bit of history is visible here:
http://www.postgresql.org/docs/9.1/static/history.html
--
Vincent Veyron
Thanks, I was here:
http://www.postgresql.org/docs/9.1/static/sql-createrole.html
On 20.03.2012 at 16:55, Josh Kupershmidt wrote:
On Tue, Mar 20, 2012 at 8:28 AM, Alexander Reichstadt l...@mac.com wrote:
Hi,
I am looking for a way to reproduce the encrypted string stored as a password by
On Wed, Mar 21, 2012 at 11:10, Vincent Veyron vv.li...@wanadoo.fr wrote:
However, I once read that the real reason is that MySQL was available
when ISPs came into existence, circa 1995. It lacked important features of
an RDBMS (you can google the details), but it was enough to satisfy the
needs
Beautiful.
First I installed the frameworks described in the Kyng Chaos README/website for
PROJ and GEOS, required by PostGIS.
Then I installed PostgreSQL using the official Postgres 9.1.3 installer, but
made the data directory /usr/local/pgsql-9.1/data instead of the default
/Library/etc….,
On 21.03.2012 12:35, Marti Raudsepp wrote:
On Wed, Mar 21, 2012 at 11:10, Vincent Veyron vv.li...@wanadoo.fr wrote:
However, I once read that the real reason is that MySQL was available
when ISPs came into existence, circa 1995. It lacked important features of
an RDBMS (you can google the
Marti Raudsepp, 21.03.2012 12:35:
E.g. VACUUM/ANALYZE needed to be run manually, and it used to take an
*exclusive* lock on tables, for longish periods, preventing any
queries! Failure to vacuum would cause the files to bloat without
limit and slow down your queries gradually. In the worst case,
On Wed, Mar 21, 2012 at 01:35:31PM +0200, Marti Raudsepp wrote:
Let's not forget that PostgreSQL sucked, too, back then.
Well, for some values of sucked. But I don't believe VACUUM was the
main issue. In particular,
wrong, it would bog down all your hardware resources. MySQL lacked
many
In short, MySQL offered the appearance of ease of use, which meant you
didn't need a DBA or even, really, to read the manual. For most
people it was good enough. It turned out that once you started trying
to scale it, you really did need all those features that the MySQL
3.2.3 and earlier
On Wed, Mar 21, 2012 at 2:04 AM, Geek Matter geekmat...@yahoo.com wrote:
scott,
thanks for the quick response. You mean all the .info and .org domains
are using PostgreSQL?
My background is SQL Server, which has powerful graphical tools for data
modeling, replication, etc.
how
I've got a question relating to how the postgres configuration is parsed:
If I write the following into postgresql.conf:
shared_buffers = 24MB
shared_buffers = 32MB
and start up postgres, the command 'show shared_buffers;' answers
'32MB'. That means the later value in the configuration file takes precedence.
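[A quick way to verify which occurrence took effect (a sketch; pg_settings is the system view behind SHOW, and its sourcefile/sourceline columns report where the active value came from):]

SELECT name, setting, sourcefile, sourceline
FROM pg_settings
WHERE name = 'shared_buffers';
-- sourceline points at the later, winning line in postgresql.conf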
I didn't catch this when it went by the first time, so sorry to have
missed it (and for the deep quoting):
On Wed, Mar 21, 2012 at 09:20:02AM -0500, Merlin Moncure wrote:
On Wed, Mar 21, 2012 at 2:04 AM, Geek Matter geekmat...@yahoo.com wrote:
scott,
thanks for the quick response. You mean all
Please, I no longer want emails arriving at this account.
I have already tried to unsubscribe but I cannot; they keep arriving at my account.
On 03/21/2012 07:02 AM, Martin Gerdes wrote:
I've got a question relating to how the postgres configuration is parsed:
If I write the following into postgresql.conf:
shared_buffers = 24MB
shared_buffers = 32MB
and start up postgres, the command 'show shared_buffers;' answers
'32MB'. That
On 03/20/2012 06:16 PM, Jim Green wrote:
The alternatives look kind of complex to me; right now my
approach (Perl DBI and prepared inserts) would take about 8-9 minutes to
insert a day's data. I think I'll probably just stick with it and
wait.
The autovacuum process does a lot of IO
Hello,
Today I have encountered a strange result and I want to trace it, but I do not
know how. I have two views combined with UNION as in q3. q1 returns 236 rows, q2
returns 0 rows. I expected q3 to return 236 rows but I get 233 ...
q1: select * FROM view1 -- returns 236 rows
q2: select * FROM
Thus view1 must be returning 3 pairs of duplicate rows which are then being
combined into 3 individual rows during the de-duplication pass.
Dave J.
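[For illustration, a minimal sketch with a throwaway table: UNION removes duplicates across and within its inputs, while UNION ALL keeps every row:]

CREATE TABLE t (x int);
INSERT INTO t VALUES (1), (1), (2);

SELECT x FROM t UNION SELECT x FROM t WHERE false;     -- 2 rows: 1, 2
SELECT x FROM t UNION ALL SELECT x FROM t WHERE false; -- 3 rows: 1, 1, 2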
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of salah jubeh
Sent: Wednesday, March 21, 2012
Hello Dave J.
You are right. I used EXPLAIN ANALYZE; the last operation was Unique, which
means it removes all duplicates. When I SELECT DISTINCT * FROM view1, I get
233.
Regards
From: David Johnston pol...@yahoo.com
To: 'salah jubeh'
Thanks for the tips. I spent some more time investigating. It's
definitely pg_table_is_visible that's causing the problem. A \dt
schema.* is fairly fast (like you said, it doesn't apply
pg_table_is_visible at all). I tried adjusting the query in several
ways. Adding either
On 21 March 2012 11:01, Steve Crawford scrawf...@pinpointresearch.com wrote:
Something sounds wrong here. XID wraps around somewhere around 4 billion
transactions which is a substantial multiple of the entire number of records
you are trying to insert. Do you have any unusual vacuum settings?
On Tue, Mar 20, 2012 at 11:28 PM, Jim Green
student.northwest...@gmail.com wrote:
On 20 March 2012 22:57, John R Pierce pie...@hogranch.com wrote:
avg() in the database is going to be a lot faster than copying the data
into
memory for an application to process.
I see..
As an example, I
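[For illustration, a sketch with made-up table and column names: the aggregate runs server-side, so only one row per group is sent to the client instead of every raw row:]

SELECT symbol, avg(price) AS avg_price
FROM daily_quotes
GROUP BY symbol;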
On 03/21/2012 09:34 AM, Jim Green wrote:
On 21 March 2012 11:01, Steve Crawford scrawf...@pinpointresearch.com wrote:
Something sounds wrong here. XID wraps around somewhere around 4 billion
transactions which is a substantial multiple of the entire number of records
you are trying to insert.
On 3/21/2012 11:45 AM, Lee Hachadoorian wrote:
On Tue, Mar 20, 2012 at 11:28 PM, Jim Green
student.northwest...@gmail.com wrote:
On 20 March 2012 22:57, John R Pierce pie...@hogranch.com wrote:
avg() in the
On 03/21/2012 09:34 AM, Jim Green wrote:
On 21 March 2012 11:01, Steve Crawford scrawf...@pinpointresearch.com wrote:
Something sounds wrong here. XID wraps around somewhere around 4 billion
transactions which is a substantial multiple of the entire number of records
you are trying to insert.
All,
I'm trying to use a variable for the filename portion of \copy. I'm calling
psql on some SQL commands in a file from a shell script:
FILETOLOAD=/var/tmp/filetoload.${$}
OPTIONS=--variable=outfile='${FILETOLOAD}'
psql ${OPTIONS} -f /some/file.sql
In /some/file.sql:
\copy table(f1, f2,
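[One possible workaround, sketched with made-up table and column names: if psql will not interpolate the variable inside \copy, expand the filename in the shell instead and feed the command to psql through a here-document:]

#!/bin/sh
FILETOLOAD="/var/tmp/filetoload.$$"
psql mydb <<EOF
\copy mytable (f1, f2) from '${FILETOLOAD}' with csv
EOF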
On Wed, Mar 21, 2012 at 12:57 PM, Andy Colson a...@squeakycode.net wrote:
On 3/21/2012 11:45 AM, Lee Hachadoorian wrote:
On 20 March 2012 22:57, John R Pierce pie...@hogranch.com wrote:
avg() in the database is going to be a lot faster than copying
Cody Cutrer c...@instructure.com writes:
On Tue, Mar 20, 2012 at 6:06 PM, Tom Lane t...@sss.pgh.pa.us wrote:
There's not really support for adding indexes to system catalogs
on-the-fly. I think it would work (barring concurrency issues)
for most catalogs, but pg_class has special limitations
That's awesome, thanks! Yeah, I doubt I'll do that to our production
database, but maybe I'll try it on a copy sometime down the line.
Adjusting the cost for pg_table_is_visible is working well enough so
far.
Cody Cutrer
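[The adjustment mentioned is presumably along these lines; the COST value is illustrative only:]

-- Raise the planner's cost estimate for pg_table_is_visible() so that
-- cheaper filter conditions get applied ahead of it:
ALTER FUNCTION pg_catalog.pg_table_is_visible(oid) COST 10;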
On Wed, Mar 21, 2012 at 12:17 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Cody
Hi,
We are considering migrating some of our databases to PostgreSQL.
We wonder if someone could give some hardware / configuration specs for
large PostgreSQL installations.
We're interested in:
- Number of CPUs
- Memory on the server
- shared_buffers
- Size of
On Wed, 21 Mar 2012 20:31:08 +0100
Kjetil Nygård polpo...@gmail.com wrote:
We are considering migrating some of our databases to PostgreSQL.
We wonder if someone could give some hardware / configuration specs
for large PostgreSQL installations.
We're interested in:
- Number of
On Wed, 2012-03-21 at 20:45 +0100, Frank Lanitz wrote:
On Wed, 21 Mar 2012 20:31:08 +0100
Kjetil Nygård polpo...@gmail.com wrote:
We are considering migrating some of our databases to PostgreSQL.
We wonder if someone could give some hardware / configuration specs
for large
On 03/21/2012 12:31 PM, Kjetil Nygård wrote:
Hi,
We are considering migrating some of our databases to PostgreSQL.
We wonder if someone could give some hardware / configuration specs for
large PostgreSQL installations...
You need to tell us a lot more than "large" (a speaker-dependent
On 03/21/12 12:45 PM, Frank Lanitz wrote:
- Number of CPUs
- Memory on the server
- shared_buffers
- Size of the database on disk
I guess this is extremely dependent on how big your database is ...
and how much concurrent access there is. 48 CPU cores won't help if you're
only
On Tue, Mar 20, 2012 at 2:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I think Tom's correct about what the right behavior would be if
composite types supported defaults, but they don't, never have, and
maybe never will. I had a previous argument about this with Tom, and
lost, though I am not
Hi,
when trying to change a text column to bytea I am getting the following error:
SQL error:
ERROR: column "comment" cannot be cast to type bytea
In statement:
ALTER TABLE public.persons ALTER COLUMN comment TYPE bytea
I found that others had the same issue but I found no solution for it. I
I just found a thread that outlines this issue and how to solve it here:
http://archives.postgresql.org/pgsql-general/2005-12/msg01322.php
Thanks
Alex
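[For the archives, one way to make the cast work, sketched here: there is no implicit cast from text to bytea, so supply a USING expression that yields bytea, e.g. convert_to():]

ALTER TABLE public.persons
  ALTER COLUMN comment TYPE bytea
  USING convert_to(comment, 'UTF8');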
On 21.03.2012 at 20:56, Alexander Reichstadt wrote:
Hi,
when trying to change a text column to bytea I am getting the following error:
On 03/21/12 12:31 PM, Kjetil Nygård wrote:
We wonder if someone could give some hardware / configuration specs for
large PostgreSQL installations.
We're interested in:
- Number of CPUs
- Memory on the server
- shared_buffers
- Size of the database on disk
oh,
On Wed, 2012-03-21 at 13:06 -0700, John R Pierce wrote:
On 03/21/12 12:31 PM, Kjetil Nygård wrote:
We wonder if someone could give some hardware / configuration specs for
large PostgreSQL installations.
We're interested in:
- Number of CPUs
- Memory on the server
-
On 03/21/2012 01:13 PM, Kjetil Nygård wrote:
I just hoped for some simple numbers...
That's exactly what we want in order to help. Since you said you are
considering a migration, you must have a pretty good idea of your
current data and workload.
There is no one-size-fits-all. Without some
Hello,
It seems that the program thinks I'm passing a table when in fact I'm
passing a query. Now, I could put a hack in place: create a view and pass
that to pgsql2shp. But I thought I'd ask whether anyone else has seen this
behavior and has a way to force/trick the program into treating the
On Wed, Mar 21, 2012 at 2:13 PM, Kjetil Nygård polpo...@gmail.com wrote:
I understand that IO performance, transactions/s, 24/7 vs office hours,
data-complexity etc is also needed to really say how much beating a
database can handle.
I just hoped for some simple numbers, but other relevant
Hey,
In an attempt to NOT pollute the thread started by Kjetil Nygård, I decided
to ask a very similar question with likely different data.
I am interested in hearing recommendations on hardware specs in terms of
Drives/RAM/shared_buffers/CPUs. I have been doing some research/testing,
and am
On 03/21/12 1:13 PM, Kjetil Nygård wrote:
I just hoped for some simple numbers, but other relevant performance
numbers etc would be nice as well :-)
42!
--
john r pierce                          N 37, W 122
santa cruz ca                          mid-left coast
On Wed, 2012-03-21 at 09:12 +0100, Henk Bronk wrote:
On linux, you can also do a
cp -rpuv source destination
My point was that we should not take shortcuts that avoid the work of a
full base backup for the replicas until we've determined a safe way to
do that.
As far as I know, nobody has
On 03/21/12 2:18 PM, Jason Herr wrote:
I have my own theories based on what I've read and my puttering. I
think I can get away with a disk for the OS, disk for the WAL, disk
for the large table (tablespaces) and a disk for the rest. And when I
say disk I mean storage device. I'm thinking
On Wed, Mar 21, 2012 at 02:58:43PM -0700, John R Pierce wrote:
On 03/21/12 2:18 PM, Jason Herr wrote:
I have my own theories based on what I've read and my puttering.
I think I can get away with a disk for the OS, disk for the WAL,
disk for the large table (tablespaces) and a disk for the
On Wed, Mar 21, 2012 at 2:56 PM, Alexander Reichstadt l...@mac.com wrote:
Hi,
when trying to change a text column to bytea I am getting the following
error:
SQL error:
ERROR: column "comment" cannot be cast to type bytea
In statement:
ALTER TABLE public.persons ALTER COLUMN comment
I presently have about 40 databases on an aging server which runs both
PostgreSQL and Apache. The databases presently consume about 20GB and I
expect them to be consuming around 40GB in a year or more if demand for
our services expands as we hope.
The present database server is a 2 x quad-core E5420 Xeon
We've used Raptors in production for a few years.
They've been about as reliable as you'd expect for hard
drives, with the additional excitement of a firmware bug
early on that led to data loss and considerable expense.
New machines deployed since November last year have 710 SSDs.
No problems
On 03/21/12 3:20 PM, Martijn van Oosterhout wrote:
That, and a good RAID controller with BBU cache will go a long way to
relieving the pain of fsync.
even better than a BBU cache are the newer 'flash-backed caches'. They work
the same, but use a supercap rather than a battery, and back the cache