Re: [GENERAL] SELECT statement with sub-queries

2017-05-28 Thread Amitabh Kant
On Sun, May 28, 2017 at 10:33 PM, Michelle Konzack  wrote:

> Hello *,
>
> I am trying to get columns from my database with a single SELECT, but I am stuck.
>
> I have 3 tables like:
>
>
> 1)  categories (serial,cat)
>
> 2)  manufacturers (serial,m_name)
>
> 3)  products (serial,category,manufacturer,p_name)
>
>
> where the "category" and "manufacturer"
> are numerical IDs from the two tables above.
>
> So I would like to replace the numerical IDs with the appropriate values,
> i.e.
>
> SELECT * FROM products WHERE category IN
> (SELECT categories.cat FROM categories WHERE
> categories.serial==products.category);
>
> But I get:
>
> ERROR:  operator does not exist: integer==integer
> LINE1: ...gories.cat FROM categories WHERE  categories.serial==products.
> category);
>  ^
> Hmm, "serial" is an "integer" and the "category" and "manufacturer" too.
>
> So what's wrong with it?
>
> --
> Michelle Konzack    Miila ITSystems @ TDnet
> GNU/Linux Developer 00372-54541400
>

Wouldn't a simple join like the one below suffice:

SELECT a.*, b.cat FROM products AS a, categories AS b
WHERE a.category = b.serial;

or am I missing something?
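
Something like this (an untested sketch, assuming every product row has a
matching category and manufacturer; use LEFT JOINs if that is not guaranteed)
would pull in both names at once:

SELECT p.serial, p.p_name, c.cat, m.m_name
FROM products p
JOIN categories c ON c.serial = p.category
JOIN manufacturers m ON m.serial = p.manufacturer;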


Re: [GENERAL] all serial type was changed to 1

2017-05-01 Thread Amitabh Kant
On Mon, May 1, 2017 at 11:21 AM, Max Wang  wrote:

> Hi All,
>
>
>
> We have a PostgreSQL database. There are 26 tables and we use the serial type
> as the primary key.  One week ago we had an insert error: “duplicate key value
> violates unique constraint, DETAIL:  Key (id)=(1) already exists.” I checked
> and found all tables’ ids were reset to 1.
>
>
>
> I checked the database log and did not find any useful information.  I am not
> sure why this happened. The only script which connects to this database is a
> Python script, and it only does normal insert/update/delete actions.
>
>
>
> Please give me some suggestions if you happen to know something about this
> issue. I appreciate any feedback you might have.
>
>
>
> I am very new to PostgreSQL and this mailing list. Please let me know if I
> did something wrong.
>
>
>
> Thank you.
>
>
>
> Regards,
>
> Max
>


By any chance, does it have anything to do with the CYCLE option of sequences?

https://www.postgresql.org/docs/9.5/static/sql-createsequence.html
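
If that is the cause, the sequences themselves will show it - a minimal sketch
of how to check and correct one of them (names are placeholders; a serial
column's sequence is normally named <table>_<column>_seq):

-- is_cycled = t means the sequence wraps around after max_value
SELECT last_value, max_value, is_cycled FROM mytable_id_seq;

-- make it raise an error instead of wrapping
ALTER SEQUENCE mytable_id_seq NO CYCLE;

-- re-sync the sequence with the data already in the table
SELECT setval('mytable_id_seq', (SELECT max(id) FROM mytable));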


Re: [GENERAL] pg_upgrade error on FreeBSD (9.1 -> 9.5)

2017-01-22 Thread Amitabh Kant
On Sat, Jan 21, 2017 at 9:53 PM, Tom Lane  wrote:

> Amitabh Kant  writes:
> > command: "/var/tmp/pgbin.SPOsRj4D/bin/pg_ctl" -w -l
> "pg_upgrade_server.log"
> > -D "/usr/local/pgsql/data91" -o "-p 50432 -b  -c listen_addresses='' -c
> > unix_socket_permissions=0700 -c unix_socket_directory='/usr/
> local/pgsql'"
>
> Note the unix_socket_directory parameter, which is indeed being applied
> because we can see it again in the ps output:
>
> > pgsql 26636   0.0  1.4 66960 14512  -  Is4:08AM   0:00.06
> > /var/tmp/pgbin.SPOsRj4D/bin/postgres -D /usr/local/pgsql/data91 -p
> 50432 -b
> > -c listen_addresses= -c unix_socket_permissions=0700 -c
> > unix_socket_directory=/usr/local/pgsql
>
> However, your psql is looking for the socket in /tmp:
>
> > $ psql -p 50432 -d template1
> > psql: could not connect to server: No such file or directory
> > Is the server running locally and accepting
> > connections on Unix domain socket "/tmp/.s.PGSQL.50432"?
>
> You could successfully connect to that server with
> "psql -p 50432 -h /usr/local/pgsql ...", I expect.
>
> The question is why pg_upgrade issued that option and then failed to
> cope with the consequences.  I suspect it has something to do with one
> installation being configured with different default socket directory
> than the other, but I don't have enough facts.
>
> regards, tom lane
>

Yes, it does connect using the Unix domain socket as you suggested. PG 9.5 is
the stock install as present on FreeBSD. I will have to check the script
that installs PG 9.1 in an alternate location for any changes from the
default.

regards

Amitabh


[GENERAL] pg_upgrade error on FreeBSD (9.1 -> 9.5)

2017-01-21 Thread Amitabh Kant
Hi

I am trying to upgrade an existing PostgreSQL cluster from 9.1 to 9.5 on
FreeBSD. pg_upgrade fails with the following error:
[Verbose Output: http://pastebin.com/YhR8vD03]
==
$ /usr/local/bin/pg_upgrade -k  -d /usr/local/pgsql/data91 -D
/usr/local/pgsql/data --old-bindir=/var/tmp/pgbin.SPOsRj4D/bin
--new-bindir=/usr/local/bin --check

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
the probable cause of the failure.

pg_ctl failed to start the old server, or connection failed
Failure, exiting


$ cat pg_upgrade_server.log

-
  pg_upgrade run on Sat Jan 21 04:08:13 2017
-

command: "/var/tmp/pgbin.SPOsRj4D/bin/pg_ctl" -w -l "pg_upgrade_server.log"
-D "/usr/local/pgsql/data91" -o "-p 50432 -b  -c listen_addresses='' -c
unix_socket_permissions=0700 -c unix_socket_directory='/usr/local/pgsql'"
start >> "pg_upgrade_server.log" 2>&1
waiting for server to start stopped waiting
pg_ctl: could not start server
Examine the log output.
==

On checking the process list, the old postgres server seems to have
started, but I am unable to connect to it using psql.
==
$ ps auwwx |grep post
pgsql 26636   0.0  1.4 66960 14512  -  Is4:08AM   0:00.06
/var/tmp/pgbin.SPOsRj4D/bin/postgres -D /usr/local/pgsql/data91 -p 50432 -b
-c listen_addresses= -c unix_socket_permissions=0700 -c
unix_socket_directory=/usr/local/pgsql
pgsql 26638   0.0  1.4 66960 14516  -  Ss4:08AM   0:00.23 postgres:
writer process(postgres)
pgsql 26639   0.0  1.4 66960 14516  -  Ss4:08AM   0:00.15 postgres: wal
writer process(postgres)
pgsql 26640   0.0  0.9 26720  9568  -  Ss4:08AM   0:00.02 postgres:
stats collector process(postgres)

$ psql -p 50432 -d template1
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.50432"?
==


If I run the pg_ctl command directly with just the minimum parameters, it
still reports an error, but the server actually starts and works correctly.
==
$ /var/tmp/pgbin.SPOsRj4D/bin/pg_ctl -o "-p 50432 " -w -l
"pg_upgrade_server.log" -D "/usr/local/pgsql/data91"  start
waiting for server to start stopped waiting
pg_ctl: could not start server
Examine the log output.

$ cat pg_upgrade_server.log

$ psql -p 50432 -d template1
psql (9.5.5, server 9.1.20)
Type "help" for help.

template1=#
==


Where am I going wrong?


Amitabh


Detailed steps that I followed for upgrade:

Since PostgreSQL installed on FreeBSD through its ports/packages doesn't
have version-specific binaries or data directories until PG 9.5 (9.6 does
have a version-specific data directory), using pg_upgrade requires a little
bit of work:

i) Stopped the current PG 9.1 instance. Renamed the data directory to
data91 (under /usr/local/pgsql).
ii) Installed PG 9.1 binaries in a different location
(/var/tmp/pgbin.SPOsRj4D/bin). Set owner and permissions for the pgsql
user (default superuser on FreeBSD).
iii) Removed the PG 9.1 installation from the server & installed PG 9.5 in its
default location (under /usr/local/bin).
iv) Initialized the DB instance using the initdb command (PG 9.5 data location is
/usr/local/pgsql/data).
v) Ran the following command for upgrade check:
/usr/local/bin/pg_upgrade -k  -d /usr/local/pgsql/data91 -D
/usr/local/pgsql/data --old-bindir=/var/tmp/pgbin.SPOsRj4D/bin
--new-bindir=/usr/local/bin --check --verbose

Output: http://pastebin.com/YhR8vD03


OS: FreeBSD 10.3 (running as virtualbox guest OS, host is Mac OS)
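
Condensed, the same steps look roughly like this (the ports invocation used to
stage the old 9.1 binaries is an assumption, not the exact command from the
script):

# i) stop 9.1 and move its data directory aside
service postgresql stop
mv /usr/local/pgsql/data /usr/local/pgsql/data91

# ii) stage the old 9.1 binaries under a temporary prefix (assumed ports invocation)
cd /usr/ports/databases/postgresql91-server && make install PREFIX=/var/tmp/pgbin.SPOsRj4D
chown -R pgsql /var/tmp/pgbin.SPOsRj4D

# iii) replace the 9.1 package with 9.5 in the default location (/usr/local/bin)
# iv) initialise the new 9.5 cluster
su -l pgsql -c "/usr/local/bin/initdb -D /usr/local/pgsql/data"

# v) dry run of the upgrade
/usr/local/bin/pg_upgrade -k -d /usr/local/pgsql/data91 -D /usr/local/pgsql/data \
    --old-bindir=/var/tmp/pgbin.SPOsRj4D/bin --new-bindir=/usr/local/bin --check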


Re: [GENERAL] performance tuning postgresql 9.5.5.10 [enterprisedb]

2016-12-30 Thread Amitabh Kant
On Fri, Dec 30, 2016 at 12:06 PM, ajmcello  wrote:

> Reducing work_mem shaved about 12 minutes off the query time. Thanks
> for the suggestion. I lowered it to 10MB instead of 100MB.
>
> [SNIP]
>
> >>> [postgresql.conf]
> >>> max_connections = 10
> >>> max_files_per_process = 100
> >>> shared_buffers = 24GB
> >>> max_locks_per_transaction  = 1000
> >>> effective_cache_size = 50GB
> >>> work_mem = 100MB
> >>> maintenance_work_mem = 2GB
> >>> log_min_duration_statement = 1
> >>> checkpoint_completion_target = 0.9
> >>> wal_buffers = 32MB
> >>> default_statistics_target = 100
> >>> listen_addresses = '*'
> >>> port = 5432
> >>> ssl = off
> >>> wal_sync_method = fdatasync
> >>> synchronous_commit = on
> >>> fsync = off
> >>> wal_level = minimal
> >>> #client_min_messages = fatal
> >>> #log_min_messages = fatal
> >>> #log_min_error_statement = fatal
> >>> datestyle = 'iso, mdy'
> >>> debug_pretty_print = off
> >>> debug_print_parse = off
> >>> debug_print_plan = off
> >>> debug_print_rewritten = off
> >>> default_text_search_config = 'pg_catalog.english'
> >>> enable_bitmapscan = on
> >>> enable_hashagg = on
> >>> enable_hashjoin = on
> >>> enable_indexonlyscan = on
> >>> enable_indexscan = on
> >>> enable_material = on
> >>> enable_mergejoin = on
> >>> enable_nestloop = on
> >>> enable_seqscan = on
> >>> enable_sort = on
> >>> enable_tidscan = on
> >>> from_collapse_limit = 8
> >>> geqo = on
> >>> geqo_threshold = 12
> >>> log_checkpoints = off
> >>>
> >>> log_connections = off
> >>> log_disconnections = off
> >>> log_duration = off
> >>> log_executor_stats = off
> >>> log_hostname = off
> >>> log_parser_stats = off
> >>> log_planner_stats = off
> >>> log_replication_commands = off
> >>> log_statement_stats = off
> >>> log_timezone = 'UTC'
> >>> max_wal_size = 1GB
> >>> min_wal_size = 80MB
> >>> shared_preload_libraries = '$libdir/dbms_pipe,$libdir/edb_gen'
> >>> stats_temp_directory = 'pg_stat_tmp'
> >>> timezone = 'US/Pacific'
> >>> track_activities = on
> >>> track_counts = on
> >>> track_io_timing = off
> >>>
> >>>
> >>> Thanks in advance.
> >>>
> >>>
> >>>
> >>>
> >>
> >>
>


The number of connections that you are attempting against Postgres is way too
high. You should be using a connection pooler like pgbouncer and reduce
the number of connections at the Postgres level.
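
A minimal pgbouncer sketch of what that looks like (names, paths and sizes are
placeholders): clients connect to pgbouncer, while only default_pool_size
backend connections per database/user pair are opened against Postgres, so
max_connections in postgresql.conf can stay small.

[databases]
; one entry per database
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /usr/local/etc/pgbouncer/userlist.txt
pool_mode = transaction
; clients connect here instead of directly to Postgres
max_client_conn = 500
; actual backend connections per db/user pair
default_pool_size = 20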

Amitabh


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-08-16 Thread Amitabh Kant
On Tue, Aug 16, 2016 at 1:08 PM, Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> On 29/07/2016 21:06, Larry Rosenman wrote:
>
>> On 2016-07-29 12:59, Bruce Momjian wrote:
>>
>>> On Fri, Jul 29, 2016 at 07:49:36PM +0200, Maeldron T. wrote:
>>>
 And yes, I hate upgrading PostgreSQL especially on FreeBSD where
 pg_upgrade
 isn’t really an option.

>>>
>>> Is that because it is hard to install the old and new clusters on the
>>> same server on FreeBSD?
>>>
>>> The current FreeBSD Ports collection ports only allow ONE version to be
>> installed at a time.
>>
>>
>>
> In our installation, pgsql is one of the very few packages that we prefer
> to deal by hand.
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>
>
From my discussion with one of the maintainers of the port on FreeBSD,
support for multiple versions is being worked on. Hopefully we should
have it some time soon.

--
Amitabh


[GENERAL] Re: Moving Postgresql 9.1 instance from FreeBSD 9.0 machine to FreeBSD 10.2

2015-12-20 Thread Amitabh Kant
On Sun, Dec 20, 2015 at 6:22 PM, Amitabh Kant  wrote:

> Thanks Andreas. I thought of going the replication route, but having
> done a sample run a couple of hours ago, it seems it will work out without
> making any changes. Will update this thread once I have done the final
> run.
>
> Amitabh
>
>
> On Sunday 20 December 2015, Andreas Kretschmer 
> wrote:
>
>> Read the docs about replication. I think you can simply set up the new
>> machine as a streaming replication slave and then promote it to master - with
>> no downtime.
>>
>> Am 20. Dezember 2015 02:50:57 MEZ, schrieb Amitabh Kant <
>> amitabhk...@gmail.com>:
>>>
>>> Hi
>>>
>>> I have a Postgresql 9.1 instance running on FreeBSD 9.0 (64 bit). The
>>> machine is now running out of space and no extra hard disk can be added to
>>> the machine. I am planning to move the instance to another machine which is
>>> running FreeBSD 10.2 (64 bit) with the same Postgresql version.
>>>
>>> One idea is to take a dump and transfer the data, which is my fallback
>>> plan. The other is to simply copy over the data directory since the
>>> PostgreSQL version is the same (64-bit OS with PG 9.1) on both machines. This
>>> will save a lot of downtime as the data is around 1 TB.
>>>
>>> Old Machine: FreeBSD 9.0 (64 bit) with Postgresql 9.1 (UFS file system)
>>>
>>> New Machine: FreeBSD 10.2 (64 bit) with Postgresql 9.1 (UFS file system)
>>>
>>> The directory structure and other hardware would remain the same on
>>> both machines, apart from increased hard disk capacity and RAM in the
>>> new machine.
>>>
>>> Just wondering if anybody has already done such a move or has any
>>> suggestions on whether I should or should not go the directory-copy route?
>>>
>>>
>>> Thanks
>>>
>>> Amitabh
>>>
>>
>> --
>> This message was sent from my Android mobile phone with K-9 Mail.
>>
>

Just did a transfer without a hitch on a 1 TB (approx.) production database.
Everything is working fine. Saved a ton of time and complications.
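
For reference, the copy itself was nothing fancier than this kind of sketch
(paths are placeholders; the old server must be cleanly shut down before the
final copy so the data files are consistent):

# on the old machine
service postgresql stop

# copy the whole data directory, preserving permissions and timestamps
rsync -a /usr/local/pgsql/data/ newhost:/usr/local/pgsql/data/

# on the new machine
chown -R pgsql:pgsql /usr/local/pgsql/data
service postgresql start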

Amitabh


[GENERAL] Re: Moving Postgresql 9.1 instance from FreeBSD 9.0 machine to FreeBSD 10.2

2015-12-20 Thread Amitabh Kant
Thanks Andreas. I thought of going the replication route, but having done
a sample run a couple of hours ago, it seems it will work out without making
any changes. Will update this thread once I have done the final run.

Amitabh

On Sunday 20 December 2015, Andreas Kretschmer 
wrote:

> Read the docs about replication. I think you can simply set up the new
> machine as a streaming replication slave and then promote it to master - with
> no downtime.
>
> Am 20. Dezember 2015 02:50:57 MEZ, schrieb Amitabh Kant <
> amitabhk...@gmail.com
> >:
>>
>> Hi
>>
>> I have a Postgresql 9.1 instance running on FreeBSD 9.0 (64 bit). The
>> machine is now running out of space and no extra hard disk can be added to
>> the machine. I am planning to move the instance to another machine which is
>> running FreeBSD 10.2 (64 bit) with the same Postgresql version.
>>
>> One idea is to take a dump and transfer the data, which is my fallback
>> plan. The other is to simply copy over the data directory since the
>> PostgreSQL version is the same (64-bit OS with PG 9.1) on both machines. This
>> will save a lot of downtime as the data is around 1 TB.
>>
>> Old Machine: FreeBSD 9.0 (64 bit) with Postgresql 9.1 (UFS file system)
>>
>> New Machine: FreeBSD 10.2 (64 bit) with Postgresql 9.1 (UFS file system)
>>
>> The directory structure and other hardware would remain the same on both
>> machines, apart from increased hard disk capacity and RAM in the new
>> machine.
>>
>> Just wondering if anybody has already done such a move or has any
>> suggestions on whether I should or should not go the directory-copy route?
>>
>>
>> Thanks
>>
>> Amitabh
>>
>
> --
> This message was sent from my Android mobile phone with K-9 Mail.
>


[GENERAL] Moving Postgresql 9.1 instance from FreeBSD 9.0 machine to FreeBSD 10.2

2015-12-19 Thread Amitabh Kant
Hi

I have a Postgresql 9.1 instance running on FreeBSD 9.0 (64 bit). The
machine is now running out of space and no extra hard disk can be added to
the machine. I am planning to move the instance to another machine which is
running FreeBSD 10.2 (64 bit) with the same Postgresql version.

One idea is to take a dump and transfer the data, which is my fallback
plan. The other is to simply copy over the data directory since the
PostgreSQL version is the same (64-bit OS with PG 9.1) on both machines. This
will save a lot of downtime as the data is around 1 TB.

Old Machine: FreeBSD 9.0 (64 bit) with Postgresql 9.1 (UFS file system)

New Machine: FreeBSD 10.2 (64 bit) with Postgresql 9.1 (UFS file system)

The directory structure and other hardware would remain the same on both
machines, apart from increased hard disk capacity and RAM in the new
machine.

Just wondering if anybody has already done such a move or has any
suggestions on whether I should or should not go the directory-copy route?


Thanks

Amitabh


Re: [GENERAL] Upgrade postgres cluster on FreeBSD using pg_upgrade

2015-07-19 Thread Amitabh Kant
On Mon, Jul 20, 2015 at 1:57 AM, Joseph Kregloh 
wrote:

> It's actually simpler than you think. Here are the steps that I follow:
>
> 1. Install 9.4 in another directory; if you compile it from the ports, make
> sure you add the PREFIX flag. For example: cd
> /usr/ports/databases/postgresql94-server/ && make install clean
> PREFIX=/opt, this will install 9.4 on the /opt directory.
> 2. Install the contrib the same way.
> 3. Initialize the 9.4 database in another directory /opt/bin/initdb -D
> /usr/local/pgsql_94/data -E UTF8.
> 4. Make sure the two databases are compatible: /opt/bin/pg_upgrade -d
> /usr/local/pgsql/data -D /usr/local/pgsql_94/data -b /usr/local/bin -B
> /opt/bin -p 5001 -P 5002 -c
> 5. Once everything is complete, run the scripts to delete old data and
> analyze the new db
> 6. Deinstall 9.0 and 9.4.
> 7. Install 9.4 using the defaults.
> 8. Move the /usr/local/pgsql_94/data to /usr/local/pgsql/data
> 9. Copy any settings you want to restore and that's it.
>
> Above are the basic steps as I remember them. But I've successfully
> upgraded many databases this way.
>
> On Sun, Jul 19, 2015 at 1:35 PM, Amitabh Kant 
> wrote:
>
>> Hi
>>
>> I need to upgrade a PG cluster from 9.1 to 9.4 running on dedicated
>> FreeBSD 9.2 server. Earlier I followed the simple pg_dump / pg_restore, but
>> now that the cluster has grown to around 700 GB (1 TB total HD size), I am
>> now inclined towards using pg_upgrade.
>>
>> pg_upgrade requires both binaries to be present at the same time, which
>> seems difficult as I have installed using ports. I don't want to compile PG
>> myself unless that is the only way out. Reading the mailing lists and
>> previous questions, I see the following options:
>>
>>
> Do you use pkg? Don't be afraid of building from source; it gives you more
> options and you can see what is actually installed in terms of dependencies.
>
>
>> a) Use the work directory of port to compile the binaries and use it with
>> pg_upgrade
>>
>>
> You install it in a different location.
>
>
>> b) use jails as noted in some of the discussions online, however no idea
>> whether it would work in my case.
>>
>>
> Jails work, but in this case it would be more trouble. If you had it in a
> jail already then that would be a different story.
>
>
>> c) Somehow modify the ports options so each version's binaries are installed
>> separately. I have no idea how to do it; maybe I need to ask this on the
>> FreeBSD mailing list.
>>
>
> This I explained above.
>
>
> -Joseph Kregloh
>

I had a feeling I was missing something simple. I was looking at DESTDIR
instead of PREFIX. I will try it out. Seems pretty elegant and simple.

I do use pkg on production servers, but I have my own pkg repository where
I build packages to be deployed across multiple servers.

Thanks for the help.


With regards

Amitabh


[GENERAL] Upgrade postgres cluster on FreeBSD using pg_upgrade

2015-07-19 Thread Amitabh Kant
Hi

I need to upgrade a PG cluster from 9.1 to 9.4 running on dedicated FreeBSD
9.2 server. Earlier I followed the simple pg_dump / pg_restore, but now
that the cluster has grown to around 700 GB (1 TB total HD size), I am now
inclined towards using pg_upgrade.

pg_upgrade requires both binaries to be present at the same time, which seems
difficult as I have installed using ports. I don't want to compile PG
myself unless that is the only way out. Reading the mailing lists and
previous questions, I see the following options:

a) Use the work directory of port to compile the binaries and use it with
pg_upgrade

b) use jails as noted in some of the discussions online, however no idea
whether it would work in my case.

c) Somehow modify the ports options so each version's binaries are installed
separately. I have no idea how to do it; maybe I need to ask this on the
FreeBSD mailing list.

What is the route generally preferred by those running PG on FreeBSD? Is
there something simple that I have missed?


With regards

Amitabh


Re: [GENERAL] Recovering database from crashed HD (bad sectors)

2015-07-18 Thread Amitabh Kant
On Sun, Jul 19, 2015 at 4:10 AM, Alvaro Herrera 
wrote:

> Tom Lane wrote:
> > Amitabh Kant  writes:
> > > As for running the sql command as suggested by Tom, here is the result:
> > > template1=# select * from pg_class where pg_relation_filenode(oid) =
> 11678;
> >
> > >  pg_class | 11 | 83 | 0 | 10 | 0 | 0 | 0 | 8 | 281 | 0 | 0 | t | f | p
> > > | r | 26 | 0 | t | f | f | f | f | 662 | {=r/pgsql} |
> >
> > That's about the worst possible answer :-(.  Without pg_class, you have
> > little hope of telling which is which among the other files; and there
> > would be no real commonality with the contents of pg_class from other
> > databases in the installation, so no way to jury-rig something.
> Moreover,
> > because pg_class is consulted *very* early in backend startup, it seems
> > entirely likely that the failure you're seeing is only the tip of the
> > iceberg; there very possibly are other files that are also missing or
> > badly damaged.
>
> I would look for the file in the partition's lost+found directory; with
> luck, the file is there.  pg_filedump can identify the number of
> attributes each tuple has in each file, if there are many such files;
> you need to look for a file whose tuples have 26 attributes (relnatts in
> the above row).
>
> --
> Álvaro Herrerahttp://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>

There is no lost+found directory in the recovered files. Since the DB is
not critical and can easily be replaced, we are not inclined towards
sending it for professional recovery. As Tom suggested, this would just be
the tip of the iceberg, as even during recovery there were lots of read errors
from the disk. I will recreate the DB from the backups.

Thanks for the help and insights though.

With regards

Amitabh


Re: [GENERAL] Recovering database from crashed HD (bad sectors)

2015-07-18 Thread Amitabh Kant
I am just experimenting with the crashed DB. Before I tried anything else,
I did make a complete copy of the recovered directory tree.

As for running the sql command as suggested by Tom, here is the result:

template1=# select * from pg_class where pg_relation_filenode(oid) = 11678;

relname        | pg_class
relnamespace   | 11
reltype        | 83
reloftype      | 0
relowner       | 10
relam          | 0
relfilenode    | 0
reltablespace  | 0
relpages       | 8
reltuples      | 281
reltoastrelid  | 0
reltoastidxid  | 0
relhasindex    | t
relisshared    | f
relpersistence | p
relkind        | r
relnatts       | 26
relchecks      | 0
relhasoids     | t
relhaspkey     | f
relhasrules    | f
relhastriggers | f
relhassubclass | f
relfrozenxid   | 662
relacl         | {=r/pgsql}
reloptions     |

From what I can understand, this is not an index, as reltype is non-zero. Is
there anything else that I can try?


With regards

Amitabh

On Sat, Jul 18, 2015 at 7:42 PM, Tom Lane  wrote:

> Amitabh Kant  writes:
> > A development box hard disk failed which was running a PG instance with
> > multiple databases on it.  I got the data recovered with some bad sector
> > errors. Ran another instance of PG (same version), and was able to take a
> > dump (using pg_dump) of all but one database. For one database I am getting
> > the following error:
>
> > pg_dump -Fc alpha_45 > alpha_45.dump
> > pg_dump: [archiver (db)] connection to database "alpha_45" failed: FATAL:
> >  could not open file "base/525035/11678": No such file or directory
>
> > These are the only two files in the directory similar to the one above:
> > /usr/local/pgsql/data/base/525035/11678_fsm
> > /usr/local/pgsql/data/base/525035/11678_vm
>
> > Is there any hope of recovering this DB, or should I start looking into
> > restoring from old backups?
>
> Well, as others have said, if the data is worth money to you then it
> would be worthwhile hiring a pro.  If you just want to experiment,
> the first thing to do is to find out which system catalog or index
> that is, which you should be able to do by connecting to any of the
> non-broken databases in the same installation and issuing
> select * from pg_class where pg_relation_filenode(oid) = 11678;
>
> If you're lucky and it's only an index, you could try connecting to the
> busted database with PGOPTIONS=-P (ignore_system_indexes) and reindexing
> the broken index.  If it's a catalog, whether recovery is possible would
> depend on which one.
>
> In any case, it would be prudent to make a tar-style copy of the whole
> $PGDATA tree (with the postmaster shut down) before experimenting,
> so that you can undo any catastrophic mistakes.
>
> regards, tom lane
>


[GENERAL] Recovering database from crashed HD (bad sectors)

2015-07-18 Thread Amitabh Kant
Hi

A development box hard disk failed which was running a PG instance with
multiple databases on it.  I got the data recovered with some bad-sector
errors. Ran another instance of PG (same version), and was able to take a
dump (using pg_dump) of all but one database. For one database I am getting
the following error:

pg_dump -Fc alpha_45 > alpha_45.dump
pg_dump: [archiver (db)] connection to database "alpha_45" failed: FATAL:
 could not open file "base/525035/11678": No such file or directory

These are the only two files in the directory similar to the one above:
/usr/local/pgsql/data/base/525035/11678_fsm
/usr/local/pgsql/data/base/525035/11678_vm

Is there any hope of recovering this DB, or should I start looking into
restoring from old backups? Data loss is not a concern, I just would like
to know if I should even try working on it. I looked into this page
http://www.postgresql.org/docs/9.1/static/runtime-config-developer.html,
and tried using "zero_damaged_pages = on" in postgresql.conf, but it was of
no help.


OS: FreeBSD 9.3
PG Version: 9.1


With regards

Amitabh


Re: [GENERAL] Linux vs FreeBSD

2014-04-03 Thread Amitabh Kant
On Fri, Apr 4, 2014 at 9:33 AM, François Beausoleil wrote:

> Hi all!
>
> Does PG perform that much better on FreeBSD? I have some performance
> issues on a Ubuntu 12.04 which I'd like to resolve. iowait varies a lot,
> between 5 and 50%. Does FreeBSD better schedule I/O, which could alleviate
> some of the issues, or not at all? I have no experience administering
> FreeBSD, but I'm willing to learn if I'll get some performance enhancements
> out of the switch.
>
> Our workload is lots of data import, followed by many queries to summarize
> (daily and weekly reports). Our main table is a wide table that represents
> Twitter and Facebook interactions. Most of our reports work on a week's
> worth of data (table is partitioned by week), and the tables are
> approximately 25 GB plus 5 GB of indices, per week. Of course, while
> reports are ongoing, we're also importing next week's data.
>
> The host is a dedicated hardware machine at online.fr: 128 GB RAM, 2 x
> 3TB disk in RAID 1 configuration.
>
> I started thinking of this after reading "PostgreSQL pain points" at
> https://lwn.net/Articles/591723/. In the comments, bronson says FreeBSD
> does not exhibit the same problems (slow fsync, double buffering). On the
> list here, I've read about problems with certain kernel versions on Ubuntu.
>
> I'm not expecting anything magical, just some general guidelines and
> hints. Did anybody do the migration and was happier after?
>
> Thanks for any hints!
> François Beausoleil
>
> $ uname -a
> Linux munn.ca.seevibes.com 3.2.0-58-generic #88-Ubuntu SMP Tue Dec 3
> 17:37:58 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
>
> $ psql -U postgres -c "select version()"
>version
>
> -
>  PostgreSQL 9.1.11 on x86_64-unknown-linux-gnu, compiled by gcc
> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
>
> /proc/cpuinfo says: 8 CPUs, identified as "Intel(R) Xeon(R) CPU E5-2609 0
> @ 2.40GHz"
>
>

I can't give you a comparison, but I am running a couple of dedicated PG servers
(9.1 & 9.2) on FreeBSD 9.x and have not seen much of a problem, apart from tuning
some sysctl variables for higher memory usage. My hardware uses either SAS
or SSD disks; RAM varies between 32 and 128 GB across the servers. My
workload is mostly lots of small reads and writes.
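
A minimal sketch of the kind of /etc/sysctl.conf entries involved (values are
illustrative only and depend on RAM and PostgreSQL settings; on some FreeBSD
releases the semaphore limits have to go into /boot/loader.conf instead):

# System V shared memory for shared_buffers (bytes / 4 KB pages)
kern.ipc.shmmax=17179869184
kern.ipc.shmall=4194304
# semaphores scale with max_connections
kern.ipc.semmni=256
kern.ipc.semmns=512
# raise the open-file limit for many backends
kern.maxfiles=65536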

Amitabh


Re: [GENERAL] initdb error

2012-12-15 Thread Amitabh Kant
David

I just tried installing PG 9.2.2 on FreeBSD 8.3 and initdb went smoothly
without any problems. The only difference would be that I had it running in
a VirtualBox instance. My config was as follows:

FreeBSD 8.3-RELEASE-p5 (binary updated through freebsd-update)
PG 9.2.2 (installed through ports)
root filesystem on UFS
ZFS mounted on /example
PG data directory set to /example/pgsql/data

relevant rc.conf entries:
postgresql_enable="YES"
postgresql_data="/example/pgsql/data"

running "/usr/local/etc/rc.d/postgresql initdb" works perfectly. Just
needed to set the correct permissions on /example/pgsql directory.

Amitabh




On Sat, Dec 15, 2012 at 10:32 AM, David Noel  wrote:

> > You need to talk to some FreeBSD kernel hackers about why link()
> > might be failing here.  Since you see it on UFS too, we can probably
> > exonerate the ZFS filesystem-specific code.
> >
> > I did some googling and found that EPERM can be issued if the filesystem
> > doesn't support hard links (which shouldn't apply to ZFS I trust).
> > Also, Linux has a "protected_hardlinks" option that causes certain
> > attempts at creating hard links to fail --- but our use-case here
> > doesn't fall foul of any of those restrictions AFAICS, and of course
> > FreeBSD isn't Linux.  Still, I wonder if you're running into some
> > misdesigned or misimplemented security restriction.  You might want
> > to look at your kernel parameters and see if any of them look like
> > they might have to do with restricting hard-link operations.
> >
> > Also, since Amitabh failed to duplicate the failure on both earlier
> > and later FreeBSD kernels, and we've not heard reports of this from
> > anybody else either, it seems more than possible that it's a plain
> > old bug in the specific kernel version you're using.
> >
> > As a short-term workaround, I'd suggest rebuilding with
> > HAVE_WORKING_LINK disabled.  (Just remove that #define from
> > src/include/pg_config_manual.h and rebuild.)
>
> OK, thanks. I've gotten in touch with the freebsd-hackers mailing
> list. Hopefully we'll be able to get this one figured out.
>
> I compiled a GENERIC kernel and tried it again. Still no luck. But at
> least we know now it wasn't a configuration error.
>


Re: [GENERAL] initdb error

2012-12-14 Thread Amitabh Kant
On Fri, Dec 14, 2012 at 4:28 PM, David Noel  wrote:

> > Did you use ports to install postgresql?
>
> Yes
>
> > What is the version of postgresql and freebsd you are using?
>
> postgresql client and server v. 9.2.2. If all else fails I could try
> downgrading to a previous version of postgres.
>
> FreeBSD 8.3-RELEASE-p5
>
> > I am getting a different output while running
> > the initdb command through the rc script: it's not using the -D path to
> > initialize the cluster, and falls back to the default location
> > /usr/local/pgsql/data.  Setting postgresql_data="/usr/local/pgsql1/data"
> > in /etc/rc.conf makes it initialize at the right location.  I did not have
> > to mess with any permissions, and the rc commands were all run as root. The
> > above was tested on FreeBSD 9.1 with PostgreSQL 9.2 on UFS.
>
> I didn't have any luck with the rc script but I was able to use it to
> get a ktrace dump as root (ktrace as user pgsql doesn't seem to work).
> So hopefully that will show something(!)
>

David

If it helps, my versions were FreeBSD 8.1 + PG 9.0.5 using UFS on a
dedicated server and FreeBSD 9.1 (available on FTP sites but not announced
yet) + PG 9.2.2 using UFS on a VirtualBox instance. Looks like ZFS is the
only major variable here.

Amitabh


Re: [GENERAL] initdb error

2012-12-13 Thread Amitabh Kant
On Fri, Dec 14, 2012 at 3:10 AM, David Noel  wrote:

> On 12/13/12, Tom Lane  wrote:
> > David Noel  writes:
> >> /zdb is a zfs volume I've created for cvs and postgres.
> >
> > zfs eh?  What happens if you point initdb at a non-zfs volume?
> >
> > (I"m wondering if zfs has issues with the O_DIRECT flag that we'll
> > probably try to use with pg_xlog files.)
>
> I /boot off of a UFS volume so I created a directory there, chown and
> chmod'ed it, then ran initdb again. Same error, unfortunately.
>
>
David

Did you use ports to install PostgreSQL? What versions of PostgreSQL
and FreeBSD are you using? I am getting a different output while running
the initdb command through the rc script: it's not using the -D path to
initialize the cluster, and falls back to the default location
/usr/local/pgsql/data.  Setting postgresql_data="/usr/local/pgsql1/data"
in /etc/rc.conf makes it initialize at the right location.  I did not have
to mess with any permissions, and the rc commands were all run as root. The
above was tested on FreeBSD 9.1 with PostgreSQL 9.2 on UFS.

On FreeBSD 8.1 with PostgreSQL 9.1 on UFS installed through ports, it was
the same story, except that I had to change ownership of the
/usr/local/pgsql1 directory to user pgsql.

Amitabh


Re: [GENERAL] initdb error

2012-12-13 Thread Amitabh Kant
On Thu, Dec 13, 2012 at 10:05 PM, Tom Lane  wrote:

> David Noel  writes:
> > I've tried initdb directly:
> > initdb -D /zdb/pgsql/data
> > ...and still seem to wind up with the error:
> > creating template1 database in /zdb/pgsql/data/base/1 ... FATAL:
> > could not open file "pg_xlog/00010001" (log file 0,
> > segment 1): No such file or directory
>
> Hm, that eliminates my first theory that the rc.d script is using
> initdb's --xlogdir switch to try to put pg_xlog someplace that doesn't
> exist on your system.  But it seems like this must be some variant of
> that.  A simple permissions problem is not very credible, because at
> this point initdb has successfully made the data directory and a bunch
> of files within it already.
>
> You could get more information by using the --noclean switch to prevent
> removal of the datadir after failure, and then having a look at the
> debris.  Is there a pg_xlog subdirectory inside /zdb/pgsql/data, and if
> so what permissions has it got?  Perhaps it is not a directory, but a
> symlink to somewhere else?
>
> regards, tom lane
>
>
Tom

The rc.d script doesn't seem to be using --xlogdir anywhere.  I have uploaded
the script here:
http://pastebin.com/ahAyZ2tP


Amitabh


Re: [GENERAL] initdb error

2012-12-13 Thread Amitabh Kant
On Thu, Dec 13, 2012 at 10:00 PM, Adrian Klaver wrote:

> On 12/13/2012 08:18 AM, David Noel wrote:
>
>> On 12/13/12, Adrian Klaver  wrote:
>>
>>> On 12/13/2012 07:38 AM, David Noel wrote:
>>>
 I'm running into the following error message when running initdb
 (FreeBSD
 host):

 ygg# /usr/local/etc/rc.d/postgresql initdb -D /zdb/pgsql/data --debug
 The files belonging to this database system will be owned by user
 "pgsql".
 This user must also own the server process.

 The database cluster will be initialized with locales
 COLLATE:  C
 CTYPE:en_US.UTF-8
 MESSAGES: en_US.UTF-8
 MONETARY: en_US.UTF-8
 NUMERIC:  en_US.UTF-8
 TIME: en_US.UTF-8
 The default text search configuration will be set to "english".

 creating directory /zdb/pgsql/data ... ok
 creating subdirectories ... ok
 selecting default max_connections ... 100
 selecting default shared_buffers ... 32MB
 creating configuration files ... ok
 creating template1 database in /zdb/pgsql/data/base/1 ... FATAL:
 could not open file "pg_xlog/**00010001" (log file 0,
 segment 1): No such file or directory
 child process exited with exit code 1
 initdb: removing data directory "/zdb/pgsql/data"

 My best guess is that it has something to do with permissions, but I
 really have no idea. Has anyone seen this before and found a way
 around it?

>>>
>>>
>>> Not quite what is in the the init script in rc.d, have you tried running
>>> the initdb command directly? As you say I believe there are permissions
>>> problems. So following the instructions found below may solve your
>>> problem:
>>>
>>> http://www.postgresql.org/docs/9.2/interactive/app-initdb.html
>>>
>>>
>>>
>>>
 -David



>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@gmail.com
>>>
>>>
>>
>> I've tried initdb directly:
>>
>> initdb -D /zdb/pgsql/data
>>
>> ...and also through pg_ctl:
>>
>> pg_ctl initdb -D /zdb/pgsql/data
>>
>> ...and still seem to wind up with the error:
>>
>> creating template1 database in /zdb/pgsql/data/base/1 ... FATAL:
>> could not open file "pg_xlog/**00010001" (log file 0,
>> segment 1): No such file or directory
>>
>
> You are doing the above as the database user ex:postgres?
>
> The database user has permissions on /zdb/pgsql/data?
>
>
>
>> -David
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>
>
The rc.d script executes the command as the correct user. This is the line
in the rc.d script that runs the initdb command:
su -l -c ${postgresql_class} ${postgresql_user} -c "exec
/usr/local/bin/initdb ${postgresql_initdb_flags} -D ${postgresql_data}"

Not sure about the permissions, though.


Re: [GENERAL] Postgres DB Migration from 8.3 to 9.1

2012-10-12 Thread Amitabh Kant
On Fri, Oct 12, 2012 at 7:45 AM, Vishalakshi Navaneethakrishnan <
nvishalak...@sirahu.com> wrote:

> Hi Friends,
>
> We have our production environment database server on Postgres 8.3.
> We have planned to upgrade to the latest version, 9.1. A dump from 8.3
> and restore into Postgres 9.1 takes more than 5 hours. Is there any other quick
> method to upgrade from 8.3 to 9.1? We need to reduce our downtime to below 1
> hour. Any possibilities?
>
> Thanks in Advance.
>
> --
> Best Regards,
> Vishalakshi.N
>
>
Try using the -j option of pg_restore to speed up the restore process. See
http://www.postgresql.org/docs/9.1/static/app-pgrestore.html . Not sure,
though, whether it will bring the time within your range.
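
A minimal sketch, assuming a custom-format dump taken with the 9.1 client
tools against the 8.3 server (host, user, database names and the -j value are
placeholders; roughly the number of CPU cores is a common starting point):

# dump with the newer pg_dump in custom format, so pg_restore can parallelise
pg_dump -Fc -h old_host -U postgres production_db > production_db.dump

# restore with several parallel jobs
pg_restore -j 4 -h new_host -U postgres -d production_db production_db.dump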

Amitabh


Re: [GENERAL] PostgreSQL server embedded in NAS firmware?

2012-09-06 Thread Amitabh Kant
On Fri, Sep 7, 2012 at 4:10 AM, Andrew Barnham wrote:

> Scratch that. An immediate show-stopping pitfall occurs to me: the
> necessity to match CPU/OS architecture between the primary server and the
> replication target.  Doubtful that there are any consumer NAS products out
> there running Linux on 64-bit Intel.
>
>

FreeNAS is based on FreeBSD 8.2 and is available for the 64-bit architecture.


Amitabh


Re: [GENERAL] corrupted statistics file "pg_stat_tmp/pgstat.stat"

2012-08-15 Thread Amitabh Kant
I run the same config (FreeBSD 9 with PG 9.1.x) on a couple of servers, and
they seem to be working fine without any error messages. The only other
setting I have in my sysctl.conf is kern.maxfiles.

Amitabh

On Wed, Aug 15, 2012 at 1:27 PM, Carl von Clausewitz  wrote:

> Hi All,
>
> I’ve just made a clean install for PostgreSQL 9.1
> (postgresql-server-9.1.4, postgresql-contrib-9.1.4) on a FreeBSD 9 (FreeBSD
> 9.0-RELEASE-p3 #0: Tue Jun 12 02:52:29 UTC 2012
> r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64).
>
> I’ve restored our databases from a TAR backup, and everything looked fine.
> Without changing any setting in postgresql.conf (or in kernel settings) –
> only “track_counts=on” – after 2-3 days I’m receiving a huge number (~5-10
> per second) of error messages like this in /var/log/postgresql.log:
>
> Aug 15 06:27:26 eurodb postgres[77652]: [43-1] WARNING:  corrupted
> statistics file "pg_stat_tmp/pgstat.stat"
>
> The logs are going to the syslog, and there are no other Warning, Error, or
> other messages. If I do a full Vacuum on the database and restart the
> system, everything works fine, with no more error messages, until 2-3 days
> later – and everything starts from the beginning.
>
> Two weeks ago we bought a new server, I installed the same version of
> FreeBSD and PostgreSQL, and there is the same problem on that machine too –
> without any usage or load; this is a test server for us now.
>
> The disks (SATA3 HDD) and the controllers have been tested, they are OK.
> All the directories’ permissions are the defaults, pg_stat_tmp is owned by
> pgsql:pgsql – so I think this is not a permission issue.
>
> I’m trying to tune my config with pgtune, and I’ve made some changes in
> sysctl for the following:
>
> kern.ipc.semmni="512"
> kern.ipc.semmns="1024"
> kern.ipc.semume="64"
> kern.ipc.semmnu="512"
>
> I will restart the machines today and will see what’s going on. Has anybody
> seen this problem before? On our earlier 9.0.8 PostgreSQL server there
> wasn’t any kind of problem, but it was on FreeBSD 8.2. How can I
> investigate what the problem could be? Kernel settings? Or any other idea?
>
> Thank you in advance.
>
> Best regards,
>
> Csaba
>


[GENERAL] Conditional left join

2011-12-01 Thread Amitabh Kant
I have the following table structure on Postgres 8.4:

STRUCTURE: tblunit
unit_id [integer]
unit_location [character varying]

DATA:
1,'location1'
2,'location2'
3,'location3'


STRUCTURE: tbloperator
operator_id [integer]
operator_name [character varying]

DATA:
1,'operator1'
2,'operator2'
3,'operator3'
4,'operator4'
5,'operator5'
6,'operator6'


STRUCTURE: tbloperatorschedule
operator_schedule_id [bigint]
operator_id [integer] {Foreign key tbloperator->operator_id}
schedule_start_time [timestamp without time zone]
schedule_end_time [timestamp without time zone]
unit_id [bigint] {Foreign key tblunit->unit_id}

DATA:
1,1,'2011-12-01 01:00:00','2011-12-01 02:00:00',1
2,5,'2011-12-01 02:30:00','2011-12-01 04:50:00',1
3,2,'2011-12-01 04:55:00','2011-12-01 10:20:00',1
4,1,'2011-12-01 03:00:00','2011-12-01 05:00:00',2
5,3,'2011-12-01 05:30:00','2011-12-01 09:50:00',2
6,4,'2011-12-01 09:55:00','2011-12-01 13:20:00',2
7,6,'2011-12-01 14:00:00','2011-12-01 18:00:00',2
8,5,'2011-12-01 06:30:00','2011-12-01 14:50:00',3
9,2,'2011-12-01 14:55:00','2011-12-01 20:20:00',3


STRUCTURE: tbldata
data_id [bigint]
event_time [timestamp without time zone]
data_text [character varying]
unit_id [bigint] {Foreign key tblunit->unit_id}

DATA:
1,'2011-12-01 02:30:00','abc',1
2,'2011-12-01 06:28:00','abc',2
3,'2011-12-01 11:10:00','abc',3
4,'2011-12-01 21:30:00','abc',3


I am trying (through a conditional left join?) to fetch all records of
tbldata along with the name, from tbloperator, of the operator who was
operating the unit at the event time. If no operator was present, it should
return NULL.

Resulting set:
1,'2011-12-01 02:30:00','abc',1,operator5
2,'2011-12-01 06:28:00','abc',2,operator3
3,'2011-12-01 11:10:00','abc',3,operator5
4,'2011-12-01 21:30:00','abc',3,NULL


The SQL to create the tables and sample data is attached and can also be
seen at http://pastebin.com/Fy2t3H9S . Is it possible to put a condition
within a left join, or is there any other way to fetch the desired data?


Amitabh
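
One way to express this is to put the time-range condition directly in the
LEFT JOIN's ON clause, so tbldata rows without a matching schedule still come
back with a NULL operator - a minimal, untested sketch against the schema
above:

SELECT d.data_id,
       d.event_time,
       d.data_text,
       d.unit_id,
       o.operator_name
FROM tbldata d
LEFT JOIN tbloperatorschedule s
       ON s.unit_id = d.unit_id
      AND d.event_time BETWEEN s.schedule_start_time AND s.schedule_end_time
LEFT JOIN tbloperator o
       ON o.operator_id = s.operator_id
ORDER BY d.data_id;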





Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Amitabh Kant
On Sun, Nov 20, 2011 at 1:12 PM, Phoenix Kiula wrote:

> 
>
> Another idea.
>
> I use CSF/LFD firewall.
>
> For TCP_IN, I have enabled "6432" port number.
>
> Do I also need to enable it elsewhere, such as TCP_OUT or UDP_IN etc?
>

Could you just try disabling the firewall for once?

Amitabh


Re: [GENERAL] Installed. Now what?

2011-11-19 Thread Amitabh Kant
On Sun, Nov 20, 2011 at 12:02 PM, Phoenix Kiula wrote:

> On Sun, Nov 20, 2011 at 2:16 PM, Phoenix Kiula 
> wrote:
> > On Sun, Nov 20, 2011 at 1:57 PM, Amitabh Kant 
> wrote:
> >>
> >> On Sun, Nov 20, 2011 at 12:12 PM, Tomas Vondra  wrote:
> >>>
> >>> Dne 20.11.2011 04:21, Phoenix Kiula napsal(a):
> >>> 
> >>> My guess is that you actually require a password when connecting to the
> >>> database, but you haven't specified a password in the pgbouncer.ini
> >>> file. You have to specify it in the MYDB line, i.e. something like
> >>>
> >>> [databases]
> >>> MYDB  = host=127.0.0.1 dbname=MYDB user=MYUSER client_encoding=utf8
> >>> port=5432 password='mypassword'
> >>>
> >>>
> >>> The auth_file is used only for connecting to the pgbouncer, it's not
> >>> forwarded to the database server - the pgbouncer opens the connection
> on
> >>> behalf of the users, and you may actually have a completely different
> >>> users on the connection pooler.
> >
> >
> > OK. So I specified the password enclosed in double quotes.
> >
> > [databases]
> > MYDB  = host=127.0.0.1 dbname=MYDB user=MYUSER client_encoding=utf8
> > port=5432 password="mypassword"
> >
> >
> > Then restarted pgbouncer:
> >
> >   service pgbouncer restart
> >
> > And this shows up as this:
> >
> >   > lsof -i | grep pgbouncer
> >   pgbouncer  8558 postgres7u  IPv4 26187618   TCP
> > localhost:lds-distrib (LISTEN)
> >
> > Is this normal? Shouldn't the port number be somewhere? What's
> "lds-distrib"?
>
>
>
> I changed the port to the usual 6432 in the pgbouncer.ini. Restarted.
> Now I see this:
>
>
> > lsof -i | grep pgbounc
> pgbouncer 10854 postgres7u  IPv4 26257796   TCP localhost:6432
> (LISTEN)
>
>
> So this is live and working. Pgbouncer is working. And yet, this is a
> problem:
>
>
> > psql MYDB -E "MYDB_MYDB" -p 6432 -W
> Password for user MYDB_MYDB:
> psql: ERROR:  no working server connection
>
>
> From the log file:
>
>
> 2011-11-20 01:28:57.775 10854 WARNING server login failed: FATAL
> password authentication failed for user "MYDB_MYDB"
> 2011-11-20 01:28:57.775 10854 LOG S-0x1ae2efc0:
> MYDB/MYDB_MYDB@127.0.0.1:5432 closing because: login failed (age=0)
> 2011-11-20 01:29:46.413 10854 LOG Stats: 0 req/s, in 0 b/s, out 0
> b/s,query 0 us
>
>
> The password I am entering in the terminal is right for sure. I've
> tried it a few times, checked the caps lock, etc. Also, if the log
> carries this "FATAL password authentication failed", why does the
> terminal give the vague error "no working server connection"?
>
> Thanks.
>

Just a trial: try the password without quotes in your pgbouncer config file.
That's how I have specified it in mine, and it is working.

Amitabh


Re: [GENERAL] Installed. Now what?

2011-11-19 Thread Amitabh Kant
On Sun, Nov 20, 2011 at 12:12 PM, Tomas Vondra  wrote:

> Dne 20.11.2011 04:21, Phoenix Kiula napsal(a):
> 
> My guess is that you actually require a password when connecting to the
> database, but you haven't specified a password in the pgbouncer.ini
> file. You have to specify it in the MYDB line, i.e. something like
>
> [databases]
> MYDB  = host=127.0.0.1 dbname=MYDB user=MYUSER client_encoding=utf8
> port=5432 password='mypassword'
>
>
> The auth_file is used only for connecting to the pgbouncer, it's not
> forwarded to the database server - the pgbouncer opens the connection on
> behalf of the users, and you may actually have a completely different
> users on the connection pooler.
>
> Tomas
>
>
I just checked my pgbouncer config file, and yes, it does require a password
in the db connection line.

Amitabh


Re: [GENERAL] Installed. Now what?

2011-11-19 Thread Amitabh Kant
On Sun, Nov 20, 2011 at 4:14 AM, Phoenix Kiula wrote:

> On Sun, Nov 20, 2011 at 3:35 AM, Adrian Klaver 
> wrote:
>
> [databases]
> MYDB  = host=127.0.0.1 dbname=MYDB user=MYUSER client_encoding=utf8
> port=5432
>
> ;; Configuation section
> [pgbouncer]
> logfile = /var/log/pgbouncer.log
> pidfile = /var/run/pgbouncer/pgbouncer.pid
> ; ip address or * which means all ip-s
> listen_addr = 127.0.0.1
> listen_port = 6543
> auth_type = trust
> auth_file = /var/lib/pgsql/pgbouncer.txt
>
> admin_users = postgres
> stats_users = stats, root
> pool_mode = session
> server_reset_query = DISCARD ALL
>
> ;;; Connection limits
> ; total number of clients that can connect
> max_client_conn = 1500
> default_pool_size = 50
>
>
I am assuming the difference in the port numbers between your config file
and PHP code is a typing error.

Does the auth file (/var/lib/pgsql/pgbouncer.txt) contain valid login
credentials for your database? If I remember correctly, it should have the
username and password for your database.
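
For reference, pgbouncer's auth_file is a plain text file with one quoted
username/password pair per line - a minimal sketch (credentials are
placeholders):

"MYDB_MYDB" "mypassword"
"postgres" "postgrespassword"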

Amitabh


Re: [GENERAL] Selecting all records which are in upper case

2011-08-27 Thread Amitabh Kant
On Sat, Aug 27, 2011 at 3:40 PM, Martijn van Oosterhout
wrote:

> On Sat, Aug 27, 2011 at 03:12:44PM +0530, Amitabh Kant wrote:
> > Hello
> >
> > I have a simple table 'location' :
> > id -> Int (associated with a sequence)
> > name -> Character varying (100)
> >
> > I have to delete all records where values in name field are all in upper
> > case. For example, if the test data is as follows:
>
> Might not work if you have non-ascii characters (but your example code
> breaks there too), but what about:
>
> DELETE ... WHERE upper(name) = name;
>
> Have a nice day,
> --
> Martijn van Oosterhout  http://svana.org/kleptog/
>

Thanks Martijn. Definitely a much simpler way. I also cross-checked it on my
table, and it does work with non-ASCII characters: it only matches the
upper-case entries, and all other entries remain unaffected.


[GENERAL] Selecting all records which are in upper case

2011-08-27 Thread Amitabh Kant
Hello

I have a simple table 'location' :
id -> Int (associated with a sequence)
name -> Character varying (100)

I have to delete all records where values in name field are all in upper
case. For example, if the test data is as follows:

id    name
1     abcc
2     Abc dsase
3     CDF FDER
4     Amcddd FFR
5     EE DFEW
6     Sedcd

Only records #3 and #5 are to be deleted. The closest I could get was
this:
"delete from location where (ascii(substring(name from 1 for 1)) between 65
and 90) and (ascii(substring(name from char_length(name) for 1)) between 65
and 90)"
The problem with this query is that it would also delete record #4.

How do I get it to select only those records in which all characters are in
uppercase?


Amitabh


Re: [GENERAL] Postgres on SSD

2011-08-10 Thread Amitabh Kant
There have been several discussions about SSDs in recent months, although not
specific to Fusion-io drives.

See http://archives.postgresql.org/pgsql-general/2011-04/msg00460.php . You
can search the archives for more such references.


Amitabh

2011/8/11 Ondrej Ivanič 

> Hi,
>
> 2011/8/10 Tomas Vondra :
> > On 10 Srpen 2011, 1:17, Ondrej Ivanič wrote:
> >> - What needs to be changed at Postgres/Operating system level? The
> >> obvious one is to change random_page_cost (now: 2) and seq_page_cost
> >> (now: 4). What else should I look at?
> >
> > Are you sure about this? I'm not quite sure setting seq_page_cost=4 and
> > random_page_cost=2 makes sense. Usually seq_page_cost is lower than
> > random_page_cost, so I wonder why have you set it like that.
>
> Ups! Well spotted Tomas! The actual values are:
> random_page_cost = 2
> seq_page_cost = 1
>
> --
> Ondrej Ivanic
> (ondrej.iva...@gmail.com)
>
>


Re: [GENERAL] Backup & Restore a database in PostgreSQL

2011-08-08 Thread Amitabh Kant
Hi Siva

Not sure if it would help, but try passing -O in your pg_restore command.
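
A minimal sketch (host, user and database names are placeholders); -O /
--no-owner makes pg_restore skip the commands that set object ownership, so
objects end up owned by the restoring user instead of requiring the original
owners or a superuser:

pg_dump -Fc -h localhost -U username -f dump_file.pg db
pg_restore -O -c -h localhost -U username -d db dump_file.pg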

Amitabh

On Mon, Aug 8, 2011 at 2:04 PM, Siva Palanisamy  wrote:

> Hi John,
>
> Thanks a lot for your reply. As usual Backup worked perfectly. When I tried
> restore using the command you provided, I got the below list of errors!
> Please help me out on this.
>
> pg_restore: [archiver (db)] Error while PROCESSING TOC:
> pg_restore: [archiver (db)] Error from TOC entry 14; 1255 16384 FUNCTION
> plpgsql_call_handler() postgres
> pg_restore: [archiver (db)] could not execute query: ERROR:  must be owner
> of function public.plpgsql_call_handler
>Command was: DROP FUNCTION public.plpgsql_call_handler();
> pg_restore: [archiver (db)] Error from TOC entry 276; 2612 16387 PROCEDURAL
> LANGUAGE plpgsql
> pg_restore: [archiver (db)] could not execute query: ERROR:  must be
> superuser to drop procedural language
>Command was: DROP PROCEDURAL LANGUAGE plpgsql;
> pg_restore: [archiver (db)] Error from TOC entry 5; 2615 2200 SCHEMA public
> postgres
> pg_restore: [archiver (db)] could not execute query: ERROR:  must be owner
> of schema public
>Command was: DROP SCHEMA public;
> pg_restore: [archiver (db)] could not execute query: ERROR:  schema
> "public" already exists
>Command was: CREATE SCHEMA public;
> pg_restore: [archiver (db)] Error from TOC entry 1566; 0 0 COMMENT SCHEMA
> public postgres
> pg_restore: [archiver (db)] could not execute query: ERROR:  must be owner
> of schema public
>Command was: COMMENT ON SCHEMA public IS 'Standard public schema';
> pg_restore: [archiver (db)] Error from TOC entry 276; 2612 16387 PROCEDURAL
> LANGUAGE plpgsql
> pg_restore: [archiver (db)] could not execute query: ERROR:  must be
> superuser to create procedural language
>Command was: CREATE PROCEDURAL LANGUAGE plpgsql;
> pg_restore: [archiver (db)] Error from TOC entry 14; 1255 16384 FUNCTION
> plpgsql_call_handler() postgres
> pg_restore: [archiver (db)] could not execute query: ERROR:  permission
> denied for language c
>Command was: CREATE FUNCTION plpgsql_call_handler() RETURNS
> language_handler
>AS '$libdir/plpgsql', 'plpgsql_call_handler'
>LANGUAG...
> pg_restore: WARNING:  no privileges could be revoked
> pg_restore: WARNING:  no privileges could be revoked
> pg_restore: WARNING:  no privileges were granted
> pg_restore: WARNING:  no privileges were granted
> WARNING: errors ignored on restore: 7
>
> Thanks and Regards,
> Siva.
>
>
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
> Sent: Monday, August 08, 2011 1:45 PM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Backup & Restore a database in PostgreSQL
>
> On 08/08/11 1:01 AM, Siva Palanisamy wrote:
> >
> > Hi All,
> >
> > I am also a newbie here! I need to backup a database and restore it
> > into the target machine where the database may already present or
> > might not. If it exists, I want the "restore" command to overwrite,
> > otherwise, just create a new one.
> >
> > I tried using the commands:
> >
> > (1) BACKUP: pg_dump -h localhost -U username db > dump_file.out;
> >
> > RESTORE: pg_dump -h localhost -U username db < dump_file.out;
> >
> > (2) BACKUP:
> >
> > pg_dump -h localhost -U username -Ft db > dump_file.tar;
> >
> > RESTORE:
> >
> > pg_restore -h localhost -U username -d db dump_file.tar;
> >
> > "Backup" worked perfect in the above 2 scenarios whereas "restore"
> > didn't yield the exact results. For testing it, I took the back-up and
> > intentionally deleted few records in a table. I then restored the
> > database in the same machine where the database exists. I expected the
> > deleted records to come back as I was restoring the one which has the
> > complete data. It didn't yield proper results. And I wonder why..
> >
> > I believe I might be doing something marginally wrong. I would
> > appreciate if any geek over here to guide me the "restore" command
> > properly.
> >
> >
>
> specify -c on the pg_restore, and it will drop the database objects and
> recreate them
>
> pg_dump -Fc -h localhost -U user -f dumpfile.pg dbname
>
> pg_restore -c -h localhost -U user -d dbname dumpfile.pg
>
>
>
>
> --
> john r pierceN 37, W 122
> santa cruz ca mid-left coast
>
>
>

Re: [GENERAL] getting postgres server on freebsd startup?

2011-06-22 Thread Amitabh Kant
On Thu, Jun 16, 2011 at 10:59 PM, Jeff Hamann wrote:


> Please don't respond with "Why don't you just use the ports collection?"
> There are reasons - like: 1) the need to build from source, 2) it's for a
> tutorial, and 3) postgresql90-server isn't building.
>
>  Respectfully,
> Jeff.
>
>
>
While I can't answer your other questions, I have just completed a fresh
build of postgresql90-server on FreeBSD 8.2 (amd64 arch) without any
problems using the ports. What error did you receive when using the port
build?

With regards

Amitabh


Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-06-19 Thread Amitabh Kant
On Mon, Jun 20, 2011 at 1:43 AM, Alexander Farber <
alexander.far...@gmail.com> wrote:

> Hello Cedric and others,
>
> On Sun, Jun 19, 2011 at 9:56 PM, Cédric Villemain
>  wrote:
> > 2011/6/19 Alexander Farber :
> >> [pgbouncer]
> >> logfile = /var/log/pgbouncer.log
> >> pidfile = /var/run/pgbouncer/pgbouncer.pid
> >> listen_port = 6432
> >> unix_socket_dir = /tmp
> >> auth_type = md5
> >> auth_file = /var/lib/pgsql/data/global/pg_auth
> >> pool_mode = transaction
> >> server_check_delay = 10
> >> max_client_conn = 200
> >> default_pool_size = 20
>
> >> My php script displaying player stats:
> >> http://preferans.de/user.php?id=OK493430777441
> >> will sometimes exit with the PDO error:
> >>
> >>  SQLSTATE[26000]: Invalid sql statement name:
> >>  7 ERROR: prepared statement
> >>  "pdo_stmt_0016" does not exist
> >>
>


> why add a begin/commit if I only
> have SELECT statements
> there (in the default mode) and
> the data isn't critical to me
> (just some player statistics and
> notes by other players - i.e.
> a statistic or note is ok to be lost
> occasionally)?
>
> Also I've changed my PHP-script
> to non-persistent connections:
>
>   $options = array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION);
>$db = new PDO(sprintf('pgsql:host=%s port=%u; dbname=%s',
>DBHOST, DBPORT, DBNAME), DBUSER, DBPASS, $options);
>
> and restarted Apache 2.2.3,
> but that error is still there:
>
> SQLSTATE[26000]: Invalid sql
> statement name: 7 ERROR: prepared
> statement "pdo_stmt_000a" does not exist
>
> Regards
> Alex
>
>
Try setting "set server_reset_query = DISCARD ALL;" in your pgbouncer
configuration file.
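
For reference, a minimal sketch of how that line sits in pgbouncer.ini,
reusing only the settings already quoted above:

[pgbouncer]
pool_mode = transaction
server_reset_query = DISCARD ALL;

DISCARD ALL resets all session state on a server connection (prepared
statements, temporary tables, session settings) before the connection is
reused.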

Amitabh


Re: [GENERAL] Partitions and indexes

2011-02-26 Thread Amitabh Kant
On Sun, Feb 27, 2011 at 12:30 AM, John R Pierce  wrote:

> On 02/26/11 10:42 AM, Amitabh Kant wrote:
>
>> On Sat, Feb 26, 2011 at 11:52 PM, John R Pierce <pie...@hogranch.com> wrote:
>>
>>On 02/26/11 10:01 AM, Alban Hertroys wrote:
>>
>>    On 26 Feb 2011, at 18:04, Amitabh Kant wrote:
>>
>>Now if I partition the table T2 based on field T1id,
>>making sure that each distinct T1id is provided its own
>>child table
>>
>>Table T2C1 (inherited from T2, T1id field only contains 1
>>for all rows)
>>Table T2C2 (inherited from T2, T1id field only contains 2
>>for all rows)
>>-- --
>>
>>What I would like to know here is that do I need to add an
>>index for T1id field for either T2 or it's inherited
>>tables (T2C1/T2C2 etc)?
>>
>>No. Either would be rather pointless. In the child tables all
>>the values in that index would have the same exact value,
>>which you don't need as constraint exclusion already pointed
>>the planner to the right table. In the parent table there
>>wouldn't be any data to index.
>>
>>
>>the whole idea of one table per row sounds rather odd and
>>pointless to me.
>>
>>
>>
>> Not sure I am getting you correctly, but every table will have multiple
>> rows of data, but for every row in a given table, the value of the FK would
>> be the same.
>>
>
> ah, I misunderstood the original description.
>
> you're not likely to query T2 by T1id, are you?   Doing so would return all
> of one of those inherited tables
>
>
Almost all queries will have T1id as one of their parameters, although not
the only one.

Amitabh


Re: [GENERAL] Partitions and indexes

2011-02-26 Thread Amitabh Kant
On Sat, Feb 26, 2011 at 11:52 PM, John R Pierce  wrote:

> On 02/26/11 10:01 AM, Alban Hertroys wrote:
>
>> On 26 Feb 2011, at 18:04, Amitabh Kant wrote:
>>
>>  Now if I partition the table T2 based on field T1id, making sure that
>>> each distinct T1id is provided its own child table
>>>
>>> Table T2C1 (inherited from T2, T1id field only contains 1 for all rows)
>>> Table T2C2 (inherited from T2, T1id field only contains 2 for all rows)
>>> --
>>> --
>>>
>>> What I would like to know here is that do I need to add an index for T1id
>>> field for either T2 or it's inherited tables (T2C1/T2C2 etc)?
>>>
>> No. Either would be rather pointless. In the child tables all the values
>> in that index would have the same exact value, which you don't need as
>> constraint exclusion already pointed the planner to the right table. In the
>> parent table there wouldn't be any data to index.
>>
>
> the whole idea of one table per row sounds rather odd and pointless to me.
>
>

Not sure I am getting you correctly: every table will have multiple rows
of data, but for every row in a given table, the value of the FK would be
the same.

Amitabh


Re: [GENERAL] Partitions and indexes

2011-02-26 Thread Amitabh Kant
On Fri, Feb 25, 2011 at 1:17 PM, Chetan Suttraway <
chetan.suttra...@enterprisedb.com> wrote:

>
>
> On Wed, Feb 23, 2011 at 12:52 PM, Amitabh Kant wrote:
>
>> Hello
>>
>> If I have partitioned a table based on a foreign key in a manner where
>> every child table will only have data for single value of the foreign key,
>> do I need to create a index for the foreign key in the primary and/or child
>> tables? I am using version 8.4
>>
>> With regards
>>
>> Amitabh
>>
>>
> Could you post a detailed set of queries?
>
>
> --
> Chetan Sutrave
> http://www.enterprisedb.com
>
>
A simplified representation would be:

Table T1
id int (PK)
name varchar
--
--

Table T2
id int (PK)
T1id int (FK to T1->id)
--
--

Now if I partition the table T2 based on field T1id, making sure that each
distinct T1id is provided its own child table

Table T2C1 (inherited from T2, T1id field only contains 1 for all rows)
Table T2C2 (inherited from T2, T1id field only contains 2 for all rows)
-- 
--

What I would like to know here is: do I need to add an index for the T1id
field, either for T2 or for its inherited tables (T2C1/T2C2 etc)?
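
For illustration only, a minimal sketch of that layout in SQL (names as
above; assuming 8.4 with constraint_exclusion left at its default of
'partition', and ignoring the caveat that indexes and constraints are not
inherited by the child tables):

CREATE TABLE t1 (
    id   serial PRIMARY KEY,
    name varchar
);

CREATE TABLE t2 (
    id   serial PRIMARY KEY,
    t1id integer NOT NULL REFERENCES t1 (id)
);

-- one child per distinct T1id; the CHECK constraint is what lets the
-- planner skip the other children when a query filters on t1id
CREATE TABLE t2c1 (CHECK (t1id = 1)) INHERITS (t2);
CREATE TABLE t2c2 (CHECK (t1id = 2)) INHERITS (t2);

-- e.g. SELECT * FROM t2 WHERE t1id = 2 AND ...;  scans only t2 and t2c2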

Amitabh


[GENERAL] Partitions and indexes

2011-02-22 Thread Amitabh Kant
Hello

If I have partitioned a table based on a foreign key in a manner where every
child table will only have data for a single value of the foreign key, do I
need to create an index for the foreign key in the primary and/or child
tables? I am using version 8.4

With regards

Amitabh


Re: [GENERAL] error while autovacuuming

2010-08-27 Thread Amitabh Kant
On Fri, Aug 27, 2010 at 3:21 PM, tamanna madaan <
tamanna.ma...@globallogic.com> wrote:

>  Hi All
>
> Any idea about this problem ??
>
> Thanks..
> Tamanna
>
>
> -Original Message-
> From: tamanna madaan
> Sent: Fri 8/20/2010 11:54 AM
> To: pgsql-general@postgresql.org
> Subject: error while autovacuuming
>
> Hi All
>
> I am using a cluster setup with two nodes in it . postgres version is 8.1.2
> and slony 1.1.5 is being used for replication.
> I have autovacuum enabled. But it was  giving the following error while
> vacuuming template0 database.
>
> 2010-08-18 18:36:14 UTC LOG: autovacuum: processing database "template0"
> 2010-08-18 18:36:14 UTC ERROR: could not access status of transaction
> 3222599682
> 2010-08-18 18:36:14 UTC DETAIL: could not open file "pg_clog/0C01": No such
> file or directory
>
> Because of this error autovacuum is not able to process other databses in
> postgres including my databse 'my_database'.
> which is causing database to slow down.
>
> I googled about this problem and found that this may be because of some
> data corruption. But I don't see any source of data corruption
> in my database. Can anyone please let me know what could be causing this
> error. We generally don't use template0 database.
> So what could be causing this data corruption.
>
> Following below steps have solved the problem  for now :
>
>  1.  set 'datallowconn' to true for template0 database in pg_database
> table.
>  2. Stop postgres
>  3. create a zero filled 0C01 file in /var/lib/pgsql/data/pg_clog folder.
>  4. start postgres
>  5. Execute `vacuum freeze` for all the databases i.e template0 ,
> template1, postgres and my_database
>  6. set 'datallowconn' to false for template0 database in pg_database
> table.
>
> But I doubt if the problem has been solved permanently or it can reapper
> again. If it has been solved permanenlty then how.
> I just created a zero filled 0C01 file of 256 K size. It doesn't have any
> meaningful data. How this can resolve the problem permanently??
>
> Please reply ...
>
> Thanks...
> Tamanna
>
>
Any chance you can upgrade your version to the 8.4.x series? If I am not
wrong, there has been a significant change in autovacuum between the
versions. In case you cannot, I would still suggest you update your
version to 8.1.21, which should at least cover any known bugs.
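
For reference, steps 1, 5 and 6 of the quoted procedure correspond roughly
to the SQL below (a sketch only, run as a superuser; the zero-filled
pg_clog/0C01 file itself still has to be created at the filesystem level
while the server is stopped):

UPDATE pg_database SET datallowconn = true WHERE datname = 'template0';
-- stop the server, create pg_clog/0C01, start the server again
VACUUM FREEZE;  -- run once in each database: template0, template1, postgres, my_database
UPDATE pg_database SET datallowconn = false WHERE datname = 'template0';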

With regards

Amitabh


Re: [GENERAL] InitDB: Bad system call

2010-08-09 Thread Amitabh Kant
On Mon, Aug 9, 2010 at 6:01 PM, Thom Brown  wrote:

>
> See http://www.postgresql.org/docs/9.0/static/kernel-resources.html
> and the section under NetBSD/OpenBSD.
>
> --
> Thom Brown
> Registered Linux user: #516935
>
>
Thom

Not sure if it's a typo, but shouldn't he be looking under the FreeBSD
section, as he is running FreeBSD 7.0?


Amitabh Kant


Re: [GENERAL] pgtune

2010-08-09 Thread Amitabh Kant
2010/8/9 Sim Zacks 

>
>
> I just found out about pgtune and am trying it out on my server.
>
>
> I have 2.5 questions:
>
> 1) Are these settings the maximum that the server will handle, if it is
> strictly dedicated to postgresql? Meaning if I am running other stuff on
> the server as well, this would be a bad idea.
>
>
> 1a) If I have some intense plpython stored procedures, do they run in
> the postgresql memory space (ie using the memory settings from the
> postgresql.conf, or do they run under their own memory space and then I
> would have to take that into account when allocating postgresql memory?
>
>
> 2) If it sets my max_connections to 80 and would like to set it at 300,
> what would be the appropriate setting to lower at its expense?
>
>
> Sim
>

Look at the options available in pgtune


  -M TOTALMEMORY, --memory=TOTALMEMORY
Total system memory, will attempt to detect if
unspecified
  -T DBTYPE, --type=DBTYPE
Database type, defaults to Mixed, valid options are
DW, OLTP, Web, Mixed, Desktop
  -c CONNECTIONS, --connections=CONNECTIONS
Maximum number of expected connections, default
depends on database type

For question 1, you can set the type of server with the -T option. For
question 2, you can pass the -c parameter and it will adjust the other
parameters accordingly. Not sure about 1a, though.
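
As a sketch only (the input/output paths are placeholders, and flag
spellings may differ slightly between pgtune releases), a dedicated web-type
server expecting 300 connections could be tuned with something like:

pgtune -i /path/to/postgresql.conf -o postgresql.conf.pgtune -T Web -c 300

The generated file can then be reviewed and copied over the original
postgresql.conf.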


Amitabh Kant


Re: [GENERAL] MySQL versus Postgres

2010-08-07 Thread Amitabh Kant
2010/8/7 சிவகுமார் மா 

> 3. The default configuration settings for PostgreSQL are not optimal
> for performance. Can there be a recommended configuration file in the
> installation (assuming certain amount of RAM and processor type) ?
>
> Ma Sivakumar
> http://masivakumar.blogspot.com
>
>

pgtune [http://pgfoundry.org/projects/pgtune/]  is already available for
this purpose.


Amitabh Kant


Re: [GENERAL] db size and VACUUM ANALYZE

2010-02-12 Thread Amitabh Kant
On Fri, Feb 12, 2010 at 10:40 PM, Marcin Krol  wrote:

> Hello,
>
> The db in the application I maintain but didn't write (it obviously
> makes use of PG, v 8.3), has been systematically growing in size from
> about 600M to 1.6G.
>
> At the same time, the performance of the app has degraded significantly
> (several times).
>
> So I've done VACUUM ANALYZE on entire db. Nothing. The db did not
> decrease in size, the performance stayed the same.
>
> So I backed it up using pg_dump, deleted database, and recreated it from
> backup.
>
> The size of db on disk went down to 600M, performance recovered to the
> original level.
>
> Why that is so? I thought that VACUUM ANALYZE does everything that is
> needed to optimize disk usage?
>
> Regards,
> mk
>
>

You need to run VACUUM FULL ANALYZE to reclaim the disk space, but this
takes an exclusive lock on the tables.

See http://www.postgresql.org/docs/8.3/static/sql-vacuum.html
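
For example, to compact a single bloated table rather than the whole
database (the table name is just a placeholder):

VACUUM FULL ANALYZE my_big_table;

For heavily bloated tables on 8.3, CLUSTER or a dump and reload is often
faster than VACUUM FULL.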


With regards

Amitabh Kant


Re: [GENERAL] Comparing bit in an integer field

2009-11-16 Thread Amitabh Kant
On Mon, Nov 16, 2009 at 7:53 PM, A. Kretschmer <
andreas.kretsch...@schollglas.com> wrote:

> In response to Amitabh Kant :
>
> test=# select * from t1;
>  a1
> 
>  0
>  12
>  8
>  0
>  1
>  10
>  7
>  19
> (8 rows)
>
> test=*# select a1 from t1 where (a1>>2)::bit = B'1';
>  a1
> 
>  12
>  7
> (2 rows)
>
>
> Regards, Andreas
>

Thanks. That should solve my problem.
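
An equivalent way to write the same filter, in case it is handy, is to AND
the column with an integer mask (4 = binary 100, i.e. the 3rd bit):

SELECT a1 FROM t1 WHERE (a1 & 4) <> 0;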

Amitabh


[GENERAL] Comparing bit in an integer field

2009-11-16 Thread Amitabh Kant
Hello

I need to compare the bit values of an integer field in my table. For
example, I have a table called "t1" with just one field "a1" with the
following values:

a1
==
0
12
8
0
1
10
7
19

I am trying to fetch all records where the 3rd binary bit is 1, which from
the above example should be 12 (1100) and 7 (0111). If I convert the
values to binary and store them as strings, I can easily compare them using
substr, but I would rather retain the integer field. I have tried using the
get_bit function, but it seems my understanding of the function is not
correct.

I would appreciate any help in this regard.


Amitabh


Re: [GENERAL] Performance evaluation of PostgreSQL's historic releases

2009-09-29 Thread Amitabh Kant
On Tue, Sep 29, 2009 at 1:06 PM, György Vilmos wrote:

> Hello,
>
> I've done a benchmark of recent versions of PostgreSQL's last five major
> releases to see, how performance has changed during the past years from
> version to version.
> You can find the article here:
> http://suckit.blog.hu/2009/09/26/postgresql_history
>
> Thanks for working on this great piece of software!
>
> -- 
>
>
Hello

Thanks for the benchmark. Did you configure the FreeBSD kernel parameters
too, or did you use the default values?

With regards

Amitabh


Re: [GENERAL] PostgreSQL versus MySQL for GPS Data

2009-03-20 Thread Amitabh Kant
You would get better results if you posted in the MySQL forums.

http://forums.mysql.com/


Amitabh



Re: [GENERAL] PostgreSQL versus MySQL for GPS Data

2009-03-19 Thread Amitabh Kant
Just to add to this list, I have been using PostgreSQL to store data
for multiple GPS applications handling more than 150-200 vehicles.
Some of the tables that I have are running into 20-25 million rows
at the most, and on average 10 million rows. I am yet to see a problem
from the database side, although I must admit that I receive data every
10 seconds from the devices.

I am sure that optimizing postgresql.conf and using PostGIS would be of
great help down the road.

With regards

Amitabh
