On 08/07/2012 07:06 PM, karthi keyan wrote:
For an application interlink purpose, I need to *migrate data into
SQL Server 2008*.
The simplest way is usually to connect with psql and export CSV data with:
\copy (SELECT ...) TO '/path/to/file.csv' CSV
or for a whole table:
\copy t
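To make the export side concrete, here is a minimal sketch run from the shell (the database name "mydb", table "t", and column names are placeholders, not from the original post):

```
# Export a whole table as CSV with a header row
psql -d mydb -c "\copy t TO '/tmp/t.csv' CSV HEADER"

# Export the result of an arbitrary query
psql -d mydb -c "\copy (SELECT id, name FROM t WHERE id > 100) TO '/tmp/subset.csv' CSV HEADER"
```

The resulting CSV files can then be loaded into SQL Server 2008 with BULK INSERT or the bcp utility.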
hello admin,
I am using Postgresql for my application development, which is very robust
and secure to use.
For an application interlink purpose, I need to *migrate data into SQL
Server 2008*.
So please advise me, or give some samples of how I can migrate the data.
Regards
Karthik
Hello,
I am doing table partitioning. Everything works except that after executing an
INSERT statement I can't get the number of affected rows: it is always 0. After
searching the documentation, I found that row changes made inside a trigger
function are not visible to the top-level statement.
Partition table using a tr
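For context, this behavior is expected with trigger-based partitioning: a BEFORE INSERT trigger that redirects rows into child partitions returns NULL, which suppresses the insert into the parent, so the top-level INSERT reports 0 rows. A minimal sketch of such a trigger (table, column, and partition names here are assumptions for illustration):

```
CREATE OR REPLACE FUNCTION t_insert_trigger() RETURNS trigger AS $$
BEGIN
    -- Redirect the row into the appropriate child partition
    -- (the partition choice below is illustrative only).
    IF NEW.created_at >= DATE '2012-01-01' THEN
        INSERT INTO t_2012 VALUES (NEW.*);
    ELSE
        INSERT INTO t_old VALUES (NEW.*);
    END IF;
    -- Returning NULL suppresses the insert into the parent table,
    -- which is also why the top-level INSERT reports 0 affected rows.
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_partition_insert
    BEFORE INSERT ON t
    FOR EACH ROW EXECUTE PROCEDURE t_insert_trigger();
```

If an accurate affected-row count is required, having the application insert directly into the correct child partition avoids the problem entirely.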
On 08/08/2012 09:39 AM, Stephen Frost wrote:
Terry,
* Terry Schmitt (tschm...@schmittworks.com) wrote:
So far, executing pg_dumpall
seems to be fairly reliable for finding the corrupt objects after my
initial data load, but unfortunately much of the corruption has been with
indexes which pg_dump
Michael,
* Michael O'Donnell (odonne...@usgs.gov) wrote:
> I am trying to authenticate PostgreSQL 9.0 login roles against LDAP/Active
> directory (AD). PostgreSQL 9.0 is installed on a Windows 2008 R2 64bit. My
> pg_hba.conf setting looks like the following:
My first reaction to this, to be hon
Hello,
I am trying to authenticate PostgreSQL 9.0 login roles against LDAP/Active
directory (AD). PostgreSQL 9.0 is installed on a Windows 2008 R2 64bit. My
pg_hba.conf setting looks like the following:
host samenet ldap ldapserver=
ldapprefix="DOMAIN\"
I am populating the , , , an
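For reference, a complete ldap line in pg_hba.conf for AD simple-bind authentication typically looks like the following (the server name and domain below are placeholders, not values from the original post):

```
# TYPE  DATABASE  USER  ADDRESS  METHOD [OPTIONS]
host    all       all   samenet  ldap ldapserver=ad.example.com ldapprefix="DOMAIN\" ldapsuffix=""
```

With simple bind, the server constructs the bind DN as ldapprefix + username + ldapsuffix, so the "DOMAIN\" prefix makes PostgreSQL bind as DOMAIN\username against AD.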
Terry,
* Terry Schmitt (tschm...@schmittworks.com) wrote:
> So far, executing pg_dumpall
> seems to be fairly reliable for finding the corrupt objects after my
> initial data load, but unfortunately much of the corruption has been with
> indexes which pg_dump will not expose.
Shouldn't be too hard
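One way to follow up on the point above: pg_dump reads tables with sequential scans, so corrupt indexes never get exercised. A hedged sketch of forcing index usage instead (table and column names are placeholders):

```
-- Discourage sequential scans for this session so queries go through indexes.
SET enable_seqscan = off;

-- A query constrained on an indexed column will now read the index and
-- should error out on many forms of index corruption.
SELECT count(*) FROM some_table WHERE indexed_col > 0;

-- Rebuilding from the heap is the usual repair once a bad index is found:
REINDEX TABLE some_table;
```

Note the caveat: a forced index scan is not guaranteed to raise an error for every kind of corruption; some cases only show up as wrong results, e.g. row counts that differ from a sequential-scan run of the same query.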
Thanks Craig.
"# Brad's el-ghetto do-our-storage-stacks-lie?-script" I like it already :)
I may play around with that. Looks interesting. For everyone else, here's a
post describing the use of diskchecker:
http://brad.livejournal.com/2116715.html
I experimented with sysbench today, which was som
Terry,
* Terry Schmitt (tschm...@schmittworks.com) wrote:
> The new environment is RHEL 6.x guests running inside Redhat Virtualization
> using XFS and LVM.
That's quite the shift, yet you left out any details on this piece..
How is the VM connected to the NetApp LUN? What kind of options have
On 08/08/2012 06:23 AM, Terry Schmitt wrote:
Anyone have a solid method to test if fdatasync is working correctly or
thoughts on troubleshooting this?
Try diskchecker.pl
https://gist.github.com/3177656
The other obvious step is that you've changed three things, so start
isolation testing.
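For reference, diskchecker.pl is typically run in two parts with a hard power-cut in between (the hostname, path, and size below are placeholders):

```
# On a second machine that will survive the power cut:
./diskchecker.pl -l

# On the machine under test: write fsync'd test data, then cut power mid-run.
./diskchecker.pl -s observer.example.com create /mnt/test/diskcheck 500

# After reboot, verify that every write the storage acknowledged survived:
./diskchecker.pl -s observer.example.com verify /mnt/test/diskcheck
```

If verify reports lost writes that were acknowledged before the power cut, something in the storage stack is lying about fsync/fdatasync.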
Simon,
While I agree with your reply in general and am working that angle and
more, I'm hoping to add to my personal tool kit and gain more insight into
methods to test fsync and prove without a doubt that it is functioning
properly on any given system no matter what type of database I'm running.
On 7 August 2012 23:23, Terry Schmitt wrote:
> I have a pretty strange issue that I'm looking for ideas on.
> I'm using Postgres Plus Advanced Server 9.1, but I believe this problem is
> relevant to community Postgres. It could certainly be an EDB bug, and
> I am already working with them
Hi All,
I have a pretty strange issue that I'm looking for ideas on.
I'm using Postgres Plus Advanced Server 9.1, but I believe this problem is
relevant to community Postgres. It could certainly be an EDB bug, and I am
already working with them on this.
We are migrating 1TB+ from Oracle to