If you can use a foreign data wrapper to connect
(https://github.com/tds-fdw/tds_fdw), then you can skip the migration back and
forth through CSV.
You could even do partial migrations if needed (it could impact some
queries' speed though).
Pablo
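For reference, a minimal tds_fdw setup looks roughly like this — server name, port, database, and credentials below are all placeholders, and the exact option names should be checked against the tds_fdw README for your version:

```sql
-- Install the extension (the tds_fdw package must be built and installed first)
CREATE EXTENSION tds_fdw;

-- Point a foreign server at the Sybase / MS SQL instance
CREATE SERVER sybase_svr
    FOREIGN DATA WRAPPER tds_fdw
    OPTIONS (servername 'sybase.example.com', port '5000', database 'mydb');

-- Map the local role to remote credentials
CREATE USER MAPPING FOR CURRENT_USER
    SERVER sybase_svr
    OPTIONS (username 'dbuser', password 'secret');

-- Expose a remote table locally; then query it or INSERT ... SELECT from it
CREATE FOREIGN TABLE customers (
    id   integer,
    name text
) SERVER sybase_svr OPTIONS (table_name 'dbo.customers');
```

Once the foreign table exists, the migration is a plain `INSERT INTO local_table SELECT * FROM customers;` with no intermediate CSV file.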
On Fri, May 3, 2019 at 6:37 AM Adrian Klaver
wrote:
> On 5
Hi,
On 2019-05-03 11:06:09 -0700, Igal Sapir wrote:
> Is it possible to connect to Postgres for notifications via telnet? This
> is obviously more for learning/experimenting purposes.
No. The protocol is too complicated to make that realistically doable /
useful.
> I expected a simple way to c
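The supported lightweight way to experiment is two psql sessions rather than raw telnet — a minimal sketch (channel name and payload are arbitrary):

```sql
-- Session 1: subscribe to a channel
LISTEN test_channel;

-- Session 2: fire a notification with a payload
NOTIFY test_channel, 'hello';

-- Back in session 1: psql only reports pending notifications when it next
-- talks to the server, so issue any command, e.g.
SELECT 1;
-- psql then prints a line like:
--   Asynchronous notification "test_channel" with payload "hello" received ...
```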
On 5/3/19 10:14 AM, Julie Nishimura wrote:
Guys,
Do you know what this message means?
POSTGRES_FSM_RELATIONS=CRITICAL: DB control (host:xxx) fsm relations used:
76628 of 8 (96%)
What is generating the above?
Postgres version?
FYI, it seems you piggybacked this post (along with the Pgadmin
There is a port of pgadmin3 from BigSQL that supports PostgreSQL up to 10.
The pgadmin3 in Debian buster also supports PostgreSQL 10.
https://metadata.ftp-master.debian.org/changelogs/main/p/pgadmin3/pgadmin3_1.22.2-5_changelog
What version of pgadmin3 did you use? pgadmin3_1.22 should support
PostgreSQL 9.
Hi Adrian,
Please find the requested details.
What OS(and version) are you using?
Ans:
bash-4.4$ cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.9.2
PRETTY_NAME="Alpine Linux v3.9"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
bash-4.4$
Thanks!
From: Tom Lane
Sent: Friday, May 3, 2019 11:25 AM
To: Julie Nishimura
Cc: Adrian Klaver; pgsql-gene...@postgresql.org
Subject: Re: Pgadmin III
Julie Nishimura writes:
> Hello, I am trying to connect to PostgreSQL 9.6.2 using PGAdmin III, and I am
> gett
Julie Nishimura writes:
> Hello, I am trying to connect to PostgreSQL 9.6.2 using PGAdmin III, and I am
> getting this error:
> An error has occurred:
> Column not found in pgSet: rolcatupdate
> Do you know which version of Pgadmin should I use to avoid this? I am on
> windows 7. Thanks
Develo
On 5/3/19 10:24 AM, Daulat Ram wrote:
Hi Adrian,
Please find the requested details.
What OS(and version) are you using?
Ans:
bash-4.4$ cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.9.2
PRETTY_NAME="Alpine Linux v3.9"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://
Is it possible to connect to Postgres for notifications via telnet? This
is obviously more for learning/experimenting purposes.
I expected a simple way to connect and consume notifications but can not
find any example or documentation on how to do that.
Any ideas?
Thanks,
Igal
Guys,
Do you know what this message means?
POSTGRES_FSM_RELATIONS=CRITICAL: DB control (host:xxx) fsm relations used:
76628 of 8 (96%)
Is this caused by someone deleting a bunch of old data and not vacuuming?
Thanks!
From: Julie Nishimura
Sent: Frid
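If this is the check_postgres fsm_relations check (an assumption — the message format matches), it applies to pre-8.4 servers, where the free space map is bounded by fixed configuration parameters rather than growing on demand. A sketch of what to look at:

```sql
-- Free space map sizing parameters only exist on PostgreSQL 8.3 and earlier;
-- from 8.4 on the FSM is managed automatically and this check is moot.
SHOW max_fsm_relations;
SHOW max_fsm_pages;

-- Raising them requires editing postgresql.conf and restarting, e.g.:
--   max_fsm_relations = 100000
--   max_fsm_pages     = 2000000
```

Deleting a lot of data and then vacuuming can indeed push relation/page counts past these limits, which is what the CRITICAL threshold is flagging.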
If anyone ever needs it, I wrote this one-liner bash loop to create 16 temp
files of 640MB each, with random names (well, a two-liner if you count the
"config" line):
$ COUNT=16; TMPDIR=/pgdata/tmp/
$ for ((i=1; i<=COUNT; i++)); do dd if=/dev/zero of="${TMPDIR}$(tr -cd 'a-f0-9' </dev/urandom | head -c 20).tmp" bs=1M count=640; done
On 5/3/19 8:56 AM, Daulat Ram wrote:
Hello team,
We are getting the below issue while creating a function in Postgres 11.2
nagios=# create or replace function diskf (filesystem text, warn int,
err int) returns text as $BODY$
nagios$# use warnings;
nagios$# use strict;
nagios$# my $fs = $_[0];
Hello team,
We are getting the below issue while creating a function in Postgres 11.2
nagios=# create or replace function diskf (filesystem text, warn int, err int)
returns text as $BODY$
nagios$# use warnings;
nagios$# use strict;
nagios$# my $fs = $_[0];
nagios$# my $w = $_[1];
nagios$# my $e = $
On 5/3/19 10:05 AM, Guntry Vinod wrote:
Hi Team,
Here we go; I will describe the problem in more detail.
Step 1: We get the dump from DB2; this dump is a flat file, which can be .csv or .txt.
Step 2: There is a table in Postgres where we are supposed to upload the dump.
Step 3: We are using the COPY command to uploa
Looping in Nikhil and Shiva, who are from the Mainframe/DB2 side.
Nikhil/Shiva, I am trying to explain the problem to the team, but there are a
few questions which need your input.
Regards,
Vinod
-Original Message-
From: Adrian Klaver
Sent: Friday, May 3, 2019 9:47 PM
To: Guntry Vinod ; Ravi Krishna
Hello, I am trying to connect to PostgreSQL 9.6.2 using PGAdmin III, and I am
getting this error:
An error has occurred:
Column not found in pgSet: rolcatupdate
Do you know which version of Pgadmin should I use to avoid this? I am on
windows 7. Thanks
Hi,
On Fri, May 3, 2019 at 11:20 AM Michael Nolan wrote:
>
>
>
> I'm still not clear what the backslash is for, it is ONLY to separate first
> and last name? Can you change it to some other character?
>
> Others have suggested you're in a Windows environment, that might limit your
> options.
I'm still not clear what the backslash is for; is it ONLY to separate first
and last name? Can you change it to some other character?
Others have suggested you're in a Windows environment; that might limit
your options. How big is the file, is it possible to copy it to another
server to manipul
On 5/3/19 9:05 AM, Guntry Vinod wrote:
Hi Team,
Here we go; I will describe the problem in more detail.
Step 1: We get the dump from DB2; this dump is a flat file, which can be .csv or .txt.
Step 2: There is a table in Postgres where we are supposed to upload the dump.
Step 3: We are using the COPY command to upload d
On 5/3/19 9:05 AM, Guntry Vinod wrote:
Hi Team,
Here we go; I will describe the problem in more detail.
Step 1: We get the dump from DB2; this dump is a flat file, which can be .csv or .txt.
The above is what we need information on:
1) Is it output as CSV or text?
2) What are the parameters used to out
Jeff,
On Fri, May 3, 2019 at 6:56 AM Jeff Janes wrote:
> On Wed, May 1, 2019 at 10:25 PM Igal Sapir wrote:
>
>>
>> I have a scheduled process that runs daily to delete old data and do full
>> vacuum. Not sure why this happened (again).
>>
>
> If you are doing a regularly scheduled "vacuum full
>
> Hope I am detailed this time :-)
>
Unfortunately still not enough. Can you post a sample of the data here, and
the command you used in DB2? Please post the SQL used in DB2 to dump the data.
rihad wrote:
> On 05/03/2019 05:35 PM, Daniel Verite wrote:
> > For non-English text, I would recommend C.UTF-8 over "C" because of
>
> BTW, there's no C.UTF-8 inside pg_collation, and running select
> pg_import_system_collations('pg_catalog') doesn't bring it in, at least
> not on Free
Hi Team,
Here we go; I will describe the problem in more detail.
Step 1: We get the dump from DB2; this dump is a flat file, which can be .csv or .txt.
Step 2: There is a table in Postgres where we are supposed to upload the dump.
Step 3: We are using the COPY command to upload the dump to the table using (COPY
<> from 'C:
On 05/03/2019 05:35 PM, Daniel Verite wrote:
For non-English text, I would recommend C.UTF-8 over "C" because of
BTW, there's no C.UTF-8 inside pg_collation, and running select
pg_import_system_collations('pg_catalog') doesn't bring it in, at least
not on FreeBSD 11.2.
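On platforms whose libc does provide a C.UTF-8 locale (glibc does; FreeBSD 11 does not, which would explain the import finding nothing), the collation can be declared manually — a sketch:

```sql
-- Only works if the operating system actually has the C.UTF-8 locale;
-- otherwise this statement fails with "could not create locale".
CREATE COLLATION "C.UTF-8" (locale = 'C.UTF-8');

-- It can then be used per-column or per-expression:
SELECT name FROM people ORDER BY name COLLATE "C.UTF-8";
```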
>
> I don't think we've seen enough representative data to know exactly what the
> backslash is doing. It doesn't appear to be an escape, based on the sole
> example I've seen it appears to be a data separator between first name and
> last name.
>
> It seems increasingly likely to me that you
On Fri, May 3, 2019 at 9:35 AM Ravi Krishna wrote:
> >
> > In what format are you dumping the DB2 data and with what specifications
> e.g. quoting?
> >
>
> DB2's export command quotes the data with "". So while loading, shouldn't
> that take care of delimiter-in-the-data issue ?
>
I don't think
On 5/3/19 7:35 AM, Ravi Krishna wrote:
In what format are you dumping the DB2 data and with what specifications e.g.
quoting?
DB2's export command quotes the data with "". So while loading, shouldn't that
take care of delimiter-in-the-data issue ?
In the original post the only info was:
On 05/03/2019 05:35 PM, Daniel Verite wrote:
rihad wrote:
Thanks, I'm a bit confused here. AFAIK indexes are used for at least two
things: for speed and for skipping the ORDER BY step (since btree
indexes are already sorted). Will such an "upgrade-immune" C.UTF-8 index
still work correc
>
> In what format are you dumping the DB2 data and with what specifications e.g.
> quoting?
>
DB2's export command quotes the data with "". So while loading, shouldn't that
take care of the delimiter-in-the-data issue?
On Fri, May 3, 2019 at 10:04:44AM -0400, Bruce Momjian wrote:
> One thing the original poster might be missing is that the COPY DELIMITER
> is used between fields, while backslash is used as an escape before a
> single character. While it might be tempting to try to redefine the
> escape character
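To illustrate the point: in COPY's default text format the delimiter only appears between fields, while a literal backslash in the data must be written doubled. A sketch with a hypothetical two-column table:

```sql
CREATE TABLE people (name text, age int);

-- Text format (the default): '|' separates fields, '\' is the escape
-- character, so a literal backslash in the data is written as '\\'.
COPY people FROM stdin WITH (DELIMITER '|');
John\\Doe|42
\.

-- The stored value contains a single backslash: John\Doe
SELECT name, age FROM people;
```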
On Fri, May 3, 2019 at 06:55:55AM -0700, Adrian Klaver wrote:
> On 5/2/19 10:48 PM, Guntry Vinod wrote:
>
> Please do not top post. Inline/bottom posting is the preferred style on this
> list.
> > Hi Team,
> >
> > We are using the below command
> >
> > COPY <> from 'C:\Data_Dump\ABC.txt' DELIMI
On Wed, May 1, 2019 at 10:25 PM Igal Sapir wrote:
>
> I have a scheduled process that runs daily to delete old data and do full
> vacuum. Not sure why this happened (again).
>
If you are doing a regularly scheduled "vacuum full", you are almost
certainly doing something wrong. Are these "vacuu
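A minimal sketch of the usual alternative (table name is a placeholder): a plain, non-FULL vacuum after the deletes, which marks the dead space reusable without rewriting the table or taking an exclusive lock, plus a check that autovacuum is keeping up:

```sql
-- After the daily DELETE of old rows:
VACUUM (VERBOSE, ANALYZE) my_big_table;

-- Confirm autovacuum is enabled and actually visiting the table:
SHOW autovacuum;
SELECT relname, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY last_autovacuum DESC NULLS LAST;
```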
On 5/2/19 10:48 PM, Guntry Vinod wrote:
Please do not top post. Inline/bottom posting is the preferred style on
this list.
Hi Team,
We are using the below command
COPY <> from 'C:\Data_Dump\ABC.txt' DELIMITER '|';
The above shows what you are doing on the input into Postgres.
We still do no
On 5/3/19 4:47 AM, Saupe Stefan wrote:
I'd like to use RLS to 'hide' or 'deactivate' data at some point that
some rows are not visible to the application user anymore.
Let's say user a owns the data and can see all his data.
The application user 'b' can only select,update,delete... 'active' da
rihad wrote:
> Thanks, I'm a bit confused here. AFAIK indexes are used for at least two
> things: for speed and for skipping the ORDER BY step (since btree
> indexes are already sorted). Will such an "upgrade-immune" C.UTF-8 index
> still work correctly for table lookups?
If the lookup
On 5/3/19 4:56 AM, Matthias Apitz wrote:
Hello,
We're investigating the migration of our LMS (Library Management System)
from Sybase ASE 15.7 to PostgreSQL 10.6. The database used in the field has
around 400 columns, some of which also contain BLOB (bytea) data.
The DB size varies up to 20 GByte
On 5/3/19 6:09 AM, Matthias Apitz wrote:
On Friday, May 03, 2019 at 07:38:23AM -0500, Ron wrote:
On 5/3/19 6:56 AM, Matthias Apitz wrote:
Hello,
We're investigating the migration of our LMS (Library Management System)
from Sybase ASE 15.7 to PostgreSQL 10.6. The database used in fi
On Friday, May 03, 2019 at 07:38:23AM -0500, Ron wrote:
> On 5/3/19 6:56 AM, Matthias Apitz wrote:
> >Hello,
> >
> >We're investigating the migration of our LMS (Library Management System)
> >from Sybase ASE 15.7 to PostgreSQL 10.6. The database used in the field has
> >around 400 columns, s
## Matthias Apitz (g...@unixarea.de):
> Re/ the migration of the data itself, are there any use case studies
> which could we keep in mind?
https://wiki.postgresql.org/images/e/e7/Pgconfeu_2013_-_Jens_Wilke_-_Sybase_to_PostgreSQL.pdf
Regards,
Christoph
--
Spare Space
On 5/3/19 6:56 AM, Matthias Apitz wrote:
Hello,
We're investigating the migration of our LMS (Library Management System)
from Sybase ASE 15.7 to PostgreSQL 10.6. The database used in the field has
around 400 columns, some of which also contain BLOB (bytea) data.
The DB size varies up to 20 GByte.
Hello,
We're investigating the migration of our LMS (Library Management System)
from Sybase ASE 15.7 to PostgreSQL 10.6. The database used in the field has
around 400 columns, some of which also contain BLOB (bytea) data.
The DB size varies up to 20 GByte. The interfaces contain any kind of
langu
I'd like to use RLS to 'hide' or 'deactivate' data, so that at some point some
rows are no longer visible to the application user.
Let's say user a owns the data and can see all his data.
The application user 'b' can only select, update, delete... 'active' data, but is
also able to 'deactivate' curre
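Under those assumptions (an `active` boolean column, a table `items`, and roles `a` and `b` — all hypothetical names), the policies might be sketched as:

```sql
ALTER TABLE items ENABLE ROW LEVEL SECURITY;

-- Owner role sees every row, active or not
CREATE POLICY items_owner ON items
    TO a
    USING (true);

-- Application role only sees active rows; the WITH CHECK clause still
-- allows it to UPDATE a row to active = false (i.e. 'deactivate' it),
-- after which the row disappears from its view.
CREATE POLICY items_app ON items
    TO b
    USING (active)
    WITH CHECK (true);
```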
On Fri, May 3, 2019 at 8:19, Laurenz Albe wrote:
> On Thu, 2019-05-02 at 16:55 +, Mark Zellers wrote:
> > I thought I needed the prototype table to be able to define functions
> and procedures that refer to the temporary table but do not create it.
> >
> > Perhaps my assumption that I nee
On Friday, May 03, 2019 at 09:04:34AM +0000, Guntry Vinod wrote:
> Postgres is running on the Windows platform.
Maybe you haven't read completely through the post you are top-posting
on. It was clear to me (from the file name syntax used) that you are on
Windows; that's why I said:
> Wh
Team,
We also tried importing the data by converting it to a CSV file, using
\copy TABLE_NAME FROM 'G:\DB_Backup\FILE.csv' (format csv, null '\N');
Regards,
Biswa
-Original Message-
From: Guntry Vinod
Sent: Friday, May 3, 2019 2:35 PM
To: Matthias Apitz
Cc: Andrew Gierth ;
Postgres is running on the Windows platform.
-Original Message-
From: Matthias Apitz
Sent: Friday, May 3, 2019 2:32 PM
To: Guntry Vinod
Cc: Andrew Gierth ; pgsql-gene...@postgresql.org;
Adrian Klaver ; ravikris...@mail.com; Venkatamurali
Krishna Gottuparthi ; Biswa Ranjan Dash
Subjec
On Friday, May 03, 2019 at 08:45:02AM +0000, Guntry Vinod wrote:
> Hi Andrew,
>
> So you mean to say we need to replace \\ in the data? If so, the data we
> receive is a huge chunk (we cannot even open it in Notepad++).
>
> ...
Hi Guntry,
What about piping the data on a Linux or any othe
Hi Andrew,
So you mean to say we need to replace \\ in the data? If so, the data we
receive is a huge chunk (we cannot even open it in Notepad++).
Can we pass CSV instead of .txt or some other format? Do we have a
solution? If yes, can you please give me an example?
Many Thanks,
Vinod
--
> "Guntry" == Guntry Vinod writes:
Guntry> Hi Team,
Guntry> We are using the below command
Guntry> COPY <> from 'C:\Data_Dump\ABC.txt' DELIMITER '|';
COPY in postgresql expects one of two data formats; since you did not
specify CSV, in this case it's expecting the default postgresql fo
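If the DB2 export really is quoted CSV, telling COPY so (instead of relying on the text-format default) usually makes the backslashes a non-issue, since backslash is not special in CSV mode. A sketch, with the table name as a placeholder and the path taken from the thread:

```sql
-- CSV mode: '|' separates fields, '"' quotes values that contain the
-- delimiter; backslash is treated as ordinary data.
COPY mytable FROM 'C:\Data_Dump\ABC.txt'
    WITH (FORMAT csv, DELIMITER '|', QUOTE '"');
```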
Hi Team,
We are using the below command
COPY <> from 'C:\Data_Dump\ABC.txt' DELIMITER '|';
Regards,
Vinod
-Original Message-
From: Adrian Klaver
Sent: Thursday, May 2, 2019 8:58 PM
To: Guntry Vinod ; ravikris...@mail.com
Cc: pgsql-gene...@postgresql.org; Venkatamurali Krishna Gottup