It's working now...
Final code:
ALTER TABLE public.companies ADD COLUMN client_code_increment integer;
> ALTER TABLE public.companies ALTER COLUMN client_code_increment SET NOT
> NULL;
> ALTER TABLE public.companies ALTER COLUMN client_code_increment SET
> DEFAULT 1000;
> COMMIT TRANSACTION;
>
>
>
>
> 1) You attached users_code_seq() to a trigger on the users table.
>
yes
> 2) You have a where clause: company_id = NEW.id
> 3) NEW refers to users
> 4) NEW.id is ostensibly a USER ID
>
No...
CREATE OR REPLACE FUNCTION users_code_seq()
>RETURNS "trigger" AS $$
> DECLARE code
On 6 May 2016 at 02:29, David G. Johnston <david.g.johns...@gmail.com>
wrote:
> On Thu, May 5, 2016 at 3:54 AM, Alban Hertroys <haram...@gmail.com> wrote:
>
>>
>> > On 05 May 2016, at 8:42, drum.lu...@gmail.com wrote:
>>
>> > The final functi
>
>
>> SELECT client_code_increment INTO STRICT NEW.code FROM
>> public.companies WHERE id =
>> NEW.id ORDER BY client_code_increment DESC;
>>
>
>
>
> I am pretty sure the above line is wrong. NEW.id refers to users.id, not
> the companies.id. Also, the implementation presents a
I'm just having a problem when doing:
INSERT INTO public.users
> (id,email,encrypted_password,sign_in_count,created_at,updated_at,company_id)
> VALUES (66,'tes...@test.com','password','0','2016-05-03
> 00:01:01','2016-05-03 00:01:01','15');
- see that I'm not providing the "code" column
On 5 May 2016 at 16:56, drum.lu...@gmail.com <drum.lu...@gmail.com> wrote:
> CREATE OR REPLACE FUNCTION users_code_seq()
>>RETURNS "trigger" AS $$
>> BEGIN
>>
>> IF (TG_OP = 'INSERT') THEN
>> UPDATE public.company_seqs
>
> CREATE OR REPLACE FUNCTION users_code_seq()
>RETURNS "trigger" AS $$
> BEGIN
>
> IF (TG_OP = 'INSERT') THEN
> UPDATE public.company_seqs SET last_seq = (last_seq + 1) WHERE
> company_id = NEW.company_id;
>
> ELSEIF NEW.code IS NULL THEN
> SELECT last_seq
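A minimal sketch of the pattern these fragments describe, reconstructed from the quoted pieces (the original function was truncated here, so the exact body is an assumption; table and column names are taken from the fragments above):

```sql
-- Sketch only: reconstructed from the quoted fragments, not the poster's exact code.
CREATE OR REPLACE FUNCTION users_code_seq()
RETURNS trigger AS $$
BEGIN
    IF NEW.code IS NULL THEN
        -- Bump this company's counter and read the new value in one statement.
        UPDATE public.company_seqs
           SET last_seq = last_seq + 1
         WHERE company_id = NEW.company_id   -- NEW refers to users, so use company_id, not id
        RETURNING last_seq INTO NEW.code;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_code_seq_trg
    BEFORE INSERT ON public.users
    FOR EACH ROW EXECUTE PROCEDURE users_code_seq();
```

Doing the UPDATE ... RETURNING into NEW.code in one statement also avoids the race between incrementing company_seqs and reading the value back in a separate SELECT.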
>
>
>
> 1) I just may be over-sensitive to this, but after Adrian Klaver referred
> you to a ten-year-old post that the above looks an awful lot similar to,
> it sure would be nice to see some attribution
> rather than claiming it as your own with "...what *I* did..."
>
I would expect a
- This is what I did...
-- Creating the table
CREATE TABLE public.company_seqs (
    company_id BIGINT NOT NULL,
    last_seq BIGINT NOT NULL DEFAULT 1000,
    CONSTRAINT company_seqs_pk PRIMARY KEY (company_id)
);
-- Creating the function
CREATE OR REPLACE FUNCTION users_code_seq()
RETURNS "trigger" AS'
>
>
>
> I agree that having thousands of sequences can be hard to manage,
> especially in a function, but you did not state that fact before,
> only that you wanted separate sequences for each company. That
> being said, here is an alternate solution.
>
Yep.. that was my mistake.
>
> 1. CREATE
On 4 May 2016 at 01:18, Melvin Davidson wrote:
>
>
> On Tue, May 3, 2016 at 1:21 AM, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>>
>>> Well.. I don't need to add a constraint if I already have a default
>>> value, that's right...
>>>
>>
>> Wrong
>>
>> David
Hi all,
I'm trying to get better performance out of the query below, but I just don't
know what else I can do...
Please have a look and let me know if you can help somehow; if you
need any extra data, just ask.
* Note that the gorfs.inode_segments table is 1.7TB size
I have the
On 3 May 2016 at 12:44, drum.lu...@gmail.com <drum.lu...@gmail.com> wrote:
> This is what I've done:
>
>
> -- 1 - Creating the Sequence:
>
> CREATE SEQUENCE users_code_seq
>> INCREMENT 1
>> MINVALUE 1
>> MAXVALUE 9223372036854775807
>
This is what I've done:
-- 1 - Creating the Sequence:
CREATE SEQUENCE users_code_seq
> INCREMENT 1
> MINVALUE 1
> MAXVALUE 9223372036854775807
> START 1000;
> CACHE 1;
-- 2 - Setting the DEFAULT
ALTER TABLE public.users ALTER COLUMN code SET DEFAULT
>
>
>
> Generically speaking, if the total of idx_scan + idx_tup_read +
> idx_tup_fetch is 0, then it is an _indication_ that those indexes should
> be dropped.
> You should also consider how long those indexes have existed and how often
> queries are executed.
>
> A good practice would be to save
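The stats being discussed live in the standard pg_stat_user_indexes view; a minimal check for never-used indexes (suitable for snapshotting periodically into a table, as the truncated advice above suggests) might look like this:

```sql
-- Assumption: standard pg_stat_user_indexes statistics view.
-- Caveat: unique/primary-key indexes enforce constraints even if never scanned,
-- so a zero here is an indication, not proof, that an index can be dropped.
SELECT schemaname, relname, indexrelname,
       idx_scan, idx_tup_read, idx_tup_fetch
  FROM pg_stat_user_indexes
 WHERE idx_scan + idx_tup_read + idx_tup_fetch = 0
 ORDER BY schemaname, relname;
```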
>
>
>>
> Index size and index usage are unrelated. Modifications to the index to
> keep it in sync with the table do not count as "usage" - only reading it
> for where clause use counts.
>
> David J.
>
>
>
So only those with *0 size* should be deleted? Is that what you're saying?
Can you be
The index that I've created and is working is:
Index without typecasting:
> CREATE INDEX CONCURRENTLY ix_clientids2 ON gorfs.inode_segments USING
> btree (full_path);
Thanks for the help, guys!
Melvin, that Query you sent is very interesting..
SELECT n.nspname as schema,
>i.relname
Sorry @Melvin, sent the previous email just to you..
That's a great one, too! Cheers!
Well... the index creation did not help.
If possible, please have a look at the explain analyze results:
http://explain.depesz.com/s/rHOU
What else can I do?
*The indexes I created are:*
- CREATE INDEX
>
> To clarify, the index is based on a function call, "split_part()".
> The WHERE clause only references the full_path column directly, so the planner
> cannot associate the index with the full_path column.
>
Thanks for the explanation, Melvin.
It would be simple like:
CREATE INDEX CONCURRENTLY
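The matching rule being explained can be sketched like this (the index expression is taken from the quoted definition later in the thread, simplified to drop the partial-index predicate; the example predicate values are illustrative):

```sql
-- The index is on an expression, so a query must repeat that expression verbatim
-- for the planner to consider the index.
CREATE INDEX CONCURRENTLY ix_clientids_sketch
    ON gorfs.inode_segments
    USING btree ((split_part(full_path::text, '/', 4)::integer));

-- Can use the index: the WHERE clause contains the indexed expression.
SELECT * FROM gorfs.inode_segments
 WHERE split_part(full_path::text, '/', 4)::integer = 12345;

-- Cannot use it: the planner only sees a predicate on full_path itself.
SELECT * FROM gorfs.inode_segments
 WHERE full_path LIKE '%/12345/%';
```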
>
>
>
> Well, a little more information would be useful like:
>
Oops... yes, sure... sorry about that.
> 1. What is the PostgreSQL version?
>
PostgreSQL 9.2
> 2. What is the O/S?
>
Linux Centos 6.7 64 bits
> 3. What is the structure of gorfs.inode_segments?
>
Table inode_segments: (I'll
Hi all,
I've got the following index on the gorfs.inode_segments table:
>
> CREATE INDEX ix_clientids
> ON gorfs.inode_segments
> USING btree
> (("split_part"("full_path"::"text", '/'::"text", 4)::integer))
> WHERE "gorfs"."is_kaminski_note_path"("full_path"::"text");
And I'm running
The problem was a trigger in my DB, when I disabled it the data started to
be updated.
Lucas
So when I run:
UPDATE ja_jobs t2
> SET time_job = t1.time_job
> FROM junk.ja_test t1
> WHERE t2.id = t1.id
> AND t2.time_job IS DISTINCT FROM t1.time_job;
I get:
UPDATE 2202
So I check the data by doing:
select * FROM public.ja_jobs WHERE id = 14574527
And the "time_job" field is null
I've got two tables:
- ja_jobs
- junk.ja_jobs_23856
I need to update the null column ja_jobs.time_job with the data from the
table junk.ja_jobs_23856
So I'm doing:
>
> UPDATE public.ja_jobs AS b
> SET time_job = a.time_job
> FROM junk.ja_jobs_23856 AS a
> WHERE a.id = b.id
> AND a.clientid
>
>
>
> If I am following, this duplicates the information in
> companies.client_code_increment, in that they both return the last non-user
> code. Of course this assumes, as David mentioned, that the client is not
> using a numeric code system. Then you are left trying to figure whether a
>
On 21 April 2016 at 09:44, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
> On 04/19/2016 07:34 PM, drum.lu...@gmail.com wrote:
>
>> Information. eg.:
>>
>>
>> The schema for the tables.
>>
>> Why is not just adding a
Just forgot to say:
[...]
>
>
>>
>> Does increment_client_code relate to users or some other table, say
>> clients?
>>
>>
> nope.. there is no link between them
>
>
If users.code is empty/null, then the trigger has to get the last
number from client_code_increment and put it in the
>
> Information. eg.:
>
> The schema for the tables.
>
> Why is just adding a DEFAULT value to users.code not an option?
>
>
The customer can add their own value to the users.code column.
That's why I can't have a default value.
> What the default code should be or how it is to be
On 20 April 2016 at 10:38, David G. Johnston <david.g.johns...@gmail.com>
wrote:
> On Tue, Apr 19, 2016 at 3:23 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> Hi all,
>>
>> I've got two tables:
>>
>> - users
&
Hi all,
I've got two tables:
- users
- companies
I'm trying to create a function that:
- if users.code is empty, it gives a default value
- And the increment_client_code in company should auto increment for the
next client code
What I've done so far:
DROP FUNCTION IF EXISTS
Hi all,
Currently, I have four servers:
- 1 Master server
- 1 Slave server (read-only)
- 2 Slaves servers (hot_standby)
We're having I/O and size issues, currently running SATA disks.
So we'll need to change our disks to a SAN.
*Question:*
Can I use the same SAN array to a Master and
I'm using a MASTER server and a SLAVE as read-only as well.
The results I'm posting here is related to the *master* server.
> We're gonna need better stats. iostat, iotop, vmstat etc will all break
> down your io between reads and writes, random vs sequential etc.
>
I'll try to get more data
On 14 April 2016 at 08:52, Joshua D. Drake <j...@commandprompt.com> wrote:
> On 04/13/2016 01:43 PM, drum.lu...@gmail.com wrote:
>
> Question:
>>
>> I know that might not be the best option, but by increasing the RAM and
>> the CACHE would help, right?
&g
Hi all,
At the moment I'm having 100% I/O during the day. My server has SATA HDs,
and it can't be changed now.
So, to solve the problem (or at least try to) I was thinking about doubling the
RAM, and by doing that, increasing the cache.
[image: Inline images 1]
The server has 128GB today:
Hi there.
I'm currently using postgres 9.2.
As you can see below, my "template1" database was being accessed:
[image: Inline images 2]
That server is a 4-day-old backup DB - does a gzip of pg_dump, excluding
some tables; also 4-day old replication using WAL-archive with 345600s
delay; also
Sorry guys... *WRONG *email
On 17 March 2016 at 14:29, drum.lu...@gmail.com <drum.lu...@gmail.com>
wrote:
>
>>>
>> SELECT 'test_' || substring(uuid_generate_v4()::text, 1, 1);
>>
>> Or use a sequence, or just compute "ROW_NUMBER() OVER ()" and tac
>
>>
>>
> I don't deal with Hot Standby's in my day-to-day but if you DROP SCHEMA
> all of the spaced consumed by indexes and tables in that schema will be
> freed. The vacuum might make a small difference in performance on the
> system catalogs (pg_class, stats, etc) that were updated but with
>
>
>
> I see a lot of other problems: you have 3 independent tables. Your 2 queries
> (selects) return 2 independent results; you can't use that for an insert into
> the
> 3rd table. And I think you are looking for an update, not an insert. So you
> have
> to define how your tables are linked together
On 17 March 2016 at 10:21, David G. Johnston <david.g.johns...@gmail.com>
wrote:
> On Wed, Mar 16, 2016 at 1:59 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>>
>> 1 - The problem here is that a VACUUM FULL will lock the whole DB for writes,
&
On 18 March 2016 at 03:23, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
> On 03/16/2016 07:07 PM, drum.lu...@gmail.com wrote:
>
>>
>>
>>
>>
>> I see a lot of other problems: you have 3 independent tables. Your 2
>> queries
>>
>
>
>>
> SELECT 'test_' || substring(uuid_generate_v4()::text, 1, 1);
>
> Or use a sequence, or just compute "ROW_NUMBER() OVER ()" and tack that on.
>
> David J.
>
>
>
Well.. I was able to do it by using:
> INSERT INTO dm.billables_links (billable_id) VALUES ((SELECT billable_id
> FROM
Hi all,
I've got four servers:
1 - Master
2 - Slave Hot Standby (Same hardware)
3 - Slave Hot Standby (Same hardware)
4 - Slave Hot Standby (VM - Very slow machine)
On the master server, I've got a schema named "GORFS" with 80 GB, according
to this SQL:
SELECT schema_name,
>
I'm trying to insert data from TABLE A to TABLE B.
1 - Select billable_id from dm.billable
2 - Select mobiuser_id from ja_mobiusers
3 - Insert the billable_id and the mobiuser_id to the dm.billables_links
table.
*FYI -* It has to be in the same transaction because the mobiuser_id must
go to the
Hi all,
Can you please provide me a query that tells me how much space a schema
takes in my DB?
I'm using one, but I don't think it's right:
>
> SELECT schema_name,
>pg_size_pretty(sum(table_size)::bigint),
>(sum(table_size) / pg_database_size(current_database())) * 100 as a
> FROM
>
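One way to total a schema's on-disk size directly from the catalogs, as a point of comparison for the truncated query above (pg_total_relation_size counts the table plus its indexes and TOAST; the schema name is illustrative):

```sql
-- Restricting to relkind = 'r' (plain tables) avoids counting indexes twice,
-- since pg_total_relation_size already includes them.
SELECT n.nspname AS schema_name,
       pg_size_pretty(sum(pg_total_relation_size(c.oid))::bigint) AS total_size
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind = 'r'
   AND n.nspname = 'gorfs'
 GROUP BY n.nspname;
```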
>>>
>> I wouldn't ask if I hadn't tested it!
>>
>> Will have a look.
>>
>>
> I didn't ask if you tested what you posted. I asked if you tried
> anything else before asking to be fed the answer. If you did, it would be
> nice to include those other attempts.
>
> David J.
>
>
>
On 16 March 2016 at 10:30, David G. Johnston <david.g.johns...@gmail.com>
wrote:
> On Tue, Mar 15, 2016 at 2:01 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>>
>>>>
>>> Not reading the documentation for functions you've ne
>
>
>>
> Not reading the documentation for functions you've never heard of makes
> the list.
>
> David J.
>
INSERT INTO junk.test1 (account_id, code, info)
SELECT account_id, uuid_generate_v4(), info
FROM junk.test2;
It works but I get data like:
abc77f31-0ee6-44fd-b954-08a3a3aa7b28
>
>
>
> On the target table, I've got a CONSTRAINT:
>
>> ALTER TABLE dm.billables
>> ADD CONSTRAINT uc_billable_code_unique_per_account UNIQUE("account_id",
>> "code");
>
>
> So I'm importing a CSV file with repeated values on the field "code"
> Example:
>
>> 'Interpreting Normal/AH'
>>
On 15 March 2016 at 10:46, James Keener wrote:
> Is a uuid a valid value in the application making use of the data? Why
> can't you add the column to table b and then import, or use create the uuid
> in the import select clause? I'm also having trouble understanding the
>
On 15 March 2016 at 10:29, David G. Johnston <david.g.johns...@gmail.com>
wrote:
> On Mon, Mar 14, 2016 at 2:13 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> Hi all,
>>
>>
> favorite
>> <http://dba.stackexchange.com/questions/
On 3 March 2016 at 10:33, Vitaly Burovoy <vitaly.buro...@gmail.com> wrote:
> On 3/2/16, drum.lu...@gmail.com <drum.lu...@gmail.com> wrote:
> > Hi all...
> >
> > I'm working on a Slow Query. It's faster now (It was 20sec before) but
> > still not good.
>
Hi all...
I'm working on a Slow Query. It's faster now (It was 20sec before) but
still not good.
Can you have a look and see if you can find something?
Cheers
Query:
WITH jobs AS (
SELECT
job.id,
job.clientid,
CONCAT(customer.company, ' ', customer.name_first, ' ',
eve
>
>
> On Sun, Feb 28, 2016 at 9:12 AM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> Hi all,
>>
>>
>> Which command would be to export the binary data for a table?
>>
>> I was unable to find it...
>>
>> Thanks
>>
>
>
Have just used the COPY WITH Binary
Thank you.
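For the archive, the COPY ... WITH BINARY form referred to above looks like this (the table and file path are illustrative; server-side COPY TO/FROM a file runs as the server process and requires superuser in 9.2, while psql's \copy does the same from the client side):

```sql
-- Export a table's rows in PostgreSQL's binary format, then load them back.
COPY public.users TO '/tmp/users.bin' WITH BINARY;
COPY public.users FROM '/tmp/users.bin' WITH BINARY;
```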
sider is that this isn't including any on-disk
> space required for your change traffic in the WAL. Your $PGDATA will
> always be larger than the sum of all your databases sizes...
>
>
>> On Tue, Mar 1, 2016 at 3:48 PM, drum.lu...@gmail.com <
>> drum.lu...@gmail.com> wrot
>
>
>>> *Question:*
Payments in a Pending state cannot be invoiced and are excluded from
the Invoice Runs section, but they are showing in the count mechanic.
How can I solve this?
>>>
>>> In 9.2 you probably need to convert the count into a conditional sum:
>>>
>>>
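The "conditional sum" suggested for 9.2 (which predates the FILTER clause added in 9.4) is the SUM-over-CASE idiom; a sketch with illustrative column and status names, not the original schema:

```sql
-- Count only the invoiceable payments per job: pending rows contribute 0.
SELECT job_id,
       sum(CASE WHEN status <> 'pending' THEN 1 ELSE 0 END) AS invoiceable_count
  FROM payments
 GROUP BY job_id;
```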
Hi there
I want to see how big a schema is in my PostgreSQL 9.2.
Got two queries - they return different values... can you please check?
Cheers;
Query 1:
SELECT schema_name,
pg_size_pretty(sum(table_size)::bigint) as "disk space",
(sum(table_size) / pg_database_size(current_database())) *
On 1 March 2016 at 11:35, David G. Johnston <david.g.johns...@gmail.com>
wrote:
> On Mon, Feb 29, 2016 at 2:56 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> *Question:*
>>
>> Payments in a Pending state cannot be invoiced and are ex
There is a number beside each color flag, and a number in the "sub status"
drop down. Instead of "the number of jobs that have billable items that can
be invoiced", it shows "the number of jobs that have *something*
billable but not yet invoiced."
And since those payments cannot be invoiced
This has been solved...
The difference between the files must be the indexes...
All good now.
Thank you
On 26 February 2016 at 17:26, drum.lu...@gmail.com <drum.lu...@gmail.com>
wrote:
> Hi all,
>
>
> I'm doing the pg_restore now in a 1.5TB file:
>
> *# ls -l
Hi all,
Which command would be to export the binary data for a table?
I was unable to find it...
Thanks
Hi all,
I'm doing the pg_restore now in a 1.5TB file:
*# ls -la*
postgres postgres 1575324616939 Feb 20 13:55 devdb_0.sql
But, the restore has gone over 1.6 TB
*# \l+*
1639 GB
How is that even possible?
*pg_restore command:*
/usr/pgsql-9.2/bin/pg_restore -d dbname --exit-on-error
Hi Alban! Sorry.. that was my mistake
Original Query:
SELECT concat(company, ' ', customer_name_first, ' ',
              customer_name_last) AS customer,
       sum(revenue) AS revenue,
       sum(i.quantity) AS quantity,
       sum(i.cost) AS cost
FROM ( SELECT account.id,
              job.customerid,
Anyone can help with that please?
Cheers
On Thursday, 11 February 2016, drum.lu...@gmail.com <drum.lu...@gmail.com>
wrote:
> oh ok!
>
> thanks
>
>
>
> Lucas Possamai
>
> - kinghost.co.nz
Hi all
I'm having this error message:
Input data for column `%1$s` is not compatible with data type
But I was unable to find the table which has this issue.
Would it be possible for someone to help me with that?
Thanks
Complete error link:
https://bitbucket.org/snippets/lucaspossamai/7r9yX
ebruary 2016 at 09:11, David G. Johnston <david.g.johns...@gmail.com>
wrote:
> On Wed, Feb 10, 2016 at 12:56 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> Hi all
>>
>> I'm having this error message:
>>
>> Input data for column `%
ebruary 2016 at 10:27, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:
> On 02/10/2016 12:11 PM, David G. Johnston wrote:
>
>> On Wed, Feb 10, 2016 at 12:56 PM, drum.lu...@gmail.com
>> <mailto:drum.lu...@gmail.com> <drum.lu...@gmail.com
>> <mailto:drum.lu...@g
>
>
> FYI, it is always helpful (and polite) to state the PostgreSQL VERSION and
> O/S for archive documentation purposes!
>
That's right. My mistake... I'm using PostgreSQL 9.2.
>
> Note that various postgresql.conf options, system memory & hardware also
> play a factor here, in addition to
Hi all,
I've got a slow query and I'm trying to make it faster.
*New Query:*
SELECT concat(client.company, ' ', client.name_first, ' ',
> client.name_last) AS customer,
>sum(COALESCE(bill_item.unit_price, billable.unit_price, 0) *
> bill_item.quantity) AS revenue,
>
>
>
>> On 4 February 2016 at 12:03, David G. Johnston
>> <david.g.johns...@gmail.com <mailto:david.g.johns...@gmail.com>> wrote:
>>
>> On Wed, Feb 3, 2016 at 3:48 PM, drum.lu...@gmail.com
>> <mailto:drum.lu...@gmail.com> <drum.l
Hi all,
Below is an example of the auto-generated update query, with
client-supplied keys (_iid). There's a fatal error when _iid is not
numeric. However; this should accept any value.
*Question:* How could I do something that would allow _iid to be more than
just an INT?
WITH
in_rows AS (
On 4 February 2016 at 12:03, David G. Johnston <david.g.johns...@gmail.com>
wrote:
> On Wed, Feb 3, 2016 at 3:48 PM, drum.lu...@gmail.com <drum.lu...@gmail.com
> > wrote:
>
>> Hi all,
>>
>> Below is an example of the auto-generated update query, with
On 2 February 2016 at 13:21, drum.lu...@gmail.com <drum.lu...@gmail.com>
wrote:
> Hi all,
>
> *masterdb01 -> slave01 -> slave02 -> slave03*
>
> *testmaster01 (Full copy from masterdb01)*
>
> I'm trying to refresh my mai
Hi all,
*masterdb01 -> slave01 -> slave02 -> slave03*
*testmaster01 (Full copy from masterdb01)*
I'm trying to refresh my main DB, by running this command on my test server:
ssh postgres@slave02 "/usr/pgsql-9.2/bin/pg_dump
> --exclude-table-data='junk.*' --format=custom master_db_name" |
>
Hi there!
I'm running this command: *(Trying to copy a full DB (2TB) from a
hot-standby server to a master (test) server)*
ssh postgres@myslaveserver "/usr/pgsql-9.2/bin/pg_dump
--exclude-table-data='junk.*' --format=custom master_db" |
/usr/pgsql-9.2/bin/pg_restore --dbname=master_db_temp
Hi all, need some help to add a constraint to an existing table (with data).
I'm running the command:
*Query:*
ALTER TABLE integrations.accounts DROP CONSTRAINT IF EXISTS
cc_at_least_one_setting_needed,
ADD CONSTRAINT cc_at_least_one_setting_needed CHECK
(("qb_settings" IS NOT NULL) or
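One way to add a CHECK constraint to a large, already-populated table without validating every row under the ALTER TABLE lock (NOT VALID for CHECK constraints is available as of 9.2) is to add it NOT VALID and validate in a second, lighter-locking step. The predicate below is abbreviated to the part visible above; the original continued with an OR that was truncated:

```sql
-- Step 1: add the constraint without scanning existing rows.
ALTER TABLE integrations.accounts
    ADD CONSTRAINT cc_at_least_one_setting_needed
    CHECK (qb_settings IS NOT NULL) NOT VALID;

-- Step 2: validate existing rows separately, with a weaker lock.
ALTER TABLE integrations.accounts
    VALIDATE CONSTRAINT cc_at_least_one_setting_needed;
```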
>
> It is customary to restrict emails to a single list, or at least make
> others aware when you do have a legitimate need to cross-post.
>
> Specifically your post on -admin <“Break” in plpgsql Function - PostgreSQL
> 9.2>
>
> Yes, it didn't belong on -admin in the first place but posting it here
On 19 January 2016 at 11:44, dinesh kumar <dineshkuma...@gmail.com> wrote:
> Hello,
>
> On Mon, Jan 18, 2016 at 1:37 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> I've created a function that allows me to do a huge update.
>>
>> Bu
I've created a function that allows me to do a huge update.
But I need to limit this function. I need to do 50k rows (for example) and then
stop it. After that I need to continue from the row where I stopped...
I'll have to modify the call function *select batch_number()* as well.
How can I do
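One batching pattern that fits this description is to process rows in primary-key order, 50k at a time, remembering the last key handled so the next call resumes where the previous one stopped. A sketch with illustrative table, column, and function names (not the poster's actual schema):

```sql
-- Progress tracker: a single row holding the highest id already processed.
CREATE TABLE IF NOT EXISTS batch_progress (last_id bigint NOT NULL DEFAULT 0);

CREATE OR REPLACE FUNCTION batch_number()
RETURNS bigint AS $$
DECLARE
    max_done bigint;
BEGIN
    -- Find the highest id within the next 50k rows past the saved position.
    SELECT max(id) INTO max_done
      FROM (SELECT id FROM big_table
             WHERE id > (SELECT last_id FROM batch_progress)
             ORDER BY id LIMIT 50000) nxt;

    IF max_done IS NULL THEN
        RETURN NULL;  -- nothing left to do
    END IF;

    -- The real update goes here; only this id window is touched.
    UPDATE big_table
       SET needs_fix = false
     WHERE id > (SELECT last_id FROM batch_progress)
       AND id <= max_done;

    -- Remember where we stopped, so the next SELECT batch_number() resumes here.
    UPDATE batch_progress SET last_id = max_done;
    RETURN max_done;
END;
$$ LANGUAGE plpgsql;
```

Calling `SELECT batch_number();` repeatedly (each in its own transaction) then walks the table in 50k-row chunks until it returns NULL.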
and it worked.
Thank you!
Lucas Possamai
kinghost.co.nz
On 10 January 2016 at 19:22, drum.lu...@gmail.com <drum.lu...@gmail.com>
wrote:
> What is the --pgdata=- in your original command?
om> wrote:
> On Fri, Jan 8, 2016 at 8:44 PM, drum.lu...@gmail.com <drum.lu...@gmail.com
> > wrote:
>
> Hi, I'm a bit too lazy to try to suss out the exact reasons for your failure,
> but here is a reasonably thorough guide to setting up replication:
> http://dba.stackexchange.com/a/53
klen <brick...@gmail.com> wrote:
> On Sat, Jan 9, 2016 at 1:54 PM, drum.lu...@gmail.com <drum.lu...@gmail.com
> > wrote:
>
>> Hi,
>>
>> If you are able to stop Postgres on the slave you are taking the base
>>> backup from, you could do this:
>&g
On 10 January 2016 at 11:19, bricklen <brick...@gmail.com> wrote:
> On Sat, Jan 9, 2016 at 2:10 PM, drum.lu...@gmail.com <drum.lu...@gmail.com
> > wrote:
>
>> I could stop the slave then But I'm afraid getting it back onl
On 10 January 2016 at 11:31, bricklen <brick...@gmail.com> wrote:
>
>
> On Sat, Jan 9, 2016 at 2:22 PM, drum.lu...@gmail.com <drum.lu...@gmail.com
> > wrote:
>
>> *rsync would be so
On 10 January 2016 at 10:34, bricklen <brick...@gmail.com> wrote:
>
> On Sat, Jan 9, 2016 at 12:36 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> Do you have any other id
On 10 January 2016 at 10:53, bricklen <brick...@gmail.com> wrote:
> On Sat, Jan 9, 2016 at 1:49 PM, drum.lu...@gmail.com <drum.lu...@gmail.com
> > wrote:
>
>> Hi,
>>
>> If th
Should I point the new slave's replication to the same DB?
Lucas
On Sunday, 10 January 2016, bricklen <brick...@gmail.com> wrote:
> On Sat, Jan 9, 2016 at 2:35 PM, drum.lu...@gmail.com
> drum.lu...@gmail.com
>
13:31, bricklen <brick...@gmail.com> wrote:
> Bottom-posting is the convention in the postgresql lists, and makes it
> easier to follow a long thread.
>
> On Sat, Jan 9, 2016 at 3:16 PM, drum.lu...@gmail.com <drum.lu...@gmail.com
> > wrote:
>
>> My servers are not in th
I've started a new SLAVE PostgreSQL server setup.
*NOTE: I run the pg_basebackup from another STANDBY SERVER, not from the
MASTER.*
1 - screen -t basebackup
2 - su - postgres
3 - cd ~/9.2/data/
4 - ssh postgres@slave01 'pg_basebackup --pgdata=- --format=tar
--label=bb_master --progress