Valid until

2024-05-14 Thread Rama Krishnan
Hi team,

I have a question about the "valid until" parameter in the "create role"
command.

I recently set a user's password validity to "2024-05-13", but despite
this, the user is still able to connect to the database. Is this a bug, or
is the "valid until" parameter just informational, indicating when the
password will expire?

Could you please clarify this for me?
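
For reference, a minimal sketch of the kind of command involved, with an
assumed role name (app_user). Note that VALID UNTIL only limits how long
the password is valid, so it matters only for password-based
authentication methods; connections authenticated via trust, peer, or
similar methods are not rejected by it.

ALTER ROLE app_user VALID UNTIL '2024-05-13';  -- password stops being valid at 2024-05-13 00:00:00
-- the same clause can also be given at creation time:
-- CREATE ROLE app_user LOGIN PASSWORD 'secret' VALID UNTIL '2024-05-13';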


Thanks

Ram


About postgres pg_upgrade

2024-04-05 Thread Rama Krishnan
Hi Team,

Could you please explain how pg_upgrade works? One of my friends says it
works based on pg_restore, and I am a bit confused.


About rsync

2024-03-11 Thread Rama Krishnan
Hi Team,


I have a primary and standby setup. The customer is using a very low
wal_keep_segments value and the database size is also large, so the client
is asking me whether we can use rsync instead of pg_basebackup.

My question is: if the WAL is present on the primary but not on the
standby, rsync will work; but if the WAL files are already missing on the
primary and I use rsync, will the database work? Please explain.

Thanks
RK


PITR partial timing

2023-10-17 Thread Rama Krishnan
Hi team,

I am testing PITR.

1. If I do a full recovery, whatever new tables were created do not
appear. How do I handle this?

2. I have given my logs below. Which time should I choose for a partial
(point-in-time) recovery?


2023-10-16 16:22:35 UTC [16561]: [13-1] user=postgres,db=postgresLOG:
 statement: create database pumastore;
2023-10-16 16:22:35 UTC [16561]: [14-1] user=postgres,db=postgresLOG:
 duration: 70.433 ms
2023-10-16 16:23:02 UTC [16561]: [15-1] user=postgres,db=postgresLOG:
 statement: CREATE TABLE categories (
   category integer NOT NULL,
   categoryname character varying(50) NOT NULL
);
2023-10-16 16:23:02 UTC [16561]: [16-1] user=postgres,db=postgresLOG:
 duration: 6.328 ms
2023-10-16 16:23:28 UTC [16561]: [17-1] user=postgres,db=postgresLOG:
 statement: COPY categories (category, categoryname) FROM stdin;
2023-10-16 16:23:41 UTC [16561]: [18-1] user=postgres,db=postgresLOG:
 duration: 12695.122 ms
2023-10-16 16:23:52 UTC [16561]: [19-1] user=postgres,db=postgresLOG:
 statement: table categories;
2023-10-16 16:23:52 UTC [16561]: [20-1] user=postgres,db=postgresLOG:
 duration: 0.573 ms
2023-10-16 16:24:20 UTC [16561]: [21-1] user=postgres,db=postgresLOG:
 statement: select pg_switch_wal();
2023-10-16 16:24:20 UTC [16561]: [22-1] user=postgres,db=postgresLOG:
 duration: 61.904 ms

2023-10-16 16:26:25 UTC [16561]: [23-1] user=postgres,db=postgresLOG:
 statement: drop table categories;
2023-10-16 16:26:25 UTC [16561]: [24-1] user=postgres,db=postgresLOG:
 duration: 5.165 ms
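
For the partial recovery, the recovery target has to fall after the data
load finished (the COPY at 16:23:41) and before the DROP TABLE at
16:26:25. A minimal sketch of the recovery settings (PostgreSQL 12+: put
them in postgresql.conf and create an empty recovery.signal file in the
data directory); the WAL archive path is an assumption:

restore_command = 'cp /path/to/wal_archive/%f %p'   # assumed archive location
recovery_target_time = '2023-10-16 16:26:00 UTC'    # any point between 16:23:41 and 16:26:25 keeps the table
recovery_target_action = 'promote'                  # open for connections once the target is reached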


How to store query result into another table using stored procedure

2023-06-09 Thread Rama Krishnan
Hi All,


I have a table like below

Create table if not exists digi_card(
 Digi_card_id varchar(100),
created_date timestamp,
updated_date timestamp,
 status varchar(50),
 reason varchar(50)
);

Sample values:

Insert into digi_card values ('ee4422', '2019-03-01 00:25:00', '2021-03-31
22:33:00','Active','NULL');
Insert into digi_card values ('ee4423', '2019-08-01 00:25:00', '2022-07-31
00:33:00','Undigiized ','Move');
Insert into digi_card values ('ee4424', '2021-03-01 00:25:00', '2023-02-27
08:33:00','Active','NULL');


I want to display the cards that were deleted 24 months after their
corresponding created month, and the results should be stored in a
temporary table, so I have written the stored procedure below.

CREATE or REPLACE PROCEDURE deleted_cards_count_test(start_date TIMESTAMP,
end_date TIMESTAMP) AS $$
DECLARE
current_date TIMESTAMP;
month_start_date TIMESTAMP;
month_end_date TIMESTAMP;
month24_end_date TIMESTAMP;
no_deleted_cards bigint;
BEGIN
current_date := start_date;
month_end_date := to_char(date_trunc('month', current_date) + interval '24
month - 1 day' + interval '23 hours 59 minutes 5 seconds','-MM-DD
HH24:MI:SS');
Create temporary table if not exists temp_teport_results(
month_start_date TIMESTAMP,
no_deleted_cards bigint
);
 EXECUTE format('
SELECT COUNT(1) filter (where status =''Undigitized'' and reason is null
and updated_date between %L and %L) no_deleted_cards from digi_card where
created_date between %L and %L
group by months',current_date,month_end_date)INTO no_deleted_cards;
Insert into temp_teport_results (month_start_date,no_deleted_cards) VALUES
(month_start_date,no_deleted_cards);
--- display result
 select * from temp_teport_results;
END;
$$ LANGUAGE plpgsql;


It was created successfully, but when I call this procedure with
parameters I get the error below. Please guide me to fix the issue.

CALL deleted_cards_count_test( '2019-03-01 00:00:00',  '2021-03-31
23:59:59');
ERROR:  too few arguments for format()
CONTEXT:  PL/pgSQL function deleted_cards_count_test(timestamp without time
zone,timestamp without time zone) line 16 at EXECUTE
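
The error comes from the format() call itself: the format string contains
four %L placeholders but only two arguments (current_date and
month_end_date) are supplied, and format() needs one argument per
placeholder. A minimal sketch of the corrected EXECUTE, keeping the
procedure's own variable names and dropping the stray GROUP BY, since a
single aggregate is selected INTO a scalar variable:

EXECUTE format(
    'SELECT count(*) FILTER (WHERE status = ''Undigitized'' AND reason IS NULL
                               AND updated_date BETWEEN %L AND %L)
     FROM digi_card
     WHERE created_date BETWEEN %L AND %L',
    current_date, month_end_date, current_date, month_end_date)
INTO no_deleted_cards;

Renaming the current_date variable (it clashes with the built-in
CURRENT_DATE) and fixing the temp table name typo (temp_teport_results)
are also worth doing, but the format() argument count is what produces
this particular error.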

Regards

A.Rama Krishnan


Vacuum full issue

2023-02-16 Thread Rama Krishnan
Hi All,

One of my friend's projects ran a VACUUM FULL last week; one of the table
columns was of a CLOB-like data type. After that activity the dev team
found that some of the (XML) data was missing or corrupt. Please explain:
does VACUUM FULL have any limitation like that?

Thanks
Ramakrishnan


How to select unique records in PostgreSQL

2022-11-24 Thread Rama Krishnan
Hi All,


I want to get the unique wallet_id values from this table. Even if a
wallet_id appears on multiple occasions it should be counted only once,
and if a wallet_id was already counted in a previous month it should not
be counted again in the following months.


create table student_id
(
student_wallet_id text,
card_id text,
created_date date
);

insert into student_id values ('w1', 'c1', '2022-04-01');
insert into student_id values ('w1', 'c2', '2022-04-03');
insert into student_id values ('w1', 'c3', '2022-04-04');
insert into student_id values ('w1', 'c4', '2022-05-01');
insert into student_id values ('w2', 'c1', '2022-05-01');
insert into student_id values ('w2', 'c2', '2022-05-04');
insert into student_id values ('w3', 'c4', '2022-05-05');
insert into student_id values ('w3', 'c6', '2022-06-03');
insert into student_id values ('w3', 'c7', '2022-06-05');
insert into student_id values ('w3', 'c8', '2022-06-07');
expected output:
---

months  count_wallet_id
2022-04  1
2022-05  2
2022-06  0
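
A minimal sketch of one way to get this, assuming the column types shown
above: take each wallet's first created_date, then count wallets by the
month in which they first appeared. The generate_series left join is only
there so that a month with no new wallets (2022-06) still shows a 0, and
the date range endpoints are assumptions taken from the sample data.

SELECT to_char(m.month, 'YYYY-MM') AS months,
       count(f.student_wallet_id) AS count_wallet_id
FROM generate_series(date '2022-04-01', date '2022-06-01',
                     interval '1 month') AS m(month)
LEFT JOIN (
    SELECT student_wallet_id,
           date_trunc('month', min(created_date)) AS first_month
    FROM student_id
    GROUP BY student_wallet_id
) f ON f.first_month = m.month
GROUP BY m.month
ORDER BY m.month;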


Regards

A.Rama Krishnan


Value Too long varchar(100)

2022-10-27 Thread Rama Krishnan
Hi team,



We receive a CSV file from a client to load into a table in my database.
One particular column, i.e. the client description column, contains more
than 100 characters, so I am getting a "value too long for type character
varying(100)" error. We decided to load only the first 100 characters into
the database. How can I do this with the COPY command?
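
COPY itself cannot truncate values, so one common pattern is to load into
a staging table with an unconstrained text column and then insert only the
first 100 characters into the real table. A minimal sketch, where the
target table, column names, and file path are all assumptions:

CREATE TEMP TABLE client_staging (client_id text, client_description text);

\copy client_staging FROM '/tmp/clients.csv' CSV HEADER

INSERT INTO clients (client_id, client_description)
SELECT client_id, left(client_description, 100)
FROM client_staging;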


About foreign data wrapper

2022-10-17 Thread Rama Krishnan
Hi all,

What is the difference between dblink and a foreign data wrapper?


Thanks
Ramakrishnan


Re: Postgresql acid components

2022-09-14 Thread Rama Krishnan
Hi Adrian

Thanks for your valuable reply.

On Wed, 14 Sep, 2022, 01:11 Adrian Klaver, 
wrote:

> On 9/12/22 09:34, Rama Krishnan wrote:
> > Hi all,
> >
> > In ACID property which postgresql components is supporting it
> >
> >
>
> The database as whole is ACID compliant per:
>
> https://www.postgresql.org/about/
>
> "...has been ACID-compliant since 2001...".
>
>  From what I remember there are some commands that do not run in a
> transaction block:
>
> https://www.postgresql.org/docs/current/sql-createdatabase.html
>
> https://www.postgresql.org/docs/current/sql-vacuum.html
>
> Some forms of below do not run in transaction block:
>
> https://www.postgresql.org/docs/14/sql-reindex.html
>
> https://www.postgresql.org/docs/14/sql-altersubscription.html
>
> https://www.postgresql.org/docs/14/sql-alterdatabase.html
>
> Your best bet is to look at the commands listed here:
>
> https://www.postgresql.org/docs/14/sql-commands.html
>
> to check before using for first time.
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Postgresql acid components

2022-09-13 Thread Rama Krishnan
Hi all,

Which PostgreSQL components support the ACID properties?


Sub:column "" is of type bigint but expression is of type character varying

2022-08-17 Thread Rama Krishnan
Hi All,

I have a table called tickets:


\d tickets

       Column        |  Type  | Collation | Nullable | Default
---------------------+--------+-----------+----------+---------
 id                  | bigint |           | not null |
 ticket_purchase_no  | bigint |           | not null |

The table below contains data that is more than 2 years old:
\d tickets_archive

       Column        |  Type  | Collation | Nullable | Default
---------------------+--------+-----------+----------+---------
 id                  | bigint |           | not null |
 ticket_purchase_no  | bigint |           | not null |




I have purged the old data from the original table. When I restore the
data from the archive table into the original table, I get this error:


insert into tickets select * from tickets_archive;


 column "ticket_purchase_no" is of type bigint but expression is of type
character varying
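
That error usually means the column order or a column type differs between
the two tables, so "select *" lines the values up against the wrong
columns. A minimal sketch of a safer form, listing the columns explicitly
and casting where the archive column really is character varying (any
columns beyond the two shown above are assumptions):

INSERT INTO tickets (id, ticket_purchase_no)
SELECT id, ticket_purchase_no::bigint
FROM tickets_archive;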
Regards

A.Rama Krishnan


Re:

2022-07-26 Thread Rama Krishnan
Hi Adrian


Thanks for your reply,

My actual DB size is 320 GB. Taking a custom-format dump and moving it
directly into S3 took more than one day, so I am trying to use the
directory format, because the parallel option (-j) is supported for
directory-format dumps.
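
For example, a directory-format dump of the cricket database with four
parallel jobs might look like the line below (the output path, job count,
and connection details are assumptions); pg_restore accepts the same -j
option when restoring a directory-format dump.

pg_dump -Fd -j 4 -d cricket -f /backup/cricket_dir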

What is size of table?

I have two databases, for example:

01. cricket 320 GB
02. badminton 250 GB

What sort of time interval are you seeing?

I am purging data to keep only 1 year of data in the DB; for data older
than a year I am going to take a dump backup for future reporting
purposes.

What problem is it causing?

The normal custom-format backup took more than a day.

On Tue, 26 Jul, 2022, 20:34 Adrian Klaver, 
wrote:

> On 7/26/22 06:27, Rama Krishnan wrote:
> > Hi all,
> >
> > How to take a table backup using directory format?
>
> pg_dump -d <dbname> -U <user> -t <table_name> -Fd -f <output_directory>
>
> > I am having huge size of table when I am using a pg_dump it tooks more
> > time kindly suggest me
>
> Not sure what the above means, so:
>
> What is size of table?
>
> What sort of time interval are you seeing?
>
> What problem is it causing?
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


[no subject]

2022-07-26 Thread Rama Krishnan
Hi all,

How do I take a table backup using the directory format?
I have a huge table, and when I use pg_dump it takes a long time. Kindly
suggest an approach.


Re: Batch process

2022-07-20 Thread Rama Krishnan
Hi Adrian,

Thanks for the update.


Is it possible to achieve 1 records deletion on every iteration? My
original table contains 5M records, and during the deletion process it
consumes a lot of CPU and resources.
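
A minimal sketch of batch-wise deletion as a procedure (PostgreSQL 11+),
assuming the sales(sales_id, bill_date) table from this thread, a one-year
cutoff, and a hypothetical batch size of 10000; the COMMIT between batches
keeps each delete small instead of holding one long transaction:

CREATE OR REPLACE PROCEDURE data_purge_batched(batch_size int DEFAULT 10000)
LANGUAGE plpgsql
AS $$
DECLARE
    rows_deleted bigint;
BEGIN
    LOOP
        DELETE FROM sales
        WHERE sales_id IN (
            SELECT sales_id
            FROM sales
            WHERE bill_date < now() - interval '1 year'   -- assumed purge condition
            LIMIT batch_size
        );
        GET DIAGNOSTICS rows_deleted = ROW_COUNT;
        EXIT WHEN rows_deleted = 0;
        COMMIT;   -- end the current batch's transaction before starting the next
    END LOOP;
END;
$$;

It would be invoked as CALL data_purge_batched(); the call must not be
wrapped in an outer transaction block, otherwise the COMMIT inside the
loop will fail.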

On Wed, 20 Jul, 2022, 20:37 Adrian Klaver, 
wrote:

> On 7/19/22 22:08, Rama Krishnan wrote:
> > Hi All,
> >
> > I am doing purge activity my sales table contains 5M records I am going
> > to delete more than 1 year data (which was 3M) records so it was running
> > more so I want to do batch wise deletion through plsql
> >
> >
> >
> >
> >   created or replace function data_purge() returns void as$$
> > Declare
> > Begin
> > Drop table test_old;
> > Create table test_old as select * from sales where bill_date > -interval '1 year';
> >
> > Delete table sales where sales_id in (select sales_id from test_old;
> >
> > End;
> > $$ language plpgsql;
> >
> >
> > Kindly guide me
> >
>
> This looks like a case, going forward for partitioning:
>
> https://www.postgresql.org/docs/14/ddl-partitioning.html
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Batch process

2022-07-19 Thread Rama Krishnan
Hi All,

I am doing a purge activity. My sales table contains 5M records, and I am
going to delete data older than 1 year (about 3M records). The delete was
running for a long time, so I want to do batch-wise deletion through
PL/pgSQL.




 created or replace function data_purge() returns void as$$
Declare
Begin
Drop table test_old;
Create table test_old as select * from sales where bill_date

Function inside query status

2022-06-30 Thread Rama Krishnan
Hi All,


Is it possible to check the execution status of a query inside a function?



 created or replace function data_purge() returns void as$$
Declare
Begin
Drop table test_old;
Create table test_old as select * from sales where bill_date
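
One way to inspect what a statement inside the function actually did is
GET DIAGNOSTICS. A minimal sketch along the lines of the snippet above
(the purge condition is an assumption, since the original line is
truncated in the archive):

CREATE OR REPLACE FUNCTION data_purge() RETURNS void AS $$
DECLARE
    rows_deleted bigint;
BEGIN
    DROP TABLE IF EXISTS test_old;
    CREATE TABLE test_old AS
        SELECT * FROM sales WHERE bill_date < now() - interval '1 year';  -- assumed condition
    DELETE FROM sales WHERE sales_id IN (SELECT sales_id FROM test_old);
    GET DIAGNOSTICS rows_deleted = ROW_COUNT;   -- rows affected by the DELETE
    RAISE NOTICE 'deleted % rows from sales', rows_deleted;
END;
$$ LANGUAGE plpgsql;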

[no subject]

2022-06-16 Thread Rama Krishnan
Hi ,

What type of temp database should I use to maintain or export logs?

Thanks
Rama-krishnan


How to get response message

2022-06-10 Thread Rama Krishnan
Hi All,

I want to delete old records using a function. My senior has a function
like the one below, but I want to get the response of this particular
inner query, whether it succeeded or failed.

How do I get the status response of the function?

 create or replace function data_purge() returns void as$$
Declare
Begin
Drop table test_old;
Create table test_old as select * from sales where bill_date
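
One option is to return a status message instead of void and catch
failures with an exception block. A minimal sketch along the lines of the
snippet above (again, the purge condition is an assumption since the
original line is truncated in the archive):

CREATE OR REPLACE FUNCTION data_purge() RETURNS text AS $$
BEGIN
    DROP TABLE IF EXISTS test_old;
    CREATE TABLE test_old AS
        SELECT * FROM sales WHERE bill_date < now() - interval '1 year';  -- assumed condition
    DELETE FROM sales WHERE sales_id IN (SELECT sales_id FROM test_old);
    RETURN 'purge completed successfully';
EXCEPTION
    WHEN OTHERS THEN
        RETURN 'purge failed: ' || SQLERRM;
END;
$$ LANGUAGE plpgsql;

Calling SELECT data_purge(); then returns either the success message or
the failure message with the error text.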

[no subject]

2022-05-23 Thread Rama Krishnan
Hi,

After running pg_repack, do we face any performance issues or other
issues in a production DB?


[no subject]

2022-05-16 Thread Rama Krishnan
Hi All,

How do I install and use pg_repack?


Thanks
RamaKrishnan


Re: Vertical partition

2022-05-06 Thread Rama Krishnan
Thanks a lot. That means a normal primary key and foreign key
relationship, right? Can you please send me a reference link?

On Sat, 7 May, 2022, 06:21 David G. Johnston, 
wrote:

>
>
> On Friday, May 6, 2022, Rama Krishnan  wrote:
>
>> Hi ,
>>
>> Can you pls tell us how to do a vertical partition in postgresql
>>
>
>
> Manually.  “Create table” with the columns you want in each.  You FK
> column will also be your PK column on the non-primary table.
>
> David J.
>
>
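
A minimal sketch of the manual split David describes, with assumed table
and column names; the primary key of the secondary table doubles as the
foreign key back to the main table:

CREATE TABLE customer (
    customer_id bigint PRIMARY KEY,
    name        text NOT NULL
);

CREATE TABLE customer_profile (
    customer_id bigint PRIMARY KEY REFERENCES customer (customer_id),  -- PK and FK in one column
    biography   text,
    photo       bytea
);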


Vertical partition

2022-05-06 Thread Rama Krishnan
Hi ,

Can you please tell us how to do vertical partitioning in PostgreSQL?


Thanks
RamaKrishnan


Performance issue questions

2022-02-09 Thread Rama Krishnan
Hi all,

This is a famous interview question:

If a table contains multiple indexes, what kind of performance issues or
impact will there be?


Doubt in pgbouncer

2021-11-03 Thread Rama Krishnan
Hi All,

What is the limit value for fixing connection timeout issues?

I searched Google and checked the pgbouncer documentation for
client_login_timeout:

If a client connects but does not manage to log in in this amount of time,
it will be disconnected. Mainly needed to avoid dead connections stalling
SUSPEND and thus online restart. [seconds]

Default: 60.0

Can you please suggest the best value, or explain how to decide on this
parameter's value?



Regards

A.Rama Krishnan


Re: Logical replication from Rds into on-premise

2021-07-27 Thread Rama Krishnan
Awesome, thanks! Can you please share the docs? My RDS version is 12.3.

On Tue, 27 Jul, 2021, 19:00 Miles Elam,  wrote:

> On Tue, Jul 27, 2021 at 4:38 AM Dave Cramer 
> wrote:
>
>>
>> Does RDS allow logical replication
>>
>
> Yes, it does. I believe it was patched for v9.6, but v10 and above support
> it out of the box, and the RDS version of PostgreSQL shares that support. I
> have used it with v10 and v11, and it works exactly like the standard PG
> docs say it should.
>
> - Miles
>
>


Re: Logical replication from Rds into on-premise

2021-07-27 Thread Rama Krishnan
Hi Dave,

RDS supports logical replication, but the question is how to set up
replication from RDS to an on-premise server.
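
For reference, a minimal sketch of the pieces involved, with assumed
object names, host, and credentials (on RDS the rds.logical_replication
parameter also has to be enabled in the parameter group, and the
on-premise subscriber needs network access to the RDS endpoint):

-- on the RDS instance (publisher):
CREATE PUBLICATION rds_pub FOR TABLE orders;

-- on the on-premise instance (subscriber), with a matching orders table already created:
CREATE SUBSCRIPTION onprem_sub
    CONNECTION 'host=mydb.xxxx.us-east-1.rds.amazonaws.com dbname=mydb user=repl_user password=secret'
    PUBLICATION rds_pub;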

On Tue, 27 Jul, 2021, 17:08 Dave Cramer,  wrote:

>
> Dave Cramer
> www.postgres.rocks
>
>
> On Mon, 26 Jul 2021 at 18:43, Cory Nemelka  wrote:
>
>>
>> On Mon, Jul 26, 2021 at 12:06 PM Rama Krishnan 
>> wrote:
>>
>>> Hi all,
>>>
>>> I have a postgres server on Aws RDS no i want to replicate the data or
>>> logical replication into the on-premise server. I have gone through DMS
>>> provides the service buy it pricing was high. Do we have any option or
>>> method to achieve this?
>>>
>>> Thanks
>>>
>>> RamaKrishnan
>>>
>>
>> I would start here:
>> https://www.postgresql.org/docs/12/sql-createpublication.html
>>
>> make sure all your tables have primary keys. :D
>>
>
> Does RDS allow logical replication
>


Logical replication from Rds into on-premise

2021-07-26 Thread Rama Krishnan
Hi all,

I have a Postgres server on AWS RDS, and now I want to replicate the data
(logical replication) to an on-premise server. I have looked at DMS, which
provides this service, but its pricing was high. Do we have any other
option or method to achieve this?

Thanks

RamaKrishnan


Re: Doubt on pgbouncer

2021-07-22 Thread Rama Krishnan
Ok, thank you sir

On Thu, 22 Jul, 2021, 20:23 Ninad Shah,  wrote:

> Pgbouncer doesn't provide such a functionality. Kindly consider using
> pgpool.
>
> On Thu, 22 Jul 2021 at 15:58, Ganesh Korde  wrote:
>
>> Try using HAProxy.
>>
>> Regards,
>> Ganesh Korde.
>>
>> On Sat, 3 Jul 2021, 9:06 pm Rama Krishnan,  wrote:
>>
>>>
>>> Hi Team,
>>>
>>> How can I split read and write queries using pgbouncer
>>>
>>> Regards
>>>
>>> A.Rama Krishnan
>>>
>>>
>>>
>>


Re: Doubt on pgbouncer

2021-07-04 Thread Rama Krishnan
Thank you for the clarification.

On Sat, 3 Jul, 2021, 23:20 Ben Chobot,  wrote:

> Rama Krishnan wrote on 7/3/21 8:35 AM:
>
>
> Hi Team,
>
> How can I split read and write queries using pgbouncer
>
>
> You do it with your application. Make a pgbouncer database for read/write
> queries and point it at your postgresql primary, and then make a second
> pgbouncer database for read-only queries and point it at a replica. Have
> your application connect to the proper database, depending upon its needs.
>
> It's not transparent, but it is effective.
>
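
For reference, a minimal pgbouncer.ini sketch of the split Ben describes,
with assumed host names and database aliases; the application connects to
mydb_rw for writes and to mydb_ro for read-only work:

[databases]
mydb_rw = host=primary.example.com port=5432 dbname=mydb
mydb_ro = host=replica.example.com port=5432 dbname=mydb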


Re: Doubt on pgbouncer

2021-07-03 Thread Rama Krishnan
Hi Bruce

Thank you for the clarification. Can you please share any links or docs
about my doubt that could give me further information?

On Sat, 3 Jul, 2021, 21:21 Bruce Momjian,  wrote:

> On Sat, Jul  3, 2021 at 09:05:58PM +0530, Rama Krishnan wrote:
> >
> > Hi Team,
> >
> > How can I split read and write queries using pgbouncer
>
> No.  I think only PgPool can do that because it parses the SQL.
>
> --
>   Bruce Momjian  https://momjian.us
>   EDB  https://enterprisedb.com
>
>   If only the physical world exists, free will is an illusion.
>
>


Doubt on pgbouncer

2021-07-03 Thread Rama Krishnan
Hi Team,

How can I split read and write queries using pgbouncer

Regards

A.Rama Krishnan




Re: Query processing in postgresql

2021-04-15 Thread Rama Krishnan
Thank you so much, Adrian.

Thanks

Rama Krishnan

On Thu, 15 Apr, 2021, 22:40 Adrian Klaver, 
wrote:

> On 4/15/21 10:04 AM, Rama Krishnan wrote:
> > Hi Guys,
> >
> > How the select statement has been processed in the postgresql
>
> https://www.postgresql.org/developer/backend/
>
> Click on tags to get more information.
>
> >
> >
> > Thanks
> >
> > RK
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Query processing in postgresql

2021-04-15 Thread Rama Krishnan
Hi Guys,

How is a SELECT statement processed in PostgreSQL?


Thanks

RK


Import data from one instance another

2021-03-09 Thread Rama Krishnan
Hi Team,

I have two instances called A and B. In instance A I have a table called
sports, and I want to import its data from A into instance B. I found that
a foreign data wrapper is the best solution when the table is huge. Here I
have a doubt: are a foreign data wrapper and a foreign table the same
thing?
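
They are related but not identical: the foreign data wrapper (for example
postgres_fdw) is the extension/driver, while a foreign table is the local
object you define through it. A minimal sketch of pulling the sports table
from instance A into instance B using postgres_fdw, run on instance B;
the host, credentials, and the two example columns are assumptions:

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER instance_a
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'instance-a.example.com', port '5432', dbname 'mydb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER instance_a
    OPTIONS (user 'app_user', password 'secret');

-- the foreign table is a local definition that reads instance A's sports table
CREATE FOREIGN TABLE sports_remote (
    id   bigint,
    name text
) SERVER instance_a OPTIONS (schema_name 'public', table_name 'sports');

INSERT INTO sports SELECT * FROM sports_remote;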


Thanks

Rama Krishnan


Re: Doubt in pgbouncer

2020-10-01 Thread Rama Krishnan
Yeah.  Rob

On Fri, 2 Oct, 2020, 05:14 Rob Sargent,  wrote:

>
>
> > On Oct 1, 2020, at 5:42 PM, Rama Krishnan  wrote:
> >
> > Hi Friends,
> >
> > By using pg bouncer can we split read and queries
> >
> > Thank
> > Rk
>
> Did you mean reads vs writes?
>


Re: Doubt in pgbouncer

2020-10-01 Thread Rama Krishnan
Yes Rob, can we split read/write queries using pgbouncer?

On Fri, 2 Oct, 2020, 08:43 Rama Krishnan,  wrote:

> Yeah.  Rob
>
> On Fri, 2 Oct, 2020, 05:14 Rob Sargent,  wrote:
>
>>
>>
>> > On Oct 1, 2020, at 5:42 PM, Rama Krishnan  wrote:
>> >
>> > Hi Friends,
>> >
>> > By using pg bouncer can we split read and queries
>> >
>> > Thank
>> > Rk
>>
>> Did you mean reads vs writes?
>>
>


Doubt in pgbouncer

2020-10-01 Thread Rama Krishnan
Hi Friends,

By using pg bouncer can we split read and queries

Thank
Rk


Re: Need a depth information about bloat

2020-08-09 Thread Rama Krishnan
Thanks for your reply brother ✌


On Mon, 10 Aug, 2020, 09:47 Adrian Klaver, 
wrote:

> On 8/9/20 8:01 PM, Rama Krishnan wrote:
> > Hi Mentors,
> >
> >
> > I am searching for an understanding of bloat in PostgreSQL
> >
> > What is bloat?
> >
> > How does it work?
> >
> > What is the use of bloat?
> >
> > Limitations of bloat?
>
> Read this section of the docs:
>
> https://www.postgresql.org/docs/12/routine-vacuuming.html
>
> and if you still have questions then come back with them.
>
> >
> > Thanks
> >
> > Rama Krishnan
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Need a depth information about bloat

2020-08-09 Thread Rama Krishnan
Thank you for the clarification brother 

On Mon, 10 Aug, 2020, 09:47 Rob Sargent,  wrote:

>
>
> > On Aug 9, 2020, at 9:01 PM, Rama Krishnan  wrote:
> >
> > 
> > Hi Mentors,
> >
> >
> > I am searching for an understanding of bloat in PostgreSQL
> >
> > What is bloat?
> >
> > How does it work?
> >
> > What is the use of bloat?
> >
> > Limitations of bloat?
> >
> > Thanks
> >
> > Rama Krishnan
> Bloat is not a feature of postgreSQL, it’s a consequence. It has no
> redeeming value.
> Updates to tuples generally cause a new tuples to be constructed and the
> original to be marked as disposable. It is the job of the vacuum process to
> rid the table of this record. Until this happens the marked record as part
> of the bloat for that table.


Need a depth information about bloat

2020-08-09 Thread Rama Krishnan
Hi Mentors,


I am searching for an understanding of bloat in PostgreSQL

What is bloat?

How does it work?

What is the use of bloat?

Limitations of bloat?

Thanks

Rama Krishnan


Re: Doubt in pgbase

2020-08-05 Thread Rama Krishnan
Thanks, I'll check it out.

On Thu, 6 Aug, 2020, 07:51 Ian Barwick,  wrote:

> On 2020/08/06 11:10, Rama Krishnan wrote:
> > Hi Experts,
> >
> >
> >
> > Can anyone explain me how the pg base backup will works?
>
> Have you seen the documentation page?
>
>  https://www.postgresql.org/docs/current/app-pgbasebackup.html
>
> It contains an explanation of how pg_basebackup works.
>
>
> Regards
>
> Ian Barwick
>
>
> --
> Ian Barwick   https://www.2ndQuadrant.com/
>   PostgreSQL Development, 24x7 Support, Training & Services
>


Doubt in pgbase

2020-08-05 Thread Rama Krishnan
Hi Experts,



Can anyone explain to me how pg_basebackup works?

Thanks

Rama Krishnan


How does vacuum works in postgresql

2020-07-25 Thread Rama Krishnan
Hi Experts,

I read this tutorial blog on EDB; it says vacuum has three phases:
1. ingest phase
2. pruning phase
3. cleaning phase

Apart from this blog, I have not found anything else that explains how
vacuum works.

I am getting confused about each phase; can anyone explain them to me?

https://www.enterprisedb.com/postgres-tutorials/how-does-vacuum-work-postgresql


Thanks& Regards
RK


Re: Doubt in mvcc

2020-07-13 Thread Rama Krishnan
Hi Francisco,

Still, I have a doubt: as per your example, both transactions are trying
to update the same tuple, so it may produce two different copies, right?

I read some blogs; they mentioned the two drawbacks above.


It may also lead to a lost update.



For example, two transactions are going to increase the amount on the same
account by $100 . The first transaction reads the current value ($1000) and
then the second transaction reads the same value. The first transaction
increases the amount (this gives $1100) and writes this value. The second
transaction acts the same way: it gets the same $1100 and writes this
value. As a result, the customer lost $100.


Will you please provide more details?


Serialize is the solution to this issue.
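
For what it's worth, a minimal sketch of the two patterns behind that
example, assuming an accounts(id, amount) table: the read-then-write form
can lose an update even under READ COMMITTED, while computing the new
value inside the UPDATE lets row locking serialize the two writers.

-- lost-update pattern: both sessions read 1000, both write 1100, one +100 disappears
BEGIN;
SELECT amount FROM accounts WHERE id = 1;
UPDATE accounts SET amount = 1100 WHERE id = 1;
COMMIT;

-- safe pattern: the second updater blocks, then re-reads the committed row and adds its own 100
BEGIN;
UPDATE accounts SET amount = amount + 100 WHERE id = 1;
COMMIT;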

On Mon, 13 Jul, 2020, 14:12 Francisco Olarte, 
wrote:

> Rama:
>
> On Mon, Jul 13, 2020 at 9:52 AM Rama Krishnan 
> wrote:
> > I m preparing for interview one of the recruiter asked me mvcc drawbacks
> as i told due to mvcc it use more space and need to perform maintenance
> activity.
> > Another one is the same data causes an update conflict because two
> different transactions can update the same version of the row.
> >  he told its wrong, kindly tell me will you please tell me its correct
> or wrong?
>
> I'm not sure I understand your question too well, you may want to
> refresh/expand.
>
> One interpretation is, on a pure MVCC contest, two transactions, say 5
> and 6, could try to update a tuple valid for [1,) and end up
> generating two new tuples, [5,), [6,) and closing the original at
> either [1,5) or [1,6) .
>
> That's why MVCC is just a piece, locking is other. On a MVCC the
> tuples are locked while a transaction manipulates them. Other
> transactions may read them, which is why readers do not block writers,
> but two updates on the same tuple serialize.
>
>
> Francisco Olarte.
>


Doubt in mvcc

2020-07-13 Thread Rama Krishnan
Hi sir,

I am preparing for an interview, and one of the recruiters asked me about
the drawbacks of MVCC. I said that due to MVCC it uses more space and
requires maintenance activity.

Another one was that the same data causes an update conflict, because two
different transactions can update the same version of the row.
He told me this is wrong. Will you please tell me whether it is correct or
wrong?


Thanks
RK