WAL replication standby server: query

2024-10-15 Thread KK CHN
List ,

I am trying to configure a WAL-replicated standby server (EDB 16). In the
archive command I used a dedicated "/data/archive" directory for WAL
archiving, as I don't want WAL files in the default WAL directory to be
overwritten and lost.

I have

archive_command = 'cp %p /data/archive/%f'

In pg_hba.conf I have

host replication all 10.255.10.0/24  md5

My doubt is: does the standby Postgres server (EDB 16) read WAL files on
its own from the primary server's explicit WAL archive folder
/data/archive?
OR
Can the standby read only from the default WAL location, i.e. the pg_wal
directory of the primary server?

Please  enlighten me.

If the standby can read only from the default pg_wal folder, then how can
we make the standby server read from the explicit "/data/archive" folder
so that WAL is synced to the standby automatically?
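From what I could piece together from the documentation -- please correct
me -- a standby does not read the primary's pg_wal or archive folder by
itself; it either streams WAL over a replication connection or fetches
archived segments through its own restore_command. A minimal sketch of the
standby-side setting that would pull from such an archive, assuming
/data/archive is somehow visible to the standby host (shared storage, an
NFS mount, or files copied across):

# on the standby, in postgresql.conf, with standby.signal present in the data directory
restore_command = 'cp /data/archive/%f %p'

If the archive exists only on the primary's local disk, something has to
make it reachable from the standby before a restore_command like this can
work.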

Any input is much appreciated.

Thank you,
Krishane


Re: CLOSE_WAIT pileup and Application Timeout

2024-10-07 Thread KK CHN
On Mon, Oct 7, 2024 at 12:07 AM Alvaro Herrera 
wrote:

> On 2024-Oct-04, KK CHN wrote:
>
> > The mobile tablets are installed with the android based vehicle
> > tracking app which updated every 30 seconds its location fitted inside
> the
> > vehicle ( lat long coordinates) to the PostgreSQL DB through the java
> > backend application to know the latest location of the vehicle and its
> > movement which will be rendered in a map based front end.
> >
> > The vehicles on the field communicate  via 443 to   8080 of the Wildfly
> > (version 27 ) deployed with the vehicle tracking application developed
> with
> > Java(version 17).
>
> It sounds like setting TCP keepalives in the connections between the
> Wildfly and the vehicles might help get the number of dead connections
> down to a reasonable level.  Then it's up to Wildfly to close the
> connections to Postgres in a timely fashion.  (It's not clear from your
> description how do vehicle connections to Wildfly relate to Postgres
> connections.)
>
>
Where do I have to introduce the TCP keepalives: at the OS level or at the
application-code level?

[root@dbch wildfly-27.0.0.Final]# cat /proc/sys/net/ipv4/tcp_keepalive_time
7200
[root@dbch wildfly-27.0.0.Final]# cat /proc/sys/net/ipv4/tcp_keepalive_intvl
75
[root@dbch wildfly-27.0.0.Final]# cat /proc/sys/net/ipv4/tcp_keepalive_probes
9
[root@dbch wildfly-27.0.0.Final]#

These are the default values at the OS level. Do I need to reduce all
three values, to say 600, 20 and 5? Or does this need to be handled in the
application backend code?
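If the OS-level route is taken, a minimal sketch of making such values
persistent on RHEL is a sysctl drop-in; the 600/20/5 values are just the
ones floated above, not a recommendation, and the file name is arbitrary:

# /etc/sysctl.d/90-tcp-keepalive.conf
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 20
net.ipv4.tcp_keepalive_probes = 5

# apply without a reboot
sysctl --system

Note that these kernel settings only affect sockets that actually enable
SO_KEEPALIVE; whether the Wildfly listener and the JDBC pool request
keepalive is a separate, application-level setting.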

 Any hints much appreciated..

>
> I wonder if the connections from Wildfly to Postgres use SSL?  Because
> there are reported cases where TCP connections are kept and accumulate,
> causing problems -- but apparently SSL is a necessary piece for that to
> happen.
>
No SSL in between Wildfly (8080) and PGSQL (5432); both machines are
internal-LAN VMs in the same network. Only the devices in the field
(fitted on the vehicles) communicate with the application backend via a
public URL on port 443, which then connects to port 8080 of Wildfly, and
the Java code connects to the database server running on 5432 on the
internal LAN.

>

> --
> Álvaro Herrera   48°01'N 7°57'E  —
> https://www.EnterpriseDB.com/
> Thou shalt study thy libraries and strive not to reinvent them without
> cause, that thy code may be short and readable and thy days pleasant
> and productive. (7th Commandment for C Programmers)
>


Re: CLOSE_WAIT pileup and Application Timeout

2024-10-06 Thread KK CHN
On Fri, Oct 4, 2024 at 9:17 PM Adrian Klaver 
wrote:

> On 10/3/24 21:29, KK CHN wrote:
> > List,
> >
> > I am facing a  network (TCP IP connection closing issue) .
> >
> > Running a  mobile tablet application, Android application to update the
> > status of vehicles fleet say around 1000 numbers installed with the app
> > on each vehicle along  with a  vehicle tracking  application server
> > solution based on Java and Wildfly with  PosrgreSQL16 backend.
> >
>
> >
> > The  running vehicles may disconnect  or be unable to send the location
> > data in between if the mobile data coverage is less or absent in a
> > particular area where data coverage is nil or signal strength less.
> >
> > The server on which the backend application runs most often ( a week's
> > time  or so) shows connection timeout and is unable to serve tracking
> > of  the vehicles further.
> >
> > When we restart the  Wildfly server  the application returns to normal.
> > again the issue repeats  after a week or two.
>
> Seems the issue is in the application server. What is not clear to me is
> whether the connection timeout you refer to is from the mobile devices
> to the application or the application to the Postgres server?

It's from the mobile devices to the application server. When I restart the
application server everything goes back to normal, but after a period of
time it cripples again. At that point, when I run netstat on the
application VM, I see lots of CLOSE_WAIT states as indicated.
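A quick way to confirm which process is holding those half-closed sockets,
assuming the iproute2 ss tool is available on the application VM:

ss -tanp state close-wait          # lists CLOSE_WAIT sockets with the owning PID/program
ss -tan state close-wait | wc -l   # rough count, comparable to the netstat figure

If every such socket belongs to the Wildfly/Java process, that confirms
the application is not closing connections after the peer has gone away.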


> I'm
> guessing the latter as I would expect the mobile devices to drop
> connections more often then weekly.



Yes, mobile devices may drop connections at any point in time if they
reach an area where the signal strength is poor (e.g. underground parking,
or areas where mobile data coverage is poor).
> >


The topology is: mobile devices connect and update their location via the
application VM, which finally writes to the PGSQL VM.

The application server and the database server are separate virtual
machines. It is the application server that hangs most often, not the
database VM, since there are other applications that update the database
VM without any issue. The DB VM handles all the writes from those other
applications, but they are different applications, not the fleet
management one.


>
> > In the Server machine when this bottleneck occurs  I am seeing  a lot
> > of  TCP/IP CLOSE_WAIT   ( 3000 to 5000 ) when the server backend becomes
> > unresponsive.
>
> Again not clear, are you referring to the application or the Postgres
> database running on the server?
>
> >
> > What is the root cause of this issue ?   Is it due to the android
> > application unable to send the CLOSE_WAIT ACK due to poor mobile data
> > connectivity ?
> >
> >
> >   If so, how do people  address this issue ?  and what may be a fix ?
> >
> >   Any  directions / or reference material most welcome.
> >
> > Thank you,
> > Krishane
> >
> >
> >
> >
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


CLOSE_WAIT pileup and Application Timeout

2024-10-03 Thread KK CHN
List,

I am facing a network issue (TCP/IP connections not closing).

We run a mobile tablet (Android) application that updates the status of a
fleet of roughly 1000 vehicles, with the app installed on each vehicle,
together with a vehicle tracking application server solution based on Java
and Wildfly with a PostgreSQL 16 backend.

The mobile tablets, fitted inside the vehicles, run the Android-based
vehicle tracking app, which updates its location (lat/long coordinates)
every 30 seconds to the PostgreSQL DB through the Java backend
application, so that the latest location and movement of each vehicle can
be rendered in a map-based front end.

The vehicles in the field communicate via port 443 to port 8080 of the
Wildfly (version 27) server, on which the vehicle tracking application
developed in Java (version 17) is deployed.


The mobile tablets communicate with the backend application over mobile
data (4G/5G SIMs).

A running vehicle may disconnect or be unable to send location data in
between, if it passes through an area where mobile data coverage is weak
or absent.

The server on which the backend application runs quite regularly (after a
week or so) shows connection timeouts and is unable to serve vehicle
tracking any further.

When we restart the Wildfly server the application returns to normal, and
then the issue repeats again after a week or two.

On the server machine, when this bottleneck occurs and the backend becomes
unresponsive, I am seeing a lot of TCP connections in CLOSE_WAIT state
(3000 to 5000).

What is the root cause of this issue? Is it because the Android
application is unable to complete the TCP close handshake due to poor
mobile data connectivity?
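One way to narrow this down is to group the CLOSE_WAIT sockets by their
local port -- 8080 would point at the device-to-Wildfly leg, 5432 at the
Wildfly-to-Postgres leg. A sketch with standard tools:

ss -tan | awk '$1 == "CLOSE-WAIT" {print $4}' | sort | uniq -c | sort -rn
# prints a count of CLOSE_WAIT sockets per local address:port

CLOSE_WAIT itself means the remote side has already closed and the local
application has not yet called close(), so the fix usually lies in
whichever application owns those sockets rather than in the network.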


If so, how do people address this issue, and what might be a fix?

Any directions or reference material are most welcome.

Thank you,
Krishane


[PATCH] docs: Fix typo

2024-10-03 Thread KK Surendran
Fix typo in Documentation/gpu/rfc/i915_scheduler.rst -
"paralllel" to "parallel"

Signed-off-by: KK Surendran 
---
 Documentation/gpu/rfc/i915_scheduler.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/gpu/rfc/i915_scheduler.rst 
b/Documentation/gpu/rfc/i915_scheduler.rst
index c237ebc02..2974525f0 100644
--- a/Documentation/gpu/rfc/i915_scheduler.rst
+++ b/Documentation/gpu/rfc/i915_scheduler.rst
@@ -26,7 +26,7 @@ i915 with the DRM scheduler is:
  which configures a slot with N contexts
* After I915_CONTEXT_ENGINES_EXT_PARALLEL a user can submit N batches to
  a slot in a single execbuf IOCTL and the batches run on the GPU in
- paralllel
+ parallel
* Initially only for GuC submission but execlists can be supported if
  needed
 * Convert the i915 to use the DRM scheduler
-- 
2.46.2



PgBackRest : Restore to a checkpoint shows further transactions

2024-09-24 Thread KK CHN
List,


PgBackRest: I tried to restore the latest backup held on my Repo server to
a freshly deployed testing EPAS server.

As of this morning I have one full backup, two diff backups and one incr
backup. The latest one is the incr:


full backup: 20240922-232733F
    timestamp start/stop: 2024-09-22 23:27:33+05:30 / 2024-09-23 09:17:00+05:30

diff backup: 20240922-232733F_20240924-222336D
    timestamp start/stop: 2024-09-24 22:23:36+05:30 / 2024-09-24 22:55:41+05:30

incr backup: 20240922-232733F_20240925-082637I
    timestamp start/stop: 2024-09-25 08:26:37+05:30 / 2024-09-25 08:36:00+05:30



On my Test  EPAS Server :

[root@dbch ~]# sudo -u enterprisedb pgbackrest --stanza=Repo1 --delta \
    --set=20240922-232733F_20240925-082637I --target-timeline=current restore


2024-09-25 10:28:42.493 P00   INFO: restore command end: completed
successfully (2657236ms)


I then commented out the archive_command in the test EPAS server's
postgresql.conf and started the EPAS server.


When I issue a query to select a few rows, to my surprise I am seeing
records with timestamp columns up to 10:36:11.968 and 10:36:13.363:


How did this happen? I specified the restore set (incr) taken at
2024-09-25 08:26:37, so naturally I expected the restore to show records
only up to that timestamp, or up to 2024-09-25 08:36:00+05:30, but it
shows records further on, up to 10:36:11.968 and 10:36:13.363, and nothing
beyond that.


My restore itself ended successfully at 2024-09-25 10:28:42.493 (P00 INFO:
restore command end: completed successfully (2657236ms)).

 Could someone explain how this comes about ?

No records later than 10:36:11.968 and 10:36:13.363 are showing. How did
it get delimited at exactly this timestamp?


So I guess this is due to specifying --target-timeline=current? But the
restore finished at 10:28:42.493.

OR

Does recovery take all the WAL available and replay it up to the time the
EPAS service was started on the testing server?
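My current understanding, which I would like confirmed: without an
explicit recovery target, the restored cluster replays every archived WAL
segment it can fetch, which would explain records past the backup stop
time. A sketch of pinning recovery to the backup's stop time instead (the
target value here is only illustrative):

sudo -u enterprisedb pgbackrest --stanza=Repo1 --delta \
    --set=20240922-232733F_20240925-082637I \
    --type=time --target="2024-09-25 08:36:00+05:30" \
    --target-action=promote restore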


Thank you,

Krishane.


For more input, I queried like this:


t_db=# select * from c.cti_all_info  ORDER BY received_time DESC LIMIT 1;
The relevant (non-empty) columns of the single row returned:

 id               | 66769044
 caller_number    | 555657643942
 call_identifier  | 140771.5140
 ivr_start_time   | 2024-09-25 10:36:11.968
 call_end_time    | 2024-09-25 10:36:13.363
 call_drop_reason | User Disconnected
 call_direction   | IN
 message_list     | ["{\"srcType\":\"ACS\",\"srId\":\" ...



I have pgBackRest running successfully on a production server and a Repo
server (RHEL 9.4, pgBackRest 2.52.1 and EPAS 16.1). This is the first time
I am performing a restore.


[kdiamond] [Bug 493549] New: on high monitor resolution (4k) the mouse sensitivity to "hit" a diamond is a problem

2024-09-23 Thread kk
https://bugs.kde.org/show_bug.cgi?id=493549

Bug ID: 493549
   Summary: on high monitor resolution (4k) the mouse sensitivity
to "hit" a diamond is a problem
Classification: Applications
   Product: kdiamond
   Version: 24.08.1
  Platform: Arch Linux
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: general
  Assignee: majew...@gmx.net
  Reporter: k...@orly.at
CC: kde-games-b...@kde.org
  Target Milestone: ---

When you open the game at a size of about 1100 x 1100 px on a 4k display,
you are nearly unable to select a diamond.
A diamond can only be selected by clicking its inner center, an area of
roughly 5 x 5 px (a guess).





SUMMARY


STEPS TO REPRODUCE
1. zoom the playfield to about 1200x1200 on a 4k monitor
2. try to select a diamond not at the center of the diamond
3. 

OBSERVED RESULT

You can click, but the diamond does not get selected.

EXPECTED RESULT

The diamond gets selected.

SOFTWARE/OS VERSIONS
Windows: 
macOS: 
(available in the Info Center app, or by running `kinfo` in a terminal window)
Linux/KDE Plasma:  EndeavorOS x86_64  Plasma 6.1.5
KDE Plasma Version:  6.10.10-arch-1
KDE Frameworks Version: 
Qt Version: 

ADDITIONAL INFORMATION

-- 
You are receiving this mail because:
You are watching all bug changes.

PgBackRest and WAL archive expiry

2024-09-19 Thread KK CHN
List,

I have successfully configured pgBackRest 2.52.1 on RHEL 9.4 with EPAS
16.1 for a couple of production servers and a remote Repo server.

Everything seems to be working as expected.

My serious concern is the archive directory growing day by day.

1. On the EPAS server I have postgresql.conf with

archive_command = 'pgbackrest --stanza=EMI_Repo archive-push %p && cp %p /data/archive/%f'

The problem is that within a few days the /data/archive folder has grown
to 850 GB of a 2 TB partition.

What is the mechanism to check and expire the WAL archive directory
automatically? How do others control this behaviour, and on what criteria,
so that PITR won't be badly affected if we manually delete WALs from the
archive directory?

Does Postgres or pgBackRest have any command/directive to control the
growth of /data/archive after a considerable amount of time or disk usage,
without affecting PITR (or on any other condition)?
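As far as I understand, pgBackRest's own repository is expired according
to repo1-retention-full / repo1-retention-archive, but the extra copies
made by the 'cp %p /data/archive/%f' half of the archive_command are
outside pgBackRest's control and have to be pruned separately. One sketch,
using the standard pg_archivecleanup utility -- the segment name below is
illustrative only and would normally be taken from the WAL start of the
oldest backup you still want to restore from:

# remove everything in /data/archive older than the given WAL segment
pg_archivecleanup /data/archive 000000010000000A00000042

Run from cron, with the segment derived from 'pgbackrest info', this keeps
the side archive bounded without deleting anything newer than the oldest
retained backup.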

Please shed your expertise to enlighten in this regard for a healthy WAL
retention on the DB server as well as on the RepoServer

Thank you,
Krishane


For more input, from the DB server:

161G    ./edb/as16/tablespace/ERSS
161G    ./edb/as16/tablespace
167G    ./edb/as16
167G    ./edb
854G    ./archive
229M    ./backup
1.1T    .
[root@db1 data]# cd /data/archive/
[root@db1 archive]# du -h
854G    .
[root@db1 archive]#



[root@db1 archive]# df -h
FilesystemSize  Used Avail Use% Mounted on
devtmpfs  4.0M 0  4.0M   0% /dev
tmpfs 7.7G   11M  7.7G   1% /dev/shm
tmpfs 3.1G   28M  3.1G   1% /run
/dev/mapper/rhel_bc68-root   20G  6.6G   14G  33% /
/dev/mapper/rhel_bc68-app   5.0G   68M  4.9G   2% /app

/dev/mapper/rhel_bc68-data   2.0T  1.1T  979G  52% /data
/dev/sda2                     960M  372M  589M  39% /boot
/dev/sda1 599M  7.1M  592M   2% /boot/efi
tmpfs 1.6G   52K  1.6G   1% /run/user/42
tmpfs 1.6G   36K  1.6G   1% /run/user/0
[root@db1 archive]#


archive_mode = on               # enables archiving; off, on, or always
                                # (change requires restart)
                                # (empty string indicates archive_command should be used)
archive_command = 'pgbackrest --stanza=EMI_Repo archive-push %p && cp %p /data/archive/%f'
                                # placeholders: %p = path of file to archive
                                #               %f = file name only





[root@db1 pg_wal]# du -h
20K     ./archive_status
5.1G    .
[root@db1 pg_wal]#


Re: ssh to DB server and su normal users very slow :

2024-09-09 Thread KK CHN
Update: the output of ssh -v root@db_Server_IP from my Windows cmd is
pasted below for more details.



On Mon, Sep 9, 2024 at 4:50 PM KK CHN  wrote:

> List,
>
> I have configured pgbackrest for the DB server and Repo Server(created the
> stanza info check all fine.  in these machines.
>
>
> /var/spool/pgbackrest  shows   the .Okfor each WAL  and   the Repo
> server receiving the archiving of WAL in the archive directory .
>
>
> I didn't  schedule a pgbackrest  back  as of now  due to  an issue I am
> facing as follows.
>
> PROBLEM Statement:
>
> I am facing a delay in ssh  root@dbserver_IP from my Desktop PC.  More
> than a minute to login to the root console from any remote terminal, but
> this issue was not there all these days.
>
>  I have done two changes in the DB server :-
>
> 1.
> pg_hba.conf entry
>
> ie;  changed the entry   #local   all all  trust
>
> To
> #local   all allmd5
>
>
> It already having replication entry as
> local   replication all peer
>
> 2.
> Added a .pgpass entry in theDB user's~/dir/ with the following
>
> [root@db1 ~]# cat /var/lib/edb/.pgpass
> *:*:*:enterprisedb:password
>
>
> Is this causing login delays ?   Local connection asking
> password(pg_hba.conf entry ) and   .pgpass contain the user and password
> for connecting ?
>
>
> 3. Even if I able to login to the DB server from the Remote Repo server
> after a minute or two, in side the DB server doing a #  su
> enterprisedbtaking around 90 to 120 seconds to change the user as
> enterprisedb user ??
>
> Any hints much appreciated ..
>
> Thanks in advance,
> Krishane
>
>
>
> *For more  details   I am pasting the top output ( vCPU 16 nos , RAM
> 16 GB)*
>
> top - 10:11:43 up 5 days, 17:21,  5 users,  load average: 0.97, 1.38, 1.26
> Tasks: 710 total,   1 running, 708 sleeping,   1 stopped,   0 zombie
> %Cpu(s):  1.3 us,  0.6 sy,  0.0 ni, 97.1 id,  0.6 wa,  0.1 hi,  0.3 si,
>  0.0 st
> MiB Mem :  15733.6 total,664.0 free,   6371.1 used,  13237.6 buff/cache
> MiB Swap:   8060.0 total,   7985.1 free, 74.9 used.   9362.4 avail Mem
>
> PID USER  PR  NIVIRTRESSHR S  %CPU  %MEM TIME+
> COMMAND
> 3547252 enterpr+  20   0 4656880 262304 252032 S   8.3   1.6   0:01.97
> edb-postgres
>2588 enterpr+  20   0 4622104  12704  10888 S   2.0   0.1 106:10.00
> edb-postgres
> 3554955 enterpr+  20   0 4661692 632052 621364 S   2.0   3.9   0:00.99
> edb-postgres
> 3555894 enterpr+  20   0 4633432 628388 621056 S   1.3   3.9   0:00.26
> edb-postgres
> 3525520 enterpr+  20   0 465 96 543872 S   1.0   3.4   0:10.82
> edb-postgres
> 3546456 enterpr+  20   0 4627288  40852  38016 S   1.0   0.3   0:00.30
> edb-postgres
> 3554919 enterpr+  20   0 4655376 564024 557020 S   1.0   3.5   0:00.30
> edb-postgres
> 3555796 enterpr+  20   0 4635024 565716 556840 S   1.0   3.5   0:00.22
> edb-postgres
> 3556084 enterpr+  20   0 4653424  59156  51968 S   1.0   0.4   0:00.06
> edb-postgres
> 3525597 enterpr+  20   0 4627444  44052  41088 S   0.7   0.3   0:00.47
> edb-postgres
> 377 root   0 -20   0  0  0 I   0.3   0.0   2:43.11
> kworker/5:1H-kblockd
> 2923344 enterpr+  20   0 4625236 225176 223104 S   0.3   1.4   1:23.93
> edb-postgres
> 3525722 enterpr+  20   0 4627328  99220  96128 S   0.3   0.6   0:01.99
> edb-postgres
> 3555151 root  20   0  226580   4864   3456 R   0.3   0.0   0:00.15 top
> 3555807 enterpr+  20   0 4627444 350228 347136 S   0.3   2.2   0:00.03
> edb-postgres
> 3556023 enterpr+  20   0 4653636  60052  52608 S   0.3   0.4   0:00.15
> edb-postgres
> 3556026 enterpr+  20   0 4653424  59796  52608 S   0.3   0.4   0:00.22
> edb-postgres
> 3556074 enterpr+  20   0 4653448  59540  52224 S   0.3   0.4   0:00.11
> edb-postgres
> 3556075 enterpr+  20   0 4653372  59412  52224 S   0.3   0.4   0:00.18
> edb-postgres
>
>
> and  # ps -ax   shows
> [root@db1 ~]# ps -ax |grep "idle"
>
> 3511515 ?I  0:00 [kworker/5:0-inet_frag_wq]
> 3512991 ?Ss 0:00 postgres: enterprisedb postgres
> 10.21.134.205(56754) idle
> 3513003 ?Ss 0:00 postgres: enterprisedb er_db
> 10.21.13.205(56770) idle
> 3513005 ?Ss 0:00 postgres: enterprisedb tp_db
> 10.21.13.205(56772) idle
> 3513267 ?Ss 0:00 postgres: enterprisedb er_db
> 10.23.0.203(39262) idle
> 3513476 ?Ss 0:00 postgres: enterprisedb er_db
> 10.21.13.205(56839) idle
> 3513704 ?Ss 0:00 postgres: enterprisedb mt_db
> 10.21.13.202(56608) idle
> 3513729 ?   

ssh to DB server and su normal users very slow :

2024-09-09 Thread KK CHN
List,

I have configured pgbackrest for the DB server and the Repo server
(created the stanza; info and check are all fine on these machines).


/var/spool/pgbackrest shows the .ok file for each WAL, and the Repo server
is receiving the archived WAL in its archive directory.


I haven't scheduled a pgbackrest backup as of now, due to an issue I am
facing, as follows.

PROBLEM Statement:

I am facing a delay in ssh root@dbserver_IP from my desktop PC: it takes
more than a minute to log in to the root console from any remote terminal,
and this issue was not there before.

I have made two changes on the DB server:

1.
pg_hba.conf entry

i.e. changed the entry   #local   all   all   trust

to
#local   all   all   md5


It already has a replication entry:
local   replication   all   peer

2.
Added a .pgpass entry in the DB user's home directory with the following:

[root@db1 ~]# cat /var/lib/edb/.pgpass
*:*:*:enterprisedb:password


Is this causing the login delays? The local connection asks for a password
(per the pg_hba.conf entry) and .pgpass contains the user and password for
connecting.


3. Even when I am able to log in to the DB server from the remote Repo
server after a minute or two, inside the DB server running
# su enterprisedb
takes around 90 to 120 seconds to switch to the enterprisedb user.
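A couple of standard commands that may help narrow down where the delay is
spent -- ordinary diagnostics rather than a fix, assuming nothing beyond
the base OS tools:

time su - enterprisedb -c true     # measures the su delay itself
ssh -vvv root@dbserver_IP          # the verbose log shows which auth/session step stalls

If the pause sits in the PAM/session phase rather than around key
exchange, the usual suspects are name-service or PAM lookups on the DB
host rather than anything in PostgreSQL or .pgpass.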

Any hints much appreciated ..

Thanks in advance,
Krishane



For more details, I am pasting the top output (16 vCPUs, 16 GB RAM):

top - 10:11:43 up 5 days, 17:21,  5 users,  load average: 0.97, 1.38, 1.26
Tasks: 710 total,   1 running, 708 sleeping,   1 stopped,   0 zombie
%Cpu(s):  1.3 us,  0.6 sy,  0.0 ni, 97.1 id,  0.6 wa,  0.1 hi,  0.3 si,
 0.0 st
MiB Mem :  15733.6 total,664.0 free,   6371.1 used,  13237.6 buff/cache
MiB Swap:   8060.0 total,   7985.1 free, 74.9 used.   9362.4 avail Mem

PID USER  PR  NIVIRTRESSHR S  %CPU  %MEM TIME+
COMMAND
3547252 enterpr+  20   0 4656880 262304 252032 S   8.3   1.6   0:01.97
edb-postgres
   2588 enterpr+  20   0 4622104  12704  10888 S   2.0   0.1 106:10.00
edb-postgres
3554955 enterpr+  20   0 4661692 632052 621364 S   2.0   3.9   0:00.99
edb-postgres
3555894 enterpr+  20   0 4633432 628388 621056 S   1.3   3.9   0:00.26
edb-postgres
3525520 enterpr+  20   0 465 96 543872 S   1.0   3.4   0:10.82
edb-postgres
3546456 enterpr+  20   0 4627288  40852  38016 S   1.0   0.3   0:00.30
edb-postgres
3554919 enterpr+  20   0 4655376 564024 557020 S   1.0   3.5   0:00.30
edb-postgres
3555796 enterpr+  20   0 4635024 565716 556840 S   1.0   3.5   0:00.22
edb-postgres
3556084 enterpr+  20   0 4653424  59156  51968 S   1.0   0.4   0:00.06
edb-postgres
3525597 enterpr+  20   0 4627444  44052  41088 S   0.7   0.3   0:00.47
edb-postgres
377 root   0 -20   0  0  0 I   0.3   0.0   2:43.11
kworker/5:1H-kblockd
2923344 enterpr+  20   0 4625236 225176 223104 S   0.3   1.4   1:23.93
edb-postgres
3525722 enterpr+  20   0 4627328  99220  96128 S   0.3   0.6   0:01.99
edb-postgres
3555151 root  20   0  226580   4864   3456 R   0.3   0.0   0:00.15 top
3555807 enterpr+  20   0 4627444 350228 347136 S   0.3   2.2   0:00.03
edb-postgres
3556023 enterpr+  20   0 4653636  60052  52608 S   0.3   0.4   0:00.15
edb-postgres
3556026 enterpr+  20   0 4653424  59796  52608 S   0.3   0.4   0:00.22
edb-postgres
3556074 enterpr+  20   0 4653448  59540  52224 S   0.3   0.4   0:00.11
edb-postgres
3556075 enterpr+  20   0 4653372  59412  52224 S   0.3   0.4   0:00.18
edb-postgres


and  # ps -ax   shows
[root@db1 ~]# ps -ax |grep "idle"

3511515 ?I  0:00 [kworker/5:0-inet_frag_wq]
3512991 ?Ss 0:00 postgres: enterprisedb postgres
10.21.134.205(56754) idle
3513003 ?Ss 0:00 postgres: enterprisedb er_db
10.21.13.205(56770) idle
3513005 ?Ss 0:00 postgres: enterprisedb tp_db
10.21.13.205(56772) idle
3513267 ?Ss 0:00 postgres: enterprisedb er_db
10.23.0.203(39262) idle
3513476 ?Ss 0:00 postgres: enterprisedb er_db
10.21.13.205(56839) idle
3513704 ?Ss 0:00 postgres: enterprisedb mt_db
10.21.13.202(56608) idle
3513729 ?Ss 0:00 postgres: enterprisedb er_db
10.23.0.203(44926) idle
3514113 ?Ss 0:00 postgres: enterprisedb mt_db
10.21.13.202(53743) idle
3514374 ?Ss 0:00 postgres: enterprisedb mt_db
10.21.13.202(58623) idle
3514397 pts/1T  0:00 top
3515012 ?Ss 0:00 postgres: enterprisedb mt_db
10.21.13.202(58686) idle
3515088 ?Ss 0:00 postgres: enterprisedb mgt_db
10.21.13.202(58586) idle
3515942 ?Ss 0:00 postgres: enterprisedb er_db
10.23.0.203(64844) idle
3515987 ?Ss 0:00 postgres: enterprisedb er_db
10.23.0.203(27190) idle
3516230 ?Ss 0:00 postgres: enterprisedb postgres
10.21.13.202(60354) idle
3516655 ?Ss 0:00 postgres: enterprisedb er_db
10.21.13.205(57348) i

PgBackRest full backup first time : Verification

2024-08-29 Thread KK CHN
List,

Which directories and files will pgBackRest back up to the Repo server
when performing a backup? (I ask because of limited infrastructure: 8 Mbps
network bandwidth between the DB server and the Repo server.)

1. I am not sure the "command finished" message pgbackrest printed on the
console is trustworthy, because of the following facts:
 i)   My ILL SSL-VPN connection (8 Mbps link) reset 3 to 4 times during the
backup process, which spanned 12 hours (I reissued the pgbackrest backup
command 3 to 4 times in this span),
 ii)  the EPAS installation dir /data/edb/as16/data shows only 4.5 G,
 iii) /data/edb/as16/tablespace shows 149 G,
 iv)  the /data dir shows 537 G, and
 v)   /data/edb/as16/tablespace shows 149 G in the du -h output.


Kindly share your thoughts: did the pgbackrest backup command (on its
first run) perform as expected? (It shows as finished on the command line,
and subsequent reissues of the same command finish in seconds.)

I can't restore back to the DB server right now to test it, as it is a
production server and granting downtime is not immediately possible.



Thank you,
Krishane



[root@db1 data]# pwd
/data
[root@db1 data]# du -h
returns   537 G

and


[root@db1 data]# cd /data/edb/as16/data/
[root@db1 data]# pwd
/data/edb/as16/data
[root@db1 data]# du -h

Returns 4.5G




Query 1: What exactly will the initial pgBackRest full backup copy?
The /data/edb/as16/data dir of 4.5 G?

Or the /data dir of 537 G? // I think it won't copy the latter?

My initially issued pgbackrest command finished, with the following info
command output.

I performed this:
# sudo -u postgres pgbackrest --stanza=Repo --log-level-console=info backup

Once it finished I reissued the command two more times (out of suspicion,
as I had left it overnight to complete); that's why I think two incr
backups are showing here.

and

[root@dbtest ~]# sudo -u postgres pgbackrest info
stanza: Repo
status: ok
cipher: aes-256-cbc

db (current)
wal archive min/max (16):
00010083005F/000100870019

full backup: 20240829-105625F
timestamp start/stop: 2024-08-29 18:53:32+05:30 / 2024-08-29
20:24:13+05:30
wal start/stop: 000100860063 /
000100860074
database size: 146.9GB, database backup size: 146.9GB
repo1: backup size: 20.6GB

### Where does this 146.9 GB come from? (Please find the file system du -h
output pasted at the bottom of this post.)
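As far as I understand, pgBackRest backs up the whole cluster: the data
directory plus every tablespace reachable through pg_tblspc, so the tree
under /data/edb/as16/tablespace is included even though it sits outside
the 4.5 G data directory. A sketch of cross-checking what the cluster
itself thinks it occupies, run as a superuser on the EPAS server:

SELECT pg_size_pretty(sum(pg_database_size(datname))) AS total_db_size
FROM pg_database;

SELECT spcname, pg_size_pretty(pg_tablespace_size(oid)) FROM pg_tablespace;

If those numbers land near the 146.9 GB reported by pgbackrest info, the
backup covered what it should; anything else under /data that is neither
the data directory nor a tablespace is not part of the cluster and is not
backed up.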


incr backup: 20240829-105625F_20240830-084303I
timestamp start/stop: 2024-08-30 08:43:03+05:30 / 2024-08-30
08:50:46+05:30
wal start/stop: 00010087000E /
000100870010
database size: 148GB, database backup size: 4.7GB
repo1: backup size: 170.4MB
backup reference list: 20240829-105625F

incr backup: 20240829-105625F_20240830-090729I
timestamp start/stop: 2024-08-30 09:07:29+05:30 / 2024-08-30
09:12:50+05:30
wal start/stop: 000100870015 /
000100870016
database size: 148.1GB, database backup size: 1GB
repo1: backup size: 13.2MB
backup reference list: 20240829-105625F,
20240829-105625F_20240830-084303I
[root@dbtest ~]#




For more information about the DB cluster file system, find the paste below:
[root@db1 data]# pwd
/data

[root@db1 data]# cd /data/edb/as16/data
[root@db1 data]# pwd
/data/edb/as16/data
[root@db1 data]# du -h
20K     ./pg_wal/archive_status
4.1G    ./pg_wal
1.5M    ./global
0       ./pg_commit_ts
0       ./pg_dynshmem
0       ./pg_notify
0       ./pg_serial
0       ./pg_snapshots
56K     ./pg_subtrans
0       ./pg_twophase
3.0M    ./pg_multixact/members
1.2M    ./pg_multixact/offsets
4.1M    ./pg_multixact
15M     ./base/1
15M     ./base/4
15M     ./base/5
15M     ./base/15355
15M     ./base/42613
28M     ./base/43102
0       ./base/pgsql_tmp
40K     ./base/44497
100M    ./base
0       ./pg_replslot
0       ./pg_tblspc
0       ./pg_stat
0       ./pg_stat_tmp
48M     ./pg_xact
0       ./pg_logical/snapshots
0       ./pg_logical/mappings
4.0K    ./pg_logical
280M    ./log
0       ./dbms_pipe
4.5G    .


[root@db1 data]# cd pg_tblspc/
[root@db1 pg_tblspc]# du -h
0   .
[root@db1 pg_tblspc]# ls -al
total 4
drwxr-xr-x.  2 enterprisedb enterprisedb  110 Jun 14 14:17 .
drwx--. 21 enterprisedb enterprisedb 4096 Aug 30 00:00 ..
lrwxrwxrwx.  1 enterprisedb enterprisedb   38 Jun 14 14:17 16388 ->
/data/edb/as16/tablespace/ESS/SER/DAT
lrwxrwxrwx.  1 enterprisedb enterprisedb   38 Jun 14 14:17 16389 ->
/data/edb/as16/tablespace/ESS/SER/IDX
lrwxrwxrwx.  1 enterprisedb enterprisedb   38 Jun 14 14:17 16390 ->
/data/edb/as16/tablespace/ESS/GIS/DAT
lrwxrwxrwx.  1 enterprisedb enterprisedb   38 Jun 14 14:17 16391 ->
/data/edb/as16/tablespace/ESS/GIS/IDX
lrwxrwxrwx.  1 enterprisedb enterprisedb   38 Jun 14 14:17 16392 ->
/data/edb/as16/tablespace/ESS/RPT/DAT
lr

PgBackRest Ideal N/W need to provisioned ?

2024-08-29 Thread KK CHN
List,

I am doing a full backup using pgBackRest from a production server to a
Repo server.

The connection between the production DB server and the remote Repo server
is an IPSec VPN over ILL (8 Mbps link).

I understand that the 8 Mbps link is the bottleneck between the servers
(the server NICs and the switch are 10 Gbps).

The current size of the production DB data dir is 500 GB, and it can grow
to 1-2 TB over the coming year.

Query: I have started the backup command and it is running (it may go on
for hours or days as the link speed is minimal; so far only 25 GB of the
500 GB has been copied, from 10:55 hrs to 22:00 hrs).
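Back-of-the-envelope, before compression: 8 Mbps is roughly 1 MB/s, i.e.
about 3.5 GB/hour, so 500 GB needs on the order of 140 hours (~6 days) of
sustained transfer -- roughly consistent with the ~25 GB moved in ~11
hours above. To finish a 500 GB full backup in, say, a 12-hour window, the
raw link would need about 500 GB / 12 h ≈ 12 MB/s ≈ 100 Mbps. pgBackRest
compression and block/bundle storage can shrink the bytes actually sent
considerably -- the earlier post here showed a 146.9 GB database stored as
a 20.6 GB repo backup -- so the real requirement sits somewhere between
those two figures.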


What should the ideal connectivity be, without over-provisioning the
bandwidth, so that a full backup finishes within an acceptable time?

I mean the connectivity between the production server and the Repo server:
 i)   ILL with SSL VPN between the servers,
 ii)  or MPLS between the servers?
 iii) What is the ideal bandwidth required, considering the data dir size
mentioned above, to complete a reasonable full backup within a reasonable
time window?
 iv)  Without over- or under-provisioning, what connectivity and bandwidth
would avoid network errors and intermittent link disconnections, if any?

 v)   We need to perform at least one full backup per week.

Any suggestions much appreciated

Thank you ,
Krishane


Re: PgBackRest Full backup and N/W reliability

2024-08-29 Thread KK CHN
On Thu, Aug 29, 2024 at 6:54 PM Greg Sabino Mullane 
wrote:

> On Thu, Aug 29, 2024 at 2:21 AM KK CHN  wrote:
>
>> I am doing a full backup  using PgBackRest from a production server to
>> Reposerver.
>>
> ...
>
>> If so, does the backup process start  again from  scratch ?   or it
>> resumes from  where the backup process is stopped   ?
>>
>
> It resumes. You will see a message like this:
>
> WARN: resumable backup 20240829-091727F of same type exists -- invalid
> files will be removed then the backup will resume
>
> Any suggestions much appreciated
>>
>
> Boost your process-max as high as you are able to speed up your backup
> time.
>
>
What would be the ideal process-max number to use?

Once I update the process-max parameter in pgbackrest.conf, do I need to
stop and start pgBackRest?

Or will just editing pgbackrest.conf on the fly be enough for the increased
process-max value to take effect?

Is there a limit on the process-max setting?
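For what it's worth: pgBackRest has no long-running daemon -- each
backup/restore invocation reads pgbackrest.conf when it starts -- so an
edited value applies to the next command you run, while an already-running
backup keeps the value it started with. The option can also be given per
command, e.g.:

sudo -u postgres pgbackrest --stanza=Repo --process-max=8 backup

The value 8 is only an example; it should stay within what the 8 Mbps link
and the DB host's CPU/IO can actually sustain.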

> Cheers,
> Greg
>
>
>


PgBackRest client_loop: send disconnect: Connection reset

2024-08-29 Thread KK CHN
List,

I am facing an error: the connection was reset while pgBackRest was
backing up the DB server to the Repo server.

Reissuing the pgbackrest backup command then fails as follows:

"Unable to acquire lock on file '/tmp/pgbackrest/Repo-backup.lock':
Resource temporarily unavailable
HINT: is another pgBackRest process running?"


After multiple attempts, over 8 to 10 minutes, the backup command from the
Repo server starts working again. How can I avoid this, and what is the
exact reason for "Resource temporarily unavailable" / "HINT: is another
pgBackRest process running?"?
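What I suspect, and would like confirmed: when the SSH session carrying
the backup is reset, the pgbackrest processes it spawned can keep running
for a while and keep holding /tmp/pgbackrest/Repo-backup.lock, so a new
backup cannot take the lock until they exit. A sketch of what to check
before retrying:

ps -ef | grep '[p]gbackrest'   # any leftover backup processes still running?
ls -l /tmp/pgbackrest/         # the *-backup.lock file they hold

Once the leftover processes have finished or been terminated, the lock is
released and a new backup -- which resumes the interrupted one -- can
start.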


Please advise ..

After the connection reset, issuing the backup command gives the output below.


##

[root@dbtest ~]# sudo -u postgres pgbackrest --stanza=Repo
--log-level-console=info backup
2024-08-29 16:38:13.922 P00   INFO: backup command begin 2.52.1: --delta
--exec-id=658924-527c843b --log-level-console=info --log-level-file=debug
--pg1-host=10.15.0.202 --pg1-host-user=enterprisedb
--pg1-path=/data/edb/as16/data --pg-version-force=16 --process-max=5
--repo1-block --repo1-bundle --repo1-cipher-pass=
--repo1-cipher-type=aes-256-cbc --repo1-path=/data/DB_BKUPS
--repo1-retention-diff=2 --repo1-retention-full=2 --stanza=Repo --start-fast
WARN: no prior backup exists, incr backup has been changed to full
2024-08-29 16:38:16.745 P00   INFO: execute non-exclusive backup start:
backup begins after the requested immediate checkpoint completes
2024-08-29 16:38:18.003 P00   INFO: backup start archive =
00010085009A, lsn = 85/9AD8
2024-08-29 16:38:18.003 P00   INFO: check archive for prior segment
000100850099
WARN: resumable backup 20240829-105625F of same type exists -- invalid
files will be removed then the backup will resume
client_loop: send disconnect: Connection reset


[root@dbtest ~]# sudo -u postgres pgbackrest --stanza=Repo
--log-level-console=info backup
2024-08-29 18:45:06.630 P00   INFO: backup command begin 2.52.1: --delta
--exec-id=691718-6c37ff5c --log-level-console=info --log-level-file=debug
--pg1-host=10.15.0.202 --pg1-host-user=enterprisedb
--pg1-path=/data/edb/as16/data --pg-version-force=16 --process-max=5
--repo1-block --repo1-bundle --repo1-cipher-pass=
--repo1-cipher-type=aes-256-cbc --repo1-path=/data/DB_BKUPS
--repo1-retention-diff=2 --repo1-retention-full=2 --stanza=Repo --start-fast
ERROR: [050]: unable to acquire lock on file
'/tmp/pgbackrest/Repo-backup.lock': Resource temporarily unavailable
   HINT: is another pgBackRest process running?

2024-08-29 18:45:06.631 P00   INFO: backup command end: aborted with
exception [050]
[root@dbtest ~]# sudo -u postgres pgbackrest --stanza=Repo
--log-level-console=info backup
2024-08-29 18:45:14.531 P00   INFO: backup command begin 2.52.1: --delta
--exec-id=691730-06e11989 --log-level-console=info --log-level-file=debug
--pg1-host=10.15.0.202 --pg1-host-user=enterprisedb
--pg1-path=/data/edb/as16/data --pg-version-force=16 --process-max=5
--repo1-block --repo1-bundle --repo1-cipher-pass=
--repo1-cipher-type=aes-256-cbc --repo1-path=/data/DB_BKUPS
--repo1-retention-diff=2 --repo1-retention-full=2 --stanza=Repo --start-fast
ERROR: [050]: unable to acquire lock on file
'/tmp/pgbackrest/Repo-backup.lock': Resource temporarily unavailable
   HINT: is another pgBackRest process running?
2024-08-29 18:45:14.531 P00   INFO: backup command end: aborted with
exception [050]

###
After 10 minutes or so I issued the command again, and this time it starts
backing up, as follows:




2024-08-29 18:48:19.561 P00   INFO: backup command end: aborted with
exception [050]
[root@dbtest ~]# sudo -u postgres pgbackrest --stanza=Repo
--log-level-console=info backup
2024-08-29 18:52:33.834 P00   INFO: backup command begin 2.52.1: --delta
--exec-id=693517-4994fba1 --log-level-console=info --log-level-file=debug
--pg1-host=10.15.0.202 --pg1-host-user=enterprisedb
--pg1-path=/data/edb/as16/data --pg-version-force=16 --process-max=5
--repo1-block --repo1-bundle --repo1-cipher-pass=
--repo1-cipher-type=aes-256-cbc --repo1-path=/data/DB_BKUPS_NGERSS
--repo1-retention-diff=2 --repo1-retention-full=2 --stanza=Repo --start-fast
WARN: no prior backup exists, incr backup has been changed to full
2024-08-29 18:52:36.813 P00   INFO: execute non-exclusive backup start:
backup begins after the requested immediate checkpoint completes
2024-08-29 18:52:37.788 P00   INFO: backup start archive =
000100860063, lsn = 86/6328
2024-08-29 18:52:37.788 P00   INFO: check archive for prior segment
000100860062
WARN: resumable backup 20240829-105625F of same type exists -- invalid
files will be removed then the backup will resume


Thank you,
Krishane


PgBackRest Full backup and N/W reliability

2024-08-28 Thread KK CHN
List,

I am doing a full backup using pgBackRest from a production server to a
Repo server.

The connection between the production DB server and the remote Repo server
is an IPSec VPN over ILL (8 Mbps link).

I understand that the 8 Mbps link is the bottleneck between the servers
(the server NICs and switch are 10 Gbps).

Query: I have started the backup command and it is running (it may go on
for hours or days as the link speed is minimal). If the link disconnects
or a network error happens before the backup command completes, the
obvious option is to reissue the backup command.

If so, does the backup process start again from scratch, or does it resume
from where it stopped?

If it starts from scratch, I am afraid I will never be able to complete
the initial full backup :(

Or is there a workaround if the network connectivity is lost in between?


Any suggestions much appreciated

Thank you ,
Krishane

[root@dbtest pgbackrest]# sudo -u postgres pgbackrest --stanza=Repo
--log-level-console=info backup

2024-08-29 10:55:27.729 P00   INFO: backup command begin 2.52.1: --delta
--exec-id=523103-56943986 --log-level-console=info --log-level-file=debug
--pg1-host=10.15.0.202 --pg1-host-user=enterprisedb
--pg1-path=/data/edb/as16/data --pg-version-force=16 --process-max=5
--repo1-block --repo1-bundle --repo1-cipher-pass=
--repo1-cipher-type=aes-256-cbc --repo1-path=/data/DB_BKUPS
--repo1-retention-diff=2 --repo1-retention-full=2 --stanza=Repo --start-fast
WARN: no prior backup exists, incr backup has been changed to full
2024-08-29 10:55:30.589 P00   INFO: execute non-exclusive backup start:
backup begins after the requested immediate checkpoint completes
2024-08-29 10:55:31.543 P00   INFO: backup start archive =
00010085004C, lsn = 85/4C0007F8
2024-08-29 10:55:31.543 P00   INFO: check archive for prior segment
00010085004B


ON Repo Server:
[root@dbtest backup]# date
Thursday 29 August 2024 10:58:08 AM IST
[root@dbtest backup]# du -h
165M    ./Repo
165M    .

[root@dbtest backup]# date
Thursday 29 August 2024 11:37:03 AM IST
[root@dbtest backup]# du -h
1.9G    ./Repo
1.9G    .

On the production server, the /data/edb/as16/data directory size is 500 GB.


Re: PgbackRest : Stanza creation fails on DB Server and Repo Server

2024-08-28 Thread KK CHN
Thank you all for the great help ..

I couldn't get a chance to restart the DB cluster after making the
highlighted changes (it is a production server; downtime has been
requested). Correct me if I am wrong. After editing pg_hba.conf on the DB
server as follows:
local   all all trust
# IPv4 local connections:
hostall all 127.0.0.1/32md5
hostall all 10.0.0.0/8  md5



# IPv6 local connections:
hostall all ::1/128 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication all  peer
#local   replication all md5
hostreplication all 127.0.0.1/32md5
hostreplication all ::1/128 md5
hostreplication all 10.0.0.0/8  md5
[root@db1 edb]#
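One note on the restart concern: pg_hba.conf changes only need a
configuration reload, not a full cluster restart, so they can be applied
without downtime -- a sketch, using the port and OS user shown elsewhere
in this thread:

sudo -u enterprisedb psql -p 5444 -d edb -c "SELECT pg_reload_conf();"

pg_ctl reload -D /data/edb/as16/data achieves the same thing from the shell.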




But a workaround seems to have worked, as follows:

[root@db1 edb]# cat .pgpass
*:*:replication:enterprisedb:my_password

changed to

[root@db1 edb]# cat .pgpass
*:*:*:enterprisedb:password


For those struggling with this issue, this may help:

In the DB server's enterprisedb user's home directory (here it is
/var/lib/edb/):

[root@db1 edb]# cat .pgpass
*:*:*:enterprisedb:password
[root@db1 edb]# ls -al .pgpass
-rw-------. 1 enterprisedb enterprisedb 28 Aug 29 09:26 .pgpass
[root@db1 edb]#




On Wed, Aug 28, 2024 at 8:28 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wednesday, August 28, 2024, Amitabh Kant  wrote:
>
>> On Wed, Aug 28, 2024 at 8:00 PM David G. Johnston <
>> david.g.johns...@gmail.com> wrote:
>>
>>> On Wednesday, August 28, 2024, KK CHN  wrote:
>>>
>>>>
>>>> and I have   .pgpass in DB server as
>>>>
>>>
>>> You assumed this mattered but I see no mention that pgBackRest consults
>>> this file.
>>>
>>> It seems to require the local entry in pg_hba.conf to use peer
>>> authentication.
>>>
>>> David J.
>>>
>>>
>> Section 21.4 on this page (
>> https://pgbackrest.org/user-guide.html#introduction )  does seem to
>> mention the use of .pgpass file. I have no idea about the actual problem
>> though.
>>
>
> Yes, postgres itself uses .pgpass so when you configure streaming
> replication between two servers, something that is doable regardless of
> using pgBackRest, the server-to-server connection can utilize .pgpass.
>
> David J.
>


PgbackRest stanza creation : on DB server or both DB and Repo server ?

2024-08-28 Thread KK CHN
List,

I am configuring pgBackRest (RHEL 9.4, EPAS 16, pgBackRest 2.52.1) on two
servers (the database cluster server and the Repo server).

Query 1: Do I need to create the stanza on both servers (on the DB server
with EPAS 16 user enterprisedb, as well as on the Repo server with user
postgres)?

Is creating the stanza only on the DB server, but not on the Repo server,
sufficient? Or do I have to create the stanza on both servers, as follows?

On the DB server (RHEL 9.4, EPAS 16, DB user enterprisedb):
]# sudo -u enterprisedb pgbackrest --stanza=Repo --log-level-console=info stanza-create

Initially it complained about another pgbackrest running and aborted the
stanza creation.
On the second attempt the stanza creation succeeded without complaining
about anything :) I couldn't understand why it complained about another
pgbackrest running in the first place.
(Before running stanza creation on the DB server, I had first tried the
stanza creation on the Repo server as:
]# sudo -u postgres pgbackrest --stanza=Repo --log-level-console=info stanza-create )



On the Repo server (user postgres; no PostgreSQL is installed there, the
postgres user exists only for backup purposes):

]# sudo -u postgres pgbackrest --stanza=Repo --log-level-console=info stanza-create

No success (on multiple stanza-creation attempts): it always aborted with a
message that another pgbackrest is running.


So does stanza creation have to be performed on either the DB server or the
Repo server, but not on both?
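For what it's worth, once a stanza exists its configuration can be
verified from either host with the check command -- a sketch using the
stanza name from this thread:

sudo -u postgres pgbackrest --stanza=Repo check

The "another pgbackrest is running" message comes from the same per-stanza
lock file mentioned in the other thread; it usually just means a previous
stanza-create or backup invocation had not finished yet.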

Kindly shed some light on this.

Thank you,
Krishane


PgbackRest : Stanza creation fails on DB Server and Repo Server

2024-08-28 Thread KK CHN
I am trying a pgbackrest configuration on a production server and a Repo
server (RHEL 9.4, EPAS 16, pgbackrest 2.52.1).

I have configured pgbackrest.conf on both machines as per the official
docs.

From both machines, passwordless auth works for the DB user (enterprisedb)
and the repo user (postgres).

When I create the stanza, on both the DB server and the Repo server it
fails with

connection to server socket failed: no password supplied


Here are my configs on both the primary and the repo server.

DB Server.
[root@db1 ~]# cat /etc/pgbackrest/pgbackrest.conf
[Repo]
pg1-path=/data/edb/as16/data
pg1-port=5444
pg1-user=enterprisedb
pg-version-force=16
pg1-database=edb

[global]
repo1-host=10.255.0.40
repo1-host-user=postgres
archive-async=y
spool-path=/var/spool/pgbackrest
log-level-console=info
log-level-file=debug
delta=y

[global:archive-get]
process-max=2

[global:archive-push]
process-max=4
[root@db1 ~]#





Reposerver
[root@dbtest ~]# cat /etc/pgbackrest/pgbackrest.conf
[Repo]
pg1-host=10.15.0.202
pg1-host-user=enterprisedb
pg1-path=/data/edb/as16/data
pg-version-force=16

[global]
repo1-path=/data/DB_BKUPS
repo1-block=y
repo1-bundle=y
repo1-retention-full=2
repo1-retention-diff=2
repo1-cipher-type=aes-256-cbc
repo1-cipher-pass=acbd
process-max=5
log-level-console=info
log-level-file=debug
start-fast=y
delta=y

[global:archive-push]
compress-level=3
[root@dbtest ~]#



On the DB server, stanza creation fails with:
[root@db1 ~]# sudo -u enterprisedb pgbackrest --stanza=Repo
--log-level-console=info stanza-create
2024-08-28 19:30:31.518 P00   INFO: stanza-create command begin 2.52.1:
--exec-id=4062179-ecf39176 --log-level-console=info --log-level-file=debug
--pg1-database=edb --pg1-path=/data/edb/as16/data --pg1-port=5444
--pg1-user=enterprisedb --pg-version-force=16 --repo1-host=10.255.0.40
--repo1-host-user=postgres --stanza=Repo
WARN: unable to check pg1: [DbConnectError] unable to connect to
'dbname='edb' port=5444 user='enterprisedb'': connection to server on
socket "/tmp/.s.PGSQL.5444" failed: fe_sendauth: no password supplied
ERROR: [056]: unable to find primary cluster - cannot proceed
   HINT: are all available clusters in recovery?
2024-08-28 19:30:31.523 P00   INFO: stanza-create command end: aborted with
exception [056]
[root@db1 ~]#








On the Repo server, stanza creation fails as follows:


[root@dbtest ~]# sudo -u postgres pgbackrest --stanza=Repo
--log-level-console=info stanza-create
2024-08-28 19:21:10.958 P00   INFO: stanza-create command begin 2.52.1:
--exec-id=350565-6e032daa --log-level-console=info --log-level-file=debug
--pg1-host=10.15.0.202 --pg1-host-user=enterprisedb
--pg1-path=/data/edb/as16/data --pg-version-force=16
--repo1-cipher-pass= --repo1-cipher-type=aes-256-cbc
--repo1-path=/data/DB_BKUPS --stanza=Repo

WARN: unable to check pg1: [DbConnectError] raised from remote-0 ssh
protocol on '10.15.0.202': unable to connect to 'dbname='edb' port=5444
user='enterprisedb'': connection to server on socket "/tmp/.s.PGSQL.5444"
failed: fe_sendauth: no password supplied
ERROR: [056]: unable to find primary cluster - cannot proceed
   HINT: are all available clusters in recovery?
2024-08-28 19:21:12.462 P00   INFO: stanza-create command end: aborted with
exception [056]
[root@dbtest ~]#




My DB server pg_hba.conf is as follows:


# "local" is for Unix domain socket connections only
local   all all md5
# IPv4 local connections:
hostall all 127.0.0.1/32md5
hostall all 10.0.0.0/8  md5



# IPv6 local connections:
hostall all ::1/128 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication all md5
hostreplication all 127.0.0.1/32md5
hostreplication all ::1/128 md5
hostreplication all 10.0.0.0/8  md5
[root@db1 ~]#




and I have .pgpass on the DB server as:

[root@db1 ~]# cat /var/lib/edb/.pgpass
*:*:replication:enterprisedb:my_secret_password
[root@db1 ~]# ls -al /var/lib/edb/.pgpass
-rw---. 1 enterprisedb enterprisedb 38 Aug 28 19:01 /var/lib/edb/.pgpass
[root@db1 ~]#

Why does it complain about no password being supplied?
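In case it helps others hitting the same thing: a .pgpass line is matched
as hostname:port:database:username:password, so the entry above only
matches connections to the database named "replication", while pgBackRest
connects with dbname=edb over the local socket -- which the pg_hba.conf
above maps to md5, hence the password requirement. A line that would match
that connection, using the values from this post, would look like:

*:5444:edb:enterprisedb:my_secret_password

-- or the catch-all *:*:*:enterprisedb:... form used as the workaround in
the follow-up post.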


Any help is much appreciated.

Krishane


Re: Pgbackrest specifying the default DB necessary/correct way ?

2024-08-28 Thread KK CHN
Very helpful.

On Wed, Aug 28, 2024 at 5:51 PM Greg Sabino Mullane 
wrote:

> On Wed, Aug 28, 2024 at 1:39 AM KK CHN  wrote:
>
>> In this DB serverI have other databases  than the default  "edb"
>> database. Specifying the above line aspg1-database=edb   // I am
>> not sure this line is necessary  or not ?
>>
>
> The pgbackrest process needs to connect to the database, which means it
> needs a user and database. You need this variable if you do not have the
> default database, "postgres". If you have a database named postgres, you
> can leave this out. Otherwise, yes, it is necessary.
>
>  pg1-database=edb // specifying like this, will it block other databases
>> on this server to get backed up ?   IF yes how can I overcome this ?
>>
>
> pgBackRest works on a cluster level, so *all* the databases are backed up.
> Indeed, it is not possible to only backup some of the databases. It's the
> whole cluster.
>
>  ( I am just learning and exploring PgbackRest)  found online some
>> reference configurations so using like this )
>
>
> Probably best to stick to the official docs; this section in particular is
> worth a read:
>
> https://pgbackrest.org/user-guide-rhel.html
>
> Cheers,
> Greg
>
>
>


Pgbackrest specifying the default DB necessary/correct way ?

2024-08-27 Thread KK CHN
List,

I have configured the pgbackrest on a live DB server  and  a Repo Server. (
EPAS 16, RHEL 9.4 and Pgbackrest 2.52.1 )

On DB Server I have
##
[root@db1 ~]# cd
[root@db1 ~]# cat /etc/pgbackrest/pgbackrest.conf
[Demo_Repo]
pg1-path=/data/edb/as16/data
pg1-port=5444
pg1-user=enterprisedb
pg-version-force=16
pg1-database=edb        ## Query 1

[global]
repo1-host=10.255.0.40
repo1-host-user=postgres
archive-async=y
spool-path=/var/spool/pgbackrest
log-level-console=info
log-level-file=debug
delta=y

[global:archive-get]
process-max=2

[global:archive-push]
process-max=4
##

## Query 1:

On this DB server I have other databases besides the default "edb"
database. I am not sure whether specifying the line above as
pg1-database=edb is necessary or not.

(I am just learning and exploring pgBackRest; I found some reference
configurations online and used them like this.)

If I specify pg1-database=edb like this, will it prevent the other
databases on this server from being backed up? If yes, how can I overcome
this?

I want all databases on this server to be backed up to the remote
repository.


It is a production server, so I can't use trial and error here to
understand how it works.


Please shed some light on this .


Thanks ,
Krishane


pgbackrest restore with a checkpoint and timestamp after the checkpoint

2024-08-21 Thread KK CHN
List,


Query:
Can I perform a pgbackrest restore using the last diff or incr backup, plus
further transactions replayed from the WAL, so as to recover transactions
that happened after the last pgbackrest backup checkpoint?


Scenario:

I am trying to do this and have been unable to find a solution.

I have a differential backup from 20th Aug 2024, as follows:


   diff backup: 20240820-152602F_20240820-160402D
       timestamp start/stop: 2024-08-20 16:04:02+05:30 / 2024-08-20 16:04:05+05:30


1. Today (21st Aug 2024) I performed a table drop as follows, and noted
the timestamps shown below.


edb=# \dt
List of relations
 Schema |   Name   | Type  |Owner
+--+---+--
 public | foo  | table | enterprisedb
 public | important_table  | table | enterprisedb
 public | important_table2 | table | enterprisedb
 public | important_table4 | table | enterprisedb
(4 rows)

edb=# select now();
   now
--
 21-AUG-24 13:58:31.611403 +05:30    // before the table drop
(1 row)

edb=# drop table important_table4;
DROP TABLE
edb=# \dt
List of relations
 Schema |   Name   | Type  |Owner
+--+---+--
 public | foo  | table | enterprisedb
 public | important_table  | table | enterprisedb
 public | important_table2 | table | enterprisedb
(3 rows)

edb=# select now();
   now
--
 21-AUG-24 13:58:58.379552 +05:30    // after the table drop
(1 row)

edb=#


2. The issue is as follows.

When I do a restore with the above differential backup and a recovery
target timestamp of "21-AUG-24 13:58:48.611403+05:30", it recovers the
database and I am able to see the dropped table important_table4 restored.


Query: but this is NOT the result I want --
I want the restored DB without the deleted table.



So I recorded a timestamp after the table drop, as seen above.

But then I give a timestamp greater than "21-AUG-24 13:58:48.611403+05:30"
(e.g. 13:58:49.611403+05:30), expecting that the restored DB server will
show the dropped state (important_table4 not present).

The EDB restart always fails after a pgbackrest restore with any value
higher than timestamp 13:58:48.611403. Why?

As per my understanding, a restore referring to a checkpoint (the
differential backup listed above) and a timestamp from today, after
dropping the table important_table4, should replay the WAL files written
after that differential backup up to the given timestamp, i.e. past the
drop of important_table4. Correct me if I am wrong here.

I am expecting edb=# \dt to show the list without the dropped table
"important_table4" (if the WAL were replayed up to the timestamp I
specified -- is this possible?). But this never gets me a successful
restart of the EDB server.
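One thing explained in the PITR thread elsewhere in this archive: recovery
with a time target needs to see a committed transaction *after* that
target in the archived WAL in order to know where to stop; if the drop was
the last transaction, there is nothing after it and recovery cannot finish
cleanly. A sketch of covering the target before restoring -- make any
small committed change after noting the timestamp, then force the segment
out to the archive (the pitr_marker table is a hypothetical throw-away):

edb=# create table pitr_marker (t timestamptz default now());
edb=# insert into pitr_marker default values;
edb=# select pg_switch_wal();

With that later commit archived, a target just after the drop can be
reached and recovery ends at that point.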



Here is the output:


[root@uaterssdrservice01 bin]# sudo -u enterprisedb pgbackrest --stanza=Demo --delta \
    --set=20240820-152602F_20240820-160402D --target-timeline=current --type=time \
    --target="21-AUG-24 13:58:49.611403+05:30" --target-action=promote restore


2024-08-21 14:34:17.116 P00   INFO: restore command begin 2.52.1: --delta
--exec-id=252857-6013404c --log-level-console=info --log-level-file=debug
--pg1-path=/var/lib/edb/as16/data --pg-version-force=16
--repo1-host=10.10.20.7 --repo1-host-user=postgres
--set=20240820-152602F_20240820-160402D --spool-path=/var/spool/pgbackrest
--stanza=Repo --target="21-AUG-24 13:58:49.611403+05:30"
--target-action=promote --target-timeline=current --type=time
2024-08-21 14:34:17.469 P00   INFO: repo1: restore backup set
20240820-152602F_20240820-160402D, recovery will start at 2024-08-20
16:04:02
2024-08-21 14:34:17.470 P00   INFO: remove invalid files/links/paths from
'/var/lib/edb/as16/data'
2024-08-21 14:34:18.274 P00   INFO: write updated
/var/lib/edb/as16/data/postgresql.auto.conf
2024-08-21 14:34:18.277 P00   INFO: restore global/pg_control (performed
last to ensure aborted restores cannot be started)
2024-08-21 14:34:18.277 P00   INFO: restore size = 89.8MB, file total = 2588
2024-08-21 14:34:18.278 P00   INFO: restore command end: completed
successfully (1164ms)

But the issue is as follows.

[root@uaterssdrservice01 bin]# systemctl start edb-as-16.service    (no errors in the console)
[root@uaterssdrservice01 bin]# sudo -u enterprisedb psql edb
psql: error: connection to server on socket "/tmp/.s.PGSQL.5444" failed: No
such file or directory
Is the server running locally and accepting connections on that
socket?
[root@uaterssdrservice01 bin]#


Why does the server restart always fail after a restore with this
timestamp (greater than 21-AUG-24 13:58:49.611403+05:30)?

Or do I have to understand that we can never restore a DB server to a
point after the last checkpoint, a

WAL replication and Archive command for pgbackrest on same server conf

2024-08-19 Thread KK CHN
Hi List ,

I am trying to configure pgbackrest on a live server (RHEL 9, EPAS 16 and
pgBackRest 2.52) which already has a working WAL archive configuration for
a standby server in postgresql.conf, as follows:


1. archive_mode = on
2. wal_level = replica
3. archive_command = 'cp %p /data/archive/%f'


To add the pgbackrest configuration to the same archive_command directive
(so that both the existing WAL archiving and my new pgbackrest setup work
smoothly), how should I write the entry in line 3? Is this right:

archive_command = 'pgbackrest --stanza=Demo archive-push cp %p /data/archive/%f'



Please correct me if I am doing something wrong in the above line.
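For reference, a later post in this same archive ended up chaining the two
actions with &&, so the segment is reported as archived only if both the
pgbackrest push and the local copy succeed -- a sketch with the stanza
name from this post:

archive_command = 'pgbackrest --stanza=Demo archive-push %p && cp %p /data/archive/%f'

Note that archive-push needs %p as its own argument; in the line above as
originally written, 'cp %p /data/archive/%f' would be passed as extra
arguments to archive-push rather than run as a separate command.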


Thank you,
Krish


Re: PgBackRest PTR recovery: After table drop to get dropped state

2024-08-02 Thread KK CHN
Thank you for shedding light on this.

On Thu, Aug 1, 2024 at 4:04 PM Mateusz Henicz 
wrote:

> When you are performing PITR you need to configure a timestamp before your
> last committed transaction. In your case you provided timestamp after your
> last commit.
>
>

I am trying to understand this concept; correct me if I am wrong.
If I do the following

STEPS:
1. Note down the timestamp just before my table drop, then
2. Drop a table, then
3. Take the incremental backup after this table drop on the Repo server, then
4. Do a PITR with --target = the timestamp noted in step 1 and --set = the
incr backup taken in step 3,

then, if I perform the PITR restore with pgbackrest as in step 4, will it
succeed or fail, theoretically?

Thanks,
Krishane


When postgtes is restoring until a specified point, it restores a
> transaction from WAL, and checking if next transaction is before or after
> said timestamp. If it is before it will replay it and check next
> transaction. Until next transaction is after configured timestamp.
> If there is no transaction after your current timestamp in current WAL,
> postgres will try to restore next WAL from archive. And since there is no
> next WAL, and your timestamp is past latest committed transaction, it is
> unable to continue, because it does not know if there should be any other
> transaction replayed or not.
>
> Just perform some other actions after you note down the timestamp after
> drop table. Create another one, insert some data, do whatever to have
> another transaction in WALs.
>
> Cheers,
> Mateusz
>
> czw., 1 sie 2024 o 12:23 KK CHN  napisał(a):
>
>> The logs are here.
>>
>> https://pastecode.io/s/s5dp8ur1
>>
>>
>>
>> On Thu, Aug 1, 2024 at 3:30 PM Kashif Zeeshan 
>> wrote:
>>
>>> Hi
>>>
>>> On Thu, Aug 1, 2024 at 2:54 PM KK CHN  wrote:
>>>
>>>> List,
>>>>
>>>> *Not working (start EPAS server always fails):*
>>>>
>>>> 1. Testing PTR using  PgBackRest(2.52.1)  on RHEL9  EPAS-16, and RHEL9
>>>> ( Repo   Server)
>>>>
>>>>   When I do a PTR
>>>>
>>>> 1.  After doing a table drop and then
>>>> 2. Noting down the time stamp and then
>>>> 3. Taking an incremental backup in hope that If I do a restore from
>>>> this incr Backup, that won't  contain the  dropped table.
>>>> 4. Correct me  if I am  conceptually wrong here.
>>>> 5.  I am *never *successful in restoring the EPAS server in this
>>>> scenario.
>>>>
>>>>
>>>> *I know the following will work for me, w*hy not the above one if I
>>>> really want that state of cluster also  ?
>>>>
>>>> *This is Working. *
>>>>  1. Create table
>>>> 2. Noting down the timestamp
>>>> 3.  Taking incremental backup on RepoServer.
>>>> 4. drop the created table .
>>>> 5. Then stop the EPAS server and do a  PTR, by the  --set=step 3 incr
>>>> backup  and target= step 2 time stamp .. It finished the pgaback restore
>>>> and promote command
>>>> 6. I am able to start back the  EPAS server and see the dropped table
>>>> recovered there.
>>>>
>>>> But If I want a PTR as in the first section it fails.. Why ?
>>>>
>>>> Thank you,
>>>> Krishane
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> *What I have done and results obtained: *
>>>>
>>>> Created a table important_table4 in my EPAS and note down the time
>>>> after creation of this table it is  ( t1 :  "01-AUG-24
>>>> 14:08:32.447796+05:30" )
>>>>
>>>> Then I performed an Incremental backup   (incr backup:
>>>> 20240729-160137F_20240801-141148I )
>>>> timestamp start/stop: 2024-08-01 14:11:48+05:30 / 2024-08-01
>>>> 14:11:52+05:30
>>>>
>>>>
>>>> Now I dropped the table table4 from the EPAS and noted down the time
>>>>
>>>>
>>>> I want to  restore the table4,, so I stopped EPAS and executed
>>>>
>>>> $ sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --delta
>>>> --set=20240729-160137F_20240801-141148I  --target-timeline=current
>>>> --type=time  --target="01-AUG-24 14:08:32.447796+05:30"
>>>> --target-action=promote restore
>>>>
>>>> IT WORKS AS EXPECTED .. after restarting the EPAS I am able to get the
>>>> important_table4 back.
>>>>
>>>> r

Re: PgBackRest PTR recovery: After table drop to get dropped state

2024-08-01 Thread KK CHN
Sorry, ignore the previous paste, it was incomplete.

Here is the full one: https://pastecode.io/s/hya0fyvo


On Thu, Aug 1, 2024 at 4:02 PM KK CHN  wrote:

> The logs are here.
>
> https://pastecode.io/s/s5dp8ur1
>
>
>
> On Thu, Aug 1, 2024 at 3:30 PM Kashif Zeeshan 
> wrote:
>
>> Hi
>>
>> On Thu, Aug 1, 2024 at 2:54 PM KK CHN  wrote:
>>
>>> List,
>>>
>>> *Not working (start EPAS server always fails):*
>>>
>>> 1. Testing PTR using  PgBackRest(2.52.1)  on RHEL9  EPAS-16, and RHEL9 (
>>> Repo   Server)
>>>
>>>   When I do a PTR
>>>
>>> 1.  After doing a table drop and then
>>> 2. Noting down the time stamp and then
>>> 3. Taking an incremental backup in hope that If I do a restore from this
>>> incr Backup, that won't  contain the  dropped table.
>>> 4. Correct me  if I am  conceptually wrong here.
>>> 5.  I am *never *successful in restoring the EPAS server in this
>>> scenario.
>>>
>>>
>>> *I know the following will work for me, w*hy not the above one if I
>>> really want that state of cluster also  ?
>>>
>>> *This is Working. *
>>>  1. Create table
>>> 2. Noting down the timestamp
>>> 3.  Taking incremental backup on RepoServer.
>>> 4. drop the created table .
>>> 5. Then stop the EPAS server and do a  PTR, by the  --set=step 3 incr
>>> backup  and target= step 2 time stamp .. It finished the pgaback restore
>>> and promote command
>>> 6. I am able to start back the  EPAS server and see the dropped table
>>> recovered there.
>>>
>>> But If I want a PTR as in the first section it fails.. Why ?
>>>
>>> Thank you,
>>> Krishane
>>>
>>>
>>>
>>>
>>>
>>> *What I have done and results obtained: *
>>>
>>> Created a table important_table4 in my EPAS and note down the time after
>>> creation of this table it is  ( t1 :  "01-AUG-24 14:08:32.447796+05:30" )
>>>
>>> Then I performed an Incremental backup   (incr backup:
>>> 20240729-160137F_20240801-141148I )
>>> timestamp start/stop: 2024-08-01 14:11:48+05:30 / 2024-08-01
>>> 14:11:52+05:30
>>>
>>>
>>> Now I dropped the table table4 from the EPAS and noted down the time
>>>
>>>
>>> I want to  restore the table4,, so I stopped EPAS and executed
>>>
>>> $ sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --delta
>>> --set=20240729-160137F_20240801-141148I  --target-timeline=current
>>> --type=time  --target="01-AUG-24 14:08:32.447796+05:30"
>>> --target-action=promote restore
>>>
>>> IT WORKS AS EXPECTED .. after restarting the EPAS I am able to get the
>>> important_table4 back.
>>>
>>> root@service01 ~]# sudo -u enterprisedb psql edb
>>> psql (16.3.0)
>>> Type "help" for help.
>>>
>>> edb=# \dt
>>> List of relations
>>>  Schema |   Name   | Type  |Owner
>>> +--+---+--
>>>  public | important_table  | table | enterprisedb
>>>  public | important_table2 | table | enterprisedb
>>>  public | important_table3 | table | enterprisedb
>>>  public | important_table4 | table | enterprisedb
>>> (4 rows)
>>>
>>> SO all works fine  .
>>>
>>>
>>> *But Now the PROBLEM Statement. *
>>>
>>> *1. I am dropping the table table 4 again *
>>> edb=# \q
>>> [root@service01 ~]# sudo -u enterprisedb psql -c "begin; drop table
>>> important_table4; commit;" edb
>>> BEGIN
>>> DROP TABLE
>>> COMMIT
>>> *2 .  [root@service01 ~]#* sudo -u enterprisedb psql -Atc "select
>>> current_timestamp" edb  01-AUG-24 14:23:22.085076 +05:30
>>> Noting the time as :   (01-AUG-24 14:23:22.085076 +05:30 )
>>>
>>> 3. Now  I am performing an incremental backup after step 2  on REPO
>>> SErver ( Hoping that this latest INCR Backup is without dropped
>>> important_table4, so that a recovery of the cluster  shouldn't show the
>>> table4 again. )
>>>
>>> incr backup details. : 20240729-160137F_20240801-142433I
>>> timestamp start/stop*: 2024-08-01 14:24:33+05:30 /
>>> 2024-08-01 14:24:36+05:30*
>>>
>>> 4. Now I want to test the database recovery  after dropping the table4
>>> in 

Re: PgBackRest PTR recovery: After table drop to get dropped state

2024-08-01 Thread KK CHN
The logs are here.

https://pastecode.io/s/s5dp8ur1



On Thu, Aug 1, 2024 at 3:30 PM Kashif Zeeshan 
wrote:

> Hi
>
> On Thu, Aug 1, 2024 at 2:54 PM KK CHN  wrote:
>
>> List,
>>
>> *Not working (start EPAS server always fails):*
>>
>> 1. Testing PTR using  PgBackRest(2.52.1)  on RHEL9  EPAS-16, and RHEL9 (
>> Repo   Server)
>>
>>   When I do a PTR
>>
>> 1.  After doing a table drop and then
>> 2. Noting down the time stamp and then
>> 3. Taking an incremental backup in hope that If I do a restore from this
>> incr Backup, that won't  contain the  dropped table.
>> 4. Correct me  if I am  conceptually wrong here.
>> 5.  I am *never *successful in restoring the EPAS server in this
>> scenario.
>>
>>
>> *I know the following will work for me, w*hy not the above one if I
>> really want that state of cluster also  ?
>>
>> *This is Working. *
>>  1. Create table
>> 2. Noting down the timestamp
>> 3.  Taking incremental backup on RepoServer.
>> 4. drop the created table .
>> 5. Then stop the EPAS server and do a  PTR, by the  --set=step 3 incr
>> backup  and target= step 2 time stamp .. It finished the pgaback restore
>> and promote command
>> 6. I am able to start back the  EPAS server and see the dropped table
>> recovered there.
>>
>> But If I want a PTR as in the first section it fails.. Why ?
>>
>> Thank you,
>> Krishane
>>
>>
>>
>>
>>
>> *What I have done and results obtained: *
>>
>> Created a table important_table4 in my EPAS and note down the time after
>> creation of this table it is  ( t1 :  "01-AUG-24 14:08:32.447796+05:30" )
>>
>> Then I performed an Incremental backup   (incr backup:
>> 20240729-160137F_20240801-141148I )
>> timestamp start/stop: 2024-08-01 14:11:48+05:30 / 2024-08-01
>> 14:11:52+05:30
>>
>>
>> Now I dropped the table table4 from the EPAS and noted down the time
>>
>>
>> I want to  restore the table4,, so I stopped EPAS and executed
>>
>> $ sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --delta
>> --set=20240729-160137F_20240801-141148I  --target-timeline=current
>> --type=time  --target="01-AUG-24 14:08:32.447796+05:30"
>> --target-action=promote restore
>>
>> IT WORKS AS EXPECTED .. after restarting the EPAS I am able to get the
>> important_table4 back.
>>
>> root@service01 ~]# sudo -u enterprisedb psql edb
>> psql (16.3.0)
>> Type "help" for help.
>>
>> edb=# \dt
>> List of relations
>>  Schema |   Name   | Type  |Owner
>> +--+---+--
>>  public | important_table  | table | enterprisedb
>>  public | important_table2 | table | enterprisedb
>>  public | important_table3 | table | enterprisedb
>>  public | important_table4 | table | enterprisedb
>> (4 rows)
>>
>> SO all works fine  .
>>
>>
>> *But Now the PROBLEM Statement. *
>>
>> *1. I am dropping the table table 4 again *
>> edb=# \q
>> [root@service01 ~]# sudo -u enterprisedb psql -c "begin; drop table
>> important_table4; commit;" edb
>> BEGIN
>> DROP TABLE
>> COMMIT
>> *2 .  [root@service01 ~]#* sudo -u enterprisedb psql -Atc "select
>> current_timestamp" edb  01-AUG-24 14:23:22.085076 +05:30
>> Noting the time as :   (01-AUG-24 14:23:22.085076 +05:30 )
>>
>> 3. Now  I am performing an incremental backup after step 2  on REPO
>> SErver ( Hoping that this latest INCR Backup is without dropped
>> important_table4, so that a recovery of the cluster  shouldn't show the
>> table4 again. )
>>
>> incr backup details. : 20240729-160137F_20240801-142433I
>> timestamp start/stop*: 2024-08-01 14:24:33+05:30 /
>> 2024-08-01 14:24:36+05:30*
>>
>> 4. Now I want to test the database recovery  after dropping the table4 in
>> step1 to verify that my EPAS restores from the backup in step 3 and time
>> stamp (01-AUG-24 14:23:22.085076 +05:30,   so that  the restored EPAS
>> cluster doesn't contain the important_table4.
>>
>> 5.  $ sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --delta
>>  --set=20240729-160137F_20240801-142433I  --target-timeline=current
>> --type=time  --target="01-AUG-24 14:23:22.085076+05:30"
>> --target-action=promote restore
>>  
>> -
>> INFO: restore command end: completed successfully (1035ms)
>>
>> *ISSUE:I am

PgBackRest PTR recovery: After table drop to get dropped state

2024-08-01 Thread KK CHN
List,

Not working (starting the EPAS server always fails):

1. Testing PITR using PgBackRest (2.52.1) on RHEL9 EPAS-16, with RHEL9 as
the repo server.

  When I do a PITR:

1. After doing a table drop, and then
2. noting down the timestamp, and then
3. taking an incremental backup, in the hope that if I do a restore from
this incr backup it won't contain the dropped table.
4. Correct me if I am conceptually wrong here.
5. I am never successful in restoring the EPAS server in this scenario.


I know the following will work for me; why not the above one, if I really
want that state of the cluster as well?

This is working:
1. Create a table.
2. Note down the timestamp.
3. Take an incremental backup on the repo server.
4. Drop the created table.
5. Then stop the EPAS server and do a PITR with --set = the step 3 incr
backup and --target = the step 2 timestamp. The pgbackrest restore and
promote command finishes.
6. I am able to start the EPAS server back up and see the dropped table
recovered there.

But if I want a PITR as in the first section, it fails. Why?

Thank you,
Krishane





What I have done and the results obtained:

I created a table important_table4 in my EPAS and noted down the time after
creating it (t1: "01-AUG-24 14:08:32.447796+05:30").

Then I performed an Incremental backup   (incr backup:
20240729-160137F_20240801-141148I )
timestamp start/stop: 2024-08-01 14:11:48+05:30 / 2024-08-01 14:11:52+05:30


Now I dropped important_table4 from the EPAS and noted down the time.


I want to restore the table, so I stopped EPAS and executed:

$ sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --delta
--set=20240729-160137F_20240801-141148I  --target-timeline=current
--type=time  --target="01-AUG-24 14:08:32.447796+05:30"
--target-action=promote restore

IT WORKS AS EXPECTED .. after restarting the EPAS I am able to get the
important_table4 back.

root@service01 ~]# sudo -u enterprisedb psql edb
psql (16.3.0)
Type "help" for help.

edb=# \dt
List of relations
 Schema |   Name   | Type  |Owner
+--+---+--
 public | important_table  | table | enterprisedb
 public | important_table2 | table | enterprisedb
 public | important_table3 | table | enterprisedb
 public | important_table4 | table | enterprisedb
(4 rows)

So all works fine.


But now the PROBLEM statement:

1. I am dropping important_table4 again:
edb=# \q
[root@service01 ~]# sudo -u enterprisedb psql -c "begin; drop table
important_table4; commit;" edb
BEGIN
DROP TABLE
COMMIT
2. [root@service01 ~]# sudo -u enterprisedb psql -Atc "select
current_timestamp" edb  01-AUG-24 14:23:22.085076 +05:30
Noting the time as: (01-AUG-24 14:23:22.085076 +05:30)

3. Now I am performing an incremental backup after step 2 on the repo
server (hoping that this latest incr backup is without the dropped
important_table4, so that a recovery of the cluster shouldn't show the
table again).

incr backup details: 20240729-160137F_20240801-142433I
timestamp start/stop: 2024-08-01 14:24:33+05:30 / 2024-08-01 14:24:36+05:30

4. Now I want to test the database recovery after dropping the table in
step 1, to verify that my EPAS restores from the backup in step 3 and the
timestamp (01-AUG-24 14:23:22.085076 +05:30), so that the restored EPAS
cluster doesn't contain important_table4.

5.  $ sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --delta
 --set=20240729-160137F_20240801-142433I  --target-timeline=current
--type=time  --target="01-AUG-24 14:23:22.085076+05:30"
--target-action=promote restore
 
-
INFO: restore command end: completed successfully (1035ms)

ISSUE: I am unable to get the EPAS server into a running state after step
5.

What am I doing wrong? Or am I conceptually wrong?




OUTPUT on executing step 5.

[root@service01 ~]# sudo -u enterprisedb pgbackrest --stanza=Demo_Repo
--delta --set=20240729-160137F_20240801-142433I  --target-timeline=current
--type=time  --target="01-AUG-24 14:23:22.085076+05:30"
--target-action=promote restore

2024-08-01 14:30:03.535 P00   INFO: restore command begin 2.52.1: --delta
--exec-id=82738-b5fe7415 --log-level-console=info --log-level-file=debug
--pg1-path=/var/lib/edb/as16/data --pg-version-force=16
--repo1-host=10.10.20.7 --repo1-host-user=postgres
--set=20240729-160137F_20240801-142433I --stanza=Demo_Repo
--target="01-AUG-24 14:23:22.085076+05:30" --target-action=promote
--target-timeline=current --type=time
2024-08-01 14:30:03.880 P00   INFO: repo1: restore backup set
20240729-160137F_20240801-142433I, recovery will start at 2024-08-01
14:24:33
2024-08-01 14:30:03.881 P00   INFO: remove invalid files/links/paths from
'/var/lib/edb/as16/data'
2024-08-01 14:30:04.567 P00   INFO: write updated
/var/lib/edb/as16/data/postgresql.auto.conf
2024-08-01 14:30:04.569 P00   INFO: restore global/pg_control (performed
last to ensure aborted restores cannot be started)
2024-08-01 14:30:04.5

[OPSEC]Re: Sunsetting the OPSec WG: It's Been a Secure Ride!

2024-07-30 Thread KK Chittimaneni
Thank you to everyone who contributed to this WG over the years.

Best Regards,
KK

On Tue, Jul 30, 2024 at 9:07 AM Bill Woodcock  wrote:

> Indeed!  Thanks so much, Jen, Ron, Warren.  It’s been a great forum for
> network operators within the IETF, and has provided invaluable nudges
> forward in the transition to IPv6.  And RFC7454 is one of the staples in
> educating new BGP-speakers.  I hope to see all of us working in other areas
> around the IETF.  Best wishes, everyone!
>
> -Bill
>
> > On Jul 30, 2024, at 16:31, Eric Vyncke (evyncke)  40cisco@dmarc.ietf.org> wrote:
> >
> > Also a tear or two in my eyes when reading this...
> >  Thank you to Jen, Ron, and Warren
> >  -éric
> >  PS: what is the status of the mailing list ? Will it be kept open ?
> >  From: Jen Linkova 
> > Date: Friday, 26 July 2024 at 19:41
> > To: opsec WG 
> > Cc: OpSec Chairs , Warren Kumari <
> war...@kumari.net>
> > Subject: [OPSEC]Sunsetting the OPSec WG: It's Been a Secure Ride!
> > Hello,
> >
> > With security woven into every modern IETF protocol (like a comfy
> > security blanket), this group has been unsurprisingly quiet lately.
> >
> > So, with a touch of nostalgia (and maybe a tear or two), the chairs
> > and the responsible AD have decided it's time to say farewell to the
> > OpSec WG.
> >
> > The mail archive reveals the very first email was sent to this list on
> > December 19th, 2007—back when flip phones were still cool!
> >
> > It's been a fantastic 16 years. The group has published 18 RFCs,
> > making the internet (and, most importantly, IPv6) a safer and
> > all-around more awesome place.
> >
> > A massive thank you to everyone for their contributions and hard work!
> >
> > ---
> > Ron, Jen and Warren
>
>
>
>
> ___
> OPSEC mailing list -- opsec@ietf.org
> To unsubscribe send an email to opsec-le...@ietf.org
>
___
OPSEC mailing list -- opsec@ietf.org
To unsubscribe send an email to opsec-le...@ietf.org


Re: PgbackRest PointTIme Recovery : server unable to start back

2024-07-25 Thread KK CHN
:57 IST LOG:  restored log file "0009003E"
from archive
2024-07-26 11:32:57.400 P00   INFO: archive-get command begin 2.52.1:
[0009003F, pg_wal/RECOVERYXLOG] --exec-id=43299-e2db2e1b
--log-level-console=info --log-level-file=debug
--pg1-path=/var/lib/edb/as16/data --pg-version-force=16
--repo1-host=10.10.20.7 --repo1-host-user=postgres --stanza=Demo_Repo
2024-07-26 11:32:57.521 P00   INFO: unable to find 0009003F
in the archive
2024-07-26 11:32:57.621 P00   INFO: archive-get command end: completed
successfully (222ms)
2024-07-26 11:32:57 IST LOG:  completed backup recovery with redo LSN
0/3D28 and end LSN 0/3D000100
2024-07-26 11:32:57 IST LOG:  consistent recovery state reached at
0/3D000100
2024-07-26 11:32:57 IST LOG:  database system is ready to accept read-only
connections
2024-07-26 11:32:57.632 P00   INFO: archive-get command begin 2.52.1:
[0009003F, pg_wal/RECOVERYXLOG] --exec-id=43301-f613dae9
--log-level-console=info --log-level-file=debug
--pg1-path=/var/lib/edb/as16/data --pg-version-force=16
--repo1-host=10.10.20.7 --repo1-host-user=postgres --stanza=Demo_Repo
2024-07-26 11:32:57.761 P00   INFO: unable to find 0009003F
in the archive
2024-07-26 11:32:57.861 P00   INFO: archive-get command end: completed
successfully (231ms)
2024-07-26 11:32:57 IST LOG:  redo done at 0/3E60 system usage: CPU:
user: 0.00 s, system: 0.00 s, elapsed: 0.75 s
2024-07-26 11:32:57 IST FATAL:  recovery ended before configured recovery
target was reached
2024-07-26 11:32:57 IST LOG:  startup process (PID 43292) exited with exit
code 1


The only inference I can make is:

  INFO: unable to find 0009003F in the archive

Does this mean the EDB server (10.10.20.6) is unable to push the archives
to the repo server (10.10.20.7)? Is that the reason the recovery and
restart of the EDB server fails?


The pg_hba.conf entries on the EDB server machine are as follows:

hostall all 127.0.0.1/32ident
hostall all 10.10.20.7/32  scram-sha-256
#hostall all  10.10.20.7/32  trust
# IPv6 local connections:
hostall all ::1/128 ident
#hostall all 10.10.20.7/24   trust

# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication all peer
hostreplication all 10.10.20.7/32
scram-sha-256
hostreplication all 127.0.0.1/32ident
hostreplication all ::1/128 ident


Do I have to change anything in pg_hba.conf ?


My EDB server conf is as follows:

archive_mode = on
archive_command = 'pgbackrest --stanza=Demo_Repo  archive-push %p'
log_filename = 'postgresql.log'
max_wal_senders = 5
wal_level = replica


Any help ?
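To rule out the archiving path I will run the check command, which as I
understand it forces a WAL switch on the primary and verifies that the
segment actually arrives in the repository for this stanza (a sketch):

  sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --log-level-console=info check

Is it also correct that the "unable to find 0009003F in the
archive" lines are just recovery probing for the segment after the last one
archived, and that the real failure is the later "recovery ended before
configured recovery target was reached" message?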

Krishane




On Fri, Jul 26, 2024 at 10:45 AM Muhammad Ikram  wrote:

> Hi KK CHN
>
> Could you check server logs ?
> Your service trace suggests that it started server and then failure
> happened
>
> ul 26 09:48:49 service01 systemd[1]: Started EDB Postgres Advanced Server
> 16.
> Jul 26 09:48:50 service01 systemd[1]: edb-as-16.service: Main process
> exited, code=exited, status=1/FAILURE
>
>
>
> Regards,
> Ikram
>
>
> On Fri, Jul 26, 2024 at 10:04 AM KK CHN  wrote:
>
>> List,
>>
>> Reference: https://pgbackrest.org/user-guide-rhel.html#pitr
>> I am following the   PTR  on RHEL9 EPAS16.
>> I am able to do a  backup(Full, diff and incr)  and   restore from a full
>> backup  and restart of EPAS16 works fine.
>>
>> But when I do an incremental backup  after doing the   procedures
>> mentioned in the PTR section of the above  reference link and  try
>> restoring the EDB database from the INCR backup   and then starting up the
>> EPAS16 the server  always results in dead state
>>
>>  My repo server is another machine.  If  I do  a  full restore  on the DB
>> server  ( sudo -u enterprisedb pgbackrest --stanza=Demo_Repo  --delta
>> restore) it works  and the server starts without any issue.
>> Restoring  from  Incremental backup tty output shows successful but edb
>> service start  results in  failure.
>>
>> Any help is much appreciated.
>>
>> Krishane.
>>
>>
>>
>>
>> STEPS followed:
>>
>> after dropping  the table  pg-primary *⇒* Drop the important table
>> ([section]
>> stopped the EDB server.
>>
>> $ sudo -u enterprisedb pgbackrest --stanza=Demo_Repo--delta
>> --set=20240719-122703F_20240724-094727I --target-timeline=current
>> --type=time "--target=2024-07-24 09:44:01.3255+05:30"
>> --target-ac

PgbackRest PointTIme Recovery : server unable to start back

2024-07-25 Thread KK CHN
List,

Reference: https://pgbackrest.org/user-guide-rhel.html#pitr
I am following the PITR procedure on RHEL9 EPAS16.
I am able to do a backup (full, diff and incr), and a restore from a full
backup with a restart of EPAS16 works fine.

But when I take an incremental backup after doing the procedures mentioned
in the PITR section of the above reference link, then try restoring the EDB
database from the incr backup and starting up EPAS16, the server always
ends up in a dead state.

My repo server is another machine. If I do a full restore on the DB server
(sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --delta restore) it
works and the server starts without any issue.
When restoring from an incremental backup the tty output reports success,
but the edb service start results in failure.

Any help is much appreciated.

Krishane.




STEPS followed:

After dropping the table (the "pg-primary ⇒ Drop the important table"
section of the guide), I stopped the EDB server.

$ sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --delta
--set=20240719-122703F_20240724-094727I --target-timeline=current
--type=time "--target=2024-07-24 09:44:01.3255+05:30"
--target-action=promote  restore
.

2024-07-26 09:48:06.343 P00   INFO: restore command end: completed
successfully (1035ms)


But

[root@rservice01 ~]# sudo systemctl start edb-as-16.service
[root@service01 ~]# sudo systemctl status edb-as-16.service
× edb-as-16.service - EDB Postgres Advanced Server 16
 Loaded: loaded (/etc/systemd/system/edb-as-16.service; disabled;
preset: disabled)
   *  Active: failed* (Result: exit-code) since Fri 2024-07-26 09:48:50
IST; 8s ago
   Duration: 242ms
Process: 41903 ExecStartPre=/usr/edb/as16/bin/edb-as-16-check-db-dir
${PGDATA} (code=exited, status=0/SUCCESS)
Process: 41908 ExecStart=/usr/edb/as16/bin/edb-postgres -D ${PGDATA}
(code=exited, status=1/FAILURE)
   Main PID: 41908 (code=exited, status=1/FAILURE)
CPU: 331ms

Jul 26 09:48:48 service01 systemd[1]: Starting EDB Postgres Advanced Server
16...
Jul 26 09:48:48 service01 edb-postgres[41908]: 2024-07-26 09:48:48 IST LOG:
 redirecting log output to logging collector process
Jul 26 09:48:48 service01 edb-postgres[41908]: 2024-07-26 09:48:48 IST
HINT:  Future log output will appear in directory "log".
Jul 26 09:48:49 service01 systemd[1]: Started EDB Postgres Advanced Server
16.
Jul 26 09:48:50 service01 systemd[1]: edb-as-16.service: Main process
exited, code=exited, status=1/FAILURE
Jul 26 09:48:50 service01 systemd[1]: edb-as-16.service: Killing process
41909 (edb-postgres) with signal SIGKILL.
Jul 26 09:48:50 service01 systemd[1]: edb-as-16.service: Failed with result
'exit-code'.
[root@service01 ~]#

Why is it unable to perform a restore and recovery from an incr backup?







On The Repo Server
[root@service02 ~]#  sudo -u postgres pgbackrest --stanza=Demo_Repo info
stanza: Demo_Repo
status: ok
cipher: aes-256-cbc

db (current)
wal archive min/max (16):
00020021/000B0041

full backup: 20240719-122703F
timestamp start/stop: 2024-07-19 12:27:03+05:30 / 2024-07-19
12:27:06+05:30
wal start/stop: 0002002A /
0002002A
database size: 61.7MB, database backup size: 61.7MB
repo1: backup size: 9.6MB

incr backup: 20240719-122703F_20240719-123353I
timestamp start/stop: 2024-07-19 12:33:53+05:30 / 2024-07-19
12:33:56+05:30
wal start/stop: 0002002C /
0002002C
database size: 61.7MB, database backup size: 6.4MB
repo1: backup size: 6.2KB
backup reference list: 20240719-122703F

diff backup: 20240719-122703F_20240719-123408D
timestamp start/stop: 2024-07-19 12:34:08+05:30 / 2024-07-19
12:34:10+05:30
wal start/stop: 0002002E /
0002002E
database size: 61.7MB, database backup size: 6.4MB
repo1: backup size: 6.4KB
backup reference list: 20240719-122703F

incr backup: 20240719-122703F_20240723-110212I
timestamp start/stop: 2024-07-23 11:02:12+05:30 / 2024-07-23
11:02:15+05:30
wal start/stop: 00070038 /
00070038
database size: 48MB, database backup size: 6.4MB
repo1: backup size: 9.8KB
backup reference list: 20240719-122703F,
20240719-122703F_20240719-123408D

incr backup: 20240719-122703F_20240723-141818I
timestamp start/stop: 2024-07-23 14:18:18+05:30 / 2024-07-23
14:18:22+05:30
wal start/stop: 0008003C /
0008003C
database size: 75.4MB, database backup size: 33.8MB
repo1: backup size: 4.7MB
backup reference list: 20240719-122703F,
20240719-122703F_20240719-123408D, 20240719-122703F_20240723-110212I

 

pgBackRest for multiple production servers

2024-07-21 Thread KK CHN
Hi list ,

I am exploring the pgBackRest tool for production deployment. (My lab
setup, with one database server and a separate repo server, works fine as
described in the official docs.)

Query:

What is the standard practice for backing up multiple production servers to
one repo server (the pgbackrest configuration on the repo server side)?

Is this the right way to achieve it: defining multiple stanzas Server_1,
Server_2 .. Server_N, and a single [global] section with repo1, repo2 ..
repoN declarations?

Please correct me if I am wrong.

Thank you
Krishane


Please find the proposed   pgbackrest.conf   in the  RepoServer  for
backing up multiple database servers.

/etc/pgbackrest/pgbackrest.conf   on  RepoServer
##
[ Server  _1]
pg1-host=10.20.20.6
pg1-host-user= pgbackUser
pg1-path=/var/lib/pgsql/16/data
. . . . .  . . .  . . . . . . . . . . . . . . . . .
. . . . . . .  .  .  .  .  .  .  .  .  .  .  .  .  .
. . . . . . . . . .  . . . . . .. .. . .. . . . .

[ Server  _N]
pgN-host=10.20.20.N
pgN-host-user= pgbackUser
pgN-path=/var/lib/pgsql/16/data


[global]
repo1-path=/var/lib/ Server_1_Backup
repo1-retention-full=2
repo1-cipher-type=aes-256-cbc
repo1-cipher-pass=0oahu5f5dvH7eD4TI1eBEl8Vpn14hWEmgLGuXgpUHo9R2VQKCw6Sm99FnOfHBY
process-max=5
log-level-console=info
log-level-file=debug
start-fast=y
delta=y
repo1-block=y
repo1-bundle=y

repo2-path=/var/lib/ Server_2_Backup
repo2-retention-full=2
repo2-cipher-type=aes-256-cbc
repo2-cipher-pass=0oahu5f5dvH7eD4TI1eBEl8Vpn14hWEmgLGuXgpUHo9R2VQKCw6Sm99FnOfHBY
process-max=5
log-level-console=info
log-level-file=debug
start-fast=y
delta=y
repo2-block=y
repo2-bundle=y

.   .   .   .  . .  .  .   .  .  .  .  .  .  .  .  .  .  . .  .  .  .  .
.  .
.   .   .   .  . .  .  .   .  .  .  .  .  .  .  .  .  .  . .  .  .  .  .
.  .
.   .   .   .  . .  .  .   .  .  .  .  .  .  .  .  .  .  . .  .  .  .  .
.  .

repoN-path=/var/lib Server_N_Backup
repoN-retention-full=2
repoN-cipher-type=aes-256-cbc
repoN-cipher-pass=0oahu5f5dvH7eD4TI1eBEl8Vpn14hWEmgLGuXgpUHo9R2VQKCw6Sm99FnOfHBY
process-max=5
log-level-console=info
log-level-file=debug
start-fast=y
delta=y
repoN-block=y
repoN-bundle=y


[global:archive-push]
compress-level=3
###
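For comparison, here is the alternative layout I am considering (a sketch;
host names, stanza names and paths are only placeholders): one stanza per
database server, all sharing a single repository, so each cluster's backups
land under its own stanza directory inside repo1-path:

  [server1]
  pg1-host=10.20.20.6
  pg1-host-user=pgbackUser
  pg1-path=/var/lib/pgsql/16/data

  [server2]
  pg1-host=10.20.20.7
  pg1-host-user=pgbackUser
  pg1-path=/var/lib/pgsql/16/data

  [global]
  repo1-path=/var/lib/pgbackrest
  repo1-retention-full=2
  repo1-cipher-type=aes-256-cbc
  repo1-cipher-pass=<generated passphrase>
  process-max=5
  log-level-console=info
  start-fast=y

My understanding (please correct me) is that within one stanza the pg1-,
pg2-, ... settings refer to multiple hosts of the same cluster (primary
plus standbys), and repo1, repo2, ... are alternative storage locations for
the same backups, not different database servers, so one stanza per server
is the usual split; each stanza is then initialised separately, e.g.
pgbackrest --stanza=server1 stanza-create.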


Re: PgbackRest and EDB Query

2024-07-19 Thread KK CHN
Hi  list,

Thank you all for the great help and guidance. I am able to configure
pgbackrest with EPAS-16 and a repo server on separate machines; passwordless
auth also worked well, and backup and restore are all fine.

Query
How can I make the repo server host more than one EPAS-16 server instance
running on multiple nodes?

With only one /etc/pgbackrest/pgbackrest.conf file on the repo server, how
do I specify the stanza names and global section for multiple EPAS servers?
My Repo Server:  cat /etc/pgbackrest/pgbackrest.conf

[Demo_Repo]
pg1-host=10.20.20.6
pg1-host-user=enterprisedb
pg1-path=/var/lib/edb/as16/data
pg-version-force=16

[global]
# about the repository

repo1-path=/var/lib/edb_BackupRepo

repo1-retention-full=2
repo1-cipher-type=aes-256-cbc
repo1-cipher-pass=0oahu5f5dvH7eD4TI1eBEl8Vpn14hWEmgLGuXgpUHo9R2VQKCw6Sm99FnOfHBY
process-max=5
log-level-console=info
log-level-file=debug
start-fast=y
delta=y
repo1-block=y
repo1-bundle=y
[global:archive-push]
compress-level=3
##


1. So if there are multiple EPAS servers running on different nodes
(10.20.20.7, 10.20.20.8, etc.), how do I specify the stanzas and global
settings for each EPAS server in the single /etc/pgbackrest/pgbackrest.conf
on the repo server?

2. Say there are X EPAS servers (say 10, from different geo locations),
each with a daily growth of approx 1 GB/day. What connectivity capacity
parameters need to be considered for the archiving and replication done by
pgbackrest to the repo server in a production environment?

3. Also, what is the best backup configuration in a crontab for achieving
the tightest RPO, i.e. near-zero data loss (incr or diff repetition
intervals)? Here is my sample crontab, only full and diff (lab setup); for
a production environment with near-zero data loss, what cron configuration
is needed? (See the sketch after the crontab below.)

my sample cron here.
[root@RepoServer ~]# crontab -u postgres -l
30 06 * * 0   pgbackrest --type=full --stanza=Demo2 backup    # only on Sundays

04 16 * * 1-6 pgbackrest --type=diff --stanza=Demo2 backup    # diff, Monday to Saturday
[root@uaterssdrservice02 ~]#
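For question 3, this is the tighter schedule I am considering (a sketch,
times are examples only); my understanding is that backup frequency mostly
affects restore time, while the data-loss window is bounded by continuous
WAL archiving through archive_command, which runs independently of cron --
please correct me if that is wrong:

  # full on Sunday, diff on the other days, incr every four hours
  30 06 * * 0   pgbackrest --type=full --stanza=Demo2 backup
  30 06 * * 1-6 pgbackrest --type=diff --stanza=Demo2 backup
  0 8-20/4 * * * pgbackrest --type=incr --stanza=Demo2 backup

For genuinely zero data loss the last WAL segment must also reach safety
before a failure, which is what per-segment archive-push (or synchronous
streaming replication) provides, rather than more frequent backups.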

Thanks again
Krishane


On Fri, Jul 19, 2024 at 11:24 AM azeem subhani  wrote:

> Hi,
>
> passwordless connection can be established using ssh key, and when you
> don't specify the ssh key in command using -i switch:* -i
> /path/to/your/private/key*
> You simply need to set the SSH key as the default key which I have
> explained earlier, how to do that.
>
> As you are currently trying through following command, without specifying
> an ssh key for passwordless connection.
>
> From the EDB Postgres Advanced Server nodes
> $ sudo -u enterprisedb ssh pgbackrest@backup-server
>
>
>
>
> On Fri, Jul 19, 2024 at 10:06 AM Kashif Zeeshan 
> wrote:
>
>> Hi
>>
>> On Thu, Jul 18, 2024 at 6:10 PM KK CHN  wrote:
>>
>>>
>>>
>>> Hi list,
>>>
>>> Thank you all for your  inputs, I am trying pgbacrest with
>>> Enterprised DB.  Locally pgbackrest works for  EDB but when I am trying for
>>> remote repository I am facing an issue ( from the remote host to  EDB
>>> server  password less authentication part )
>>>
>>> Trying to  use a remote host  as Repo Server I am facing the issue of
>>> passwordless  authentication(Public key private key).
>>>
>>> 1.  From the EDB server  I  added the user pgbackrest directory and
>>> generated ssh-keys and copied the id_rsa.pub   to  the Repo server
>>> (pgbackrest user's .ssh dir with necessary permissions)
>>> everything(passwordless auth) working to one side.
>>>
>>> From the EDB Postgres Advanced Server nodes
>>> $ sudo -u enterprisedb ssh pgbackrest@backup-server
>>>
>>> This works from  EDB server machine without any issue(password less auth
>>> works)
>>>
>>>
>>>
>>> 2 But   from the reposerver
>>> $sudo -u pgbackrest   ssh enterprisedb@EDB_Server_IP   unable to do
>>> password less auth( Its asking password for enterpridb@EDB_Server )
>>>
>>> How to do the passwordless auth  from the  Repo server to the EDB
>>> server  for the default "enterprisedb" user of  EDB ? ( enterprisedb user
>>> doesn't have any home dir  I mean /home/enterprisedb, so I am not sure
>>> where to create .ssh dir and authorized_keys for  passwordless auth  )
>>>
>> Please make sure that the passwordless connection is made between both
>> from EDB Server to Repo Server and from Repo Server to EDB Server.
>> For this you need to genera

Re: PgbackRest and EDB Query

2024-07-18 Thread KK CHN
Hi list,

Thank you all for your inputs. I am trying pgbackrest with EnterpriseDB.
Locally pgbackrest works for EDB, but when I try a remote repository I am
facing an issue (the passwordless authentication from the remote host to
the EDB server).

Trying to use a remote host as the repo server, I am facing the issue of
passwordless authentication (public/private key).

1. On the EDB server I created the pgbackrest user and its directory,
generated ssh keys, and copied the id_rsa.pub to the repo server (into the
pgbackrest user's .ssh dir with the necessary permissions); everything
(passwordless auth) works in that direction.

From the EDB Postgres Advanced Server nodes
$ sudo -u enterprisedb ssh pgbackrest@backup-server

This works from  EDB server machine without any issue(password less auth
works)



2. But from the repo server,
$ sudo -u pgbackrest ssh enterprisedb@EDB_Server_IP   is unable to do
passwordless auth (it asks for the password of enterprisedb@EDB_Server).

How do I set up passwordless auth from the repo server to the EDB server
for the default "enterprisedb" user of EDB? (The enterprisedb user doesn't
have a home dir under /home, i.e. no /home/enterprisedb, so I am not sure
where to create the .ssh dir and authorized_keys for passwordless auth.)

Anyone who has already tackled this, kindly guide me on how to achieve
this.
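What I am planning to try (a sketch; it assumes the enterprisedb account's
home directory is /var/lib/edb, which getent passwd enterprisedb should
confirm, and that the repo server's pgbackrest user already has a key pair):

  # on the EDB server: find the account's home directory
  getent passwd enterprisedb
  # create .ssh under that home (assumed /var/lib/edb here)
  sudo -u enterprisedb mkdir -p -m 700 /var/lib/edb/.ssh
  # append the repo server's pgbackrest public key
  echo '<contents of the repo server id_rsa.pub>' | sudo -u enterprisedb tee -a /var/lib/edb/.ssh/authorized_keys
  sudo -u enterprisedb chmod 600 /var/lib/edb/.ssh/authorized_keys

Then, from the repo server, sudo -u pgbackrest ssh enterprisedb@<EDB server
IP> should log in without a password, provided sshd on the EDB server
allows key auth and the account has a valid shell. Please correct me if
this is the wrong approach.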


Thank you,
Krishane







On Wed, Jul 17, 2024 at 9:07 PM Kashif Zeeshan 
wrote:

> Hi
>
> On Wed, Jul 17, 2024 at 5:21 PM KK CHN  wrote:
>
>> Hi ,
>>
>> I am trying pgbackrest(2.52.1)  with postgresql( version 16)  on  a lab
>> setup on RHEL-9. Both  PostgreSQL server and a remote Repository host
>> configured with pgbackrest and everything working fine as specified in the
>> documentation.
>>
>> note:  here I am running postgres server and pgbackrest everything as
>> postgres user and no issues in  backup and recovery.
>>
>>
>>
>> Query
>> 1. Is it possible to use  PgBackrest with  EnterpriseDB(EDB -16) for the
>> backup and recovery process? Or pgback works only with the community
>> PostgreSQL database ?
>>
> It support both community PG and EDB PG.
>
>>
>>
>> [ when I ran  initdb script of EDB while installing EDB it creates the
>> enterpisedb  as user and edb as initial  database by the script. ]
>>
> Enterprisedb is the default user created by EDB.
>
>>
>>
>> when I try to create the stanza on the EDB server it throws error
>> (pasted at bottom ).
>>
>>
>>
>> NOTE:
>> I know that  my EDB  running on  port 5444 instead of  5432 and the
>> dbname = edb instead of postgres, and user as  enterpisedb instead of
>> postgres how to specify these changes in the stanza creation step if  EDB
>> Supports pgbackrest tool ?
>>
> You can enter this connection information in the PbBackRest Conf file for
> the stanza you create for your EDB Instance.
>
> e.g
>
> [global]
> repo1-path=/var/lib/edb/as15/backups
>
> [demo]
> pg1-path=/var/lib/edb/as15/data
> pg1-user=enterprisedb
> pg1-port=5444
> pg-version-force=15
>
> Refer to following edb documentation
>
>
> https://www.enterprisedb.com/docs/supported-open-source/pgbackrest/03-quick_start/
>
>
>> OR   Am I doing a waste exercise  [if pgbackrest won't go ahead with EDB
>> ] ?
>>
>>
>> Any hints much appreciated.
>>
>> Thank you,
>> Krishane
>>
>>
>> ERROR:
>> root@uaterssdrservice01 ~]# sudo -u postgres pgbackrest --stanza=OD_DM2
>> --log-level-console=info  stanza-create
>> 2024-07-17 17:42:13.935 P00   INFO: stanza-create command begin 2.52.1:
>> --exec-id=1301876-7e055256 --log-level-console=info --log-level-file=debug
>> --pg1-path=/var/lib/pgsql/16/data --repo1-host=10.x.y.7
>> --repo1-host-user=postgres --stanza=OD_DM2
>> WARN: unable to check pg1: [DbConnectError] unable to connect to
>> 'dbname='postgres' port=5432': connection to server on socket
>> "/tmp/.s.PGSQL.5432" failed: No such file or directory
>> Is the server running locally and accepting connections on that
>> socket?
>> ERROR: [056]: unable to find primary cluster - cannot proceed
>>HINT: are all available clusters in recovery?
>> 2024-07-17 17:42:13.936 P00   INFO: stanza-create command end: aborted
>> with exception [056]
>> [root@uaterssdrservice01 ~]#
>>
>>
>>
>>
>>


PgbackRest and EDB Query

2024-07-17 Thread KK CHN
Hi ,

I am trying pgbackrest (2.52.1) with PostgreSQL (version 16) on a lab setup
on RHEL-9. Both the PostgreSQL server and a remote repository host are
configured with pgbackrest, and everything works fine as specified in the
documentation.

Note: here I am running the postgres server and pgbackrest entirely as the
postgres user, with no issues in backup and recovery.



Query
1. Is it possible to use pgBackRest with EnterpriseDB (EDB 16) for the
backup and recovery process, or does pgbackrest work only with the
community PostgreSQL database?


[When I ran the initdb script of EDB during installation, the script
created enterprisedb as the user and edb as the initial database.]


When I try to create the stanza on the EDB server it throws an error
(pasted at the bottom).



NOTE:
I know that my EDB is running on port 5444 instead of 5432, the dbname is
edb instead of postgres, and the user is enterprisedb instead of postgres.
How do I specify these changes in the stanza creation step, if EDB supports
the pgbackrest tool?

Or am I doing a wasted exercise [if pgbackrest won't work with EDB]?
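For reference, this is the stanza section I intend to put in
/etc/pgbackrest/pgbackrest.conf on the EDB host (a sketch; option names as
I understand them from the pgBackRest docs, please correct me if any are
wrong), pointing at the EPAS defaults instead of the community ones that
the error pasted below still shows (port 5432, /var/lib/pgsql/16/data):

  [OD_DM2]
  pg1-path=/var/lib/edb/as16/data
  pg1-port=5444
  pg1-user=enterprisedb
  pg1-database=edb
  pg-version-force=16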


Any hints much appreciated.

Thank you,
Krishane


ERROR:
root@uaterssdrservice01 ~]# sudo -u postgres pgbackrest --stanza=OD_DM2
--log-level-console=info  stanza-create
2024-07-17 17:42:13.935 P00   INFO: stanza-create command begin 2.52.1:
--exec-id=1301876-7e055256 --log-level-console=info --log-level-file=debug
--pg1-path=/var/lib/pgsql/16/data --repo1-host=10.x.y.7
--repo1-host-user=postgres --stanza=OD_DM2
WARN: unable to check pg1: [DbConnectError] unable to connect to
'dbname='postgres' port=5432': connection to server on socket
"/tmp/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that
socket?
ERROR: [056]: unable to find primary cluster - cannot proceed
   HINT: are all available clusters in recovery?
2024-07-17 17:42:13.936 P00   INFO: stanza-create command end: aborted with
exception [056]
[root@uaterssdrservice01 ~]#


[slurm-users] Inconsistencies in CPU time Reporting by sreport and sacct Tools

2024-04-17 Thread KK via slurm-users
I wish to ascertain the CPU core time utilized by user dj1 and dj. I have
tested with sreport cluster UserUtilizationByAccount, sreport job
SizesByAccount, and sacct. It appears that sreport cluster
UserUtilizationByAccount displays the total core hours used by the entire
account, rather than the individual user's cpu time. Here are the specifics:

Users dj and dj1 are both under the account mehpc.

In 2024-04-12 ~ 2024-04-15, dj1 used approximately 10 minutes of core time,
while dj used about 4 minutes. However, "sreport Cluster
UserUtilizationByAccount user=dj1 start=2024-04-12 end=2024-04-15" shows 14
minutes of usage. Similarly, "sreport job SizesByAccount Users=dj
start=2024-04-12 end=2024-04-15" hows about 14 minutes.
Using "sreport job SizesByAccount Users=dj1 start=2024-04-12
end=2024-04-15" or "sacct -u dj1 -S 2024-04-12 -E 2024-04-15 -o
"jobid,partition,account,user,alloccpus,cputimeraw,state,workdir%60" -X
|awk 'BEGIN{total=0}{total+=$6}END{print total}'" yields the accurate
values, which are around 10 minutes for dj1.

Attached are the details.
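I will also cross-check with the account-oriented report, which should list
each user's usage as a separate row under the account (a sketch; report and
option names as in recent Slurm releases, adjust for your version):

  sreport -t minutes cluster AccountUtilizationByUser Accounts=mehpc start=2024-04-12 end=2024-04-15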


detail_results
Description: Binary data

-- 
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-le...@lists.schedmd.com


[slurm-users] Fwd: sreport cluster UserUtilizationByaccount Used result versus sreport job SizesByAccount or sacct: inconsistencies

2024-04-15 Thread KK via slurm-users
-- Forwarded message -
From: KK 
Date: Monday, 15 April 2024, 13:25
Subject: sreport cluster UserUtilizationByaccount Used result versus
sreport job SizesByAccount or sacct: inconsistencies
To: 


I wish to ascertain the CPU core hours utilized by user dj1 and dj. I have
tested with sreport cluster UserUtilizationByAccount, sreport job
SizesByAccount, and sacct. It appears that sreport cluster
UserUtilizationByAccount displays the total core hours used by the entire
account, rather than the individual user's cpu time. Here are the specifics:

Users dj and dj1 are both under the account mehpc.

In 2024-04-12 ~ 2024-04-15, dj1 used approximately 10 minutes of core time,
while dj used about 4 minutes. However, "*sreport Cluster
UserUtilizationByAccount user=dj1 start=2024-04-12 end=2024-04-15*" shows
14 minutes of usage. Similarly, "*sreport job SizesByAccount Users=dj
start=2024-04-12 end=2024-04-15*" shows about 14 minutes.
Using "*sreport job SizesByAccount Users=dj1 start=2024-04-12
end=2024-04-15*" or "*sacct -u dj1 -S 2024-04-12 -E 2024-04-15 -o
"jobid,partition,account,user,alloccpus,cputimeraw,state,workdir%60" -X
|awk 'BEGIN{total=0}{total+=$6}END{print total}'*" yields the accurate
values, which are around 10 minutes for dj1. Here are the details:

[root@ood-master ~]# sacctmgr list assoc format=cluster,user,account,qos
   Cluster   UserAccount  QOS
-- -- -- 
 mehpc  root   normal
 mehpc   root   root   normal
 mehpc mehpc   normal
 mehpc dj  mehpc   normal
 mehpcdj1  mehpc   normal


[root@ood-master ~]# sacct -X -u dj1 -S 2024-04-12 -E 2024-04-15 -o
jobid,ncpus,elapsedraw,cputimeraw
JobID NCPUS ElapsedRaw CPUTimeRAW
 -- -- --
4 1 60 60
5 2120240
6 1 61 61
8 2120240
9 0  0  0

[root@ood-master ~]# sacct -X -u dj -S 2024-04-12 -E 2024-04-15 -o
jobid,ncpus,elapsedraw,cputimeraw
JobID NCPUS ElapsedRaw CPUTimeRAW
 -- -- --
7 2120240


[root@ood-master ~]# sreport job SizesByAccount Users=dj1 start=2024-04-12
end=2024-04-15

Job Sizes 2024-04-12T00:00:00 - 2024-04-14T23:59:59 (259200 secs)
Time reported in Minutes

  Cluster   Account 0-49 CPUs   50-249 CPUs  250-499 CPUs  500-999 CPUs
 >= 1000 CPUs % of cluster
- - - - - -
- 
mehpc  root10 0 0 0
0  100.00%


[root@ood-master ~]# sreport job SizesByAccount Users=dj start=2024-04-12
end=2024-04-15

Job Sizes 2024-04-12T00:00:00 - 2024-04-14T23:59:59 (259200 secs)
Time reported in Minutes

  Cluster   Account 0-49 CPUs   50-249 CPUs  250-499 CPUs  500-999 CPUs
 >= 1000 CPUs % of cluster
- - - - - -
- 
mehpc  root 4 0 0 0
0  100.00%


[root@ood-master ~]# sreport Cluster UserUtilizationByAccount user=dj1
start=2024-04-12 end=2024-04-15

Cluster/User/Account Utilization 2024-04-12T00:00:00 - 2024-04-14T23:59:59
(259200 secs)
Usage reported in CPU Minutes

  Cluster Login Proper Name Account Used   Energy
- - --- ---  
mehpc   dj1 dj1 dj1   mehpc   140



[root@ood-master ~]# sreport Cluster UserUtilizationByAccount user=dj
start=2024-04-12 end=2024-04-15

Cluster/User/Account Utilization 2024-04-12T00:00:00 - 2024-04-14T23:59:59
(259200 secs)
Usage reported in CPU Minutes

  Cluster Login Proper Name Account Used   Energy
- - --- ---  
mehpcdj   dj dj   mehpc   140


[root@ood-master ~]# sa

[kde] [Bug 484388] New: threshold temperature warning is wrong

2024-03-24 Thread kk
https://bugs.kde.org/show_bug.cgi?id=484388

Bug ID: 484388
   Summary: threshold temperature warning is wrong
Classification: I don't know
   Product: kde
   Version: unspecified
  Platform: Arch Linux
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: general
  Assignee: unassigned-b...@kde.org
  Reporter: k...@orly.at
  Target Milestone: ---

SUMMARY
***
NOTE: If you are reporting a crash, please try to attach a backtrace with debug
symbols.
See
https://community.kde.org/Guidelines_and_HOWTOs/Debugging/How_to_create_useful_crash_reports
***
Miniprogram : Thermal Monitor 0.1.4

STEPS TO REPRODUCE
1. Open Settings
2. GoTo: Appearance
3.  Enable danger color




OBSERVED RESULT

Even when the temperature of the sensor is below the warning threshold, the
font color is red (i.e. the warning color).

EXPECTED RESULT
If the temperature is below the warning threshold, the font color should be
the same as the descriptor color (for example: black).

SOFTWARE/OS VERSIONS
Windows: 
macOS: 
Linux/KDE Plasma: Arch 6.8.1-arch-1(64bit)
(available in About System)
KDE Plasma Version: 6.0.2
KDE Frameworks Version:  6.0.0
Qt Version: 6.6.2

ADDITIONAL INFORMATION
Platform Wayland

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: STM32H7 serial TX DMA issues

2024-03-08 Thread Kian Karas (KK)
Hi,

here is the pull request:
https://github.com/apache/nuttx/pull/11871

My initial comments (and "fix") for uart_xmitchars_dma() are no longer 
relevant. Hence, those changes are no longer included.

Regards
Kian

From: Sebastien Lorquet 
Sent: 08 March 2024 11:13
To: dev@nuttx.apache.org 
Subject: Re: STM32H7 serial TX DMA issues

Hello,

Yes, stm32h7 uart transmission has issues. You can easily test this in
nsh with just an echo command and a very long string, eg > 64 ascii
chars. At first I believed it was buffering problems.

This caused me some headaches 1.5 years ago, but the DMA serial driver
is too complex for me to debug. I have disabled CONFIG_UARTn_TXDMA on
relevant uarts of my board.

Please give the link to your PR when it's ready so I can follow this
closely.

Thank you,

Sebastien


Le 08/03/2024 à 10:29, David Sidrane a écrit :
> Hi Kian,
>
> The Problem with the semaphore is it cause blocking when the port
> is opened non blocking.
>
> Please do PR so we can review it.
>
> David
>
>
> -Original Message-
> From: Kian Karas (KK) 
> Sent: Friday, March 8, 2024 4:18 AM
> To: dev@nuttx.apache.org
> Subject: STM32H7 serial TX DMA issues
>
> Hi community
>
> The STM32H7 serial driver TX DMA logic is no longer working properly.
>
> The issues started with commit 660ac63b. Subsequent attempts (f92a9068,
> 6c186b60) have failed to get it working again.
>
> I think the original idea of 660ac63b is right, it just failed to restart
> TX DMA upon TX DMA completion (if needed).
>
> I would suggest reverting the following commits: 6c186b60 58f2a7b1
> 69a8b5b5. Then add the following patch as an amendment:
>
> diff --git a/arch/arm/src/stm32h7/stm32_serial.c
> b/arch/arm/src/stm32h7/stm32_serial.c
> index 120ea0f3b5..fc90c5d521 100644
> --- a/arch/arm/src/stm32h7/stm32_serial.c
> +++ b/arch/arm/src/stm32h7/stm32_serial.c
> @@ -3780,11 +3780,20 @@ static void up_dma_txcallback(DMA_HANDLE handle,
> uint8_t status, void *arg)
>   }
>   }
>
> -  nxsem_post(&priv->txdmasem);
> -
> /* Adjust the pointers */
>
> uart_xmitchars_done(&priv->dev);
> +
> +  /* Initiate another transmit if data is ready */
> +
> +  if (priv->dev.xmit.tail != priv->dev.xmit.head)
> +{
> +  uart_xmitchars_dma(&priv->dev);
> +}
> +  else
> +{
> +  nxsem_post(&priv->txdmasem);
> +}
>   }
>   #endif
>
> @@ -3806,6 +3815,14 @@ static void up_dma_txavailable(struct uart_dev_s
> *dev)
> int rv = nxsem_trywait(&priv->txdmasem);
> if (rv == OK)
>   {
> +  if (dev->xmit.head == dev->xmit.tail)
> +{
> +  /* No data to transfer. Release semaphore. */
> +
> +  nxsem_post(&priv->txdmasem);
> +  return;
> +}
> +
> uart_xmitchars_dma(dev);
>   }
>   }
>
>
> However, uart_xmitchars_dma() is currently not safe to call from an
> interrupt service routine, so the following patch would also be required:
>
> diff --git a/drivers/serial/serial_dma.c b/drivers/serial/serial_dma.c
> index aa99e801ff..b2603953ad 100644
> --- a/drivers/serial/serial_dma.c
> +++ b/drivers/serial/serial_dma.c
> @@ -97,26 +97,29 @@ void uart_xmitchars_dma(FAR uart_dev_t *dev)  {
> FAR struct uart_dmaxfer_s *xfer = &dev->dmatx;
>
> -  if (dev->xmit.head == dev->xmit.tail)
> +  size_t head = dev->xmit.head;
> +  size_t tail = dev->xmit.tail;
> +
> +  if (head == tail)
>   {
> /* No data to transfer. */
>
> return;
>   }
>
> -  if (dev->xmit.tail < dev->xmit.head)
> +  if (tail < head)
>   {
> -  xfer->buffer  = &dev->xmit.buffer[dev->xmit.tail];
> -  xfer->length  = dev->xmit.head - dev->xmit.tail;
> +  xfer->buffer  = &dev->xmit.buffer[tail];
> +  xfer->length  = head - tail;
> xfer->nbuffer = NULL;
> xfer->nlength = 0;
>   }
> else
>   {
> -  xfer->buffer  = &dev->xmit.buffer[dev->xmit.tail];
> -  xfer->length  = dev->xmit.size - dev->xmit.tail;
> +  xfer->buffer  = &dev->xmit.buffer[tail];
> +  xfer->length  = dev->xmit.size - tail;
> xfer->nbuffer = dev->xmit.buffer;
> -  xfer->nlength = dev->xmit.head;
> +  xfer->nlength = head;
>   }
>
> dev->tx_count += xfer->length + xfer->nlength;
>
>
> Any thoughts?
>
> Regards
> Kian


STM32H7 serial TX DMA issues

2024-03-08 Thread Kian Karas (KK)
Hi community

The STM32H7 serial driver TX DMA logic is no longer working properly.

The issues started with commit 660ac63b. Subsequent attempts (f92a9068, 
6c186b60) have failed to get it working again.

I think the original idea of 660ac63b is right, it just failed to restart TX 
DMA upon TX DMA completion (if needed).

I would suggest reverting the following commits: 6c186b60 58f2a7b1 69a8b5b5. 
Then add the following patch as an amendment:

diff --git a/arch/arm/src/stm32h7/stm32_serial.c 
b/arch/arm/src/stm32h7/stm32_serial.c
index 120ea0f3b5..fc90c5d521 100644
--- a/arch/arm/src/stm32h7/stm32_serial.c
+++ b/arch/arm/src/stm32h7/stm32_serial.c
@@ -3780,11 +3780,20 @@ static void up_dma_txcallback(DMA_HANDLE handle, 
uint8_t status, void *arg)
 }
 }

-  nxsem_post(&priv->txdmasem);
-
   /* Adjust the pointers */

   uart_xmitchars_done(&priv->dev);
+
+  /* Initiate another transmit if data is ready */
+
+  if (priv->dev.xmit.tail != priv->dev.xmit.head)
+{
+  uart_xmitchars_dma(&priv->dev);
+}
+  else
+{
+  nxsem_post(&priv->txdmasem);
+}
 }
 #endif

@@ -3806,6 +3815,14 @@ static void up_dma_txavailable(struct uart_dev_s *dev)
   int rv = nxsem_trywait(&priv->txdmasem);
   if (rv == OK)
 {
+  if (dev->xmit.head == dev->xmit.tail)
+{
+  /* No data to transfer. Release semaphore. */
+
+  nxsem_post(&priv->txdmasem);
+  return;
+}
+
   uart_xmitchars_dma(dev);
 }
 }


However, uart_xmitchars_dma() is currently not safe to call from an interrupt 
service routine, so the following patch would also be required:

diff --git a/drivers/serial/serial_dma.c b/drivers/serial/serial_dma.c
index aa99e801ff..b2603953ad 100644
--- a/drivers/serial/serial_dma.c
+++ b/drivers/serial/serial_dma.c
@@ -97,26 +97,29 @@ void uart_xmitchars_dma(FAR uart_dev_t *dev)
 {
   FAR struct uart_dmaxfer_s *xfer = &dev->dmatx;

-  if (dev->xmit.head == dev->xmit.tail)
+  size_t head = dev->xmit.head;
+  size_t tail = dev->xmit.tail;
+
+  if (head == tail)
 {
   /* No data to transfer. */

   return;
 }

-  if (dev->xmit.tail < dev->xmit.head)
+  if (tail < head)
 {
-  xfer->buffer  = &dev->xmit.buffer[dev->xmit.tail];
-  xfer->length  = dev->xmit.head - dev->xmit.tail;
+  xfer->buffer  = &dev->xmit.buffer[tail];
+  xfer->length  = head - tail;
   xfer->nbuffer = NULL;
   xfer->nlength = 0;
 }
   else
 {
-  xfer->buffer  = &dev->xmit.buffer[dev->xmit.tail];
-  xfer->length  = dev->xmit.size - dev->xmit.tail;
+  xfer->buffer  = &dev->xmit.buffer[tail];
+  xfer->length  = dev->xmit.size - tail;
   xfer->nbuffer = dev->xmit.buffer;
-  xfer->nlength = dev->xmit.head;
+  xfer->nlength = head;
 }

   dev->tx_count += xfer->length + xfer->nlength;


Any thoughts?

Regards
Kian


Re: Addition of STM32H7 MCU's

2024-01-18 Thread Kian Karas (KK)
Hi Robert, Community

We have NuttX running on an STM32H723VE, but haven't tested all peripherals. We 
also did some initial work on an STM32H730, but this has hardly been tested.

What is the best way to share the STM32H723VE support with the community? It 
needs some reviewing. I am concerned we could have broken stuff for other MCUs 
in the family, but I can't test this.

@Robert: if you are in a hurry, send me an email directly and I'll respond with 
a patch.

Regards
Kian

From: Robert Turner 
Sent: 18 January 2024 03:30
To: dev@nuttx.apache.org 
Subject: Re: Addition of STM32H7 MCU's

Nah not internal cache. The SRAM sizes for H723/5 are different from any of
those defined in arch/arm/include/stm32h7/chip.h
Suspect we need to get these correct as other files use these defs also,
such as stm32_allocateheap.c
Is Jorge's PR the one merged on Jul 12 (8ceff0d)?
Thanks,
Robert

On Thu, Jan 18, 2024 at 2:56 PM Alan C. Assis  wrote:

> Hi Robert,
> Thank you for the explanation! Is it about internal cache?
>
> Looking at
> https://www.st.com/en/microcontrollers-microprocessors/stm32h7-series.html
> I can see that H723/5 shares mostly everything with H743/5.
> I only tested NuttX on STM32H743ZI and STM32H753BI (I and Jorge added
> support to this few weeks ago).
>
> Please take a look at Jorge's PRs, probably if you fix the memory in the
> linker script and the clock tree for your board NuttX will work fine on it.
>
> BR,
>
> Alan
>
> On Wed, Jan 17, 2024 at 10:25 PM Robert Turner  wrote:
>
> > Apologies, I should have been more specific, I was referring to parts in
> > the family which are not currently covered, such as the STM32H723xx which
> > we use. The RAM sizes definitions in chip.h for
> > CONFIG_STM32H7_STM32H7X3XX/CONFIG_STM32H7_STM32H7X5XX are incorrect for
> > the  STM32H723xx and  STM32H725xx.
> > BR,
> > Robert
> >
> > On Thu, Jan 18, 2024 at 1:28 PM Alan C. Assis  wrote:
> >
> > > Robert,
> > > STM32H7 family is already supported.
> > >
> > > Look at arch/arm/src/stm32h7 and equivalent at boards/
> > >
> > > BR,
> > >
> > > Alan
> > >
> > > On Tuesday, January 16, 2024, Robert Turner  wrote:
> > >
> > > > Did anyone finish supporting the broader STM32H7xx family? If so, is
> it
> > > > close to being mergeable or sendable as a patch?
> > > >
> > > > Thanks,
> > > > Robert
> > > >
> > > > On Fri, Sep 8, 2023 at 10:33 PM raiden00pl 
> > wrote:
> > > >
> > > > > > You're right, but not entirely) For example, chips of different
> > > > subseries
> > > > > have different interrupt vector tables. Those. The
> stm32h7x3xx_irq.h
> > > file
> > > > > lists interrupt vectors for the RM0433, but not for the RM0455 or
> > > > > RM0468. Although
> > > > > some chips from all these series have 7x3 in the name.
> > > > >
> > > > > I think they are the same (not checked, intuition tells me so). But
> > > some
> > > > > peripherals are not available on some chips and then the
> > > > > corresponding interrupt line is marked RESERVED, or its the same
> > > > peripheral
> > > > > but with upgraded functionality (QSPI/OCTOSPI) or
> > > > > for some reason ST changed its name to confuse devs. There should
> be
> > no
> > > > > conflict between IRQ lines.
> > > > >
> > > > > > Even if it duplicates 90% of the file it is better than #ifdefing
> > the
> > > > > > stm32h7x3xx_irq.h file. AKA ifdef rash!
> > > > >
> > > > > One file approach can be done with only one level of #ifdefs, one
> > level
> > > > of
> > > > > #ifdefs doesn't have a negative impact on code quality (but
> > > > > it's probably a matter of individual feelings).
> > > > > For IRQ and memory map (and probably DMAMUX), the approach with
> > > separate
> > > > > files may make sense but for peripheral definitions
> > > > > I don't see any benefit in duplicating files.
> > > > >
> > > > > pt., 8 wrz 2023 o 12:01 
> > > napisał(a):
> > > > >
> > > > > > You're right, but not entirely) For example, chips of different
> > > > subseries
> > > > > > have different interrupt vector tables. Those. The
> > stm32h7x3xx_irq.h
> > > > file
> > > > > > lists interrupt vectors for the RM0433, but not for the RM0455 or
> > > > > RM0468. Although
> > > > > > some chips from all these series have 7x3 in the name.
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > *From:* "raiden00pl" 
> > > > > > *To:* "undefined" 
> > > > > > *Sent:* Friday, 8 September 2023, 12:52
> > > > > > *Subject:* Re: Addition of STM32H7 MCU's
> > > > > >
> > > > > > From what I'm familiar with STM32H7, all chips use the same
> > registers
> > > > and
> > > > > > bit definitions.
> > > > > > Therefore, keeping definitions for different chips in different
> > files
> > > > > > doesn't make sense in my opinion.
> > > > > > The only problem is that some chips support some peripherals
> while
> > > > others
> > > > > > do not. But this can be
> > > > > > solved using definitions from Kconfig, where we define the
> > su

Re: TUN device (PPP) issue?

2024-01-17 Thread Kian Karas (KK)
Hi Zhe

I am working on tag nuttx-12.2.1.

Your referenced commit did indeed fix the issue.

My apologies for not trying on master. I mistakenly thought the error was in the 
TUN device driver, which I noticed had not changed since nuttx-12.2.1.

Thanks a lot!
Kian

From: Zhe Weng 
Sent: 17 January 2024 04:55
To: Kian Karas (KK) 
Cc: dev@nuttx.apache.org 
Subject: Re: TUN device (PPP) issue?


Hi Kian,


Which version of NuttX are you working on? It behaves like a problem I've met 
before. Do you have this commit in your code? If not, maybe you could have a 
try: 
https://github.com/apache/nuttx/commit/e2c9aa65883780747ca00625a1452dddc6f8a138


Best regards

Zhe



From: Kian Karas (KK) 
Sent: Tuesday, January 16, 2024 11:53:06 PM
To: dev@nuttx.apache.org
Subject: TUN device (PPP) issue?

Hi community

I am experiencing an issue with PPP/TUN and reception of packets. The network 
stack reports different decoding errors in the received packets e.g.:
[   24.56] [  WARN] ppp: ipv4_in: WARNING: IP packet shorter than length in 
IP header

I can reproduce the issue by sending a number of packets (from my PC over PPP 
to the TUN device in NuttX),  which are all larger than can fit into one IOB 
*and* which are ignored (e.g. unsupported protocol or IP destination) - i.e. 
*not* triggering a response / TX packet. I then send a correct ICMP echo 
request from my PC to NuttX, which causes the above error to be reported.

The following PC commands will trigger the error message. My PC has IP 
172.29.4.1 and the NuttX ppp interface has 172.29.4.2. Note the first command 
sends to the *wrong* IP address so that NuttX ignores the ICMP messages. The 
second command uses the IP of NuttX and should result in a response. I run the 
test after a fresh boot and with no other network traffic to/from NuttX.

$ ping -I ppp0 -W 0.2 -i 0.2 -c 13 172.29.4.3 -s 156
$ ping -I ppp0 -W 0.2 -c 1 172.29.4.2 -s 0

If I skip the first command, ping works fine.

I think the issue is caused by the IOB management in the TUN device driver 
(drivers/net/tun.c). I am new to NuttX, so I don't quite understand the correct 
use of IOB, so I am just guessing here. I think that when a packet is received 
by tun_write() and too large to fit into a single IOB *and* the packet is 
ignored, the IOB chain "lingers" and is not freed. Subsequent packets received 
by tun_write() do not end up at the beginning of the first IOB, and the 
IP/TCP/UDP header may then be split across IOB boundary. The network stack 
assumes the protocol headers are not split across IOB boundaries, so the 
network stack ends up reading outside the IOB io_data[] array boundaries 
resulting in undefined behavior.

With CONFIG_IOB_DEBUG enabled, notice how the "avail" value decreases for each 
ignored packet until the final/correct ICMP request (at time 24.54) is 
copied to the second IOB in the chain.

[   10.06] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=184 
offset=0
[   10.06] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=0 
len=184 next=0
[   10.06] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 182 
bytes new len=182
[   10.07] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 added to the 
chain
[   10.07] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=0 len=2 
next=0
[   10.08] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 Copy 2 bytes 
new len=2
[   10.08] [  INFO] ppp0: tun_net_receive_tun: IPv4 frame
[   10.08] [  INFO] ppp0: ipv4_in: WARNING: Not destined for us; not 
forwardable... Dropping!
[   10.26] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=184 
offset=0
[   10.26] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=168 
len=184 next=0x24002a50
[   10.27] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 168 
bytes new len=168
[   10.27] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=2 
len=16 next=0
[   10.28] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 Copy 16 bytes 
new len=16
[   10.28] [  INFO] ppp0: tun_net_receive_tun: IPv4 frame
[   10.28] [  INFO] ppp0: ipv4_in: WARNING: Not destined for us; not 
forwardable... Dropping!
[   10.46] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=184 
offset=0
[   10.47] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=154 
len=184 next=0x24002a50
[   10.47] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 154 
bytes new len=154
[   10.48] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=16 
len=30 next=0
[   10.48] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 Copy 30 bytes 
new len=30
[   10.48] [  INFO] ppp0: tun_net_receive_tun: IPv4 frame
[   10.49] [  INFO] ppp0: ipv4_in: WARNING: Not destined for us; not 
forwardable... Dropping!
...
[   12.50] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 l

TUN device (PPP) issue?

2024-01-16 Thread Kian Karas (KK)
Hi community

I am experiencing an issue with PPP/TUN and reception of packets. The network 
stack reports different decoding errors in the received packets e.g.:
[   24.56] [  WARN] ppp: ipv4_in: WARNING: IP packet shorter than length in 
IP header

I can reproduce the issue by sending a number of packets (from my PC over PPP 
to the TUN device in NuttX),  which are all larger than can fit into one IOB 
*and* which are ignored (e.g. unsupported protocol or IP destination) - i.e. 
*not* triggering a response / TX packet. I then send a correct ICMP echo 
request from my PC to NuttX, which causes the above error to be reported.

The following PC commands will trigger the error message. My PC has IP 
172.29.4.1 and the NuttX ppp interface has 172.29.4.2. Note the first command 
sends to the *wrong* IP address so that NuttX ignores the ICMP messages. The 
second command uses the IP of NuttX and should result in a response. I run the 
test after a fresh boot and with no other network traffic to/from NuttX.

$ ping -I ppp0 -W 0.2 -i 0.2 -c 13 172.29.4.3 -s 156
$ ping -I ppp0 -W 0.2 -c 1 172.29.4.2 -s 0

If I skip the first command, ping works fine.

I think the issue is caused by the IOB management in the TUN device driver 
(drivers/net/tun.c). I am new to NuttX, so I don't quite understand the correct 
use of IOB, so I am just guessing here. I think that when a packet is received 
by tun_write() and too large to fit into a single IOB *and* the packet is 
ignored, the IOB chain "lingers" and is not freed. Subsequent packets received 
by tun_write() do not end up at the beginning of the first IOB, and the 
IP/TCP/UDP header may then be split across IOB boundary. The network stack 
assumes the protocol headers are not split across IOB boundaries, so the 
network stack ends up reading outside the IOB io_data[] array boundaries 
resulting in undefined behavior.

With CONFIG_IOB_DEBUG enabled, notice how the "avail" value decreases for each 
ignored packet until the final/correct ICMP request (at time 24.54) is 
copied to the second IOB in the chain.

[   10.06] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=184 
offset=0
[   10.06] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=0 
len=184 next=0
[   10.06] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 182 
bytes new len=182
[   10.07] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 added to the 
chain
[   10.07] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=0 len=2 
next=0
[   10.08] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 Copy 2 bytes 
new len=2
[   10.08] [  INFO] ppp0: tun_net_receive_tun: IPv4 frame
[   10.08] [  INFO] ppp0: ipv4_in: WARNING: Not destined for us; not 
forwardable... Dropping!
[   10.26] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=184 
offset=0
[   10.26] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=168 
len=184 next=0x24002a50
[   10.27] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 168 
bytes new len=168
[   10.27] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=2 
len=16 next=0
[   10.28] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 Copy 16 bytes 
new len=16
[   10.28] [  INFO] ppp0: tun_net_receive_tun: IPv4 frame
[   10.28] [  INFO] ppp0: ipv4_in: WARNING: Not destined for us; not 
forwardable... Dropping!
[   10.46] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=184 
offset=0
[   10.47] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=154 
len=184 next=0x24002a50
[   10.47] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 154 
bytes new len=154
[   10.48] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=16 
len=30 next=0
[   10.48] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 Copy 30 bytes 
new len=30
[   10.48] [  INFO] ppp0: tun_net_receive_tun: IPv4 frame
[   10.49] [  INFO] ppp0: ipv4_in: WARNING: Not destined for us; not 
forwardable... Dropping!
...
[   12.50] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=184 
offset=0
[   12.51] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=14 
len=184 next=0x24002a50
[   12.51] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 14 bytes 
new len=14
[   12.52] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=156 
len=170 next=0
[   12.52] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 Copy 170 
bytes new len=170
[   12.52] [  INFO] ppp0: tun_net_receive_tun: IPv4 frame
[   12.53] [  INFO] ppp0: ipv4_in: WARNING: Not destined for us; not 
forwardable... Dropping!
[   24.54] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=28 
offset=0
[   24.54] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=0 
len=28 next=0x24002a50
[   24.55] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 0 bytes 
new len=0
[   24.55] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=170 
le

Re: pgBackRest on old installation

2023-11-20 Thread KK CHN
Thank you. It worked out well. But a basic doubt: is storing the DB
superuser password in .pgpass advisable? What other options do we have?
#su postgres
bash-4.2$ cd

bash-4.2$ cat .pgpass
*:*:*:postgres:your_password
bash-4.2$
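
For context, the two approaches I can think of, as a minimal sketch (assuming
pgBackRest runs as the OS user "postgres" and connects over the local socket;
please correct me if this is off):

# Option 1: keep .pgpass, but lock it down and scope the entry to the
# local socket directory and port instead of *:*:*
chmod 0600 ~postgres/.pgpass
# ~postgres/.pgpass
/tmp:5444:*:postgres:your_password

# Option 2: avoid a stored password entirely with peer authentication in
# pg_hba.conf (the OS user postgres maps to the DB role postgres), then reload
local   all   postgres   peer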


On Mon, Nov 20, 2023 at 4:16 PM Achilleas Mantzios - cloud <
a.mantz...@cloud.gatewaynet.com> wrote:

>
> On 11/20/23 12:31, KK CHN wrote:
>
> list,
>
> I am trying pgBackRest on an RHEL 7.6 and old EDB 10 database cluster( a
> legacy application.)
>
> I have installed pgbackrest through  package install on RHEL7.6
> But unable to get the basic stanza-creation working It throws an error.
>
>
> * /etc/pgbackrest.conf  as follows..*
> 
> [demo]
> pg1-path=/app/edb/as10/data
> pg1-port = 5444
> pg1-socket-path=/tmp
>
> [global]
>
> repo1-cipher-pass=sUAeceWoDffSz9Q/d8sWREHe+wte3uOO9lggn5/5mTkQEempvBxQk5UbxsrDzHbw
>
> repo1-cipher-type=aes-256-cbc
> repo1-path=/var/lib/pgbackrest
> repo1-retention-full=2
> backup-user=postgres
>
>
> [global:archive-push]
> compress-level=3
> #
>
>
>
> [root@dbs ~]# pgbackrest version
> pgBackRest 2.48
> [root@dbs ~]#
> #
>
> *Postgres conf as follows... *
>
> listen_addresses = '*'
> port = 5444
> unix_socket_directories = '/tmp'
>
> archive_command = 'pgbackrest --stanza=demo archive-push %p'
> archive_mode = on
> log_filename = 'postgresql.log'
> max_wal_senders = 3
> wal_level = replica
>
> #
>
>
> *ERROR  Getting as follows ..What went wrong here ??*
>
>
>  [root@dbs ~]# sudo -u postgres pgbackrest --stanza=demo
> --log-level-console=info stanza-create
> 2023-11-20 21:04:05.223 P00   INFO: stanza-create command begin 2.48:
> --exec-id=29527-bf5e2f80 --log-level-console=info
> --pg1-path=/app/edb/as10/data --pg1-port=5444 --pg1-socket-path=/tmp
> --repo1-cipher-pass= --repo1-cipher-type=aes-256-cbc
> --repo1-path=/var/lib/pgbackrest --stanza=demo
> WARN: unable to check pg1: [DbConnectError] unable to connect to
> 'dbname='postgres' port=5444 host='/tmp'': connection to server on socket
> "/tmp/.s.PGSQL.5444" failed: fe_sendauth: no password supplied
> ERROR: [056]: unable to find primary cluster - cannot proceed
>HINT: are all available clusters in recovery?
> 2023-11-20 21:04:05.224 P00   INFO: stanza-create command end: aborted
> with exception [056]
> [root@dbs ~]#
>
> It complains about the password.  I followed the below tutorial link, but
> no mention of password (Where to supply password, what parameter where ?)
> setting here ==> https://pgbackrest.org/user-guide-rhel.html
>
> This is about the user connecting to the db, in general, pgbackrest has to
> connect like any other app/user. So, change your .pgpass to contain something
> like the below on the top of the file :
>
> /tmp:5444:*:postgres:your_whatever_pgsql_password
>
> and retry
>
>
>
> Any hints welcome..  What am I missing here ??
>
> Best,
> Krishane
>
>
>
>
>
>
>
>


pgBackRest on old installation

2023-11-20 Thread KK CHN
list,

I am trying pgBackRest on an RHEL 7.6 and old EDB 10 database cluster( a
legacy application.)

I have installed pgbackrest through a package install on RHEL 7.6,
but I am unable to get the basic stanza creation working; it throws an error.


* /etc/pgbackrest.conf  as follows..*

[demo]
pg1-path=/app/edb/as10/data
pg1-port = 5444
pg1-socket-path=/tmp

[global]
repo1-cipher-pass=sUAeceWoDffSz9Q/d8sWREHe+wte3uOO9lggn5/5mTkQEempvBxQk5UbxsrDzHbw

repo1-cipher-type=aes-256-cbc
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2
backup-user=postgres


[global:archive-push]
compress-level=3
#



[root@dbs ~]# pgbackrest version
pgBackRest 2.48
[root@dbs ~]#
#

*Postgres conf as follows... *

listen_addresses = '*'
port = 5444
unix_socket_directories = '/tmp'

archive_command = 'pgbackrest --stanza=demo archive-push %p'
archive_mode = on
log_filename = 'postgresql.log'
max_wal_senders = 3
wal_level = replica

#


*The ERROR I am getting is as follows. What went wrong here?*


 [root@dbs ~]# sudo -u postgres pgbackrest --stanza=demo
--log-level-console=info stanza-create
2023-11-20 21:04:05.223 P00   INFO: stanza-create command begin 2.48:
--exec-id=29527-bf5e2f80 --log-level-console=info
--pg1-path=/app/edb/as10/data --pg1-port=5444 --pg1-socket-path=/tmp
--repo1-cipher-pass= --repo1-cipher-type=aes-256-cbc
--repo1-path=/var/lib/pgbackrest --stanza=demo
WARN: unable to check pg1: [DbConnectError] unable to connect to
'dbname='postgres' port=5444 host='/tmp'': connection to server on socket
"/tmp/.s.PGSQL.5444" failed: fe_sendauth: no password supplied
ERROR: [056]: unable to find primary cluster - cannot proceed
   HINT: are all available clusters in recovery?
2023-11-20 21:04:05.224 P00   INFO: stanza-create command end: aborted with
exception [056]
[root@dbs ~]#

It complains about the password. I followed the tutorial link below, but it makes
no mention of a password (where do I supply the password, and in which parameter?)
==> https://pgbackrest.org/user-guide-rhel.html


Any hints welcome..  What am I missing here ??

Best,
Krishane


capacity planning question

2023-10-30 Thread KK CHN
Hi,



I need to set up infrastructure for a data analytics / live video
stream analytics application using big data and analytics technology.


The data is currently stored as structured data (no video
streaming) in a Postgres database. (It is an emergency call handling solution.
The database stores caller info (address, mobile number, location
coordinates), emergency category metadata and dispatch information
for rescue vehicles, and rescue vehicle location updates (lat,
long) every 30 seconds.)



Input 1: I have to do analytics on this data (say 600 GB; that is the size it
has grown to over the last 2 years from the initial setup) and develop an
analytical application (using Python and data analytics libraries, displaying
the results and analytical predictions through a dashboard application).


Query 1. How much compute (GPU/CPU cores) and memory are required for
this analytical application? And is any specific type of storage (in-memory,
like Redis) required, etc., which I have to provision for this kind of
application processing? Any hints are most welcome. If any more input is
required, let me know and I can provide it if available.


Input 2

In addition to the above, I have to do video analytics from body-worn cameras
of police personnel and from drone surveillance:

 videos from emergency sites and patrol vehicles (from a mobile tablet
device over 5G), live streaming of incident locations for a few minutes (say 3
to 5 minutes of live streaming for each incident). There are 50 drones, 500
emergency rescue service vehicles, and 300 body-worn camera personnel, and
roughly 5000 emergency incidents happen per day, of which at least 1000
incidents need video streaming for a duration of 4
to 5 minutes each.


Query 2. What kind of computing resources (GPUs/CPUs, and how many cores),
how much RAM, and what storage solutions do I have to deploy? In-memory
(Redis or similar) or any other specific
data storage mechanisms? Any hints much appreciated.
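
A rough back-of-envelope from my side, in case it helps frame the sizing
(assuming roughly 4 Mbit/s per HD live stream, which is only my guess): 1000
streamed incidents/day x ~300 seconds x 4 Mbit/s is about 150 MB per incident,
i.e. roughly 150 GB/day of video if everything is retained; and at 1000
streamed incidents/day (about 0.7 per minute) of 4 to 5 minutes each, the
average concurrency works out to only about 3-4 simultaneous streams, though
peak hours will be several times that. The structured side is small by
comparison: 500 vehicles reporting every 30 seconds is only about 17
inserts/second into Postgres.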


Best,

Krishane


Re: pgBackRest for a 50 TB database

2023-10-03 Thread KK CHN
Greetings,
Happy to hear you successfully performed a pgBackRest backup of a 50 TB DB. Out of
curiosity I would like to know your infrastructure settings.

1. What connectivity protocol and bandwidth did you use for your backend
storage? Is it iSCSI, FC, FCoE or GbE? What is the exact reason for
the 26 hours it took in the best case? What factors might reduce the 26 hours to
much less, say 10 hours or so, for a 50 TB DB to reach the backup destination?
What should be fine-tuned or deployed for better performance?

2. You mentioned that you are running the DB on a 2-socket, 18-core-per-socket
processor = 36 physical cores. Is it dedicated server hardware, used entirely
for the 50 TB database alone?
I ask because nowadays we mostly run DB servers on VMs in
virtualized environments. So I would like to know: are all 36 physical cores
and the associated RAM utilized by your 50 TB database server, or are there
vacant CPU cores / free RAM on those server machines?

3. What kind of connectivity/bandwidth did you establish between the DB server
and the storage backend? (I want to know the server NIC card details, the
connectivity channel protocol/bandwidth, and the connecting switch spec from the DB
server to the storage backend (NAS in this case, right?).)

Could you share the recommendations / details from your case? Because I also
need to perform such a pgBackRest trial from a production DB to
a suitable storage device (most likely DELL Unity unified storage).

Any inputs are most welcome.

Thanks,
Krishane
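
If I work backwards from the numbers you posted below (please correct my
arithmetic): 38248.9 GB read in roughly 26 hours is about 400 MB/s sustained
from the data volume, while the 6222 GB actually written to the repo over the
same window is only about 67 MB/s, so the time seems dominated by reading and
compressing the cluster rather than by the write path to the storage. The fact
that dropping process-max from 30 to 10 stretched the run from 26 to 37 hours
(rather than roughly tripling it) also suggests some shared resource, most
likely the read throughput of the data volume, becomes the limit well before
30 processes.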

On Tue, Oct 3, 2023 at 12:14 PM Abhishek Bhola <
abhishek.bh...@japannext.co.jp> wrote:

> Hello,
>
> As said above, I tested pgBackRest on my bigger DB and here are the
> results.
> Server on which this is running has the following config:
> Architecture:  x86_64
> CPU op-mode(s):32-bit, 64-bit
> Byte Order:Little Endian
> CPU(s):36
> On-line CPU(s) list:   0-35
> Thread(s) per core:1
> Core(s) per socket:18
> Socket(s): 2
> NUMA node(s):  2
>
> Data folder size: 52 TB (has some duplicate files since it is restored
> from tapes)
> Backup is being written on to DELL Storage, mounted on the server.
>
> pgbackrest.conf with following options enabled
> repo1-block=y
> repo1-bundle=y
> start-fast=y
>
>
> 1. *Using process-max: 30, Time taken: ~26 hours*
> full backup: 20230926-092555F
> timestamp start/stop: 2023-09-26 09:25:55+09 / 2023-09-27
> 11:07:18+09
> wal start/stop: 00010001AC0E0044 /
> 00010001AC0E0044
> database size: 38248.9GB, database backup size: 38248.9GB
> repo1: backup size: 6222.0GB
>
> 2. *Using process-max: 10, Time taken: ~37 hours*
>  full backup: 20230930-190002F
> timestamp start/stop: 2023-09-30 19:00:02+09 / 2023-10-02
> 08:01:20+09
> wal start/stop: 00010001AC0E004E /
> 00010001AC0E004E
> database size: 38248.9GB, database backup size: 38248.9GB
> repo1: backup size: 6222.0GB
>
> Hope it helps someone to use these numbers as some reference.
>
> Thanks
>
>
> On Mon, Aug 28, 2023 at 12:30 AM Abhishek Bhola <
> abhishek.bh...@japannext.co.jp> wrote:
>
>> Hi Stephen
>>
>> Thank you for the prompt response.
>> Hearing it from you makes me more confident about rolling it to PROD.
>> I will have a discussion with the network team once about and hear what
>> they have to say and make an estimate accordingly.
>>
>> If you happen to know anyone using it with that size and having published
>> their numbers, that would be great, but if not, I will post them once I set
>> it up.
>>
>> Thanks for your help.
>>
>> Cheers,
>> Abhishek
>>
>> On Mon, Aug 28, 2023 at 12:22 AM Stephen Frost 
>> wrote:
>>
>>> Greetings,
>>>
>>> * Abhishek Bhola (abhishek.bh...@japannext.co.jp) wrote:
>>> > I am trying to use pgBackRest for all my Postgres servers. I have
>>> tested it
>>> > on a sample database and it works fine. But my concern is for some of
>>> the
>>> > bigger DB clusters, the largest one being 50TB and growing by about
>>> > 200-300GB a day.
>>>
>>> Glad pgBackRest has been working well for you.
>>>
>>> > I plan to mount NAS storage on my DB server to store my backup. The
>>> server
>>> > with 50 TB data is using DELL Storage underneath to store this data
>>> and has
>>> > 36 18-core CPUs.
>>>
>>> How much free CPU capacity does the system have?
>>>
>>> > As I understand, pgBackRest recommends having 2 full backups and then
>>> > having incremental or differential backups as per requirement. Does
>>> anyone
>>> > have any reference numbers on how much time a backup for such a DB
>>> would
>>> > usually take, just for reference. If I take a full backup every Sunday
>>> and
>>> > then incremental backups for the rest of the week, I believe the
>>> > incremental backups should not be a problem, but the full backup every
>>> > Sunday might not finish in time.
>>>
>>> pgBackRest scales extremely well- what's going to matter here is how
>>> much you can 

Re: [webkit-gtk] Fix CVE-2023-32435 for webkitgtk 2.38.6

2023-09-06 Thread 不会弹吉他的KK
On Wed, Sep 6, 2023 at 9:46 PM Michael Catanzaro 
wrote:

> On Wed, Sep 6 2023 at 04:23:17 PM +0800, 不会弹吉他的KK
>  wrote:
> > My question is
> > 1. Does webkitgtk 2.38.6 is vulnerable to CVE-2023-32435?
>
> No clue, sorry.
>
> > 2. If YES, how to deal the patches with the 2 new files? If just
> > ignore and only patch file
> > Source/JavaScriptCore/wasm/WasmSectionParser.cpp, could
> > CVE-2023-32435 be fixed for 2.38.6, please?
>
> Patching just that one file is what I would do if tasked with
> backporting this fix.

OK.

That said, keep in mind that only 10-20% of our
> security vulnerabilities receive CVEs, so just patching CVEs is not
> sufficient to provide a secure version of WebKitGTK. The 2.38 branch is
> no longer secure and you should try upgrading to 2.42. (I would skip
> 2.40 at this point, since that branch will end next week when 2.42.0 is
> released.)
>
For the Yocto project, which I am working on, packages (recipes) can NOT be
given a major version upgrade on released Yocto products/branches. So we still
have to fix these kinds of CVEs. But on the master branch, webkitgtk will be
upgraded as soon as it is released.

Thanks a lot.
Kai

>
> Michael
>
>
>


[webkit-gtk] Fix CVE-2023-32435 for webkitgtk 2.38.6

2023-09-06 Thread 不会弹吉他的KK
Hi All,
CVE-2023-32435 has been fixed in webkitgtk 2.40.0. According to
https://bugs.webkit.org/show_bug.cgi?id=251890, the commit is at
https://github.com/WebKit/WebKit/commit/50c7aaec2f53ab3b960f1b299aad5009df6f1967
.
It patches 3 files, but 2 of them are created/added in 2.40.0 and do NOT
exist in 2.38.6:
* Source/JavaScriptCore/wasm/WasmAirIRGenerator64.cpp
* Source/JavaScriptCore/wasm/WasmAirIRGeneratorBase.h

My question is
1. Is webkitgtk 2.38.6 vulnerable to CVE-2023-32435?
2. If YES, how should the patches touching the 2 new files be handled? If we just
ignore them and only patch Source/JavaScriptCore/wasm/WasmSectionParser.cpp,
would CVE-2023-32435 be fixed for 2.38.6?

Regards,
Kai


Re: [webkit-gtk] Webkit bugzilla ID access

2023-08-31 Thread 不会弹吉他的KK
Hi Michael,

Thanks a lot!.

Kai

On Wed, Aug 30, 2023 at 11:42 PM Michael Catanzaro 
wrote:

>
> Hi, see: https://commits.webkit.org/260455@main
>
>
>


Re: [webkit-gtk] Webkit bugzilla ID access

2023-08-29 Thread 不会弹吉他的KK
Hi MIchael,

Would you like to share the fix commit of CVE-2023-23529, please? It is
handled by https://bugs.webkit.org/show_bug.cgi?id=251944, which is still
not public.

Sorry for the duplicate email; the previous one was rejected by the mailing list.

Thanks,
Kai

On Wed, May 31, 2023 at 10:17 PM Michael Catanzaro 
wrote:

>
> Hi, the bugs are private. I can give you the mappings between bug ID
> and fix commit, though:
>
> 248266 - https://commits.webkit.org/258113@main
> 245521 - https://commits.webkit.org/256215@main
> 245466 - https://commits.webkit.org/255368@main
> 247420 - https://commits.webkit.org/256519@main
> 246669 - https://commits.webkit.org/255960@main
> 248615 - https://commits.webkit.org/262352@main
> 250837 - https://commits.webkit.org/260006@main
>
> That said, I don't generally recommend backporting fixes yourself
> because (a) it can become pretty difficult as time goes on, and (b)
> only a tiny fraction of security fixes receive CVE identifiers (maybe
> around 5%). So I highly recommend upgrading to WebKitGTK 2.40.2.
> WebKitGTK maintains API and ABI stability to the greatest extent
> possible in order to encourage safe updates.
>
> Michael
>
>


Re: DB Server slow down & hang during Peak hours of Usage

2023-08-08 Thread KK CHN
On Tue, Aug 8, 2023 at 5:49 PM Marc Millas  wrote:

> also,
> checkpoint setup are all default values
>
> you may try to
> checkpoint_completion_target = 0.9
> checkpoint_timeout = 15min
> max_wal_size = 5GB
>
> and, as said in the previous mail, check the checkpoint logs
>
> Also, all vacuum and autovacuum values are defaults
> so, as autovacuum_work_mem = -1
> the autovacuum processes will use the 4 GB setuped by maintenance_work_mem
> = 4096MB
> as there are 3 launched at the same time, its 12 GB "eaten"
> which doesn't look like a good idea, so set
> autovacuum_work_mem = 128MB
>
> also pls read the autovacuum doc for your version (which is ?) here for
> postgres 12:
> https://www.postgresql.org/docs/12/runtime-config-autovacuum.html
>
>
>
> Marc MILLAS
> Senior Architect
> +33607850334
> www.mokadb.com
>
>
>
> On Tue, Aug 8, 2023 at 1:59 PM Marc Millas  wrote:
>
>> Hello,
>> in the postgresql.conf joined, 2 things (at least) look strange:
>> 1) the values for background writer are the default values, fit for a
>> server with a limited writes throughput.
>> you may want to increase those, like:
>> bgwriter_delay = 50ms
>> bgwriter_lru_maxpages = 400
>> bgwriter_lru_multiplier = 4.0
>> and check the checkpoint log to see if there are still backend processes
>> writes.
>>
>> 2) work_mem is set to 2 GB.
>> so, if 50 simultaneous requests use at least one buffer for sorting,
>> joining, ..., you will consume 100 GB of RAM
>> this value seems huge for the kind of config/usage you describe.
>> You may try to set work_mem to 100 MB and check what's happening.
>>
>> Also check the logs, postgres tells his life there...
>>
>>
>>
>>
>>
>> Marc MILLAS
>> Senior Architect
>> +33607850334
>> www.mokadb.com
>>
>>
>>
Thank you all for your time and the valuable inputs to fix the issue. Let
me tune the conf parameters as advised, and I will get back with the results and
log outputs.
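
To follow up on the checkpoint suggestion, would this be the right thing to
capture (a minimal sketch, assuming PostgreSQL 12 and superuser access)?

-- enable checkpoint logging, then reload
ALTER SYSTEM SET log_checkpoints = on;
SELECT pg_reload_conf();

-- if buffers_backend is large compared to buffers_checkpoint + buffers_clean,
-- backends are writing pages themselves and the bgwriter settings need raising
SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, buffers_backend
  FROM pg_stat_bgwriter;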

Krishane

>
>> On Mon, Aug 7, 2023 at 3:36 PM KK CHN  wrote:
>>
>>> List ,
>>>
>>> *Description:*
>>>
>>> Maintaining a DB Server Postgres and with a lot of read writes to this
>>> Server( virtual machine running on  ESXi 7 with CentOS 7) .
>>>
>>> ( I am not sure how to get the read / write counts or required IOPS or
>>> any other parameters for you. If  you point our  I can execute those
>>> commands and get the data. )
>>>
>>> Peak hours  say 19:00 Hrs to 21:00 hrs it hangs ( The application is an
>>> Emergency call response system  writing many  Emergency Response vehicles
>>> locations coordinates to the DB every 30 Seconds and every emergency call
>>> metadata (username, phone number, location info and address of the caller
>>> to the DB for each call)
>>>
>>> During these hours  the system hangs and the  Application ( which shows
>>> the location of the vehicles on a  GIS map hangs ) and the CAD machines
>>> which connects to the system hangs as those machines can't  connect to the
>>> DB and get data for displaying the caller information to the call taking
>>> persons working on them. )
>>>
>>> *Issue : *
>>> How to trace out what makes this DB  hangs and make it slow  and how to
>>> fix it..
>>>
>>> *Resource poured on the system :*
>>>
>>> *64 vCPUs  allocate ( Out of a host machine comprised of 2 processor
>>> slots of 20 cores each with Hyper Threading, intel xeon 2nd Gen, CPU usage
>>> show 50 % in vCentre Console), and RAM 64 GB allocated ( buy usage always
>>> showing around 33 GB only ) *
>>>
>>> *Query :*
>>>
>>> How to rectify the issues that makes the DB server underperforming and
>>> find a permanent fix for this slow down issue*. *
>>>
>>> *Attached the  Postgres.conf file here for reference .*
>>>
>>> *Any more information required I can share for analysis to fix the
>>> issue. *
>>>
>>>
>>> *Krishane *
>>>
>>


Re: My 1st TABLESPACE

2023-08-08 Thread KK CHN
On Mon, Aug 7, 2023 at 5:47 PM Amn Ojee Uw  wrote:

> Thanks Negora.
>
> Makes sense, I will check it out.
>
> On 8/7/23 1:48 a.m., negora wrote:
>
> Hi:
>
> Although the "postgres" user owns the "data" directory, Has he access to
> the whole branch of directories? Maybe the problem is that he can't reach
> the "data" directory.
>
> Regards.
>
>
> On 07/08/2023 07:43, Amn Ojee Uw wrote:
>
> I'd like to create a TABLESPACE, so, following this web page
> ,  I
> have done the following :
>
> *mkdir
> /home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data*
>
> *sudo chown postgres:postgres
> /home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data*
>
> *sudo -u postgres psql*
>
> *\du*
> * arbolone| Cannot login  | {}*
> * chispa
> || {prosafe}*
> * workerbee | Superuser, Create DB| {arbolone}*
> * jme
> || {arbolone}*
> * postgres| Superuser, Create role, Create DB, Replication, Bypass RLS
> | {}*
> * prosafe  | Cannot login  | {}*
>
> *CREATE TABLESPACE jmetablespace OWNER jme LOCATION
> '/home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data';*
>
>
Here owner is jme   and the  data dir  you created must have owner jme..

> The *CREATE **TABLESPACE* schema throws this error message :
>
> *ERROR:  could not set permissions on directory
> "/home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data":
> Permission denied*
>
> I have followed the web page to the best of my abilities, and AFAIK, the
> postgres user owns the folder '*data*'.
>
> I know that something is missing, where did I go wrong and how can I
> resolve this issue?
>
>
> Thanks in advance.
>
>
>


DB Server slow down & hang during Peak hours of Usage

2023-08-07 Thread KK CHN
List ,

*Description:*

I am maintaining a Postgres DB server with a lot of reads and writes to this
server (a virtual machine running on ESXi 7 with CentOS 7).

(I am not sure how to get the read/write counts, required IOPS or any
other parameters for you. If you point them out, I can execute those commands
and get the data.)

During peak hours, say 19:00 hrs to 21:00 hrs, it hangs. (The application is an
emergency call response system writing many emergency response vehicles'
location coordinates to the DB every 30 seconds, and every emergency call's
metadata (username, phone number, location info and address of the caller)
to the DB for each call.)

During these hours the system hangs: the application (which shows the
location of the vehicles on a GIS map) hangs, and the CAD machines which
connect to the system hang, as those machines can't connect to the DB and
get data for displaying the caller information to the call-taking persons
working on them.

*Issue : *
How do we trace out what makes this DB hang and become slow, and how do we fix
it?

*Resources poured into the system:*

*64 vCPUs allocated (out of a host machine comprising 2 processor sockets
of 20 cores each with Hyper-Threading, Intel Xeon 2nd Gen; CPU usage shows
50% in the vCenter console), and 64 GB RAM allocated (but usage always
shows only around 33 GB).*

*Query :*

How do we rectify the issues that make the DB server underperform, and find
a permanent fix for this slowdown issue?

*Attached the  Postgres.conf file here for reference .*

*Any more information required I can share for analysis to fix the issue. *


*Krishane *


postgresql(1).conf
Description: Binary data


Re: Backup Copy of a Production server.

2023-08-07 Thread KK CHN
On Mon, Aug 7, 2023 at 10:49 AM Ron  wrote:

> On 8/7/23 00:02, KK CHN wrote:
>
> List,
>
> I am in need to copy a production PostgreSQL server  data( 1 TB)  to  an
> external storage( Say USB Hard Drive) and need to set up a backup server
> with this data dir.
>
> What is the trivial method to achieve this ??
>
> 1. Is Sqldump an option at a production server ?? (  Will this affect the
> server performance  and possible slowdown of the production server ? This
> server has a high IOPS). This much size 1.2 TB will the Sqldump support ?
> Any bottlenecks ?
>
>
> Whether or not there will be bottlenecks depends on how busy (CPU and disk
> load) the current server is.
>
>
> 2. Is copying the data directory from the production server to an external
> storage and replace the data dir  at a  backup server with same postgres
> version and replace it's data directory with this data dir copy is a viable
> option ?
>
>
> # cp  -r   ./data  /media/mydb_backup  ( Does this affect the Production
> database server performance ??)   due to the copy command overhead ?
>
>
> OR  doing a WAL Replication Configuration to a standby is the right method
> to achieve this ??
>
>
> But you say you can't establish a network connection outside the DC.  ( I
> can't do for a remote machine .. But I can do  a WAL replication to another
> host in the same network inside the DC. So that If I  do a sqldump  or Copy
> of Data dir of the standby server it won't affect the production server, is
> this sounds good  ?  )
>
>
>  This is to take out the database backup outside the Datacenter and our DC
> policy won't allow us to establish a network connection outside the DC to a
> remote location for WAL replication .
>
>
> If you're unsure of what Linux distro & version and Postgresql version
> that you'll be restoring the database to, then the solution is:
> DB=the_database_you_want_to_backup
> THREADS=
> cd $PGDATA
> cp -v pg_hba.conf postgresql.conf /media/mydb_backup
> cd /media/mydb_backup
> pg_dumpall --globals-only > globals.sql
>

What is the relevance of --globals-only, and what does this produce: ${DB}.log
// or is it ${DB}.sql ?

pg_dump --format=d --verbose --jobs=$THREADS $DB &> ${DB}.log  // this .log,
I couldn't get an idea of what it means
>
> If you're 100% positive that the system you might someday restore to is
> *exactly* the same distro & version, and Postgresql major version, then
> I'd use PgBackRest.
>
> --
> Born in Arizona, moved to Babylonia.
>


Backup Copy of a Production server.

2023-08-06 Thread KK CHN
List,

I need to copy a production PostgreSQL server's data (1 TB) to
external storage (say, a USB hard drive) and set up a backup server
with this data dir.

What is the simplest method to achieve this?

1. Is an SQL dump an option on a production server? (Will this affect the
server's performance and possibly slow down the production server? This
server has high IOPS.) Will an SQL dump handle this size, 1.2 TB?
Any bottlenecks?

2. Is copying the data directory from the production server to external
storage, and then replacing the data directory of a backup server running the
same Postgres version with this copy, a viable
option?


# cp -r ./data /media/mydb_backup   (Does this affect the production
database server's performance, due to the copy command's overhead?)


OR is configuring WAL replication to a standby the right method
to achieve this?

 This is to take the database backup outside the datacenter, and our DC
policy won't allow us to establish a network connection from the DC to a
remote location for WAL replication.

Any hints most welcome ..
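
For option 2, my understanding is that a plain cp -r of a running data
directory will not give a consistent copy; something like pg_basebackup would
be needed instead. A rough sketch (host and user here are placeholders, please
correct me if this is the wrong direction):

pg_basebackup -h <primary_host> -U <replication_user> \
    -D /media/mydb_backup -Ft -z -Xs -P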

Thank you
Krishane


EDB to Postgres Migration

2023-07-12 Thread KK CHN
List,

Recently I have come to manage a few EDB instances running on the
EDB 10 version.

I am looking for an option for migrating all these EDB instances  to
Postgres Community edition.

1. What major steps / actions are involved (in a bird's-eye view) for a
successful migration to the Postgres community edition (from EDB 10 to
Postgres 14)?

2. What major challenges are involved? (Or any hurdles?)


Please enlighten me with your experience..

Any reference  links most welcome ..

PS: The EDB instances are live and in production. I can get a downtime window
(5 to 15 minutes maximum). Or is live porting and upgrading to Postgres
14 possible with minimal downtime?

Request your  guidance,
Krishane.


BI Reports and Postgres

2023-07-11 Thread KK CHN
List,
1. For generating BI reports, which  Databases are more suitable ( RDBMS
like Postgres  OR NoSQL like MongoDB ) ? Which is best? Why ?

2. In which scenarios and application contexts are NoSQL DBs like MongoDB et al.
useful? Or are NoSQLs losing their initial hype?

3. Could someone point out which BI report tools (open-source / free
software) are available for generating BI reports from Postgres?
 What does the community use?

4. For Generating BI reports does it make sense to keep your data in RDBMS
or do we need to port data to MongoDB or similar NoSQLs ?

Any hints are much appreciated.
Krishane


PostgreSQL Server Hang​

2023-06-21 Thread KK CHN
*Description of System: *
1. We are running a Postgres server (version 12, on CentOS 6) for an
emergency call attending and vehicle tracking system; the vehicles are fitted
with mobile devices running navigation apps for the emergency service.

2. Every 30 seconds the vehicles send location coordinates (lat/long), which
are stored in the DB server at the emergency call center cum control
room.

*Issue: *
We are facing an issue where the database hangs and becomes unresponsive
to the applications that try to connect to it. So eventually the applications
are also brought to their knees.


*Mitigation done so far: *The mitigation we have done is increasing the
resources: CPU (vCPUs) from 32 to 64 (not sure if this is the right
approach, maybe a dumb idea, but the hanging issue was rectified for the
time being).

RAM was increased from 32 GB to 48 GB, but it was observed that RAM usage was
always below 32 GB (so we increased the RAM foolishly!).

*Question: *
How do we optimize and fine-tune this database performance issue? Definitely,
pouring in resources like the above is not a solution.

What should we check to find the root cause and the reasons for the performance
bottleneck?
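
If it helps, the next time it hangs I can capture what the sessions are doing
and waiting on, for example (assuming the standard pg_stat_activity view in
PostgreSQL 12):

SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS runtime, left(query, 80) AS query
  FROM pg_stat_activity
 WHERE state <> 'idle'
 ORDER BY runtime DESC NULLS LAST;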

Thank you,
Krishane


*Additional Inputs If required: *

*##*
The DB machine   is running on a CentOS6 platform ..  Only a single Database
Instance  running as a Virtual Machine.

The database server also stores call-center-related data (call arrival and
dispatch timestamps and short messages to and from around 300 desktop
application operators), plus data from the mobile tablets fitted on vehicles
with the VTS app installed. The vehicle locations are continuously stored
into the database every 30 seconds.

Voice calls from callers in an emergency, each about 3 MB in size, are not
stored in the database but as files in an NFS-mounted folder, and the
database stores only references to those voice call files for future
reference (the call volume is around 1 lakh, i.e. 100,000, calls/day). Only
metadata related to the calls is stored in the DB: caller name, caller number,
lat/long data of the caller, and short descriptions of the caller's situation,
which are less than 200 characters x 3 messages per call.

This database is also used for producing daily reports on the actions taken by
call takers/dispatchers, vehicle tracking reports, etc.
 Around 2000 vehicles in the fleet are fitted with mobile tablets running the
emergency navigation apps.

The database grows by roughly 1 GB/day.





Re: Doris build error

2023-05-05 Thread zy-kk
If your compiled code is the Doris 1.2 branch, please use the 1.2 version of 
the docker development image: apache/doris:build-env-for-1.2

> On 5 May 2023, at 19:25, 郑高峰  wrote:
> 
> Environment: CentOS,
> build environment: the latest Docker image pulled,
> source code: apache-doris-1.2.4.1-src
> 
> The error message is:
> /root/apache-doris-1.2.4.1-src/be/src/vec/core/field.h: In member function 
> 'doris::Status doris::ScrollParser::fill_columns(const 
> doris::TupleDescriptor*, 
> std::vector  >&, doris::MemPool*, bool*, const std::map std::__cxx11::basic_string >&)':
> /root/apache-doris-1.2.4.1-src/be/src/vec/core/field.h:633:9: error: 'val' 
> may be used uninitialized in this function [-Werror=maybe-uninitialized]
> 633 | new (&storage) StorageType(std::forward(x));
> | ^~
> /root/apache-doris-1.2.4.1-src/be/src/exec/es/es_scroll_parser.cpp:627:30: 
> note: 'val' was declared here
> 627 | __int128 val;
> 
> 
> 
>   
> 郑高峰
> zp...@163.com
>  
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@doris.apache.org
> For additional commands, e-mail: dev-h...@doris.apache.org



Re: [webkit-gtk] How to fix CVEs of webkitgtk 2.36.x

2023-03-26 Thread 不会弹吉他的KK
On Wed, Mar 22, 2023 at 7:01 PM Michael Catanzaro 
wrote:

> On Wed, Mar 22 2023 at 11:26:56 AM +0200, Adrian Perez de Castro
>  wrote:
> > Recently advisories published by Apple include the Bugzilla issue
> > numbers
> > (e.g. [1]), so with some work you can find out which commits
> > correspond to
> > the fixes.
>
> It finally occurs to me that since Apple now publishes the bug
> information, we could start publishing revision information. We'd want
> to fix [1] first.
>

Hi  Adrián and Michael,

Thanks. I'll try to do more search for the existing CVEs.


> > WebKitGTK 2.38.x is backwards compatible with 2.36.x, you can safely
> > update
> > without needing to change applications. In general, we always keep
> > the API and
> > ABI backwards compatible.
>
> For avoidance of doubt, WebKitGTK 2.40.x is backwards-compatible as
> well and that will remain true indefinitely, as long as you continue to
> build the same API version [2]. Adrian might be planning one last
> 2.38.x release, but it's really time to move on to 2.40.
>
> On rare occasions, an upgrade might affect the behavior of particular
> API functionality within the same API version, but this is unusual and
> is avoided whenever possible. I don't think any APIs broke between 2.36
> and 2.40, so that shouldn't be a problem for you this time. The goal is
> for upgrades to be as safe as possible.
>

Great. Your comments will be powerful evidence to upgrade webkitgtk on
Yocto lts release.

Thanks a lot.
Kai


> Michael
>
> [1] https://bugs.webkit.org/show_bug.cgi?id=249672
> [2]
>
> https://blogs.gnome.org/mcatanzaro/2023/03/21/webkitgtk-api-for-gtk-4-is-now-stable/
>
>
>


[webkit-gtk] How to fix CVEs of webkitgtk 2.36.x

2023-03-21 Thread 不会弹吉他的KK
Hi All,

I am working on the Yocto project. In the last LTS Yocto release the version of
webkitgtk is 2.36.8, and there are more than 15 CVE issues for 2.36.8 so far.
I checked the git log and the "WebKitGTK and WPE WebKit Security Advisory"
pages, but I could only find out which CVE has been fixed in which version of
webkitgtk; I can NOT get the exact info about which commit(s) fix it. Is there
anywhere, or some web page, to get the specific fix/patch for a CVE, please?

And the second question: is webkitgtk 2.38.x backward compatible with
2.36.8? I compared the header files between 2.36.8 and 2.38.4, and it seems no
function is deleted and there is no interface change for existing functions;
only some functions are marked deprecated and some new functions are added.
Does that mean upgrading webkitgtk from 2.36.8 to 2.38.4 will not break
applications which depend on it, please?

Thanks a lot.
Kai


Re: [vpp-dev] Help:VPP-DPDK #acl_plugin #dpdk

2023-02-07 Thread kk
I read the official example given by DPDK. The two functions
"rte_ring_create" and "rte_mempool_create" should indeed be called after
rte_eal_init(). What is strange is that I have no problem calling these two
functions from within the VPP dpdk plugin itself.




[vpp-dev] Help:VPP-DPDK #acl_plugin #dpdk

2023-02-07 Thread kk
Hello everyone, I wrote a VPP plugin myself. I called the DPDK functions
"rte_ring_create" and "rte_mempool_create" in this plugin, and then it
prints:
"MEMPOOL: Cannot allocate tailq entry!
Problem getting send ring
RING: Cannot reserve memory for tailq
RING: Cannot reserve memory for tailq
"
Does anyone know what's going on?
And I found that I have no problem calling these functions in the
vpp-dpdk plugin itself.




Re: NEO6 GPS with Py PICO with micropython

2022-11-30 Thread KK CHN
List,

I just commented out the gpsModule.readline() call in the while loop (refer
to the link
https://microcontrollerslab.com/neo-6m-gps-module-raspberry-pi-pico-micropython/
).


while True:
    # gpsModule.readline()   <- this line commented out; the "GPS not found" message disappeared
    buff = str(gpsModule.readline())
    parts = buff.split(',')


The "GPS not found" error, which used to appear intermittently in the Python
output console for a few seconds at a time (say 7 to 8 seconds of printing the
line "GPS data not found"), now disappears.

 Any thoughts? How did commenting out that line make the "GPS data
not found" output vanish?

Krishane

On Wed, Nov 30, 2022 at 3:58 AM rbowman  wrote:

> On Tue, 29 Nov 2022 17:23:31 +0530, KK CHN wrote:
>
>
> > When I ran the program I am able to see the output of  latitude and
> > longitude in the console of thony IDE.  But  between certain intervals
> > of a few seconds  I am getting the latitude and longitude data ( its
> > printing GPS data not found ?? ) in the python console.
>
> I would guess the 8 seconds in
>
> timeout = time.time() + 8
>
> is too short. Most GPS receivers repeat a sequence on NMEA sentences and
> the code is specifically looking for $GPGGA. Add
>
> print(buff)
>
> to see the sentences being received. I use the $GPRMC since I'm interested
> in the position, speed, and heading. It's a different format but if you
> only want lat/lon you could decode it in a similar fashion as the $GPGGA.
>


NEO6 GPS with Py PICO with micropython

2022-11-29 Thread KK CHN
List ,
I am following this tutorial to get latitude and longitude data using a
NEO-6 GPS module and a Pi Pico to read the GPS data from the device.

I followed the code specified in this tutorial.
https://microcontrollerslab.com/neo-6m-gps-module-raspberry-pi-pico-micropython/

I have installed the Thonny IDE on my desktop (Windows PC) and ran the code
after all the devices were connected, with a USB cable connected to my PC.

When I ran the program I was able to see the latitude and
longitude output in the Thonny IDE console. But at certain intervals of a
few seconds I am not getting the latitude and longitude data (it prints
"GPS data not found"??) in the Python console.

The satellite count from the $GPGGA output shows 03,
and the "GPS data not found" message repeats randomly for intervals of seconds.
Any hints on why it is missing the GPS data (randomly)?

PS: I placed the GPS device outside my window and connected it to the PC
with a USB cable from the Pico module. The NEO-6 GPS device's light (red LED)
keeps blinking even while the "GPS data not found" messages appear in the Python
console.

Any hints ?? most welcome

Yours,
Krishane


Re: How to elegantly handle a lot of nested if-else logic in Flink SQL

2022-11-28 Thread macia kk
I would go with a UDF plus a configuration file: put the configuration file on HDFS and have the UDF read it. Whenever the configuration file on HDFS is updated, restart the job.

casel.chen  wrote on Thu, 24 Nov 2022 at 12:01:

> I have a Flink SQL job that needs to set one field's value based on different conditions over other field values, plus some nested if-else logic. This logic is not fixed; the business side adjusts it from time to time.
> I would like to ask how to handle nested if-else logic elegantly in Flink SQL. I have thought of using the Drools rules engine, called through a UDF. Is there a better way?
>
>


Re: Does a Tumble Window cause backpressure?

2022-10-20 Thread macia kk
https://img-beg-sg-1252771144.cos.ap-singapore.myqcloud.com/20221020144100.png
Look at this chart: when the window closes, backpressure appears and the upstream busy metric drops straight to 0; the upstream stops doing work.

https://img-beg-sg-1252771144.cos.ap-singapore.myqcloud.com/20221020152835.png
This one shows the upstream while it is consuming and processing normally.




macia kk  wrote on Thu, 20 Oct 2022 at 14:24:

> Hi  yidan
>
> What I mean is: suppose the upstream is processing data for minutes 1-10, and in minute 11 it hands a large batch to the sink while the upstream moves on to processing minutes 10-20. At that point the
> sink blocks because of the data volume, backpressure is fed back upstream, and the upstream slows down. But in fact, without the backpressure mechanism, the sink
> could finish writing gradually during minutes 10-20. The only difference is that it sends a backpressure signal, which slows the upstream processing. I am not sure whether I am understanding this correctly.
>
>
> The reason for emitting every 10 minutes is that the upstream has too much data, so I pre-aggregate with a window first; the traffic is currently close to 800 MB per second.
>
>
>
> Shammon FY  wrote on Thu, 20 Oct 2022 at 11:48:
>
>> If 10 minutes is a hard requirement but the keys are fairly spread out, it seems worth adding resources and increasing the parallelism, to reduce the amount of data each task emits.
>>
>> On Thu, Oct 20, 2022 at 9:49 AM yidan zhao  wrote:
>>
>> > That description contradicts itself: if the write-out speed cannot keep up and causes backpressure, then throttling the write-out speed would be an even bigger problem. But you do not need to worry about that, because you cannot control the write-out speed, only the write-out timing.
>> >
>> > The write-out timing is determined by the window end time and the watermark, so if you really want to solve this, you need to arrange the windows so they do not all fire on the same fixed 10-minute boundary.
>> >
>> > macia kk  wrote on Thu, 20 Oct 2022 at 00:57:
>> > >
>> > > I aggregate for 10 minutes and then emit. At the 10-minute mark, because so much data has accumulated, the write-out speed cannot keep up, causing backpressure, and then the upstream consumption and processing slow down.
>> > >
>> > > Would it be better to throttle the write-out speed and let it write gradually?
>> >
>>
>


Re: Does a Tumble Window cause backpressure?

2022-10-19 Thread macia kk
Hi  yidan

What I mean is: suppose the upstream is processing data for minutes 1-10, and in minute 11 it hands a large batch to the sink while the upstream moves on to processing minutes 10-20. At that point the sink
blocks because of the data volume, backpressure is fed back upstream, and the upstream slows down. But in fact, without the backpressure mechanism, the sink
could finish writing gradually during minutes 10-20. The only difference is that it sends a backpressure signal, which slows the upstream processing. I am not sure whether I am understanding this correctly.


The reason for emitting every 10 minutes is that the upstream has too much data, so I pre-aggregate with a window first; the traffic is currently close to 800 MB per second.



Shammon FY  wrote on Thu, 20 Oct 2022 at 11:48:

> If 10 minutes is a hard requirement but the keys are fairly spread out, it seems worth adding resources and increasing the parallelism, to reduce the amount of data each task emits.
>
> On Thu, Oct 20, 2022 at 9:49 AM yidan zhao  wrote:
>
> > That description contradicts itself: if the write-out speed cannot keep up and causes backpressure, then throttling the write-out speed would be an even bigger problem. But you do not need to worry about that, because you cannot control the write-out speed, only the write-out timing.
> >
> > The write-out timing is determined by the window end time and the watermark, so if you really want to solve this, you need to arrange the windows so they do not all fire on the same fixed 10-minute boundary.
> >
> > macia kk  wrote on Thu, 20 Oct 2022 at 00:57:
> > >
> > > I aggregate for 10 minutes and then emit. At the 10-minute mark, because so much data has accumulated, the write-out speed cannot keep up, causing backpressure, and then the upstream consumption and processing slow down.
> > >
> > > Would it be better to throttle the write-out speed and let it write gradually?
> >
>


Does a Tumble Window cause backpressure?

2022-10-19 Thread macia kk
I aggregate for 10 minutes and then emit. At the 10-minute mark, because so much data has accumulated, the write-out speed cannot keep up, causing backpressure, and then the upstream consumption and processing slow down.

Would it be better to throttle the write-out speed and let it write gradually?


Flink's large Hive dimension tables

2022-09-21 Thread macia kk
Hi
  Flink's Hive dimension tables are held in memory. Can they be put into state instead, so that using RocksDB would reduce the memory usage a bit?


Python code: brief

2022-07-26 Thread KK CHN
List ,

I am having difficulty understanding the code in this file. I am
unable to understand exactly what the code snippet is doing here.

https://raw.githubusercontent.com/CODARcode/MDTrAnal/master/lib/codar/oas/MDTrSampler.py

I am new to this type of scientific computing code, and it was written
by someone else. Due to a requirement, I would like to understand what these
lines of code do exactly. If someone could explain to me what the
code snippets in the code blocks do, it would be a great help.

Thanks in advance
Krish


Re: [users@httpd] site compromised and httpd log analysis

2022-07-06 Thread KK CHN
On Wed, Jul 6, 2022 at 8:33 AM Yehuda Katz  wrote:

> Your log doesn't start early enough. Someone uploaded a web shell (or
> found an existing web shell) to your server, possibly using an upload for
> that doesn't validate the input, then used that shell to run commands on
> your server.
>

Yes, that was not an old enough log.

Here is another old log  paste
https://zerobin.net/?a4d9f5b146676594#hkpTU0ljaG5W0GUNVEsaYqvffQilrXavBmbK+V9mzUw=


Here is another log which starts earlier than the previous ones, which may
help to investigate further.

I would consider your entire server to be compromised at this point since
> you have no record of what else the attacker could have done once they had
> a shell.
>
Yes, we took the server down and recreated the VM from an old backup. We also
informed the developer/maintainer about this simple.shell execution and the
need for regular patching of the PHP 7 version and the WordPress framework
they used for hosting.

I would like to know what other details / analysis we need to perform to
find out how the attacker got access, what time the backdoor was
installed, and which vulnerability they exploited.

I request your tips on how to investigate further, find the root cause of
this kind of attack, and prevent it in the future.
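
One hardening step I am considering, following the advice below about uploaded
files (a sketch only; the path assumes a WordPress-style uploads directory and
would need adjusting to the real docroot):

# vhost / httpd.conf: never execute scripts from the upload tree
<Directory "/var/www/html/wp-content/uploads">
    <FilesMatch "\.(php|phtml|php5|php7)$">
        Require all denied
    </FilesMatch>
    Options -ExecCGI
</Directory>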



Make sure that you do not allow users to upload files and then execute
> those files.
>
> - Y
>
> On Tue, Jul 5, 2022 at 9:53 PM KK CHN  wrote:
>
>> https://pastebin.com/YspPiWif
>>
>> One of the websites hosted  by a customer on our Cloud infrastructure was
>> compromised, and the attackers were able to replace the home page with
>> their banner html page.
>>
>> The log files output I have pasted above.
>>
>> The site compromised was PHP 7 with MySQL.
>>
>> From the above log, can someone point out what exactly happened and how
>> they are able to deface the home page.
>>
>> How to prevent these attacks ? What is the root cause of this
>> vulnerability  and how the attackers got access ?
>>
>> Any other logs or command line outputs required to trace back kindly let
>> me know what other details  I have to produce ?
>>
>> Kindly shed your expertise in dealing with these kind of attacks and
>> trace the root cause and prevention measures to block this.
>>
>> Regards,
>> Krish
>>
>>
>>


[users@httpd] site compromised and httpd log analysis

2022-07-05 Thread KK CHN
https://pastebin.com/YspPiWif

One of the websites hosted  by a customer on our Cloud infrastructure was
compromised, and the attackers were able to replace the home page with
their banner html page.

The log files output I have pasted above.

The site compromised was PHP 7 with MySQL.

From the above log, can someone point out what exactly happened and how
they were able to deface the home page?

How to prevent these attacks ? What is the root cause of this
vulnerability  and how the attackers got access ?

Any other logs or command line outputs required to trace back kindly let me
know what other details  I have to produce ?

Kindly shed your expertise in dealing with these kind of attacks and trace
the root cause and prevention measures to block this.

Regards,
Krish


[users@httpd] Defaced Website : Few forensic tips and help

2022-07-04 Thread KK CHN
List ,

https://pastebin.com/YspPiWif

One of our PHP websites was hacked on 3rd July 2022. I am attaching the httpd
access log contents in the above pastebin. I have hidden the original URL of
the website due to an SLA policy.

Can anybody point out from the logs what exactly allowed the attacker to
bring the site down?

Did the attacker use this PHP site to carry out the attack?

If any other logs or command-line outputs are needed, let me know and I will
share the required files. I am new to this area of forensic analysis and to
finding out the root cause of such attacks.

Kindly share some tips on finding out where the vulnerability is and how to
prevent it in future.

If any more inputs/details are required, keep me informed; I can share those too.

Regards,
Krish


[users@httpd] Slow web site response..PHP-8/CSS/Apache/

2022-06-23 Thread KK CHN
List,

I am facing slow response times for a hosted PHP 8 website. It takes 30
seconds to load the website fully. The application and database
(PostgreSQL) run separately on two virtual machines in an OpenStack cloud,
on the 10.184.x.221 and 10.184.y.221 networks
respectively.



When I use tools like GTmetrix and WebPageTest.org, they report that there are
render-blocking resources:

Resources are blocking the first paint of your page. Consider delivering
critical JS/CSS inline and deferring all non-critical JS/styles.
Learn how to improve this


Resources that *may* be contributing to render-blocking include:
URL                                     Transfer Size   Download Time
xxx.mysite.com/css/bootstrap.min.css    152KB           6.6s
xxx.mysite.com/css/style.css            14.2KB          5.9s
xxx.mysite.com/css/font/font.css        3.33KB          5.7s

Here bootstrap.min.css alone takes about 6 seconds TTFB, and fully loading
the website takes almost 24 more seconds, for a total of about 30 seconds to render.

https://pastebin.mozilla.org/SX3Cyhpg


The GTmetrix.com report also shows this issue:

The Critical Request Chains below show you what resources are loaded with a
high priority. Consider reducing the length of chains, reducing the
download size of resources, or deferring the download of unnecessary
resources to improve page load.
Learn how to improve this


Maximum critical path latency: *24.9s*



How can I overcome this issue? Is this a VM performance issue, a PHP
issue, an Apache issue, or an issue with the PHP application's connection
to the database backend?

Excuse me if this is an off-topic post for the httpd list. I hope a lot of
people here have experience to share on how to troubleshoot this, or on what
the root cause of such a slow site response might be.

Kindly shed some light here. Any hints on where to start are most welcome.

If any more data is needed, please let me know; I can share it.

Thanks in advance,
Krish.


[jira] [Created] (BEAM-14528) Data type and conversions are stricter for BQ sinks

2022-05-27 Thread KK Ramamoorthy (Jira)
KK Ramamoorthy created BEAM-14528:
-

 Summary: Data type and conversions are stricter for BQ sinks
 Key: BEAM-14528
 URL: https://issues.apache.org/jira/browse/BEAM-14528
 Project: Beam
  Issue Type: Bug
  Components: io-java-gcp
Reporter: KK Ramamoorthy


From [https://github.com/apache/beam/pull/17404], it seems like the timestamp 
conversion has been made stricter, such that the pipeline only accepts 
timestamps in the form “2022-05-09T18:04:59Z”.

 

Here are some formats that previously worked, but seem to no longer be accepted:

“2022-05-09 18:04:59”

“2022-05-09T18:04:59”

“2022-05-09T18:04:59.739-07:00”

 

Also from the same pull request, it seems that integers are no longer accepted in
fields marked as floats:

 

Unexpected value :0, type: class java.lang.Integer. Table field name: a, type: 
FLOAT64

 

Are these the desired behaviors? It’s different from the behavior I’ve observed 
when using the Streaming Inserts method.
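
As a possible workaround on our side, we are considering normalizing the
timestamps before they reach the sink instead of relying on the previous
lenient parsing. A rough sketch of that idea (shown in Python purely for
brevity; the helper name and the assumption that naive values are UTC are
ours, not Beam's):

from datetime import datetime, timezone

def to_strict_timestamp(value: str) -> str:
    # Accepts "2022-05-09 18:04:59", "2022-05-09T18:04:59",
    # "2022-05-09T18:04:59.739-07:00", etc., and emits the strict
    # "2022-05-09T18:04:59Z" form (sub-second precision is dropped).
    dt = datetime.fromisoformat(value.replace(" ", "T", 1))
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)   # assumption: naive means UTC
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

for s in ("2022-05-09 18:04:59",
          "2022-05-09T18:04:59",
          "2022-05-09T18:04:59.739-07:00"):
    print(s, "->", to_strict_timestamp(s))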



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (SPARK-38115) No spark conf to control the path of _temporary when writing to target filesystem

2022-02-15 Thread kk (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17492824#comment-17492824
 ] 

kk commented on SPARK-38115:


Is there any config to stop using FileOutputCommitter? We didn't
set any conf explicitly to use the committers.

Moreover, when overwriting an s3:// path I don't have a problem with
_temporary; the problem appears if our path uses s3a://.

I am just looking for conf/options that let me manage the temporary location as a
staging area and keep the target path as the primary output.

> No spark conf to control the path of _temporary when writing to target 
> filesystem
> -
>
> Key: SPARK-38115
> URL: https://issues.apache.org/jira/browse/SPARK-38115
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.4.8, 3.2.1
>Reporter: kk
>Priority: Minor
>  Labels: spark, spark-conf, spark-sql, spark-submit
>
> No default spark conf or param to control the '_temporary' path when writing 
> to filesystem.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-38115) No spark conf to control the path of _temporary when writing to target filesystem

2022-02-15 Thread kk (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17492785#comment-17492785
 ] 

kk commented on SPARK-38115:


Hello [~hyukjin.kwon], did you get a chance to look into this?

> No spark conf to control the path of _temporary when writing to target 
> filesystem
> -
>
> Key: SPARK-38115
> URL: https://issues.apache.org/jira/browse/SPARK-38115
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.4.8, 3.2.1
>Reporter: kk
>Priority: Minor
>  Labels: spark, spark-conf, spark-sql, spark-submit
>
> No default spark conf or param to control the '_temporary' path when writing 
> to filesystem.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-38115) No spark conf to control the path of _temporary when writing to target filesystem

2022-02-07 Thread kk (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17488445#comment-17488445
 ] 

kk commented on SPARK-38115:


Thanks [~hyukjin.kwon] for responding.

Basically I am trying to write data to S3 from a Spark DataFrame, and Spark will
use FileOutputCommitter for this.

[https://stackoverflow.com/questions/46665299/spark-avoid-creating-temporary-directory-in-s3]

Now my requirement is to either change the '_temporary' path so that it is written
to a different S3 bucket and then copied to the original S3 location, by setting
some Spark conf or parameter as part of the write step,

or

stop creating _temporary when writing to S3.

As we have a version-enabled bucket, the _temporary objects are retained as versions
even though they are no longer physically present.

Below is the write step:

df.coalesce(1).write.format('parquet').mode('overwrite').save('s3a://outpath')
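
What I have been experimenting with so far (a sketch only, not a definitive
answer: whether the fs.s3a.committer.* options take effect depends on the
hadoop-aws version and on the Spark build shipping the cloud committer
bindings) is switching from FileOutputCommitter to the S3A staging committer
and pointing its staging area at a separate location:

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("s3a-committer-test")
    # Use the S3A "directory" staging committer instead of FileOutputCommitter.
    .config("spark.hadoop.fs.s3a.committer.name", "directory")
    # Hypothetical staging location on the cluster-side filesystem.
    .config("spark.hadoop.fs.s3a.committer.staging.tmp.path", "/tmp/s3a_staging")
    # Fallback if FileOutputCommitter must stay: v2 renames task output into
    # the destination at task commit time instead of at job commit.
    .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
    .getOrCreate()
)

df = spark.read.parquet("s3a://inpath")   # placeholder input
df.coalesce(1).write.format("parquet").mode("overwrite").save("s3a://outpath")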

> No spark conf to control the path of _temporary when writing to target 
> filesystem
> -
>
> Key: SPARK-38115
> URL: https://issues.apache.org/jira/browse/SPARK-38115
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.4.8, 3.2.1
>Reporter: kk
>Priority: Minor
>  Labels: spark, spark-conf, spark-sql, spark-submit
>
> No default spark conf or param to control the '_temporary' path when writing 
> to filesystem.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-38115) No spark conf to control the path of _temporary when writing to target filesystem

2022-02-04 Thread kk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kk updated SPARK-38115:
---
Description: No default spark conf or param to control the '_temporary' 
path when writing to filesystem.  (was: There is default spark conf or param to 
control the '_temporary' path when writing to filesystem.)

> No spark conf to control the path of _temporary when writing to target 
> filesystem
> -
>
> Key: SPARK-38115
> URL: https://issues.apache.org/jira/browse/SPARK-38115
> Project: Spark
>  Issue Type: Improvement
>  Components: PySpark, Spark Core, Spark Shell, Spark Submit
>Affects Versions: 2.4.8, 3.2.1
>Reporter: kk
>Priority: Major
>  Labels: spark, spark-conf, spark-sql, spark-submit
>
> No default spark conf or param to control the '_temporary' path when writing 
> to filesystem.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



Re: [blink-dev] Re: Intent to extend the origin trial: WebTransport over HTTP/3

2022-01-20 Thread kk as
Hi,
Can you please let me know what transport protocol the Streams API uses
in WebTransport over HTTP/3 (QUIC)?
I am assuming the Datagram API uses the UDP protocol for transport. Can
you also please let me know what the difference in latency is
when you send data using the Streams API vs the Datagram API?


thanks

On Wednesday, October 27, 2021 at 10:34:56 PM UTC-7 Yutaka Hirano wrote:

> On Thu, Oct 28, 2021 at 2:38 AM Joe Medley  wrote:
>
>> Hi,
>>
>> Can I get some clarification?
>>
>> So this extends the origin trial through 96, but you don't know yet 
>> whether it will ship in 97? Is this correct?
>>
> We're shipping WebTransport over HTTP/3 in 97.
>
>
>> Joe
>> Joe Medley | Technical Writer, Chrome DevRel | jme...@google.com | 
>> 816-678-7195 <(816)%20678-7195>
>> *If an API's not documented it doesn't exist.*
>>
>>
>> On Mon, Oct 25, 2021 at 1:00 AM Mike West  wrote:
>>
>>> LGTM3.
>>>
>>> -mike
>>>
>>>
>>> On Thu, Oct 21, 2021 at 9:58 PM Daniel Bratell  
>>> wrote:
>>>
 For a gapless origin trial->shipping it is important to be sure we 
 don't overlook any feedback in the race to shipping. The normal process 
 has 
 gaps built in which form natural points to do that final polish based on 
 received feedback and that will be missing here.

 It does sound like the feedback has been positive though and that there 
 are no known problems that can't be fixed after shipping, and with that in 
 mind:

 LGTM2
 On 2021-10-21 21:53, Yoav Weiss wrote:

 Discussing amongst the API owners (Alex, Daniel, Rego and myself), this 
 is essentially a request for a gapless OT, only that the would-be-gap is 
 slightly longer than usual. Given the evidence 
 
  of 
 developer feedback presented in the I2S, that seems like a reasonable 
 request. 

 LGTM1 (as gapless OT requests require 3 LGTMs)

 On Monday, October 18, 2021 at 10:39:14 AM UTC+2 Yutaka Hirano wrote:

> Contact emails
>
> yhi...@chromium.org,vas...@chromium.org
>
> Explainer
>
> https://github.com/w3c/webtransport/blob/main/explainer.md
>
> Design docs/spec
>
> Specification: https://w3c.github.io/webtransport/#web-transport
>
>
> https://docs.google.com/document/d/1UgviRBnZkMUq4OKcsAJvIQFX6UCXeCbOtX_wMgwD_es/edit
>
> TAG review
>
> https://github.com/w3ctag/design-reviews/issues/669
>
>
> Summary
>
> WebTransport is an interface representing a set of reliable/unreliable 
> streams to a server. The interface potentially supports multiple 
> protocols, 
> but based on discussions on the IETF webtrans working group, we are 
> developing WebTransport over HTTP/3 which uses HTTP3 as the underlying 
> protocol.
>
> Note that we were developing QuicTransport a.k.a. WebTransport over 
> QUIC and we ran an origin trial M84 through M90. It uses the same 
> interface 
> WebTransport, but because of the protocol difference ("quic-transport" 
> vs. 
> "https") it is difficult for web developers to be confused by them.
>
> new WebTransport("quic-transport://example.com:9922")
>
> represents a WebTransport over QUIC connection, and
>
> new WebTransport("https://example.com:9922";)
>
> represents a WebTransport over HTTP/3 connection.
>
> Goals for experimentation
>
> We're shipping the API in M97 
> .
>  
> Twitch, one of our partners, wants to continue their experiment until the 
> API is fully shipped. I think this is a reasonable request given we 
> originally aimed to ship the feature in M96 but we missed the branch 
> point.
>
> The original goals follow:
>
> To see whether the API (and the implementation) is useful in various 
> circumstances.
>
> Our partners want to evaluate this API on various network 
> circumstances (i.e., lab environments are not enough) to see its 
> effectiveness.
>
> We also expect feedback for performance.
>
> Experimental timeline
>
> M95 and M96
>
> Ongoing technical constraints
>
> None
>
> Debuggability
>
> The devtools support is under development.
>
> Just like with regular HTTP/3 traffic, the detailed information about 
> the connection can be obtained via chrome://net-export interface.
>
> Will this feature be supported on all six Blink platforms (Windows, 
> Mac, Linux,
>
> Chrome OS, Android, and Android WebView)?
>
> Yes
>
> Is this feature fully tested by web-platform-tests 
> 
> ?

[s2putty-developers] RE:

2022-01-07 Thread kk




2021 is almost over. Are you worried about having no customers and no orders?
Have you considered using software to improve your work efficiency and develop overseas customers?

Global search-engine data, customs data, decision-maker analysis, social-platform search, WhatsApp customer phone search and other customer-acquisition methods help you solve the no-customer problem and improve the efficiency and quality of your work.

Email marketing plus WhatsApp marketing and other marketing channels improve work efficiency, quickly reach interested customers, and speed up conversion.

QQ: 1203046899
WeChat: 18617145735
Inquiries welcome
2021 is almost over. Are you worried about having no customers and no orders?
Have you considered using software to improve your work efficiency and develop overseas customers?

Global search-engine data, customs data, decision-maker analysis, social-platform search, WhatsApp customer phone search and other customer-acquisition methods help you solve the no-customer problem and improve the efficiency and quality of your work.

Email marketing plus WhatsApp marketing and other marketing channels improve work efficiency, quickly reach interested customers, and speed up conversion.

QQ: 2890057524
WeChat: 13247602337 (same as mobile number)
Inquiries welcome






___
s2putty-developers mailing list
s2putty-developers@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/s2putty-developers


unsubscribe

2021-11-15 Thread kk
unsubscribe



Re: [squid-dev] request for change handling hostStrictVerify

2021-11-02 Thread kk

On Monday, November 01, 2021 14:58 GMT, Alex Rousskov 
 wrote:
 On 11/1/21 3:59 AM, k...@sudo-i.net wrote:
> On Saturday, October 30, 2021 01:14 GMT, Alex Rousskov wrote:
>> >> AFAICT, in the majority of deployments, the mismatch between the
>> >> intended IP address and the SNI/Host header can be correctly handled
>> >> automatically and without creating serious problems for the user. Squid
>> >> already does the right thing in some cases. Somebody should carefully
>> >> expand that coverage to intercepted traffic. Frankly, I am somewhat
>> >> surprised nobody has done that yet given the number of complaints!

> Not sure what do you mean with "Somebody should carefully expand that
> coverage to intercepted traffic"?

I meant that somebody should create a high-quality pull request that
modifies Squid source code to properly address the problem you, AFAICT,
are suffering from. There is already code that handles similar
situations correctly.

Alex.

Ok Alex, I will try to implement it.
https://github.com/chifu1234/squid

-- 
Kevin Klopfenstein
Bellevuestrasse 103
3095 Spiegel, CH
sudo-i.net


smime.p7s
Description: S/MIME cryptographic signature
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] request for change handling hostStrictVerify

2021-11-01 Thread kk

On Saturday, October 30, 2021 01:14 GMT, Alex Rousskov 
 wrote:
 On 10/29/21 8:37 PM, Amos Jeffries wrote:
> On 30/10/21 11:09, Alex Rousskov wrote:
>> On 10/26/21 5:46 PM, k...@sudo-i.net wrote:
>>
>>> - Squid enforces the Client to use SNI
>>> - Squid lookup IP for SNI (DNS resolution).
>>> - Squid forces the client to go to the resolved IP
>>
>> AFAICT, the above strategy is in conflict with the "SECURITY NOTE"
>> paragraph in host_verify_strict documentation: If Squid strays from the
>> intended IP using client-supplied destination info, then malicious
>> applets will escape browser IP-based protections. Also, SNI obfuscation
>> or encryption may make this strategy ineffective or short-lived.
>>
>> AFAICT, in the majority of deployments, the mismatch between the
>> intended IP address and the SNI/Host header can be correctly handled
>> automatically and without creating serious problems for the user. Squid
>> already does the right thing in some cases. Somebody should carefully
>> expand that coverage to intercepted traffic. Frankly, I am somewhat
>> surprised nobody has done that yet given the number of complaints!

> IIRC the "right thing" as defined by TLS for SNI verification is that it
> be the same as the host/domain name from the wrapper protocol (i.e. the
> Host header / URL domain from HTTPS messages). Since Squid uses the SNI
> at step2 as Host value it already gets checked against the intercepted IP


Just to avoid misunderstanding, my email was _not_ about SNI
verification. I was talking about solving the problem this thread is
devoted to (and a specific solution proposed in the opening email on the
thread).

Alex.
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

Thanks Alex & Amos.

Not sure what you mean by "Somebody should carefully expand that coverage
to intercepted traffic"?
> then malicious applets will escape browser IP-based protections.
The browser should perform IP-based protection at the browser (client) level, and
it should therefore not traverse Squid.



-- 
Kevin Klopfenstein
Bellevuestrasse 103
3095 Spiegel, CH
sudo-i.net


smime.p7s
Description: S/MIME cryptographic signature
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[ovs-dev] unknown OpenFlow message (version 4, type 18, stat 21)

2021-10-31 Thread kk Yoon
To add a wireless parameter request message to openvswitch, we went through the
following process.

1. Wireless parameter message definition
enum ofptype {
OFPTYPE_WPARAMS_REQUEST, /* OFPRAW_OFPST13_WPARAMS_REQUEST. */
OFPTYPE_WPARAMS_REPLY, /* OFPRAW_OFPST13_WPARAMS_REPLY. */
}

enum ofpraw {
/* OFPST 1.3+ (21): void. */
OFPRAW_OFPST13_WPARAMS_REQUEST,

/* OFPST 1.3+ (21): void. */
OFPRAW_OFPST13_WPARAMS_REPLY
}

2. Definition of the processing function,
static enum ofperr
handle_wparams_request(struct ofconn* ofconn, const struct ofp_header* oh)
{
VLOG_WARN("handle_wparams_request() called\n");
struct ofpbuf* buf;

buf = ofpraw_alloc_reply(OFPRAW_OFPST13_WPARAMS_REPLY, oh, 0);
ofconn_send_reply(ofconn, buf);
return 0;
}

static enum ofperr
handle_single_part_openflow(struct ofconn *ofconn, const struct ofp_header
*oh,
enum ofptype type)
OVS_EXCLUDED(ofproto_mutex)
{
// VLOG_INFO("type: %d vs %d", type, OFPTYPE_GET_TXPOWER_REQUEST);

switch (type) {
case OFPTYPE_WPARAMS_REQUEST:
return handle_wparams_request(ofconn, oh);
}

But /var/log/openvswitch/ovs-vswitchd.log prints
2021-09-10T08:18:32.850Z|18277|ofp_msgs|WARN|unknown OpenFlow message (version 4,
type 18, stat 21)
What is the problem?
Thank you.
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] unknown OpenFlow message (version 4, type 18, stat 21)

2021-10-29 Thread kk Yoon
To add a wireless parameter request message to the openvswitch, we went
through the following process.

1. wireless parameter message definition
enum ofptype {
OFPTYPE_WPARAMS_REQUEST, /* OFPRAW_OFPST13_WPARAMS_REQUEST. */
OFPTYPE_WPARAMS_REPLY, /* OFPRAW_OFPST13_WPARAMS_REPLY. */
}

enum ofpraw {
/* OFPST 1.3+ (21): void. */
OFPRAW_OFPST13_WPARAMS_REQUEST,

/* OFPST 1.3+ (21): void. */
OFPRAW_OFPST13_WPARAMS_REPLY
}

2. Definition of processing function,
static enum ofperr
handle_wparams_request(struct ofconn* ofconn, const struct ofp_header* oh)
{
VLOG_WARN("handle_wparams_request() called\n");
struct ofpbuf* buf;

buf = ofpraw_alloc_reply(OFPRAW_OFPST13_WPARAMS_REPLY, oh, 0);
ofconn_send_reply(ofconn, buf);
return 0;
}

static enum ofperr
handle_single_part_openflow(struct ofconn *ofconn, const struct ofp_header
*oh,
enum ofptype type)
OVS_EXCLUDED(ofproto_mutex)
{
// VLOG_INFO("type : %d vs %d", type, OFPTYPE_GET_TXPOWER_REQUEST);

switch (type) {
case OFPTYPE_WPARAMS_REQUEST:
return handle_wparams_request(ofconn, oh);
}

but /var/log/openvswitch/ovs-vswitchd.log prints
2021-09-10T08:18:32.850Z|18277|ofp_msgs|WARN|unknown OpenFlow message
(version 4, type 18, stat 21)
What's the problem?
Thank you.
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[squid-dev] request for change handling hostStrictVerify

2021-10-26 Thread kk

Hi Guys!
Sorry I was unsure if this was the correct point of contact in regards to 
hostStrictVerify.

I think I am not the only one having issues with hostStrictVerify in scenarios
where you just intercept (TLS) traffic and Squid checks, based on the SNI, whether
the IP address the client connected to is the same as the one Squid resolves. The
major issue with that approach is that many services today change their DNS
records at a very high frequency, so it's almost impossible to make sure that the
client and Squid have the same A record cached.

My Proposal to resolve this issue would be the following:
- Squid requires the client to use SNI. (Currently this is not enforced, which can
be considered a security issue, because any hostname rules can be bypassed.)
- Squid looks up the IP for the SNI (DNS resolution).
- Squid sends the client's traffic to the resolved IP (thus ignoring the IP
that was provided in the L3 info from the client).

Any thoughts?


many thanks & have a nice day,

Kevin

-- 
Kevin Klopfenstein
sudo-i.net


smime.p7s
Description: S/MIME cryptographic signature
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: Combine multiple wasm files

2021-09-01 Thread Mehaboob kk
Sorry, I missed this post. Thank you for answering my question. What I am
doing now is building multiple .a files and combining them into one wasm file
at link time.
I tried to reverse engineer the wasm and .a files back to C source; what I get
is still a WAT-like format, not the original C/C++ code I started
with. However, someone who is intimately familiar with wasm may be able to
figure out the source, just as an assembly-language expert can understand
the logic. Now I am trying to apply some source obfuscation, since binary
obfuscation tools are not available for wasm.
Do you think my understanding is correct? Are there any tools out there that
can reproduce the exact C code from a wasm binary?




On Sat, Jun 19, 2021 at 8:23 PM 'Sam Clegg' via emscripten-discuss <
emscripten-discuss@googlegroups.com> wrote:

> What is the current mechanism for loading the wasm file you are
> supplying?  Are you using emscripten's dynamic linking capability (i.e.
> MAIN_MODULE + SIDE_MODULE?).
>
> If that answer is yes, and you are asking about linking a SIDE_MODULE into
> the MAIN_MODULE ahead of time, its not something that is supported no.
> Also, shared libraries (side moudles) are are only slightly more obfuscated
> than object files and `.a` archives.  They are all in the wasm format which
> is fairly easy to disassembly.  If you want to try to prevent decompilation
> or disassembly you would need to do more than just ship as a shared library
> (side module) you would also need to perform some kind of obfuscation,
> which by its nature (and the nature of WebAssemlby in particular) is always
> going to have limits.
>
> On Sat, Jun 19, 2021 at 3:45 PM Mehaboob kk  wrote:
>
>> Hello,
>>
>> Is it possible to combine multiple .wasm files to one single .wasm file?
>>
>> Scenario:
>> I want to share a library(SDK) to an end customer who is building the
>> .wasm/JS application. Customer concerned that loading multiple wasm files
>> is not efficient. So we wanted to combine two wasm files. Although I can
>> share static lib generated using Emscripten, its not preferred because .a
>> file is easy to reverse compared to wasm right?
>>
>> Any inputs on this please?
>>
>> Thank you
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "emscripten-discuss" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to emscripten-discuss+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/emscripten-discuss/CAO5iXcCQC45LU%3DNMFev5PYjn%2BmzOOgDfhXx0ahDXq8GqKRVjJQ%40mail.gmail.com
>> <https://groups.google.com/d/msgid/emscripten-discuss/CAO5iXcCQC45LU%3DNMFev5PYjn%2BmzOOgDfhXx0ahDXq8GqKRVjJQ%40mail.gmail.com?utm_medium=email&utm_source=footer>
>> .
>>
> --
> You received this message because you are subscribed to the Google Groups
> "emscripten-discuss" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to emscripten-discuss+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/emscripten-discuss/CAL_va29zxMEFFi8gHEP5C0pBJ0idQh8qeqT%2B1S5v%3D9XO5BMKNA%40mail.gmail.com
> <https://groups.google.com/d/msgid/emscripten-discuss/CAL_va29zxMEFFi8gHEP5C0pBJ0idQh8qeqT%2B1S5v%3D9XO5BMKNA%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"emscripten-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to emscripten-discuss+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/emscripten-discuss/CAO5iXcA7%3DS0SD%2Bas_BZGvwKD95DT-%2B32%2Bk3zn_bDfNL77LhnOg%40mail.gmail.com.


[ovirt-users] Injecting VirtIO drivers : Query

2021-08-25 Thread KK CHN
Hi,

I am in the process of importing multi-disk Windows VMs from a Hyper-V
environment to my OpenStack setup (Ussuri version, Glance and QEMU-KVM).

I am referring to the online documents linked below. But is it still relevant
to inject VirtIO drivers into the Windows VMs (the articles date back to
2015)? Somewhere it mentions that this is necessary when you perform a P2V
migration.

Is VirtIO injection necessary in my case? I am exporting from
Hyper-V and importing into OpenStack.

1. Kindly advise me on the relevance of VirtIO injection and whether it is
applicable to my requirements.

2. Is there any up-to-date reference material for importing multi-disk
Windows VMs into OpenStack (Ussuri, Glance and KVM)? Or am I attempting
something impossible and beating around the bush?


These are the links I referred to, but they are quite old; is their content
still applicable? (The Windows VMs are Windows Server 2012, 2008 and 2003,
which I need to import into OpenStack.)

https://superuser.openstack.org/articles/how-to-migrate-from-vmware-and-hyper-v-to-openstack/

https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e00kAWeCAM

Kris
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YXAGSMCIE44FBN4GXJSZEG2XHTTM5NFU/


[ovirt-users] Automigration of VMs from other hypervisors

2021-08-11 Thread KK CHN
Hi list,

I am in the process of migrating 150+ VMs running on RHVM 4.1 to a KVM-based
OpenStack installation (Ussuri, with KVM and Glance as image storage).

What I am doing now: manually shutting down each VM through the RHVM GUI,
exporting it to the export domain, scp-ing the image files of each VM to our
OpenStack controller node, uploading them to Glance, and creating each VM manually.

Query 1.
Is there a better way to automate this migration with any utility or scripts?
Has anyone done this kind of automated migration before, and what was your
approach? Or what would be a better approach than doing the migration manually?

Or do I have to repeat the process manually for all 150+ virtual
machines? (The guest VMs are CentOS 7 and Red Hat Linux 7 with LVM data
partitions attached.)

Kindly share your thoughts..
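
In case it is useful as a starting point, this is the kind of minimal Python
oVirt SDK (ovirtsdk4) sketch I am planning to build the automation around,
just to enumerate the VMs and their disks before scripting the actual
export/download (the URL, credentials and CA path are placeholders, and error
handling is omitted):

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url="https://rhvm.example.local/ovirt-engine/api",   # placeholder engine URL
    username="admin@internal",
    password="mypassword",
    ca_file="ca.pem",
)
try:
    vms_service = connection.system_service().vms_service()
    for vm in vms_service.list():
        print(vm.name, vm.status)
        # Follow each VM's disk attachments to find its disk IDs and sizes.
        atts = vms_service.vm_service(vm.id).disk_attachments_service().list()
        for att in atts:
            disk = connection.follow_link(att.disk)
            print("   disk:", disk.id, disk.provisioned_size)
finally:
    connection.close()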

Query 2.

Other than these 150+ Red Hat Linux 7 and CentOS VMs on RHVM 4.1, I
have to migrate 50+ VMs which are hosted on Hyper-V.

What is the method/approach for exporting from Hyper-V and importing into
OpenStack Ussuri with Glance and the KVM hypervisor? (This is the
first time I am going to use Hyper-V, and I don't have much idea about
exporting from Hyper-V and importing into KVM.)

Can the images exported from Hyper-V (VHDX disk images, from VMs with a
single disk or multiple disks, at most 3) be imported directly into KVM? Does
KVM support this, or do the VHDX disk images need to be converted to another
format? What would be the best approach for Hyper-V hosted VMs (Windows
2012 guest machines and Linux guest machines) to be imported into a KVM-based
OpenStack (Ussuri version with Glance as image storage)?
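
For the Hyper-V side, the approach I am planning to try (no guarantee it is the
right one) is to convert each exported VHDX to qcow2 with qemu-img and then
upload the result to Glance. A rough Python wrapper around those two commands,
with placeholder paths and names, assuming qemu-img and python-openstackclient
are installed on the machine running it:

import subprocess
from pathlib import Path

EXPORT_DIR = Path("/data/hyperv-exports")   # hypothetical location of the .vhdx exports

for vhdx in EXPORT_DIR.glob("*.vhdx"):
    qcow2 = vhdx.with_suffix(".qcow2")
    # qemu-img can read the vhdx format directly and write qcow2.
    subprocess.run(
        ["qemu-img", "convert", "-p", "-f", "vhdx", "-O", "qcow2",
         str(vhdx), str(qcow2)],
        check=True,
    )
    # Upload the converted disk to Glance; credentials are taken from the
    # usual OS_* environment variables or clouds.yaml.
    subprocess.run(
        ["openstack", "image", "create",
         "--disk-format", "qcow2", "--container-format", "bare",
         "--file", str(qcow2), vhdx.stem],
        check=True,
    )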

Thanks in advance

Kris
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I7KSQLVOSV5I6QGBAYC4U7SWQIJ2PPC5/


[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-05 Thread KK CHN
rddisk and can I specify its mount point? Or any other suggestions or
corrections? Because it is a live host, I can't do trial and error on the service
maintainer's RHVM host machines.

Kindly correct me if anything is wrong in my steps. I have to run this
script from my laptop against the RHVM host machines without breaking
anything.

Kindly guide me.

Kris

On Wed, Aug 4, 2021 at 1:38 AM Nir Soffer  wrote:

> On Tue, Aug 3, 2021 at 7:29 PM KK CHN  wrote:
> >
> > I have asked our VM maintainer to run the  command
> >
> > # virsh -r dumpxml vm-name_blah//as Super user
> >
> > But no output :   No matching domains found that was the TTY  output on
> that rhevm node when I executed the command.
> >
> > Then I tried to execute #  virsh list //  it doesn't list any VMs
> !!!   ( How come this ? Does the Rhevm node need to enable any CLI  with
> License key or something to list Vms or  to dumpxml   with   virsh ? or its
> CLI commands ?
>
> RHV undefine the vms when they are not running.
>
> > Any way I want to know what I have to ask the   maintainerto provide
> a working a working  CLI   or ? which do the tasks expected to do with
> command line utilities in rhevm.
> >
> If the vm is not running you can get the vm configuration from ovirt
> using the API:
>
> GET /api/vms/{vm-id}
>
> You may need more API calls to get info about the disks, follow the 
> in the returned xml.
>
> > I have one more question :Which command can I execute on an rhevm
> node  to manually export ( not through GUI portal) a   VMs to   required
> format  ?
> >
> > For example;   1.  I need to get  one  VM and disks attached to it  as
> raw images.  Is this possible how?
> >
> > and another2. VM and disk attached to it as  Ova or( what other good
> format) which suitable to upload to glance ?
>
> Arik can add more info on exporting.
>
> >   Each VMs are around 200 to 300 GB with disk volumes ( so where should
> be the images exported to which path to specify ? to the host node(if the
> host doesn't have space  or NFS mount ? how to specify the target location
> where the VM image get stored in case of NFS mount ( available ?)
>
> You have 2 options:
> - Download the disks using the SDK
> - Export the VM to OVA
>
> When exporting to OVA, you will always get qcow2 images, which you can
> later
> convert to raw using "qemu-img convert"
>
> When downloading the disks, you control the image format, for example
> this will download
> the disk in any format, collapsing all snapshots to the raw format:
>
>  $ python3
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
> -c engine-dev 3649d84b-6f35-4314-900a-5e8024e3905c /var/tmp/disk1.raw
>
> This requires ovirt.conf file:
>
> $ cat ~/.config/ovirt.conf
> [engine-dev]
> engine_url = https://engine-dev
> username = admin@internal
> password = mypassword
> cafile = /etc/pki/vdsm/certs/cacert.pem
>
> Nir
>
> > Thanks in advance
> >
> >
> > On Mon, Aug 2, 2021 at 8:22 PM Nir Soffer  wrote:
> >>
> >> On Mon, Aug 2, 2021 at 12:22 PM  wrote:
> >> >
> >> > I have  few VMs in   Redhat Virtualisation environment  RHeV ( using
> Rhevm4.1 ) managed by a third party
> >> >
> >> > Now I am in the process of migrating  those VMs to  my cloud setup
> with  OpenStack ussuri  version  with KVM hypervisor and Glance storage.
> >> >
> >> > The third party is making down each VM and giving the each VM image
> with their attached volume disks along with it.
> >> >
> >> > There are three folders  which contain images for each VM .
> >> > These folders contain the base OS image, and attached LVM disk images
> ( from time to time they added hard disks  and used LVM for storing data )
> where data is stored.
> >> >
> >> > Is there a way to  get all these images to be exported as  Single
> image file Instead of  multiple image files from Rhevm it self.  Is this
> possible ?
> >> >
> >> > If possible how to combine e all these disk images to a single image
> and that image  can upload to our  cloud  glance storage as a single image ?
> >>
> >> It is not clear what is the vm you are trying to export. If you share
> >> the libvirt xml
> >> of this vm it will be more clear. You can use "sudo virsh -r dumpxml
> vm-name".
> >>
> >> RHV supports download of disks to one image per disk, which you can move
> >> to another system.
> >>
> >> We also have export to ova, which creates one tar file with all
> exported disks,
> >> if this helps.
> >>
> >> Nir
> >>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CQ5RAHW3E6F5IL6QYOG7W3P3BI35MJSU/


[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-04 Thread KK CHN
I appreciate everyone sharing the valuable information.

1. I am downloading CentOS 8, as the Python oVirt SDK installation notes say
it works on CentOS 8, and I need to set up a VM with this OS and install
the oVirt Python SDK on that VM. The requirement is that this
CentOS 8 VM should be able to communicate with the RHVM 4.1 host node
where the ovirt shell ("Rhevm Shell [connected] #") is
available, right?

2. I can ping the host with the "Rhevm Shell [connected]# " prompt, and it
can be SSHed into from the CentOS 8 VM where Python 3 and the oVirt SDK are
installed and where the script (with the ovirt configuration file
on this VM) will be executed. Are these two connectivity checks enough for
executing the script, or do any other protocols need to be enabled in the
firewall between these two machines?



3. While googling I saw a post:
https://users.ovirt.narkive.com/CeEW3lcj/ovirt-users-clone-and-export-vm-by-ovirt-shell


action vm myvm export --storage_domain-name myexport

Will this command perform the export, and in which format will it export to the
export domain?
Is there any option that can be passed to this command to specify a
supported format for the exported VM image?

This needs to be executed from the "Rhevm Shell [connected]# " TTY, right?
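
For point 2, the connectivity test I intend to run from the CentOS 8 VM is just
a short ovirtsdk4 script against the engine API (as far as I understand, the SDK
only needs HTTPS access to the engine, not virsh or SSH). It reads the same
~/.config/ovirt.conf layout as the download_disk.py example; the section name
and values below are placeholders for my environment:

import configparser
import os
import ovirtsdk4 as sdk

cfg = configparser.ConfigParser()
cfg.read(os.path.expanduser("~/.config/ovirt.conf"))
section = cfg["engine-dev"]                  # placeholder section name

connection = sdk.Connection(
    url=section["engine_url"] + "/ovirt-engine/api",
    username=section["username"],
    password=section["password"],
    ca_file=section["cafile"],
)
try:
    vms = connection.system_service().vms_service().list()
    print("Connected OK; the engine reports", len(vms), "VMs")
finally:
    connection.close()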



On Wed, Aug 4, 2021 at 1:00 PM Vojtech Juranek  wrote:

> On Wednesday, 4 August 2021 03:54:36 CEST KK CHN wrote:
> > On Wed, Aug 4, 2021 at 1:38 AM Nir Soffer  wrote:
> > > On Tue, Aug 3, 2021 at 7:29 PM KK CHN  wrote:
> > > > I have asked our VM maintainer to run the  command
> > > >
> > > > # virsh -r dumpxml vm-name_blah//as Super user
> > > >
> > > > But no output :   No matching domains found that was the TTY  output
> on
> > >
> > > that rhevm node when I executed the command.
> > >
> > > > Then I tried to execute #  virsh list //  it doesn't list any VMs
> > >
> > > !!!   ( How come this ? Does the Rhevm node need to enable any CLI
> with
> > > License key or something to list Vms or  to dumpxml   with   virsh ? or
> > > its
> > > CLI commands ?
> > >
> > > RHV undefine the vms when they are not running.
> > >
> > > > Any way I want to know what I have to ask the   maintainerto
> provide
> > >
> > > a working a working  CLI   or ? which do the tasks expected to do with
> > > command line utilities in rhevm.
> > >
> > > If the vm is not running you can get the vm configuration from ovirt
> > >
> > > using the API:
> > > GET /api/vms/{vm-id}
> > >
> > > You may need more API calls to get info about the disks, follow the
> > > 
> > > in the returned xml.
> > >
> > > > I have one more question :Which command can I execute on an rhevm
> > >
> > > node  to manually export ( not through GUI portal) a   VMs to
>  required
> > > format  ?
> > >
> > > > For example;   1.  I need to get  one  VM and disks attached to it
> as
> > >
> > > raw images.  Is this possible how?
> > >
> > > > and another2. VM and disk attached to it as  Ova or( what other
> good
> > >
> > > format) which suitable to upload to glance ?
> > >
> > > Arik can add more info on exporting.
> > >
> > > >   Each VMs are around 200 to 300 GB with disk volumes ( so where
> should
> > >
> > > be the images exported to which path to specify ? to the host node(if
> the
> > > host doesn't have space  or NFS mount ? how to specify the target
> location
> > > where the VM image get stored in case of NFS mount ( available ?)
> > >
> > > You have 2 options:
> > > - Download the disks using the SDK
> > > - Export the VM to OVA
> > >
> > > When exporting to OVA, you will always get qcow2 images, which you can
> > > later
> > > convert to raw using "qemu-img convert"
> > >
> > > When downloading the disks, you control the image format, for example
> > > this will download
> > >
> > > the disk in any format, collapsing all snapshots to the raw format:
> > >  $ python3
> > >
> > > /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
> > > -c engine-dev 3649d84b-6f35-4314-900a-5e8024e3905c /var/tmp/disk1.raw
> > >
> > > To perform this which modules/packages need to be installed in the
> rhevm
> >
> > host node ?  Does the rhevm hosts come with python3 installed by default
> ?
> > or I need to install  python3 on 

[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-03 Thread KK CHN
On Wed, Aug 4, 2021 at 1:38 AM Nir Soffer  wrote:

> On Tue, Aug 3, 2021 at 7:29 PM KK CHN  wrote:
> >
> > I have asked our VM maintainer to run the  command
> >
> > # virsh -r dumpxml vm-name_blah//as Super user
> >
> > But no output :   No matching domains found that was the TTY  output on
> that rhevm node when I executed the command.
> >
> > Then I tried to execute #  virsh list //  it doesn't list any VMs
> !!!   ( How come this ? Does the Rhevm node need to enable any CLI  with
> License key or something to list Vms or  to dumpxml   with   virsh ? or its
> CLI commands ?
>
> RHV undefine the vms when they are not running.
>
> > Any way I want to know what I have to ask the   maintainerto provide
> a working a working  CLI   or ? which do the tasks expected to do with
> command line utilities in rhevm.
> >
> If the vm is not running you can get the vm configuration from ovirt
> using the API:
>
> GET /api/vms/{vm-id}
>
> You may need more API calls to get info about the disks, follow the 
> in the returned xml.
>
> > I have one more question :Which command can I execute on an rhevm
> node  to manually export ( not through GUI portal) a   VMs to   required
> format  ?
> >
> > For example;   1.  I need to get  one  VM and disks attached to it  as
> raw images.  Is this possible how?
> >
> > and another2. VM and disk attached to it as  Ova or( what other good
> format) which suitable to upload to glance ?
>
> Arik can add more info on exporting.
>
> >   Each VMs are around 200 to 300 GB with disk volumes ( so where should
> be the images exported to which path to specify ? to the host node(if the
> host doesn't have space  or NFS mount ? how to specify the target location
> where the VM image get stored in case of NFS mount ( available ?)
>
> You have 2 options:
> - Download the disks using the SDK
> - Export the VM to OVA
>
> When exporting to OVA, you will always get qcow2 images, which you can
> later
> convert to raw using "qemu-img convert"
>
> When downloading the disks, you control the image format, for example
> this will download
> the disk in any format, collapsing all snapshots to the raw format:
>
>  $ python3
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
> -c engine-dev 3649d84b-6f35-4314-900a-5e8024e3905c /var/tmp/disk1.raw
>
To perform this, which modules/packages need to be installed on the RHVM
host node? Do the RHVM hosts come with Python 3 installed by default,
or do I need to install Python 3 on the RHVM node? Then, when using pip3 to
install the SDK used by download_disk.py, what is the module name, and are
there any dependencies to install before the SDK (does Java need to be
installed on the RHVM node, for example)?

One doubt: I came across virt-v2v while searching Google. Can virt-v2v be
used on an RHVM node to export VMs to images, or does virt-v2v only support
importing from other hypervisors into RHV?

This requires ovirt.conf file:   // Does the ovirt.conf file need to be created,
or is it already there on the RHVM node?

>
> $ cat ~/.config/ovirt.conf
> [engine-dev]
> engine_url = https://engine-dev
> username = admin@internal
> password = mypassword
> cafile = /etc/pki/vdsm/certs/cacert.pem
>
> Nir
>
> > Thanks in advance
> >
> >
> > On Mon, Aug 2, 2021 at 8:22 PM Nir Soffer  wrote:
> >>
> >> On Mon, Aug 2, 2021 at 12:22 PM  wrote:
> >> >
> >> > I have  few VMs in   Redhat Virtualisation environment  RHeV ( using
> Rhevm4.1 ) managed by a third party
> >> >
> >> > Now I am in the process of migrating  those VMs to  my cloud setup
> with  OpenStack ussuri  version  with KVM hypervisor and Glance storage.
> >> >
> >> > The third party is making down each VM and giving the each VM image
> with their attached volume disks along with it.
> >> >
> >> > There are three folders  which contain images for each VM .
> >> > These folders contain the base OS image, and attached LVM disk images
> ( from time to time they added hard disks  and used LVM for storing data )
> where data is stored.
> >> >
> >> > Is there a way to  get all these images to be exported as  Single
> image file Instead of  multiple image files from Rhevm it self.  Is this
> possible ?
> >> >
> >> > If possible how to combine e all these disk images to a single image
> and that image  can upload to our  cloud  glance storage as a single image ?
> >>
> >> It is not clear what is the vm you are trying to export. If you share
> >> the libvirt xml
> >> of this vm 

[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-03 Thread KK CHN
I have asked our VM maintainer to run the  command

# virsh -r dumpxml vm-name_blah//as Super user

But there was no output: "No matching domains found" was the TTY output on
that RHVM node when I executed the command.

Then I tried to execute "# virsh list"; it doesn't list any VMs!
(How can this be? Does the RHVM node need any CLI enabled with a license
key or something in order to list VMs or to dumpxml with virsh or its CLI
commands?)

Anyway, I want to know what I have to ask the maintainer to provide:
a working CLI, or whatever else will do the expected tasks with
command-line utilities in RHVM.

I have one more question: which command can I execute on an RHVM node
to manually export (not through the GUI portal) VMs to a required format?

For example: 1. I need to get one VM and the disks attached to it as raw
images. Is this possible, and how?

2. Another VM and the disks attached to it as OVA (or whatever other good
format) suitable for uploading to Glance?


Each VM is around 200 to 300 GB including its disk volumes. So where should
the images be exported to, and which path should I specify? To the host node (if
the host doesn't have space), or to an NFS mount? And how do I specify the target
location where the VM image gets stored in the case of an NFS mount (if available)?

Thanks in advance


On Mon, Aug 2, 2021 at 8:22 PM Nir Soffer  wrote:

> On Mon, Aug 2, 2021 at 12:22 PM  wrote:
> >
> > I have  few VMs in   Redhat Virtualisation environment  RHeV ( using
> Rhevm4.1 ) managed by a third party
> >
> > Now I am in the process of migrating  those VMs to  my cloud setup with
> OpenStack ussuri  version  with KVM hypervisor and Glance storage.
> >
> > The third party is making down each VM and giving the each VM image
> with their attached volume disks along with it.
> >
> > There are three folders  which contain images for each VM .
> > These folders contain the base OS image, and attached LVM disk images (
> from time to time they added hard disks  and used LVM for storing data )
> where data is stored.
> >
> > Is there a way to  get all these images to be exported as  Single image
> file Instead of  multiple image files from Rhevm it self.  Is this possible
> ?
> >
> > If possible how to combine e all these disk images to a single image and
> that image  can upload to our  cloud  glance storage as a single image ?
>
> It is not clear what is the vm you are trying to export. If you share
> the libvirt xml
> of this vm it will be more clear. You can use "sudo virsh -r dumpxml
> vm-name".
>
> RHV supports download of disks to one image per disk, which you can move
> to another system.
>
> We also have export to ova, which creates one tar file with all exported
> disks,
> if this helps.
>
> Nir
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TGBJTVT6EME4TXQ3OHY7L6YXOGZXCRC6/


Re: Python Developer

2021-07-12 Thread RaviKiran Kk
We are looking for a Django developer.
Contact: 6309620745

On Fri, Jul 2, 2021, 23:56 Nagaraju Singothu 
wrote:

> Dear Group Members,
>
>  My name is Nagaraju, I have 2+ years of experience as a
> python developer in Ikya Software Solutions Pvt Ltd at Hyderabad. Please
> refer to my resume for your reference and I hope I will get a good response
> from you as soon as possible.
>
> Thanking you,
>
> With regards.
> Nagaraju,
> Mob:7659965869.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Django users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to django-users+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/django-users/CAMyGuAZ2pHMojwy-kAbmR3dQRDExZcPWV2QvkBoUMGnPNeLUYA%40mail.gmail.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-users/CAPEfQMT0n2x4sV4Bi1hvBgGqyTi3Qwj3aFpgPmBhnxQa2FqviA%40mail.gmail.com.


Re: counting the same words within a song added by a user using Django

2021-07-12 Thread RaviKiran Kk
We are looking for a Django developer.
Please contact 6309620745.

On Mon, Jul 5, 2021, 17:07 DJANGO DEVELOPER  wrote:

> Hi there.
> I am developing a project based on adding songs to the user's library and
> to the home page.
> other users can also purchase the songs like wise people do shopping on
> eCommerce stores.
> *Problem:(Question)*
> The problem that I want to discuss here is that when a user adds a sing
> through django forms, and now that song is added to the user's personal
> library.
> now what I want to do is :
>
>
> *When the lyrics of a song are added as a record to the "Song" table, the
> individual words in that song should be added to a 2nd table with their
> frequency of usage within that song (so the words need to be counted and a
> signal needs to be created).Also, when a user adds the song to his/her
> personal library, all of the words from the song and their frequencies
> within that song should be added to another table and associated with that
> user.*
>
> how to count same word within a song?
>
> can anyone help me here?
> your help would be appreciated.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Django users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to django-users+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/django-users/bc7bc37b-6f26-465c-b330-d275ab86b76an%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-users/CAPEfQMQcJ0Na%2BxR4AE0QeuJ5RL4VucmaDpJ84_fg9YKUDqqHAw%40mail.gmail.com.


Combine multiple wasm files

2021-06-19 Thread Mehaboob kk
Hello,

Is it possible to combine multiple .wasm files to one single .wasm file?

Scenario:
I want to share a library (SDK) with an end customer who is building the
.wasm/JS application. The customer is concerned that loading multiple wasm files
is not efficient, so we wanted to combine the two wasm files. Although I can
share a static lib generated using Emscripten, that is not preferred because a .a
file is easier to reverse than wasm, right?

Any inputs on this please?

Thank you

-- 
You received this message because you are subscribed to the Google Groups 
"emscripten-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to emscripten-discuss+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/emscripten-discuss/CAO5iXcCQC45LU%3DNMFev5PYjn%2BmzOOgDfhXx0ahDXq8GqKRVjJQ%40mail.gmail.com.


Re: [Kannada STF-32363] Rajashekhara Sashi from Honnali has sent a link on the method of preparing online quizzes. Those interested may view it.

2021-06-12 Thread kotekalallaiah kk
It was very useful for me and my school children. Thank you, sir.

On Tue, Jun 8, 2021, 11:53 AM Basavaraja n d 
wrote:

> https://youtu.be/08CSXXiRdaw
>
> --
> ---
> 1.ವಿಷಯ ಶಿಕ್ಷಕರ ವೇದಿಕೆಗೆ ಶಿಕ್ಷಕರನ್ನು ಸೇರಿಸಲು ಈ ಅರ್ಜಿಯನ್ನು ತುಂಬಿರಿ.
> -
> https://docs.google.com/forms/d/e/1FAIpQLSevqRdFngjbDtOF8YxgeXeL8xF62rdXuLpGJIhK6qzMaJ_Dcw/viewform
> 2. ಇಮೇಲ್ ಕಳುಹಿಸುವಾಗ ಗಮನಿಸಬೇಕಾದ ಕೆಲವು ಮಾರ್ಗಸೂಚಿಗಳನ್ನು ಇಲ್ಲಿ ನೋಡಿ.
> -
> http://karnatakaeducation.org.in/KOER/index.php/ವಿಷಯಶಿಕ್ಷಕರವೇದಿಕೆ_ಸದಸ್ಯರ_ಇಮೇಲ್_ಮಾರ್ಗಸೂಚಿ
> 3. ಐ.ಸಿ.ಟಿ ಸಾಕ್ಷರತೆ ಬಗೆಗೆ ಯಾವುದೇ ರೀತಿಯ ಪ್ರಶ್ನೆಗಳಿದ್ದಲ್ಲಿ ಈ ಪುಟಕ್ಕೆ ಭೇಟಿ
> ನೀಡಿ -
> http://karnatakaeducation.org.in/KOER/en/index.php/Portal:ICT_Literacy
> 4.ನೀವು ಸಾರ್ವಜನಿಕ ತಂತ್ರಾಂಶ ಬಳಸುತ್ತಿದ್ದೀರಾ ? ಸಾರ್ವಜನಿಕ ತಂತ್ರಾಂಶದ ಬಗ್ಗೆ
> ತಿಳಿಯಲು -
> http://karnatakaeducation.org.in/KOER/en/index.php/Public_Software
> ---
> ---
> You received this message because you are subscribed to the Google Groups
> "KannadaSTF - ಕನ್ನಡ ಭಾಷಾ ಶಿಕ್ಷಕರ ವೇದಿಕೆ" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kannadastf+unsubscr...@googlegroups.com.
> To view this discussion on the web, visit
> https://groups.google.com/d/msgid/kannadastf/CALc7Aqt%2B3efdBrsJ4zp%3Dg4mRx5PWVzFoU%2BpgLZakuJL0reZMbw%40mail.gmail.com
> 
> .
>

-- 
---
1. To add teachers to the subject teachers' forum, fill in this form.
 
-https://docs.google.com/forms/d/e/1FAIpQLSevqRdFngjbDtOF8YxgeXeL8xF62rdXuLpGJIhK6qzMaJ_Dcw/viewform
2. See here for some guidelines to keep in mind when sending email.
-http://karnatakaeducation.org.in/KOER/index.php/ವಿಷಯಶಿಕ್ಷಕರವೇದಿಕೆ_ಸದಸ್ಯರ_ಇಮೇಲ್_ಮಾರ್ಗಸೂಚಿ
3. If you have any questions about ICT literacy, visit this page -
http://karnatakaeducation.org.in/KOER/en/index.php/Portal:ICT_Literacy
4. Are you using public software? To learn about public software
-http://karnatakaeducation.org.in/KOER/en/index.php/Public_Software
---
--- 
You received this message because you are subscribed to the Google Groups 
"KannadaSTF - ಕನ್ನಡ ಭಾಷಾ  ಶಿಕ್ಷಕರ ವೇದಿಕೆ" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kannadastf+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/kannadastf/CAMuheRAynvLWHVx-tkRUft7yR624GEY2vDPgU%3DOvxu_QmMOPyQ%40mail.gmail.com.


Re: [OPSEC] Alvaro Retana's No Objection on draft-ietf-opsec-v6-25: (with COMMENT)

2021-05-11 Thread KK Chittimaneni
Hi Alvaro,

Thank you very much for your detailed review.

Together with my co-authors, we have uploaded revision -27, which should
address all your comments.

The diff is at: https://www.ietf.org/rfcdiff?url2=draft-ietf-opsec-v6-27

Regards,
KK

On Mon, Apr 19, 2021 at 8:27 AM Alvaro Retana 
wrote:

> Enno:
>
> Hi!
>
> I looked at -26.
>
> I still find the applicability statement confusing, for the reasons I
> described in 1.a/1.b (below).  There is a contradiction about whether the
> document applies to residential users (as mentioned in §1.1 and §5) or not
> (as mentioned in the Abstract).  Also, why does the "applicability
> statement especially applies to Section 2.3 and Section 2.5.4” *only*?
>
> This is obviously a non-blocking comment, but I believe it is important
> since the applicability statement may influence who reads and follows the
> recommendations.
>
> Thanks!
>
> Alvaro.
>
> On April 10, 2021 at 2:36:26 PM, Enno Rey (e...@ernw.de) wrote:
>
> Hi Alvaro,
>
> thanks for the detailed evaluation and for the valuable feedback.
>
> I went thru your COMMENTS and performed some related adaptions of the
> draft. A new version has been uploaded.
>
> thank you again & have a great weekend
>
> Enno
>
>
>
>
> On Mon, Apr 05, 2021 at 02:07:53PM -0700, Alvaro Retana via Datatracker
> wrote:
> > Alvaro Retana has entered the following ballot position for
> > draft-ietf-opsec-v6-25: No Objection
> >
> > When responding, please keep the subject line intact and reply to all
> > email addresses included in the To and CC lines. (Feel free to cut this
> > introductory paragraph, however.)
> >
> >
> > Please refer to
> https://www.ietf.org/iesg/statement/discuss-criteria.html
> > for more information about IESG DISCUSS and COMMENT positions.
> >
> >
> > The document, along with other ballot positions, can be found here:
> > https://datatracker.ietf.org/doc/draft-ietf-opsec-v6/
> >
> >
> >
> > --
> > COMMENT:
> > --
> >
> >
> > (1) The applicability statement in §1.1 is confusing to me.
> >
> > a. The Abstract says that "this document are not applicable to
> residential
> > user cases", but that seems not to be true because this section says
> that the
> > contents do apply to "some knowledgeable-home-user-managed residential
> > network[s]", and ??5 is specific to residential users.
> >
> > b. "This applicability statement especially applies to Section 2.3 and
> Section
> > 2.5.4." Those two sections represent a small part of the document; what
> about
> > the rest? It makes sense to me for the applicability statement to cover
> most
> > of the document.
> >
> > c. "For example, an exception to the generic recommendations of this
> document
> > is when a residential or enterprise network is multi-homed." I'm not
> sure if
> > this sentence is an example of the previous one (above) or if "for
> example" is
> > out of place.
> >
> > (2) §5 mentions "early 2020" -- I assume that the statement is still
> true now.
> >
> > (3) It caught my attention that there's only one Normative Reference
> (besides
> > rfc8200, of course). Why? What is special about the IPFIX registry?
> >
> > It seems that an argument could be made to the fact that to secure
> OSPFv3, for
> > example, an understanding of the protocol is necessary. This argument
> could be
> > extended to other protocols or mechanisms, including IPv6-specific
> technology:
> > ND, the addressing architecture, etc. Consider the classification of the
> > references in light of [1].
> >
> > [1]
> >
> https://www.ietf.org/about/groups/iesg/statements/normative-informative-references/
> >
> >
> >
>
> --
> Enno Rey
>
> Cell: +49 173 6745902
> Twitter: @Enno_Insinuator
>
>
_______________________________________________
OPSEC mailing list
OPSEC@ietf.org
https://www.ietf.org/mailman/listinfo/opsec


Re: [OPSEC] Roman Danyliw's No Objection on draft-ietf-opsec-v6-26: (with COMMENT)

2021-05-11 Thread KK Chittimaneni
Hi Roman,

Thank you very much for your detailed review.

Together with my co-authors, we have uploaded revision -27, which should
address most of your comments except a few listed below with our rationale:

** Section 2.1.5.  Per “However, in scenarios where anonymity is a strong
desire (protecting user privacy is more important than user attribution),
privacy extension addresses should be used.”, it might be worth acknowledging
that even if these are managed networks, the end user and the operators may
be at odds on what privacy properties are important.

[authors] We didn't change the text here as we felt that this is a given.
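
To make the trade-off concrete, here is a minimal Python sketch (purely
illustrative and not part of the draft; the MAC address and prefix below are
made up) contrasting a stable, MAC-derived interface identifier with a
randomized RFC 4941-style temporary one: the first is easy for an operator to
tie to a device over time, the second is not.

import os

def eui64_interface_id(mac: str) -> bytes:
    """Modified EUI-64 interface ID derived from a 48-bit MAC (RFC 4291)."""
    octets = bytearray(int(b, 16) for b in mac.split(":"))
    octets[0] ^= 0x02                      # flip the universal/local bit
    return bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])

def temporary_interface_id() -> bytes:
    """Randomized 64-bit interface ID, roughly what privacy extensions use."""
    return os.urandom(8)

prefix = "2001:db8:0:1"                    # documentation prefix, example only

def to_address(iid: bytes) -> str:
    h = iid.hex()
    return prefix + ":" + ":".join(h[i:i + 4] for i in range(0, 16, 4))

print(to_address(eui64_interface_id("00:11:22:33:44:55")))  # stable, trackable
print(to_address(temporary_interface_id()))                 # changes over time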

** Section 3.1.  This list is helpful.  Is text needed both here and in
Section 2.5.4?  For example, does one need to say both “discard _packets_
from bogons” (this section) and “discard _routes_ from bogons”
(Section 2.5.4)?

[authors] We kept the text in both sections, the rationale being that
packets are dropped at the enterprise edge while routes are ignored by
peering routers (not all enterprises have DFZ routing).
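
To illustrate the distinction we had in mind, a small Python sketch (purely
illustrative; the prefixes and addresses are examples, not a real bogon feed,
and nothing like this appears in the draft): the same bogon list is consulted
for packets at the enterprise edge and for route announcements at a peering
router.

import ipaddress

BOGONS = [ipaddress.ip_network(p)
          for p in ("::/8", "fe80::/10", "2001:db8::/32")]

def drop_packet(src: str) -> bool:
    """Data-plane check: discard packets whose source falls in a bogon range."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in BOGONS)

def reject_route(prefix: str) -> bool:
    """Control-plane check: ignore announced routes covered by a bogon range."""
    route = ipaddress.ip_network(prefix)
    return any(route.version == net.version and route.subnet_of(net)
               for net in BOGONS)

print(drop_packet("fe80::1"))           # True  -> dropped at the edge
print(reject_route("2001:db8:1::/48"))  # True  -> ignored at the peering router
print(reject_route("2001:4860::/32"))   # False -> outside the example list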

The diff is at: https://www.ietf.org/rfcdiff?url2=draft-ietf-opsec-v6-27

Regards,
KK

On Tue, Apr 20, 2021 at 7:11 PM Roman Danyliw via Datatracker <
nore...@ietf.org> wrote:

> Roman Danyliw has entered the following ballot position for
> draft-ietf-opsec-v6-26: No Objection
>
> When responding, please keep the subject line intact and reply to all
> email addresses included in the To and CC lines. (Feel free to cut this
> introductory paragraph, however.)
>
>
> Please refer to https://www.ietf.org/iesg/statement/discuss-criteria.html
> for more information about DISCUSS and COMMENT positions.
>
>
> The document, along with other ballot positions, can be found here:
> https://datatracker.ietf.org/doc/draft-ietf-opsec-v6/
>
>
>
> ----------------------------------------------------------------------
> COMMENT:
> ----------------------------------------------------------------------
>
> ** Section 2.1.5.  Per “However, in scenarios where anonymity is a strong
> desire (protecting user privacy is more important than user attribution),
> privacy extension addresses should be used.”, it might be worth acknowledging
> that even if these are managed networks, the end user and the operators may
> be at odds on what privacy properties are important.
>
> ** Section 2.2.1.  Per “A firewall or edge device should be used to enforce
> the recommended order and the maximum occurrences of extension headers”,
> does enforcement mean dropping the packets?
>
> ** Section 2.3.2.  Per “Network operators should be aware that RA-Guard and
> SAVI do not work or could even be harmful in specific network
> configurations”, please provide more details, ideally through citation.
>
> ** Section 2.3.2, “Enabling RA-Guard by default in … enterprise campus
> networks …”, what’s the key property of “enterprise campus network”?  The
> introduction already roughly says this whole document applies to managed
> networks.
>
> ** Section 2.5.2.  Reading this section, the specific recommended practices
> weren’t clear.
>
> ** Section 2.6.  It wasn’t clear how comprehensive this list of logs was
> intended to be.  A few additional thoughts include:
> -- DHCPv6 logs
> -- firewall ACL logs
> -- authentication server logs
> -- NEA-like policy enforcement at the time of connection
> -- vpn/remote access logs
> -- passive DNS from port 53 traffic
> -- full packet capture
> -- active network and service scanning/audit information
>
> ** Section 2.6.1.2.  The recommended fields in this section are helpful, but
> only in the context of the rest of the five-tuple + timing + interface +
> vlan + select layer 4 information for each flow.  These open IPFIX
> information elements aren't mentioned.
>
> ** Section 2.6.2.1.  Per “The forensic use case is when the network operator
> must locate an IPv6 address that was present in the network at a certain
> time or is currently in the network”, isn’t this use case more precisely an
> attempt to link an IP address to (a) a specific port in the case of a wired
> network; (b) an access point (or base station, etc.) in the case of
> wireless; or (c) an external IP address in the case of a VPN connection?
>
> ** Section 2.6.2.1.  Additional techniques/suggestions to consider:
> -- Using the IPAM system noted in Section 2.1 or any other inventory system
> to provide hints about where an IP address might belong in the topology.
>
> -- A reminder that the mapping between an IP+port or MAC+IP might not have
> been the same one as during the time of the event of interest.
>
> -- There is discussion about identifying subscribers for an ISP but not in
> a normal enterprise network scenario.
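
On the point above about mappings changing over time: a minimal sketch
(illustration only; the record layout and values are invented for the
example, not taken from the draft) of why a forensic lookup has to be keyed
on the time of the event rather than on the current state.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Binding:
    address: str      # IPv6 address observed on the network
    mac: str          # MAC (or switch port / AP / VPN peer) it was bound to
    first_seen: int   # epoch seconds
    last_seen: int    # epoch seconds

BINDINGS = [
    Binding("2001:db8::1", "00:11:22:33:44:55", 1_600_000_000, 1_600_100_000),
    Binding("2001:db8::1", "66:77:88:99:aa:bb", 1_600_200_000, 1_600_300_000),
]

def who_had(address: str, at: int) -> Optional[str]:
    """Return whatever was bound to `address` at time `at`, if anything."""
    for b in BINDINGS:
        if b.address == address and b.first_seen <= at <= b.last_seen:
            return b.mac
    return None

print(who_had("2001:db8::1", 1_600_050_000))  # 00:11:22:33:44:55
print(who_had("2001:db8::1", 1_600_250_000))  # a different device entirely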
