Re: pgBackRest Point-in-Time Recovery : server unable to start back

2024-07-26 Thread KK CHN
LOG:  restored log file "0009003E"
from archive
2024-07-26 11:32:57.400 P00   INFO: archive-get command begin 2.52.1:
[0009003F, pg_wal/RECOVERYXLOG] --exec-id=43299-e2db2e1b
--log-level-console=info --log-level-file=debug
--pg1-path=/var/lib/edb/as16/data --pg-version-force=16
--repo1-host=10.10.20.7 --repo1-host-user=postgres --stanza=Demo_Repo
2024-07-26 11:32:57.521 P00   INFO: unable to find 0009003F
in the archive
2024-07-26 11:32:57.621 P00   INFO: archive-get command end: completed
successfully (222ms)
2024-07-26 11:32:57 IST LOG:  completed backup recovery with redo LSN
0/3D28 and end LSN 0/3D000100
2024-07-26 11:32:57 IST LOG:  consistent recovery state reached at
0/3D000100
2024-07-26 11:32:57 IST LOG:  database system is ready to accept read-only
connections
2024-07-26 11:32:57.632 P00   INFO: archive-get command begin 2.52.1:
[0009003F, pg_wal/RECOVERYXLOG] --exec-id=43301-f613dae9
--log-level-console=info --log-level-file=debug
--pg1-path=/var/lib/edb/as16/data --pg-version-force=16
--repo1-host=10.10.20.7 --repo1-host-user=postgres --stanza=Demo_Repo
2024-07-26 11:32:57.761 P00   INFO: unable to find 0009003F
in the archive
2024-07-26 11:32:57.861 P00   INFO: archive-get command end: completed
successfully (231ms)
2024-07-26 11:32:57 IST LOG:  redo done at 0/3E60 system usage: CPU:
user: 0.00 s, system: 0.00 s, elapsed: 0.75 s
2024-07-26 11:32:57 IST FATAL:  recovery ended before configured recovery
target was reached
2024-07-26 11:32:57 IST LOG:  startup process (PID 43292) exited with exit
code 1


The ONLY inference I can make is from:

  INFO: unable to find 0009003F in the archive

Does this mean the EDB server (10.10.20.6) is unable to push the archives
to the repo server (10.10.20.7)? Is that the reason the recovery fails and
the EDB server cannot start back up?
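For what it's worth, an illustration of my own (not from the thread): PostgreSQL requests WAL from pgBackRest by segment file name, which is derived from the timeline and LSN. A minimal sketch of that mapping, assuming the default 16 MiB wal_segment_size, shows why recovery stops here: redo ended in segment ...3E on timeline 9, and the next segment, ...3F, was apparently never archived.

```python
# Sketch: derive a WAL segment file name from (timeline, LSN), assuming
# the default 16 MiB wal_segment_size. Mirrors PostgreSQL's XLogFileName().
WAL_SEG_SIZE = 16 * 1024 * 1024
SEGS_PER_XLOGID = 0x100000000 // WAL_SEG_SIZE  # 256 segments per 4 GiB xlogid

def wal_segment_name(timeline: int, lsn: int) -> str:
    seg_no = lsn // WAL_SEG_SIZE
    return "%08X%08X%08X" % (
        timeline,                   # timeline id
        seg_no // SEGS_PER_XLOGID,  # high 32 bits of the LSN
        seg_no % SEGS_PER_XLOGID,   # segment within that 4 GiB span
    )

# Redo ended around LSN 0/3E000000 on timeline 9; the next segment
# PostgreSQL asks for is the ...3F file that "unable to find" refers to.
print(wal_segment_name(9, 0x3E000000))  # 00000009000000000000003E
print(wal_segment_name(9, 0x3F000000))  # 00000009000000000000003F
```

If that ...3F segment never reached the repository, the question is whether archive_command on the EDB server pushed it before the restore, e.g. whether the server was shut down before the final segments were archived.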


The pg_hba.conf entries on the EDB server machine are:

host    all             all             127.0.0.1/32            ident
host    all             all             10.10.20.7/32           scram-sha-256
#host   all             all             10.10.20.7/32           trust
# IPv6 local connections:
host    all             all             ::1/128                 ident
#host   all             all             10.10.20.7/24           trust

# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     peer
host    replication     all             10.10.20.7/32           scram-sha-256
host    replication     all             127.0.0.1/32            ident
host    replication     all             ::1/128                 ident


Do I have to change anything in pg_hba.conf ?


My EDB server's postgresql.conf has:

archive_mode = on
archive_command = 'pgbackrest --stanza=Demo_Repo  archive-push %p'
log_filename = 'postgresql.log'
max_wal_senders = 5
wal_level = replica
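A quick way to verify the archiving path end to end (my suggestion, not something from the thread) is pgBackRest's check command: it forces a WAL switch and waits for that segment to appear in the repository, so a broken archive-push from 10.10.20.6 to 10.10.20.7 fails loudly rather than silently:

```shell
# Command sketch -- run on the EDB server as the cluster owner.
sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --log-level-console=info check
```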


Any help ?

Krishane




On Fri, Jul 26, 2024 at 10:45 AM Muhammad Ikram  wrote:

> Hi KK CHN
>
> Could you check server logs ?
> Your service trace suggests that it started server and then failure
> happened
>
> Jul 26 09:48:49 service01 systemd[1]: Started EDB Postgres Advanced Server
> 16.
> Jul 26 09:48:50 service01 systemd[1]: edb-as-16.service: Main process
> exited, code=exited, status=1/FAILURE
>
>
>
> Regards,
> Ikram
>
>
> On Fri, Jul 26, 2024 at 10:04 AM KK CHN  wrote:
>
>> List,
>>
>> Reference: https://pgbackrest.org/user-guide-rhel.html#pitr
>> I am following the PITR guide on RHEL 9 with EPAS 16.
>> I am able to take backups (full, diff and incr), and restoring from a
>> full backup and restarting EPAS 16 works fine.
>>
>> But when I take an incremental backup, follow the procedures in the PITR
>> section of the reference link above, restore the EDB database from that
>> incremental backup, and then start EPAS 16, the server always ends up in
>> a dead state.
>>
>> My repo server is a separate machine. If I do a full restore on the DB
>> server (sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --delta
>> restore), it works and the server starts without any issue. Restoring
>> from the incremental backup reports success on the tty, but the edb
>> service then fails to start.
>>
>> Any help is much appreciated.
>>
>> Krishane.
>>
>>
>>
>>
>> STEPS followed:
>>
>> After dropping the important table (the pg-primary ⇒ "Drop the important
>> table" step in that section of the guide), I stopped the EDB server.
>>
>> $ sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --delta
>> --set=20240719-122703F_20240724-094727I --target-timeline=current
>> --type=time "--target=2024-07-24 09:44:01.3255+05:30"
>> --target-action=promote restore
>>

pgBackRest Point-in-Time Recovery : server unable to start back

2024-07-25 Thread KK CHN
List,

Reference: https://pgbackrest.org/user-guide-rhel.html#pitr
I am following the PITR guide on RHEL 9 with EPAS 16.
I am able to take backups (full, diff and incr), and restoring from a full
backup and restarting EPAS 16 works fine.

But when I take an incremental backup, follow the procedures in the PITR
section of the reference link above, restore the EDB database from that
incremental backup, and then start EPAS 16, the server always ends up in a
dead state.

My repo server is a separate machine. If I do a full restore on the DB
server (sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --delta
restore), it works and the server starts without any issue. Restoring from
the incremental backup reports success on the tty, but the edb service
then fails to start.

Any help is much appreciated.

Krishane.




STEPS followed:

After dropping the important table (the pg-primary ⇒ "Drop the important
table" step in that section of the guide), I stopped the EDB server.

$ sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --delta
--set=20240719-122703F_20240724-094727I --target-timeline=current
--type=time "--target=2024-07-24 09:44:01.3255+05:30"
--target-action=promote restore
.

2024-07-26 09:48:06.343 P00   INFO: restore command end: completed
successfully (1035ms)


But

[root@rservice01 ~]# sudo systemctl start edb-as-16.service
[root@service01 ~]# sudo systemctl status edb-as-16.service
× edb-as-16.service - EDB Postgres Advanced Server 16
 Loaded: loaded (/etc/systemd/system/edb-as-16.service; disabled;
preset: disabled)
   *  Active: failed* (Result: exit-code) since Fri 2024-07-26 09:48:50
IST; 8s ago
   Duration: 242ms
Process: 41903 ExecStartPre=/usr/edb/as16/bin/edb-as-16-check-db-dir
${PGDATA} (code=exited, status=0/SUCCESS)
Process: 41908 ExecStart=/usr/edb/as16/bin/edb-postgres -D ${PGDATA}
(code=exited, status=1/FAILURE)
   Main PID: 41908 (code=exited, status=1/FAILURE)
CPU: 331ms

Jul 26 09:48:48 service01 systemd[1]: Starting EDB Postgres Advanced Server
16...
Jul 26 09:48:48 service01 edb-postgres[41908]: 2024-07-26 09:48:48 IST LOG:
 redirecting log output to logging collector process
Jul 26 09:48:48 service01 edb-postgres[41908]: 2024-07-26 09:48:48 IST
HINT:  Future log output will appear in directory "log".
Jul 26 09:48:49 service01 systemd[1]: Started EDB Postgres Advanced Server
16.
Jul 26 09:48:50 service01 systemd[1]: edb-as-16.service: Main process
exited, code=exited, status=1/FAILURE
Jul 26 09:48:50 service01 systemd[1]: edb-as-16.service: Killing process
41909 (edb-postgres) with signal SIGKILL.
Jul 26 09:48:50 service01 systemd[1]: edb-as-16.service: Failed with result
'exit-code'.
[root@service01 ~]#

Why is it unable to perform a restore and recovery from an incremental backup?
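One thing I would check on the repo side (a suggestion; the 16-1 path component is an assumption about the repo layout, so verify it against your actual archive directory) is whether the WAL segments needed to reach the recovery target were ever archived:

```shell
# Command sketch -- list archived WAL for the stanza on the repo server.
sudo -u postgres pgbackrest --stanza=Demo_Repo repo-ls archive/Demo_Repo/16-1
```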







On The Repo Server
[root@service02 ~]#  sudo -u postgres pgbackrest --stanza=Demo_Repo info
stanza: Demo_Repo
status: ok
cipher: aes-256-cbc

db (current)
wal archive min/max (16):
00020021/000B0041

full backup: 20240719-122703F
timestamp start/stop: 2024-07-19 12:27:03+05:30 / 2024-07-19
12:27:06+05:30
wal start/stop: 0002002A /
0002002A
database size: 61.7MB, database backup size: 61.7MB
repo1: backup size: 9.6MB

incr backup: 20240719-122703F_20240719-123353I
timestamp start/stop: 2024-07-19 12:33:53+05:30 / 2024-07-19
12:33:56+05:30
wal start/stop: 0002002C /
0002002C
database size: 61.7MB, database backup size: 6.4MB
repo1: backup size: 6.2KB
backup reference list: 20240719-122703F

diff backup: 20240719-122703F_20240719-123408D
timestamp start/stop: 2024-07-19 12:34:08+05:30 / 2024-07-19
12:34:10+05:30
wal start/stop: 0002002E /
0002002E
database size: 61.7MB, database backup size: 6.4MB
repo1: backup size: 6.4KB
backup reference list: 20240719-122703F

incr backup: 20240719-122703F_20240723-110212I
timestamp start/stop: 2024-07-23 11:02:12+05:30 / 2024-07-23
11:02:15+05:30
wal start/stop: 00070038 /
00070038
database size: 48MB, database backup size: 6.4MB
repo1: backup size: 9.8KB
backup reference list: 20240719-122703F,
20240719-122703F_20240719-123408D

incr backup: 20240719-122703F_20240723-141818I
timestamp start/stop: 2024-07-23 14:18:18+05:30 / 2024-07-23
14:18:22+05:30
wal start/stop: 0008003C /
0008003C
database size: 75.4MB, database backup size: 33.8MB
repo1: backup size: 4.7MB
backup reference list: 20240719-122703F,
20240719-122703F_20240719-123408D, 20240719-122703F_20240723-110212I

 

pgBackRest for multiple production servers

2024-07-21 Thread KK CHN
Hi list ,

I am exploring the pgBackRest tool for production deployment. (My lab
setup, with one database server and a separate repo server, works fine as
described in the official docs.)

Query:

What is the standard practice for backing up multiple production servers
to one repo server (i.e. the pgbackrest configuration on the repo-server
side)?

Is defining multiple stanzas (Server_1, Server_2, ..., Server_N) and a
single [global] section with repo1, repo2, ..., repoN declarations the
right way to achieve this?

Please correct me if I am wrong.

Thank you
Krishane


Please find the proposed   pgbackrest.conf   in the  RepoServer  for
backing up multiple database servers.

/etc/pgbackrest/pgbackrest.conf   on  RepoServer
##
[Server_1]
pg1-host=10.20.20.6
pg1-host-user=pgbackUser
pg1-path=/var/lib/pgsql/16/data

. . .

[Server_N]
pgN-host=10.20.20.N
pgN-host-user=pgbackUser
pgN-path=/var/lib/pgsql/16/data


[global]
repo1-path=/var/lib/Server_1_Backup
repo1-retention-full=2
repo1-cipher-type=aes-256-cbc
repo1-cipher-pass=0oahu5f5dvH7eD4TI1eBEl8Vpn14hWEmgLGuXgpUHo9R2VQKCw6Sm99FnOfHBY
process-max=5
log-level-console=info
log-level-file=debug
start-fast=y
delta=y
repo1-block=y
repo1-bundle=y

repo2-path=/var/lib/Server_2_Backup
repo2-retention-full=2
repo2-cipher-type=aes-256-cbc
repo2-cipher-pass=0oahu5f5dvH7eD4TI1eBEl8Vpn14hWEmgLGuXgpUHo9R2VQKCw6Sm99FnOfHBY
process-max=5
log-level-console=info
log-level-file=debug
start-fast=y
delta=y
repo2-block=y
repo2-bundle=y

. . .

repoN-path=/var/lib/Server_N_Backup
repoN-retention-full=2
repoN-cipher-type=aes-256-cbc
repoN-cipher-pass=0oahu5f5dvH7eD4TI1eBEl8Vpn14hWEmgLGuXgpUHo9R2VQKCw6Sm99FnOfHBY
process-max=5
log-level-console=info
log-level-file=debug
start-fast=y
delta=y
repoN-block=y
repoN-bundle=y


[global:archive-push]
compress-level=3
###
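For comparison, my reading of the pgBackRest docs (treat this sketch as a hypothesis to verify, not a confirmed layout) is that each database server gets its own stanza with pg1-* options, while repo1, repo2, ... denote additional *repositories* for redundancy rather than additional servers; a single repo1 can hold all stanzas, each under its own subpath:

```
# /etc/pgbackrest/pgbackrest.conf on the repo server -- alternative sketch:
# one stanza per database server, one shared repository.
[server1]
pg1-host=10.20.20.6
pg1-host-user=pgbackUser
pg1-path=/var/lib/pgsql/16/data

[server2]
pg1-host=10.20.20.7
pg1-host-user=pgbackUser
pg1-path=/var/lib/pgsql/16/data

[global]
repo1-path=/var/lib/pgbackrest   # stanzas are stored under separate subdirs
repo1-retention-full=2
repo1-cipher-type=aes-256-cbc
start-fast=y
```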


Re: PgbackRest and EDB Query

2024-07-19 Thread KK CHN
Hi  list,

Thank you all for the great help and guidance. I am able to configure
pgbackrest with EPAS-16 and a repo server on separate machines.
Passwordless auth also works well; backup and restore are all fine.

Query:
How can I make the repo server host more than one EPAS-16 instance running
on multiple nodes?

With only one /etc/pgbackrest/pgbackrest.conf file on the repo server, how
do I specify the stanza names and the [global] section for multiple EPAS
servers?
My Repo Server:  cat /etc/pgbackrest/pgbackrest.conf

[Demo_Repo]
pg1-host=10.20.20.6
pg1-host-user=enterprisedb
pg1-path=/var/lib/edb/as16/data
pg-version-force=16

[global]
# about the repository

repo1-path=/var/lib/edb_BackupRepo

repo1-retention-full=2
repo1-cipher-type=aes-256-cbc
repo1-cipher-pass=0oahu5f5dvH7eD4TI1eBEl8Vpn14hWEmgLGuXgpUHo9R2VQKCw6Sm99FnOfHBY
process-max=5
log-level-console=info
log-level-file=debug
start-fast=y
delta=y
repo1-block=y
repo1-bundle=y
[global:archive-push]
compress-level=3
##


1. If there are multiple EPAS servers running on different nodes
(10.20.20.7, 10.20.20.8, etc.), how do I specify the stanzas and global
settings for each EPAS server in the single
/etc/pgbackrest/pgbackrest.conf on the repo server?

2. Say there are X EPAS servers (say 10, in different geographic
locations), each growing by approximately 1 GB/day. What connectivity
capacity should I provision so that pgbackrest can handle the archiving
and replication to the repo server in a production environment?

3. Also, what is the best backup schedule in a crontab for the tightest
RPO, i.e. near-zero data loss (incr or diff repetition intervals)? Below
is my sample crontab, with only full and diff backups (lab setup); what
cron configuration is needed for production and near-zero data loss?

My sample cron:

[root@RepoServer ~]# crontab -u postgres -l
30 06 * * 0    pgbackrest --type=full --stanza=Demo2 backup   # full, only on Sundays
04 16 * * 1-6  pgbackrest --type=diff --stanza=Demo2 backup   # diff, every day Mon-Sat
[root@uaterssdrservice02 ~]#
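For a tighter RPO, one plausible schedule (a sketch; the intervals are my assumption, not advice from the thread) adds frequent incrementals between the weekly full and daily diff. Note, though, that point-in-time RPO is bounded mainly by how promptly WAL is archived via archive_command, not by backup frequency:

```
# crontab -u postgres -e   (sketch)
30 06 * * 0     pgbackrest --type=full --stanza=Demo2 backup   # weekly full
04 16 * * 1-6   pgbackrest --type=diff --stanza=Demo2 backup   # daily diff
0 */2 * * *     pgbackrest --type=incr --stanza=Demo2 backup   # incr every 2 hours
```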

Thanks again
Krishane


On Fri, Jul 19, 2024 at 11:24 AM azeem subhani  wrote:

> Hi,
>
> passwordless connection can be established using ssh key, and when you
> don't specify the ssh key in command using -i switch:* -i
> /path/to/your/private/key*
> You simply need to set the SSH key as the default key which I have
> explained earlier, how to do that.
>
> As you are currently trying through following command, without specifying
> an ssh key for passwordless connection.
>
> From the EDB Postgres Advanced Server nodes
> $ sudo -u enterprisedb ssh pgbackrest@backup-server
>
>
>
>
> On Fri, Jul 19, 2024 at 10:06 AM Kashif Zeeshan 
> wrote:
>
>> Hi
>>
>> On Thu, Jul 18, 2024 at 6:10 PM KK CHN  wrote:
>>
>>>
>>>
>>> Hi list,
>>>
>>> Thank you all for your inputs. I am trying pgBackRest with
>>> EnterpriseDB. Locally pgbackrest works for EDB, but when I try a remote
>>> repository I am facing an issue (the passwordless authentication part
>>> from the remote host to the EDB server).
>>>
>>> Trying to  use a remote host  as Repo Server I am facing the issue of
>>> passwordless  authentication(Public key private key).
>>>
>>> 1.  From the EDB server  I  added the user pgbackrest directory and
>>> generated ssh-keys and copied the id_rsa.pub   to  the Repo server
>>> (pgbackrest user's .ssh dir with necessary permissions)
>>> everything(passwordless auth) working to one side.
>>>
>>> From the EDB Postgres Advanced Server nodes
>>> $ sudo -u enterprisedb ssh pgbackrest@backup-server
>>>
>>> This works from  EDB server machine without any issue(password less auth
>>> works)
>>>
>>>
>>>
>>> 2. But from the repo server,
>>> $ sudo -u pgbackrest ssh enterprisedb@EDB_Server_IP is unable to do
>>> passwordless auth (it asks for the password of enterprisedb@EDB_Server).
>>>
>>> How to do the passwordless auth  from the  Repo server to the EDB
>>> server  for the default "enterprisedb" user of  EDB ? ( enterprisedb user
>>> doesn't have any home dir  I mean /home/enterprisedb, so I am not sure
>>> where to create .ssh dir and authorized_keys for  passwordless auth  )
>>>
>> Please make sure that the passwordless connection is made between both
>> from EDB Server to Repo Server and from Repo Server to EDB Server.
>> For this you need to generate the  ssh keys on 

Re: PgbackRest and EDB Query

2024-07-18 Thread KK CHN
Hi list,

Thank you all for your inputs. I am trying pgBackRest with EnterpriseDB.
Locally pgbackrest works for EDB, but when I try a remote repository I am
facing an issue (the passwordless authentication part from the remote host
to the EDB server).

Trying to use a remote host as the repo server, I am facing a passwordless
(public/private key) authentication issue.

1. On the EDB server I created the pgbackrest user's directory, generated
ssh keys, and copied id_rsa.pub to the repo server (into the pgbackrest
user's .ssh dir, with the necessary permissions). Passwordless auth works
in that one direction.

From the EDB Postgres Advanced Server nodes
$ sudo -u enterprisedb ssh pgbackrest@backup-server

This works from  EDB server machine without any issue(password less auth
works)



2. But from the repo server,
$ sudo -u pgbackrest ssh enterprisedb@EDB_Server_IP
is unable to do passwordless auth (it asks for the password of
enterprisedb@EDB_Server).

How do I set up passwordless auth from the repo server to the EDB server
for EDB's default "enterprisedb" user? (The enterprisedb user doesn't seem
to have a home dir, i.e. no /home/enterprisedb, so I am not sure where to
create the .ssh dir and authorized_keys for passwordless auth.)

Anyone who has already tackled this, kindly guide me on how to achieve it.
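What I have seen in similar setups (a sketch under the assumption that the account's home is outside /home — check with getent): the enterprisedb user usually does have a home directory, typically under /var/lib/edb, and that is where .ssh/authorized_keys goes:

```shell
# Sketch -- run as root on the EDB server. Paths are assumptions.
HOME_DIR=$(getent passwd enterprisedb | cut -d: -f6)   # e.g. /var/lib/edb
mkdir -p "$HOME_DIR/.ssh"
# Append the repo server's pgbackrest public key (copied over beforehand):
cat /tmp/repo_id_rsa.pub >> "$HOME_DIR/.ssh/authorized_keys"
chmod 700 "$HOME_DIR/.ssh"
chmod 600 "$HOME_DIR/.ssh/authorized_keys"
chown -R enterprisedb: "$HOME_DIR/.ssh"
```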


Thank you,
Krishane







On Wed, Jul 17, 2024 at 9:07 PM Kashif Zeeshan 
wrote:

> Hi
>
> On Wed, Jul 17, 2024 at 5:21 PM KK CHN  wrote:
>
>> Hi ,
>>
>> I am trying pgbackrest(2.52.1)  with postgresql( version 16)  on  a lab
>> setup on RHEL-9. Both  PostgreSQL server and a remote Repository host
>> configured with pgbackrest and everything working fine as specified in the
>> documentation.
>>
>> note:  here I am running postgres server and pgbackrest everything as
>> postgres user and no issues in  backup and recovery.
>>
>>
>>
>> Query
>> 1. Is it possible to use  PgBackrest with  EnterpriseDB(EDB -16) for the
>> backup and recovery process? Or pgback works only with the community
>> PostgreSQL database ?
>>
> It support both community PG and EDB PG.
>
>>
>>
>> [ when I ran  initdb script of EDB while installing EDB it creates the
>> enterpisedb  as user and edb as initial  database by the script. ]
>>
> Enterprisedb is the default user created by EDB.
>
>>
>>
>> when I try to create the stanza on the EDB server it throws error
>> (pasted at bottom ).
>>
>>
>>
>> NOTE:
>> I know my EDB instance runs on port 5444 instead of 5432, the dbname is
>> edb instead of postgres, and the user is enterprisedb instead of
>> postgres. How do I specify these in the stanza-creation step, if EDB
>> supports the pgbackrest tool?
>>
> You can enter this connection information in the PbBackRest Conf file for
> the stanza you create for your EDB Instance.
>
> e.g
>
> [global]
> repo1-path=/var/lib/edb/as15/backups
>
> [demo]
> pg1-path=/var/lib/edb/as15/data
> pg1-user=enterprisedb
> pg1-port=5444
> pg-version-force=15
>
> Refer to following edb documentation
>
>
> https://www.enterprisedb.com/docs/supported-open-source/pgbackrest/03-quick_start/
>
>
>> OR   Am I doing a waste exercise  [if pgbackrest won't go ahead with EDB
>> ] ?
>>
>>
>> Any hints much appreciated.
>>
>> Thank you,
>> Krishane
>>
>>
>> ERROR:
>> root@uaterssdrservice01 ~]# sudo -u postgres pgbackrest --stanza=OD_DM2
>> --log-level-console=info  stanza-create
>> 2024-07-17 17:42:13.935 P00   INFO: stanza-create command begin 2.52.1:
>> --exec-id=1301876-7e055256 --log-level-console=info --log-level-file=debug
>> --pg1-path=/var/lib/pgsql/16/data --repo1-host=10.x.y.7
>> --repo1-host-user=postgres --stanza=OD_DM2
>> WARN: unable to check pg1: [DbConnectError] unable to connect to
>> 'dbname='postgres' port=5432': connection to server on socket
>> "/tmp/.s.PGSQL.5432" failed: No such file or directory
>> Is the server running locally and accepting connections on that
>> socket?
>> ERROR: [056]: unable to find primary cluster - cannot proceed
>>HINT: are all available clusters in recovery?
>> 2024-07-17 17:42:13.936 P00   INFO: stanza-create command end: aborted
>> with exception [056]
>> [root@uaterssdrservice01 ~]#
>>
>>
>>
>>
>>


PgbackRest and EDB Query

2024-07-17 Thread KK CHN
Hi ,

I am trying pgbackrest(2.52.1)  with postgresql( version 16)  on  a lab
setup on RHEL-9. Both  PostgreSQL server and a remote Repository host
configured with pgbackrest and everything working fine as specified in the
documentation.

note:  here I am running postgres server and pgbackrest everything as
postgres user and no issues in  backup and recovery.



Query:
1. Is it possible to use pgBackRest with EnterpriseDB (EDB 16) for the
backup and recovery process, or does pgBackRest work only with the
community PostgreSQL database?


[When I ran EDB's initdb script during installation, it created the user
enterprisedb and an initial database named edb.]


when I try to create the stanza on the EDB server it throws error  (pasted
at bottom ).



NOTE:
I know my EDB instance runs on port 5444 instead of 5432, the dbname is
edb instead of postgres, and the user is enterprisedb instead of postgres.
How do I specify these in the stanza-creation step, if EDB supports the
pgbackrest tool?

Or am I doing a wasted exercise [if pgbackrest won't go ahead with EDB]?
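The port, user, and dbname differences can live in the stanza section of pgbackrest.conf rather than on the stanza-create command line (a sketch; the data path is an assumption for an EPAS 16 install, and the option names should be verified against the pgBackRest configuration reference):

```
[OD_DM2]
pg1-path=/var/lib/edb/as16/data   # assumption: your actual EPAS data dir
pg1-port=5444
pg1-user=enterprisedb
pg1-database=edb
pg-version-force=16
```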


Any hints much appreciated.

Thank you,
Krishane


ERROR:
root@uaterssdrservice01 ~]# sudo -u postgres pgbackrest --stanza=OD_DM2
--log-level-console=info  stanza-create
2024-07-17 17:42:13.935 P00   INFO: stanza-create command begin 2.52.1:
--exec-id=1301876-7e055256 --log-level-console=info --log-level-file=debug
--pg1-path=/var/lib/pgsql/16/data --repo1-host=10.x.y.7
--repo1-host-user=postgres --stanza=OD_DM2
WARN: unable to check pg1: [DbConnectError] unable to connect to
'dbname='postgres' port=5432': connection to server on socket
"/tmp/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that
socket?
ERROR: [056]: unable to find primary cluster - cannot proceed
   HINT: are all available clusters in recovery?
2024-07-17 17:42:13.936 P00   INFO: stanza-create command end: aborted with
exception [056]
[root@uaterssdrservice01 ~]#


Re: pgBackRest on old installation

2023-11-20 Thread KK CHN
Thank you. It worked out well. But a basic doubt: is storing the DB
superuser password in .pgpass advisable? What other options do we have?
#su postgres
bash-4.2$ cd

bash-4.2$ cat .pgpass
*:*:*:postgres:your_password
bash-4.2$
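One alternative to keeping the superuser password in .pgpass (my suggestion, hedged): pgBackRest connects locally as the OS user that runs it, so peer authentication over the Unix socket needs no stored password at all — assuming pg_hba.conf maps the OS account to the database role, e.g.:

```
# pg_hba.conf sketch: the local OS user "postgres" may connect as the
# database role "postgres" over the Unix socket without a password.
local   all   postgres   peer
```

If .pgpass must stay, it should at least be chmod 600 and, where possible, hold a role with only the privileges pgBackRest needs rather than the superuser.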


On Mon, Nov 20, 2023 at 4:16 PM Achilleas Mantzios - cloud <
a.mantz...@cloud.gatewaynet.com> wrote:

>
> On 11/20/23 12:31, KK CHN wrote:
>
> list,
>
> I am trying pgBackRest on an RHEL 7.6 and old EDB 10 database cluster( a
> legacy application.)
>
> I have installed pgbackrest through  package install on RHEL7.6
> But unable to get the basic stanza-creation working It throws an error.
>
>
> * /etc/pgbackrest.conf  as follows..*
> 
> [demo]
> pg1-path=/app/edb/as10/data
> pg1-port = 5444
> pg1-socket-path=/tmp
>
> [global]
>
> repo1-cipher-pass=sUAeceWoDffSz9Q/d8sWREHe+wte3uOO9lggn5/5mTkQEempvBxQk5UbxsrDzHbw
>
> repo1-cipher-type=aes-256-cbc
> repo1-path=/var/lib/pgbackrest
> repo1-retention-full=2
> backup-user=postgres
>
>
> [global:archive-push]
> compress-level=3
> #
>
>
>
> [root@dbs ~]# pgbackrest version
> pgBackRest 2.48
> [root@dbs ~]#
> #
>
> *Postgres conf as follows... *
>
> listen_addresses = '*'
> port = 5444
> unix_socket_directories = '/tmp'
>
> archive_command = 'pgbackrest --stanza=demo archive-push %p'
> archive_mode = on
> log_filename = 'postgresql.log'
> max_wal_senders = 3
> wal_level = replica
>
> #
>
>
> *ERROR  Getting as follows ..What went wrong here ??*
>
>
>  [root@dbs ~]# sudo -u postgres pgbackrest --stanza=demo
> --log-level-console=info stanza-create
> 2023-11-20 21:04:05.223 P00   INFO: stanza-create command begin 2.48:
> --exec-id=29527-bf5e2f80 --log-level-console=info
> --pg1-path=/app/edb/as10/data --pg1-port=5444 --pg1-socket-path=/tmp
> --repo1-cipher-pass= --repo1-cipher-type=aes-256-cbc
> --repo1-path=/var/lib/pgbackrest --stanza=demo
> WARN: unable to check pg1: [DbConnectError] unable to connect to
> 'dbname='postgres' port=5444 host='/tmp'': connection to server on socket
> "/tmp/.s.PGSQL.5444" failed: fe_sendauth: no password supplied
> ERROR: [056]: unable to find primary cluster - cannot proceed
>HINT: are all available clusters in recovery?
> 2023-11-20 21:04:05.224 P00   INFO: stanza-create command end: aborted
> with exception [056]
> [root@dbs ~]#
>
> It complains about the password.  I followed the below tutorial link, but
> no mention of password (Where to supply password, what parameter where ?)
> setting here ==> https://pgbackrest.org/user-guide-rhel.html
>
> This is about the user connecting to the db, in general, pgbackrest has to
> connect like any other app/user. So, change your .pgpass to contain smth
> like the below on the top of the file :
>
> /tmp:5444:*:postgres:your_whatever_pgsql_password
>
> and retry
>
>
>
> Any hints welcome..  What am I missing here ??
>
> Best,
> Krishane
>
>
>
>
>
>
>
>


pgBackRest on old installation

2023-11-20 Thread KK CHN
list,

I am trying pgBackRest on RHEL 7.6 with an old EDB 10 database cluster (a
legacy application).

I installed pgbackrest from packages on RHEL 7.6, but I am unable to get
the basic stanza-create working; it throws an error.


* /etc/pgbackrest.conf  as follows..*

[demo]
pg1-path=/app/edb/as10/data
pg1-port = 5444
pg1-socket-path=/tmp

[global]
repo1-cipher-pass=sUAeceWoDffSz9Q/d8sWREHe+wte3uOO9lggn5/5mTkQEempvBxQk5UbxsrDzHbw

repo1-cipher-type=aes-256-cbc
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2
backup-user=postgres


[global:archive-push]
compress-level=3
#



[root@dbs ~]# pgbackrest version
pgBackRest 2.48
[root@dbs ~]#
#

*Postgres conf as follows... *

listen_addresses = '*'
port = 5444
unix_socket_directories = '/tmp'

archive_command = 'pgbackrest --stanza=demo archive-push %p'
archive_mode = on
log_filename = 'postgresql.log'
max_wal_senders = 3
wal_level = replica

#


*ERROR  Getting as follows ..What went wrong here ??*


 [root@dbs ~]# sudo -u postgres pgbackrest --stanza=demo
--log-level-console=info stanza-create
2023-11-20 21:04:05.223 P00   INFO: stanza-create command begin 2.48:
--exec-id=29527-bf5e2f80 --log-level-console=info
--pg1-path=/app/edb/as10/data --pg1-port=5444 --pg1-socket-path=/tmp
--repo1-cipher-pass= --repo1-cipher-type=aes-256-cbc
--repo1-path=/var/lib/pgbackrest --stanza=demo
WARN: unable to check pg1: [DbConnectError] unable to connect to
'dbname='postgres' port=5444 host='/tmp'': connection to server on socket
"/tmp/.s.PGSQL.5444" failed: fe_sendauth: no password supplied
ERROR: [056]: unable to find primary cluster - cannot proceed
   HINT: are all available clusters in recovery?
2023-11-20 21:04:05.224 P00   INFO: stanza-create command end: aborted with
exception [056]
[root@dbs ~]#

It complains about the password. I followed the tutorial link below, but
it makes no mention of a password (where do I supply the password, and
with what parameter?) ==> https://pgbackrest.org/user-guide-rhel.html


Any hints welcome..  What am I missing here ??

Best,
Krishane


capacity planning question

2023-10-30 Thread KK CHN
Hi,



I need to set up infrastructure for a data analytics / live video stream
analytics application using big data and analytics technology.

The data is currently stored as structured data (no video streaming) in a
Postgres database. It is an emergency call handling solution; the database
stores caller info (address, mobile number, location coordinates,
emergency category metadata) and dispatch information for rescue vehicles,
plus rescue vehicle location updates (lat/long) every 30 seconds.



Input 1: I have to run analytics on this data (about 600 GB accumulated
over the last 2 years) and build an analytical application (using Python
and data analytics libraries) that displays results and predictions
through a dashboard.


Query 1: How much compute (GPU/CPU cores) and memory does such an
analytical application require? Is any specific type of storage (e.g.
in-memory, like Redis) needed? Any hints are most welcome; if more input
is required, let me know and I can provide it if available.


Input 2:

In addition to the above, I have to do video analytics on footage from
body-worn cameras carried by police personnel and from drone surveillance:
live video from emergency sites and patrol vehicles (from a mobile tablet
over 5G), streaming incident locations for a few minutes (say 3 to 5
minutes per incident). There are 50 drones, 500 emergency rescue service
vehicles, and 300 body-worn camera personnel, with roughly 5000 emergency
incidents per day, of which at least 1000 need live streaming of 4 to 5
minutes each.


Query 2: What computing resources (how many GPU/CPU cores, how much RAM)
and what storage solutions should I deploy? Is in-memory storage (Redis or
similar) or any other specific data storage mechanism needed? Any hints
much appreciated.


Best,

Krishane


Re: pgBackRest for a 50 TB database

2023-10-03 Thread KK CHN
Greetings,
Happy to hear you successfully ran pgBackRest on a 50 TB DB. Out of
curiosity, I would like to know your infrastructure settings.

1. What connectivity protocol and bandwidth did you use for your backend
storage: iSCSI, FC, FCoE, or GbE? What exactly accounts for the 26 hours
in your best case, and what factors could reduce those 26 hours to much
less, say 10 hours, for a 50 TB DB backup? What should be fine-tuned or
deployed for better performance?
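On the tuning question, the knobs I would benchmark first (a sketch; the values are assumptions to test, not recommendations from this thread) are parallelism and compression in pgbackrest.conf:

```
[global]
process-max=26        # parallel workers; leave CPU headroom for the DB
compress-type=zst     # zstd typically compresses much faster than gzip
compress-level=3
repo1-bundle=y        # bundle small files, reduces per-file overhead
repo1-block=y         # block-level incremental support
start-fast=y
```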

2. You said the DB runs on a 2-socket, 18-core-per-socket machine (36
physical cores). Is that hardware entirely dedicated to the 50 TB database
alone? I ask because nowadays DB servers mostly run on VMs in virtualized
environments. Are all 36 physical cores and the associated RAM used by
your 50 TB database server, or are there spare CPU cores / free RAM on
those machines?

3. What connectivity/bandwidth did you establish between the DB server and
the storage backend? I would like to know the server NIC card details, the
channel protocol/bandwidth, and the spec of the switch between the DB
server and the storage backend (NAS in this case, right?).

Could you share your recommendations/details? I also need to run such a
pgBackRest trial from a production DB to a suitable storage device (most
likely Dell Unity unified storage).

Any inputs are most welcome.

Thanks,
Krishane

On Tue, Oct 3, 2023 at 12:14 PM Abhishek Bhola <
abhishek.bh...@japannext.co.jp> wrote:

> Hello,
>
> As said above, I tested pgBackRest on my bigger DB and here are the
> results.
> Server on which this is running has the following config:
> Architecture:  x86_64
> CPU op-mode(s):32-bit, 64-bit
> Byte Order:Little Endian
> CPU(s):36
> On-line CPU(s) list:   0-35
> Thread(s) per core:1
> Core(s) per socket:18
> Socket(s): 2
> NUMA node(s):  2
>
> Data folder size: 52 TB (has some duplicate files since it is restored
> from tapes)
> Backup is being written on to DELL Storage, mounted on the server.
>
> pgbackrest.conf with following options enabled
> repo1-block=y
> repo1-bundle=y
> start-fast=y
>
>
> 1. *Using process-max: 30, Time taken: ~26 hours*
> full backup: 20230926-092555F
> timestamp start/stop: 2023-09-26 09:25:55+09 / 2023-09-27
> 11:07:18+09
> wal start/stop: 00010001AC0E0044 /
> 00010001AC0E0044
> database size: 38248.9GB, database backup size: 38248.9GB
> repo1: backup size: 6222.0GB
>
> 2. *Using process-max: 10, Time taken: ~37 hours*
>  full backup: 20230930-190002F
> timestamp start/stop: 2023-09-30 19:00:02+09 / 2023-10-02
> 08:01:20+09
> wal start/stop: 00010001AC0E004E /
> 00010001AC0E004E
> database size: 38248.9GB, database backup size: 38248.9GB
> repo1: backup size: 6222.0GB
>
> Hope it helps someone to use these numbers as some reference.
>
> Thanks
>
>
> On Mon, Aug 28, 2023 at 12:30 AM Abhishek Bhola <
> abhishek.bh...@japannext.co.jp> wrote:
>
>> Hi Stephen
>>
>> Thank you for the prompt response.
>> Hearing it from you makes me more confident about rolling it to PROD.
>> I will have a discussion with the network team once about and hear what
>> they have to say and make an estimate accordingly.
>>
>> If you happen to know anyone using it with that size and having published
>> their numbers, that would be great, but if not, I will post them once I set
>> it up.
>>
>> Thanks for your help.
>>
>> Cheers,
>> Abhishek
>>
>> On Mon, Aug 28, 2023 at 12:22 AM Stephen Frost 
>> wrote:
>>
>>> Greetings,
>>>
>>> * Abhishek Bhola (abhishek.bh...@japannext.co.jp) wrote:
>>> > I am trying to use pgBackRest for all my Postgres servers. I have
>>> tested it
>>> > on a sample database and it works fine. But my concern is for some of
>>> the
>>> > bigger DB clusters, the largest one being 50TB and growing by about
>>> > 200-300GB a day.
>>>
>>> Glad pgBackRest has been working well for you.
>>>
>>> > I plan to mount NAS storage on my DB server to store my backup. The
>>> server
>>> > with 50 TB data is using DELL Storage underneath to store this data
>>> and has
>>> > 36 18-core CPUs.
>>>
>>> How much free CPU capacity does the system have?
>>>
>>> > As I understand, pgBackRest recommends having 2 full backups and then
>>> > having incremental or differential backups as per requirement. Does
>>> anyone
>>> > have any reference numbers on how much time a backup for such a DB
>>> would
>>> > usually take, just for reference. If I take a full backup every Sunday
>>> and
>>> > then incremental backups for the rest of the week, I believe the
>>> > incremental backups should not be a problem, but the full backup every
>>> > Sunday might not finish in time.
>>>
>>> pgBackRest scales extremely well- what's going to matter here is how
>>> much you can 

Re: DB Server slow down & hang during Peak hours of Usage

2023-08-08 Thread KK CHN
On Tue, Aug 8, 2023 at 5:49 PM Marc Millas  wrote:

> also,
> checkpoint setup are all default values
>
> you may try to
> checkpoint_completion_target = 0.9
> checkpoint_timeout = 15min
> max_wal_size = 5GB
>
> and, as said in the previous mail, check the checkpoint logs
>
> Also, all vacuum and autovacuum values are defaults
> so, as autovacuum_work_mem = -1
> the autovacuum processes will use the 4 GB setuped by maintenance_work_mem
> = 4096MB
> as there are 3 launched at the same time, its 12 GB "eaten"
> which doesn't look like a good idea, so set
> autovacuum_work_mem = 128MB
>
> also pls read the autovacuum doc for your version (which is ?) here for
> postgres 12:
> https://www.postgresql.org/docs/12/runtime-config-autovacuum.html
>
>
>
> Marc MILLAS
> Senior Architect
> +33607850334
> www.mokadb.com
>
>
>
> On Tue, Aug 8, 2023 at 1:59 PM Marc Millas  wrote:
>
>> Hello,
>> in the postgresql.conf joined, 2 things (at least) look strange:
>> 1) the values for background writer are the default values, fit for a
>> server with a limited writes throughput.
>> you may want to increase those, like:
>> bgwriter_delay = 50ms
>> bgwriter_lru_maxpages = 400
>> bgwriter_lru_multiplier = 4.0
>> and check the checkpoint log to see if there are still backend processes
>> writes.
>>
>> 2) work_mem is set to 2 GB.
>> so, if 50 simultaneous requests use at least one buffer for sorting,
>> joining, ..., you will consume 100 GB of RAM
>> this value seems huge for the kind of config/usage you describe.
>> You may try to set work_mem to 100 MB and check what's happening.
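
The memory arithmetic in this thread can be sanity-checked in a few lines
(the session and worker counts are the ones assumed in the mails above):

```python
# Worst-case memory budget from the settings discussed in this thread.
# Note: work_mem applies per sort/hash operation, so a single complex query
# can use several multiples of it; this is the *minimum* worst case.
sessions = 50          # simultaneous requests assumed above
work_mem_gb = 2        # current setting: 2 GB
autovac_workers = 3    # autovacuum default is 3 workers
maintenance_work_mem_gb = 4

query_ram = sessions * work_mem_gb
autovac_ram = autovac_workers * maintenance_work_mem_gb
print(query_ram, autovac_ram)   # 100 GB for queries, 12 GB for autovacuum

# With the suggested work_mem = 100 MB instead:
print(sessions * 0.1)           # 5.0 GB
```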
>>
>> Also check the logs, postgres tells his life there...
>>
>>
>>
>>
>>
>> Marc MILLAS
>> Senior Architect
>> +33607850334
>> www.mokadb.com
>>
>>
>>
Thank you all for your time and the valuable inputs to fix the issue. Let me
tune the conf parameters as advised, and I will get back with the results and
log outputs.

Krishane

>
>> On Mon, Aug 7, 2023 at 3:36 PM KK CHN  wrote:
>>
>>> List ,
>>>
>>> *Description:*
>>>
>>> Maintaining a DB Server Postgres and with a lot of read writes to this
>>> Server( virtual machine running on  ESXi 7 with CentOS 7) .
>>>
>>> ( I am not sure how to get the read / write counts or required IOPS or
>>> any other parameters for you. If  you point our  I can execute those
>>> commands and get the data. )
>>>
>>> Peak hours  say 19:00 Hrs to 21:00 hrs it hangs ( The application is an
>>> Emergency call response system  writing many  Emergency Response vehicles
>>> locations coordinates to the DB every 30 Seconds and every emergency call
>>> metadata (username, phone number, location info and address of the caller
>>> to the DB for each call)
>>>
>>> During these hours  the system hangs and the  Application ( which shows
>>> the location of the vehicles on a  GIS map hangs ) and the CAD machines
>>> which connects to the system hangs as those machines can't  connect to the
>>> DB and get data for displaying the caller information to the call taking
>>> persons working on them. )
>>>
>>> *Issue : *
>>> How to trace out what makes this DB  hangs and make it slow  and how to
>>> fix it..
>>>
>>> *Resource poured on the system :*
>>>
>>> *64 vCPUs  allocate ( Out of a host machine comprised of 2 processor
>>> slots of 20 cores each with Hyper Threading, intel xeon 2nd Gen, CPU usage
>>> show 50 % in vCentre Console), and RAM 64 GB allocated ( buy usage always
>>> showing around 33 GB only ) *
>>>
>>> *Query :*
>>>
>>> How to rectify the issues that makes the DB server underperforming and
>>> find a permanent fix for this slow down issue*. *
>>>
>>> *Attached the  Postgres.conf file here for reference .*
>>>
>>> *Any more information required I can share for analysis to fix the
>>> issue. *
>>>
>>>
>>> *Krishane *
>>>
>>


Re: My 1st TABLESPACE

2023-08-08 Thread KK CHN
On Mon, Aug 7, 2023 at 5:47 PM Amn Ojee Uw  wrote:

> Thanks Negora.
>
> Makes sense, I will check it out.
>
> On 8/7/23 1:48 a.m., negora wrote:
>
> Hi:
>
> Although the "postgres" user owns the "data" directory, Has he access to
> the whole branch of directories? Maybe the problem is that he can't reach
> the "data" directory.
>
> Regards.
>
>
> On 07/08/2023 07:43, Amn Ojee Uw wrote:
>
> I'd like to create a TABLESPACE, so, following this web page
> ,  I
> have done the following :
>
> *mkdir
> /home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data*
>
> *sudo chown postgres:postgres
> /home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data*
>
> *sudo -u postgres psql*
>
> *\du*
> * arbolone  | Cannot login                                                | {}*
> * chispa    |                                                             | {prosafe}*
> * workerbee | Superuser, Create DB                                        | {arbolone}*
> * jme       |                                                             | {arbolone}*
> * postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS  | {}*
> * prosafe   | Cannot login                                                | {}*
>
> *CREATE TABLESPACE jmetablespace OWNER jme LOCATION
> '/home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data';*
>
>
A note here: the tablespace OWNER (jme) is a database role; the directory on
disk should stay owned by the postgres system user, as you already did with
chown. The likelier culprit is that postgres cannot traverse the parent
directories (a home directory is commonly mode 700, as negora suggests above).
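
negora's point can be checked mechanically: to reach the data directory, the
postgres OS user needs execute (search) permission on every parent directory.
A minimal sketch of that check; the path is the one from the mail, and
checking only the 'other' permission bits is a simplifying assumption (group
membership or ACLs could still grant access):

```python
import os
import stat

def first_blocking_dir(path):
    """Walk from / down to `path` and return the first directory that denies
    execute (search) permission to 'other' users, or None if none does."""
    parts = os.path.abspath(path).split(os.sep)
    current = os.sep
    for part in parts[1:]:
        current = os.path.join(current, part)
        if not os.path.isdir(current):
            break                      # rest of the path doesn't exist here
        mode = os.stat(current).st_mode
        if not mode & stat.S_IXOTH:    # no o+x: postgres cannot descend
            return current
    return None

print(first_blocking_dir(
    "/home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme"
    "/database/postgresql/data"))
```

If this prints the home directory, a `chmod o+x` on it (or moving the
tablespace outside /home) should clear the "Permission denied" error.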

> The *CREATE **TABLESPACE* schema throws this error message :
>
> *ERROR:  could not set permissions on directory
> "/home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data":
> Permission denied*
>
> I have followed the web page to the best of my abilities, and AFAIK, the
> postgres user owns the folder '*data*'.
>
> I know that something is missing, where did I go wrong and how can I
> resolve this issue?
>
>
> Thanks in advance.
>
>
>


DB Server slow down & hang during Peak hours of Usage

2023-08-07 Thread KK CHN
List ,

*Description:*

Maintaining a DB Server Postgres and with a lot of read writes to this
Server( virtual machine running on  ESXi 7 with CentOS 7) .

(I am not sure how to get the read/write counts, required IOPS, or any other
parameters for you. If you point out the commands, I can execute them and get
the data.)

During peak hours, say 19:00 to 21:00, the system hangs. (The application is
an emergency call response system: many emergency response vehicles write
their location coordinates to the DB every 30 seconds, and for each emergency
call the metadata (username, phone number, location info, and address of the
caller) is written to the DB.)

During these hours the system hangs: the application (which shows the location
of the vehicles on a GIS map) hangs, and the CAD machines that connect to the
system hang as well, since they cannot connect to the DB to fetch the caller
information displayed to the call takers working on them.

*Issue: *
How do we trace what makes this DB hang and become slow, and how do we fix it?
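
A first step that usually narrows this kind of problem down is letting
Postgres log the usual suspects itself. A hedged postgresql.conf starting
point (the values are generic suggestions, not taken from this setup):

```ini
log_checkpoints = on
log_lock_waits = on                  # logs waits longer than deadlock_timeout (1s)
log_min_duration_statement = 1000    # log statements taking > 1s
log_temp_files = 0                   # log every temp-file spill to disk
log_autovacuum_min_duration = 0      # log all autovacuum activity
```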

*Resources poured into the system:*

*64 vCPUs allocated (out of a host machine comprising 2 processor sockets of
20 cores each with Hyper-Threading, Intel Xeon 2nd Gen; CPU usage shows 50% in
the vCenter console), and 64 GB RAM allocated (but usage always shows only
around 33 GB).*

*Query:*

How do we rectify the issues that make the DB server underperform, and find a
permanent fix for this slowdown issue?

*Attached the postgresql.conf file here for reference.*

*Any more information required I can share for analysis to fix the issue. *


*Krishane *


postgresql(1).conf
Description: Binary data


Re: Backup Copy of a Production server.

2023-08-07 Thread KK CHN
On Mon, Aug 7, 2023 at 10:49 AM Ron  wrote:

> On 8/7/23 00:02, KK CHN wrote:
>
> List,
>
> I am in need to copy a production PostgreSQL server  data( 1 TB)  to  an
> external storage( Say USB Hard Drive) and need to set up a backup server
> with this data dir.
>
> What is the trivial method to achieve this ??
>
> 1. Is Sqldump an option at a production server ?? (  Will this affect the
> server performance  and possible slowdown of the production server ? This
> server has a high IOPS). This much size 1.2 TB will the Sqldump support ?
> Any bottlenecks ?
>
>
> Whether or not there will be bottlenecks depends on how busy (CPU and disk
> load) the current server is.
>
>
> 2. Is copying the data directory from the production server to an external
> storage and replace the data dir  at a  backup server with same postgres
> version and replace it's data directory with this data dir copy is a viable
> option ?
>
>
> # cp  -r   ./data  /media/mydb_backup  ( Does this affect the Production
> database server performance ??)   due to the copy command overhead ?
>
>
> OR  doing a WAL Replication Configuration to a standby is the right method
> to achieve this ??
>
>
> But you say you can't establish a network connection outside the DC.  ( I
> can't do for a remote machine .. But I can do  a WAL replication to another
> host in the same network inside the DC. So that If I  do a sqldump  or Copy
> of Data dir of the standby server it won't affect the production server, is
> this sounds good  ?  )
>
>
>  This is to take out the database backup outside the Datacenter and our DC
> policy won't allow us to establish a network connection outside the DC to a
> remote location for WAL replication .
>
>
> If you're unsure of what Linux distro & version and Postgresql version
> that you'll be restoring the database to, then the solution is:
> DB=the_database_you_want_to_backup
> THREADS=
> cd $PGDATA
> cp -v pg_hba.conf postgresql.conf /media/mydb_backup
> cd /media/mydb_backup
> pg_dumpall --globals-only > globals.sql
>

What is the relevance of --globals-only, and what exactly does it produce?

> pg_dump --format=d --verbose --jobs=$THREADS $DB &> ${DB}.log

And what does the `&> ${DB}.log` redirection in this line do? I couldn't work
out what it means; should it be ${DB}.sql instead?
>
> If you're 100% positive that the system you might someday restore to is
> *exactly* the same distro & version, and Postgresql major version, then
> I'd use PgBackRest.
>
> --
> Born in Arizona, moved to Babylonia.
>
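
For the archive: pg_dumpall --globals-only emits SQL for cluster-wide objects
(roles and tablespaces) that a per-database pg_dump omits, and `&> ${DB}.log`
only redirects the --verbose progress output into a log file; with --format=d
the dump itself goes into a directory (which needs an explicit --file target).
A hedged sketch of the same recipe as a command builder, with placeholder
paths and names:

```python
# Builds the backup command lines from the recipe quoted above; nothing is
# executed, so the structure can be inspected without a live cluster.
def build_backup_commands(db, threads, outdir):
    # Cluster-wide objects (roles, tablespaces) are NOT in a per-database
    # pg_dump; --globals-only captures exactly those as SQL.
    globals_cmd = ["pg_dumpall", "--globals-only",
                   "--file", f"{outdir}/globals.sql"]
    # --format=d (directory format) writes the dump into the --file
    # directory; the `&> db.log` in the original line only captures the
    # --verbose progress output. It is a log, not the dump itself.
    dump_cmd = ["pg_dump", "--format=d", "--verbose",
                f"--jobs={threads}", "--file", f"{outdir}/{db}.dump", db]
    return globals_cmd, dump_cmd

g, d = build_backup_commands("mydb", 8, "/media/mydb_backup")
print(" ".join(g))
print(" ".join(d))
```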


Backup Copy of a Production server.

2023-08-06 Thread KK CHN
List,

I need to copy a production PostgreSQL server's data (1 TB) to external
storage (say, a USB hard drive) and set up a backup server with this data
directory.

What is the simplest method to achieve this?

1. Is an SQL dump (pg_dump) an option on a production server? Will it affect
server performance and possibly slow the production server down? (This server
has high IOPS.) Will a dump of this size (1.2 TB) work? Any bottlenecks?

2. Is copying the data directory from the production server to external
storage, and then replacing the data directory of a backup server (running the
same Postgres version) with this copy, a viable option?


# cp -r ./data /media/mydb_backup   (Does this affect production database
server performance due to the overhead of the copy command?)

OR is configuring WAL replication to a standby the right method to achieve
this?

This is to take the database backup outside the datacenter, and our DC policy
won't allow us to establish a network connection from the DC to a remote
location for WAL replication.

Any hints most welcome ..

Thank you
Krishane


EDB to Postgres Migration

2023-07-13 Thread KK CHN
List,

Recently I took over managing a few EDB instances running on the EDB 10
version.

I am looking for options to migrate all these EDB instances to the Postgres
community edition.

1. What major steps/actions are involved (in a bird's-eye view) in a
successful migration to the Postgres community edition (from EDB 10 to
Postgres 14)?

2. What major challenges (or hurdles) are involved?


Please enlighten me with your experience..

Any reference  links most welcome ..

PS: The EDB instances are live and in production. I can get a downtime window
of 5 to 15 minutes maximum. Or is live porting/upgrading to Postgres 14
possible with minimal downtime?

Request your  guidance,
Krishane.


BI Reports and Postgres

2023-07-11 Thread KK CHN
List,
1. For generating BI reports, which databases are more suitable: an RDBMS like
Postgres, or a NoSQL store like MongoDB? Which is best, and why?

2. In which scenarios and application contexts are NoSQL DBs like MongoDB et
al. useful? Or is NoSQL losing its initial hype?

3. Could someone point out which BI reporting tools (open-source / free
software) are available for generating BI reports from Postgres? What does the
community use?

4. For generating BI reports, does it make sense to keep your data in an
RDBMS, or does it need to be ported to MongoDB or a similar NoSQL store?

Any hints are much appreciated.
Krishane


PostgreSQL Server Hang​

2023-06-21 Thread KK CHN
*Description of System: *
1. We are running a Postgres server (version 12, on CentOS 6) for an emergency
call attending and vehicle tracking system; vehicles are fitted with mobile
devices running navigation apps for the emergency service.

2. Every 30 seconds the vehicles send location coordinates (lat/long), which
are stored in the DB server at the emergency call center cum control room.

*Issue: *
The database hangs and becomes unresponsive to the applications that try to
connect to it, so eventually the applications are also crawling on their
knees.


*Mitigation done so far: *
We increased resources: CPU (vCPUs) from 32 to 64 (not sure if this is the
right approach, maybe a dumb idea, but the hanging issue was rectified for the
time being).

RAM was increased from 32 GB to 48 GB, but we observed that RAM usage always
stayed below 32 GB (so the RAM was increased foolishly!).

*Question: *
How do we optimize and fine-tune away this database performance issue?
Pouring in resources as above is definitely not a solution.

What should we check to find the root cause of the performance bottleneck?
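
When the server next hangs, one concrete thing to capture is what every
backend is waiting on. A hedged sketch of such a snapshot query
(pg_stat_activity's wait_event columns and pg_blocking_pids() exist from
PostgreSQL 9.6 onward, so they apply to the version 12 server described here):

```python
# SQL to snapshot what each backend is doing/waiting on during a hang.
HANG_SNAPSHOT_SQL = """
SELECT pid,
       state,
       wait_event_type,
       wait_event,
       now() - query_start AS query_runtime,
       pg_blocking_pids(pid) AS blocked_by,
       left(query, 80) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY query_start;
"""

# Assumed invocation (placeholder database name):
#   psql -d mydb -c "<the SQL above>"
print(HANG_SNAPSHOT_SQL)
```

Sessions with a non-empty blocked_by array are waiting on locks held by those
PIDs; many rows pointing at one PID usually identifies the culprit.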

Thank you,
Krishane


*Additional Inputs If required: *

*##*
The DB machine runs on CentOS 6, as a single database instance in a virtual
machine.

The database server also stores call-center data (call arrival and dispatch
timestamps, and short messages to and from around 300 desktop application
operators) plus data from the mobile tablets fitted on vehicles with the VTS
app installed. The vehicle locations are continuously stored into the database
every 30 seconds.

Voice calls from callers in an emergency, each around 3 MB in size, are not
stored in the database; they are stored as files in an NFS-mounted folder, and
the database keeps only references to those voice call files for future use
(call volumes are around 1 lakh, i.e. 100,000, per day). Only metadata related
to the calls is stored in the DB: caller name, caller number, caller lat/long,
and a short description of the caller's situation (under 200 characters, times
3 messages per call).

This database is also used for daily reports on the actions taken by call
takers/dispatchers, vehicle tracking reports, etc. Around 2000 vehicles in the
fleet are fitted with mobile tablets running the emergency navigation app.

The database grows by roughly 1 GB/day.





Re: NEO6 GPS with Py PICO with micropython

2022-11-30 Thread KK CHN
List,

Just commented out the gpsModule.readline() call at the top of the while loop
(refer to the link
https://microcontrollerslab.com/neo-6m-gps-module-raspberry-pi-pico-micropython/
):

while True:
    # gpsModule.readline()   <- this line commented out; the "GPS not
    #                           found" message disappeared
    buff = str(gpsModule.readline())
    parts = buff.split(',')


The "GPS not found" error, which used to appear intermittently in the Python
console output (printing "GPS data not found" for, say, 7 to 8 seconds at a
time), has now disappeared.

Any thoughts? How did commenting out that line make the "GPS data not found"
output vanish?
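
A guess (an assumption, not verified on the hardware): the original loop
called gpsModule.readline() twice per iteration, once at the top and once
inside str(gpsModule.readline()), so every other NMEA sentence was read and
discarded, and the $GPGGA line was often the one thrown away before the
8-second timeout expired. Removing the first call lets every sentence reach
the parser. The decode itself is plain string work, for example for a
standard $GPGGA sentence:

```python
def parse_gpgga(sentence):
    """Extract (lat, lon) in decimal degrees from a $GPGGA NMEA sentence.
    NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm."""
    parts = sentence.split(',')
    if len(parts) < 6 or not parts[0].endswith('GPGGA') or not parts[2]:
        return None   # wrong sentence type, or no fix yet (empty lat field)
    lat = int(parts[2][:2]) + float(parts[2][2:]) / 60.0
    if parts[3] == 'S':
        lat = -lat
    lon = int(parts[4][:3]) + float(parts[4][3:]) / 60.0
    if parts[5] == 'W':
        lon = -lon
    return lat, lon

# The classic NMEA example sentence:
print(parse_gpgga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"))
```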

Krishane

On Wed, Nov 30, 2022 at 3:58 AM rbowman  wrote:

> On Tue, 29 Nov 2022 17:23:31 +0530, KK CHN wrote:
>
>
> > When I ran the program I am able to see the output of  latitude and
> > longitude in the console of thony IDE.  But  between certain intervals
> > of a few seconds  I am getting the latitude and longitude data ( its
> > printing GPS data not found ?? ) in the python console.
>
> I would guess the 8 seconds in
>
> timeout = time.time() + 8
>
> is too short. Most GPS receivers repeat a sequence on NMEA sentences and
> the code is specifically looking for $GPGGA. Add
>
> print(buff)
>
> to see the sentences being received. I use the $GPRMC since I'm interested
> in the position, speed, and heading. It's a different format but if you
> only want lat/lon you could decode it in a similar fashion as the $GPGGA.
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


NEO6 GPS with Py PICO with micropython

2022-11-29 Thread KK CHN
List ,
I am following this tutorial to get latitude and longitude data using a NEO6
GPS module and a Raspberry Pi Pico to read the GPS data from the device.

I followed the code specified in this tutorial:
https://microcontrollerslab.com/neo-6m-gps-module-raspberry-pi-pico-micropython/

I have installed the Thonny IDE on my desktop (Windows PC) and ran the code
with all the devices connected, using a USB cable to my PC.

When I ran the program I was able to see the latitude and longitude output in
the Thonny IDE console. But at certain intervals of a few seconds I get "GPS
data not found" printed in the Python console instead of the latitude and
longitude data.

The satellite count from the $GPGGA output shows 03, and the "GPS data not
found" message repeats randomly for intervals of seconds. Any hints why it is
missing the GPS data (randomly)?

PS: The GPS device is placed outside my window and connected to the PC with a
USB cable from the Pico module. The NEO6 GPS device's red LED keeps blinking
even while the "GPS data not found" messages appear in the Python console.

Any hints ?? most welcome

Yours,
Krishane
-- 
https://mail.python.org/mailman/listinfo/python-list


Python code: brief

2022-07-26 Thread KK CHN
List ,

I have come across some code I find difficult to understand; I am unable to
work out exactly what the code in this file is doing:

https://raw.githubusercontent.com/CODARcode/MDTrAnal/master/lib/codar/oas/MDTrSampler.py

I am new to this type of scientific computing code, and it was written by
someone else. Due to a requirement I would like to understand what these lines
of code do exactly. If someone could explain to me what the code blocks do, it
would be a great help.

Thanks in advance
Krish
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [users@httpd] site compromised and httpd log analysis

2022-07-06 Thread KK CHN
On Wed, Jul 6, 2022 at 8:33 AM Yehuda Katz  wrote:

> Your log doesn't start early enough. Someone uploaded a web shell (or
> found an existing web shell) to your server, possibly using an upload for
> that doesn't validate the input, then used that shell to run commands on
> your server.
>

Yes, that log did not go back far enough.

Here is an older log paste:
https://zerobin.net/?a4d9f5b146676594#hkpTU0ljaG5W0GUNVEsaYqvffQilrXavBmbK+V9mzUw=

This log starts earlier than the previous ones, which may help the
investigation.

I would consider your entire server to be compromised at this point since
> you have no record of what else the attacker could have done once they had
> a shell.
>
Yes, we took the server down and recreated the VM from an old backup. We also
informed the developer/maintainer about the simple-shell execution and the
need for regular patching of the PHP 7 stack and the WordPress framework they
used for hosting.

I would like to know what further details/analysis we need in order to find
out how the attacker got access, when the backdoor was installed, and which
vulnerability they exploited.

I request your tips on investigating further, finding the root cause of this
kind of attack, and preventing it in future.



Make sure that you do not allow users to upload files and then execute
> those files.
>
> - Y
>
> On Tue, Jul 5, 2022 at 9:53 PM KK CHN  wrote:
>
>> https://pastebin.com/YspPiWif
>>
>> One of the websites hosted  by a customer on our Cloud infrastructure was
>> compromised, and the attackers were able to replace the home page with
>> their banner html page.
>>
>> The log files output I have pasted above.
>>
>> The site compromised was PHP 7 with MySQL.
>>
>> From the above log, can someone point out what exactly happened and how
>> they are able to deface the home page.
>>
>> How to prevent these attacks ? What is the root cause of this
>> vulnerability  and how the attackers got access ?
>>
>> Any other logs or command line outputs required to trace back kindly let
>> me know what other details  I have to produce ?
>>
>> Kindly shed your expertise in dealing with these kind of attacks and
>> trace the root cause and prevention measures to block this.
>>
>> Regards,
>> Krish
>>
>>
>>


[users@httpd] site compromised and httpd log analysis

2022-07-05 Thread KK CHN
https://pastebin.com/YspPiWif

One of the websites hosted  by a customer on our Cloud infrastructure was
compromised, and the attackers were able to replace the home page with
their banner html page.

The log files output I have pasted above.

The site compromised was PHP 7 with MySQL.

From the above log, can someone point out what exactly happened and how they
were able to deface the home page?

How to prevent these attacks ? What is the root cause of this
vulnerability  and how the attackers got access ?

Any other logs or command line outputs required to trace back kindly let me
know what other details  I have to produce ?
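
In case it helps with triage: httpd access logs in the combined format can be
dissected with a short script, for example to pull out POST requests to .php
files (a common sign of web-shell traffic). The sample line below is a made-up
illustration, not a line from the real log:

```python
import re

# Apache "combined" LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-agent}i"
COMBINED = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+ '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"')

def suspicious(line):
    """Flag POSTs to PHP scripts; a common sign of web-shell traffic."""
    m = COMBINED.match(line)
    if not m:
        return None
    if m.group('method') == 'POST' and '.php' in m.group('path'):
        return m.group('host'), m.group('path'), m.group('status')
    return None

sample = ('203.0.113.7 - - [03/Jul/2022:10:15:32 +0530] '
          '"POST /uploads/simple.php HTTP/1.1" 200 512 "-" "curl/7.68.0"')
print(suspicious(sample))
```

Running every line of the pastebin through such a filter, then looking at when
each flagged path was first requested, usually brackets the time the shell was
planted.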

Kindly shed your expertise in dealing with these kind of attacks and trace
the root cause and prevention measures to block this.

Regards,
Krish


[users@httpd] Defaced Website : Few forensic tips and help

2022-07-04 Thread KK CHN
List ,

https://pastebin.com/YspPiWif

One of our PHP  website hacked on 3rd july 2022.  I am attaching the httpd
access files contents in the above pastebin.I hide the original URL of
the website due to a SLA policy.

Can anybody point out from the logs what exactly made the attacker able to
bring the site down..

Has he used this php site for attacking ?

Any other logs or command line outputs needed  let me know. I will share
the required files.   I am new to this area of forensic analysis to find
out the root cause of the attack .

Kindly shed some tips to find out where the vulnerability is and how to
prevent it in future.

Any more inputs/details  required  keep me informed, I can share those too.

Regards,
Krish


[users@httpd] Slow web site response..PHP-8/CSS/Apache/

2022-06-23 Thread KK CHN
List,

I am facing slow response times on a hosted PHP 8 website: it takes 30 seconds
to load fully. The application and the database (PostgreSQL) run separately on
two virtual machines in an OpenStack cloud, on the 10.184.x.221 and
10.184.y.221 networks respectively.



When I used tools like GTmetrix and WebPageTest.org, they reported
render-blocking resources:

Resources are blocking the first paint of your page. Consider delivering
critical JS/CSS inline and deferring all non-critical JS/styles.
Learn how to improve this


Resources that *may* be contributing to render-blocking include:
URL                                     Transfer Size   Download Time
xxx.mysite.com/css/bootstrap.min.css    152KB           6.6s
xxx.mysite.com/css/style.css            14.2KB          5.9s
xxx.mysite.com/css/font/font.css        3.33KB          5.7s

Here bootstrap.min.css takes a TTFB of 6 seconds, and fully loading the
website takes almost 24 seconds more, i.e. 30 seconds in total to render.
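
To separate "the server is slow to answer" from "the assets are slow to
download", the TTFB can be measured directly with a short script (the URL in
the comment is a placeholder):

```python
import time
import urllib.request

def ttfb(url, timeout=30):
    """Return seconds until the first byte of the response body arrives."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read(1)              # wait for the first byte of the body
        return time.monotonic() - start

# e.g. ttfb("https://xxx.mysite.com/css/bootstrap.min.css")
# A large TTFB points at the server/app/DB path; a small TTFB with a long
# total download time points at bandwidth or asset size instead.
```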

https://pastebin.mozilla.org/SX3Cyhpg


The GTmetrix.com site also reports this issue:

The Critical Request Chains below show you what resources are loaded with a
high priority. Consider reducing the length of chains, reducing the
download size of resources, or deferring the download of unnecessary
resources to improve page load.
Learn how to improve this


Maximum critical path latency: *24.9s*



How can I overcome this issue? Is it a VM performance issue, a PHP issue, an
Apache issue, or a problem with the PHP application's connection to the
database backend?

Excuse me if this is an off-topic post for the httpd list. I hope many people
here have experience to share on troubleshooting this, or on what the root
cause of such a slow response might be.

Kindly shed some light here; any hints on where to start are most welcome.

If any more data is needed, please let me know and I can share it.

Thanks in advance,
Krish.


[ovirt-users] Injecting VirtIO drivers : Query

2021-08-25 Thread KK CHN
Hi,

I am in the process of importing multi-disk Windows VMs from a Hyper-V
environment to my OpenStack setup (Ussuri version, Glance and QEMU-KVM).

I am referring to the online documents linked in the trailing lines. But is it
still relevant to inject VirtIO drivers into the Windows VMs (the articles
date back to 2015)? Somewhere it is mentioned that this is necessary when you
perform a P2V migration.

Is VirtIO driver injection necessary in my case? I am exporting from Hyper-V
and importing into OpenStack.

1. Kindly advise me on the relevance of VirtIO injection and whether it
applies to my requirements.

2. Is there any up-to-date reference material on importing multi-disk Windows
VMs into OpenStack (Ussuri, Glance and KVM)? Or am I attempting the impossible
and beating around the bush?

These are the links I referred to, but they are old; is their content still
applicable? (The Windows VMs are Windows Server 2012, 2008 and 2003, which I
need to import into OpenStack.)

https://superuser.openstack.org/articles/how-to-migrate-from-vmware-and-hyper-v-to-openstack/

https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e00kAWeCAM

Kris
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YXAGSMCIE44FBN4GXJSZEG2XHTTM5NFU/


[ovirt-users] Automigration of VMs from other hypervisors

2021-08-11 Thread KK CHN
Hi list,

I am in the process of migrating 150+ VMs running on RHEV-M 4.1 to a KVM-based
OpenStack installation (Ussuri, with KVM and Glance as image storage).

What I am doing now: manually shutting down each VM through the RHEV-M GUI,
exporting it to the export domain, scp-ing the image files of each VM to our
OpenStack controller node, uploading them to Glance, and creating each VM
manually.

Query 1:
Is there a better way to automate this migration with a utility or scripts?
Has anyone done this kind of automated migration before, and what was your
approach? Or what is a better approach than manual migration?

Or do I have to repeat the process manually for all 150+ virtual machines?
(The guest VMs are CentOS 7 and Red Hat Linux 7 with LVM data partitions
attached.)

Kindly share your thoughts..

Query 2:

Besides these 150+ Red Hat Linux 7 and CentOS VMs on RHEV-M 4.1, I have to
migrate 50+ VMs hosted on Hyper-V.

What is the method/approach for exporting from Hyper-V and importing into
OpenStack Ussuri (with Glance and the KVM hypervisor)? (This is the first time
I am using Hyper-V; I don't have much idea about exporting from Hyper-V and
importing into KVM.)

Can the images exported from Hyper-V (VHDX disk images; VMs with a single disk
or multiple disks, max 3) be imported into KVM directly? Does KVM support
this, or do the VHDX disk images need to be converted to another format? What
should the best approach be for the Hyper-V-hosted VMs (Windows 2012 and Linux
guest machines) to be imported into KVM-based OpenStack (Ussuri with Glance as
image storage)?

Thanks in advance

Kris
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I7KSQLVOSV5I6QGBAYC4U7SWQIJ2PPC5/


[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-05 Thread KK CHN
its mount point? Or any other suggestions or corrections? Because it is a live
host, I can't do trial and error on the service maintainer's RHEV-M host
machines.

Kindly correct me if anything is wrong in my steps. I have to run this script
from my laptop against the RHEV-M host machines without breaking anything.

Kindly guide me.

Kris

On Wed, Aug 4, 2021 at 1:38 AM Nir Soffer  wrote:

> On Tue, Aug 3, 2021 at 7:29 PM KK CHN  wrote:
> >
> > I have asked our VM maintainer to run the  command
> >
> > # virsh -r dumpxml vm-name_blah//as Super user
> >
> > But no output :   No matching domains found that was the TTY  output on
> that rhevm node when I executed the command.
> >
> > Then I tried to execute #  virsh list //  it doesn't list any VMs
> !!!   ( How come this ? Does the Rhevm node need to enable any CLI  with
> License key or something to list Vms or  to dumpxml   with   virsh ? or its
> CLI commands ?
>
> RHV undefine the vms when they are not running.
>
> > Any way I want to know what I have to ask the   maintainerto provide
> a working a working  CLI   or ? which do the tasks expected to do with
> command line utilities in rhevm.
> >
> If the vm is not running you can get the vm configuration from ovirt
> using the API:
>
> GET /api/vms/{vm-id}
>
> You may need more API calls to get info about the disks, follow the 
> in the returned xml.
>
> > I have one more question :Which command can I execute on an rhevm
> node  to manually export ( not through GUI portal) a   VMs to   required
> format  ?
> >
> > For example;   1.  I need to get  one  VM and disks attached to it  as
> raw images.  Is this possible how?
> >
> > and another2. VM and disk attached to it as  Ova or( what other good
> format) which suitable to upload to glance ?
>
> Arik can add more info on exporting.
>
> >   Each VMs are around 200 to 300 GB with disk volumes ( so where should
> be the images exported to which path to specify ? to the host node(if the
> host doesn't have space  or NFS mount ? how to specify the target location
> where the VM image get stored in case of NFS mount ( available ?)
>
> You have 2 options:
> - Download the disks using the SDK
> - Export the VM to OVA
>
> When exporting to OVA, you will always get qcow2 images, which you can
> later
> convert to raw using "qemu-img convert"
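The qcow2-to-raw conversion step can be scripted. A sketch that only builds the qemu-img argument list (the paths are placeholders), which a wrapper could then hand to subprocess.run:

```python
def qemu_img_convert_cmd(src_qcow2, dst_raw):
    """Return the argv list for converting a qcow2 image to raw
    with qemu-img; paths are caller-supplied placeholders."""
    return [
        "qemu-img", "convert",
        "-f", "qcow2",   # source format
        "-O", "raw",     # output format
        src_qcow2, dst_raw,
    ]

cmd = qemu_img_convert_cmd("/var/tmp/disk1.qcow2", "/var/tmp/disk1.raw")
```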
>
> When downloading the disks, you control the image format, for example
> this will download
> the disk in any format, collapsing all snapshots to the raw format:
>
>  $ python3
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
> -c engine-dev 3649d84b-6f35-4314-900a-5e8024e3905c /var/tmp/disk1.raw
>
> This requires ovirt.conf file:
>
> $ cat ~/.config/ovirt.conf
> [engine-dev]
> engine_url = https://engine-dev
> username = admin@internal
> password = mypassword
> cafile = /etc/pki/vdsm/certs/cacert.pem
>
> Nir
>
> > Thanks in advance
> >
> >
> > On Mon, Aug 2, 2021 at 8:22 PM Nir Soffer  wrote:
> >>
> >> On Mon, Aug 2, 2021 at 12:22 PM  wrote:
> >> >
> >> > I have  few VMs in   Redhat Virtualisation environment  RHeV ( using
> Rhevm4.1 ) managed by a third party
> >> >
> >> > Now I am in the process of migrating  those VMs to  my cloud setup
> with  OpenStack ussuri  version  with KVM hypervisor and Glance storage.
> >> >
> >> > The third party is making down each VM and giving the each VM image
> with their attached volume disks along with it.
> >> >
> >> > There are three folders  which contain images for each VM .
> >> > These folders contain the base OS image, and attached LVM disk images
> ( from time to time they added hard disks  and used LVM for storing data )
> where data is stored.
> >> >
> >> > Is there a way to  get all these images to be exported as  Single
> image file Instead of  multiple image files from Rhevm it self.  Is this
> possible ?
> >> >
> >> > If possible how to combine e all these disk images to a single image
> and that image  can upload to our  cloud  glance storage as a single image ?
> >>
> >> It is not clear what is the vm you are trying to export. If you share
> >> the libvirt xml
> >> of this vm it will be more clear. You can use "sudo virsh -r dumpxml
> vm-name".
> >>
> >> RHV supports download of disks to one image per disk, which you can move
> >> to another system.
> >>
> >> We also have export to ova, which creates one tar file with all
> exported disks,
> >> if this helps.
> >>
> >> Nir
> >>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CQ5RAHW3E6F5IL6QYOG7W3P3BI35MJSU/


[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-04 Thread KK CHN
Appreciate everyone for sharing the valuable information.

1.  I am downloading CentOS 8, as the Python oVirt SDK installation notes
say it works on CentOS 8. I need to set up a VM with this OS and install
the oVirt Python SDK on that VM. The requirement is that this CentOS 8 VM
should be able to communicate with the RHEV-M 4.1 host node where the
ovirt shell (the "Rhevm Shell [connected] #" prompt) is available, right?

2.  The host with the "Rhevm Shell [connected] #" prompt should be
pingable, and reachable over SSH, from the CentOS 8 VM where Python 3 and
the oVirt SDK are installed and where the script (with the ovirt
configuration file) will be executed. Are these two connectivity checks
enough for executing the script, or do any other protocols need to be
enabled in the firewall between the two machines?

3.  While googling I saw a post:
https://users.ovirt.narkive.com/CeEW3lcj/ovirt-users-clone-and-export-vm-by-ovirt-shell

action vm myvm export --storage_domain-name myexport

Will this command export the VM, and in which format will it land in the
export domain? Is there any option to this command to specify a supported
format for the exported VM image?

This needs to be executed from the "Rhevm Shell [connected] #" TTY, right?



On Wed, Aug 4, 2021 at 1:00 PM Vojtech Juranek  wrote:

> On Wednesday, 4 August 2021 03:54:36 CEST KK CHN wrote:
> > On Wed, Aug 4, 2021 at 1:38 AM Nir Soffer  wrote:
> > > On Tue, Aug 3, 2021 at 7:29 PM KK CHN  wrote:
> > > > I have asked our VM maintainer to run the  command
> > > >
> > > > # virsh -r dumpxml vm-name_blah//as Super user
> > > >
> > > > But no output :   No matching domains found that was the TTY  output
> on
> > >
> > > that rhevm node when I executed the command.
> > >
> > > > Then I tried to execute #  virsh list //  it doesn't list any VMs
> > >
> > > !!!   ( How come this ? Does the Rhevm node need to enable any CLI
> with
> > > License key or something to list Vms or  to dumpxml   with   virsh ? or
> > > its
> > > CLI commands ?
> > >
> > > RHV undefine the vms when they are not running.
> > >
> > > > Any way I want to know what I have to ask the   maintainerto
> provide
> > >
> > > a working a working  CLI   or ? which do the tasks expected to do with
> > > command line utilities in rhevm.
> > >
> > > If the vm is not running you can get the vm configuration from ovirt
> > >
> > > using the API:
> > > GET /api/vms/{vm-id}
> > >
> > > You may need more API calls to get info about the disks; follow the
> > > links in the returned xml.
> > >
> > > > I have one more question :Which command can I execute on an rhevm
> > >
> > > node  to manually export ( not through GUI portal) a   VMs to
>  required
> > > format  ?
> > >
> > > > For example;   1.  I need to get  one  VM and disks attached to it
> as
> > >
> > > raw images.  Is this possible how?
> > >
> > > > and another2. VM and disk attached to it as  Ova or( what other
> good
> > >
> > > format) which suitable to upload to glance ?
> > >
> > > Arik can add more info on exporting.
> > >
> > > >   Each VMs are around 200 to 300 GB with disk volumes ( so where
> should
> > >
> > > be the images exported to which path to specify ? to the host node(if
> the
> > > host doesn't have space  or NFS mount ? how to specify the target
> location
> > > where the VM image get stored in case of NFS mount ( available ?)
> > >
> > > You have 2 options:
> > > - Download the disks using the SDK
> > > - Export the VM to OVA
> > >
> > > When exporting to OVA, you will always get qcow2 images, which you can
> > > later
> > > convert to raw using "qemu-img convert"
> > >
> > > When downloading the disks, you control the image format, for example
> > > this will download
> > >
> > > the disk in any format, collapsing all snapshots to the raw format:
> > >  $ python3
> > >
> > > /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
> > > -c engine-dev 3649d84b-6f35-4314-900a-5e8024e3905c /var/tmp/disk1.raw
> > >
> > > To perform this which modules/packages need to be installed in the
> rhevm
> >
> > host node ?  Does the rhevm hosts come with python3 installed by default
> ?
> > or I need to install  python3 on r

[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-03 Thread KK CHN
On Wed, Aug 4, 2021 at 1:38 AM Nir Soffer  wrote:

> On Tue, Aug 3, 2021 at 7:29 PM KK CHN  wrote:
> >
> > I have asked our VM maintainer to run the  command
> >
> > # virsh -r dumpxml vm-name_blah//as Super user
> >
> > But no output :   No matching domains found that was the TTY  output on
> that rhevm node when I executed the command.
> >
> > Then I tried to execute #  virsh list //  it doesn't list any VMs
> !!!   ( How come this ? Does the Rhevm node need to enable any CLI  with
> License key or something to list Vms or  to dumpxml   with   virsh ? or its
> CLI commands ?
>
> RHV undefine the vms when they are not running.
>
> > Any way I want to know what I have to ask the   maintainerto provide
> a working a working  CLI   or ? which do the tasks expected to do with
> command line utilities in rhevm.
> >
> If the vm is not running you can get the vm configuration from ovirt
> using the API:
>
> GET /api/vms/{vm-id}
>
> You may need more API calls to get info about the disks; follow the
> links in the returned xml.
>
> > I have one more question :Which command can I execute on an rhevm
> node  to manually export ( not through GUI portal) a   VMs to   required
> format  ?
> >
> > For example;   1.  I need to get  one  VM and disks attached to it  as
> raw images.  Is this possible how?
> >
> > and another2. VM and disk attached to it as  Ova or( what other good
> format) which suitable to upload to glance ?
>
> Arik can add more info on exporting.
>
> >   Each VMs are around 200 to 300 GB with disk volumes ( so where should
> be the images exported to which path to specify ? to the host node(if the
> host doesn't have space  or NFS mount ? how to specify the target location
> where the VM image get stored in case of NFS mount ( available ?)
>
> You have 2 options:
> - Download the disks using the SDK
> - Export the VM to OVA
>
> When exporting to OVA, you will always get qcow2 images, which you can
> later
> convert to raw using "qemu-img convert"
>
> When downloading the disks, you control the image format, for example
> this will download
> the disk in any format, collapsing all snapshots to the raw format:
>
>  $ python3
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
> -c engine-dev 3649d84b-6f35-4314-900a-5e8024e3905c /var/tmp/disk1.raw
>
To perform this, which modules/packages need to be installed on the RHEV-M
host node? Do RHEV-M hosts come with Python 3 installed by default, or do
I need to install Python 3 on the node? Then, is download_disk.py obtained
via pip3, and what is the module name to install for this SDK? Are there
any dependencies to install before the SDK, e.g. does Java need to be
installed on the RHEV-M node?

One doubt: I came across virt-v2v while searching. Can virt-v2v be used on
a RHEV-M node to export VMs to images, or does virt-v2v only support
importing from other hypervisors into RHEV-M?

"This requires ovirt.conf file": does the ovirt.conf file need to be
created, or is it already there on any RHEV-M node?

>
> $ cat ~/.config/ovirt.conf
> [engine-dev]
> engine_url = https://engine-dev
> username = admin@internal
> password = mypassword
> cafile = /etc/pki/vdsm/certs/cacert.pem
>
> Nir
>
> > Thanks in advance
> >
> >
> > On Mon, Aug 2, 2021 at 8:22 PM Nir Soffer  wrote:
> >>
> >> On Mon, Aug 2, 2021 at 12:22 PM  wrote:
> >> >
> >> > I have  few VMs in   Redhat Virtualisation environment  RHeV ( using
> Rhevm4.1 ) managed by a third party
> >> >
> >> > Now I am in the process of migrating  those VMs to  my cloud setup
> with  OpenStack ussuri  version  with KVM hypervisor and Glance storage.
> >> >
> >> > The third party is making down each VM and giving the each VM image
> with their attached volume disks along with it.
> >> >
> >> > There are three folders  which contain images for each VM .
> >> > These folders contain the base OS image, and attached LVM disk images
> ( from time to time they added hard disks  and used LVM for storing data )
> where data is stored.
> >> >
> >> > Is there a way to  get all these images to be exported as  Single
> image file Instead of  multiple image files from Rhevm it self.  Is this
> possible ?
> >> >
> >> > If possible how to combine e all these disk images to a single image
> and that image  can upload to our  cloud  glance storage as a single image ?
> >>
> >> It is not clear what is the vm you are trying to export. If you share
> >> the libvirt xml
> >> of this vm it will be

[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-03 Thread KK CHN
I have asked our VM maintainer to run the command

# virsh -r dumpxml vm-name_blah    // as superuser

But there was no output: "No matching domains found" was the TTY output on
that RHEV-M node when I executed the command.

Then I tried to execute "virsh list", and it doesn't list any VMs either!
How come? Does the RHEV-M node need any CLI enabled, with a license key or
something, to list VMs or to dump their XML with virsh or its CLI commands?

Anyway, I want to know what I have to ask the maintainer to provide: a
working CLI which can do the expected tasks with command-line utilities on
the RHEV-M node.

I have one more question: which command can I execute on a RHEV-M node to
manually export (not through the GUI portal) a VM to a required format?

For example: 1. I need to get one VM and the disks attached to it as raw
images. Is this possible, and how?

And another: 2. a VM and its attached disk as an OVA (or whatever other
good format) suitable for uploading to Glance?

Each VM is around 200 to 300 GB with its disk volumes. So where should the
images be exported to, and which path do I specify? To the host node (what
if the host doesn't have space), or an NFS mount? How do I specify the
target location where the VM image gets stored in the case of an NFS mount
(if available)?

Thanks in advance


On Mon, Aug 2, 2021 at 8:22 PM Nir Soffer  wrote:

> On Mon, Aug 2, 2021 at 12:22 PM  wrote:
> >
> > I have  few VMs in   Redhat Virtualisation environment  RHeV ( using
> Rhevm4.1 ) managed by a third party
> >
> > Now I am in the process of migrating  those VMs to  my cloud setup with
> OpenStack ussuri  version  with KVM hypervisor and Glance storage.
> >
> > The third party is making down each VM and giving the each VM image
> with their attached volume disks along with it.
> >
> > There are three folders  which contain images for each VM .
> > These folders contain the base OS image, and attached LVM disk images (
> from time to time they added hard disks  and used LVM for storing data )
> where data is stored.
> >
> > Is there a way to  get all these images to be exported as  Single image
> file Instead of  multiple image files from Rhevm it self.  Is this possible
> ?
> >
> > If possible how to combine e all these disk images to a single image and
> that image  can upload to our  cloud  glance storage as a single image ?
>
> It is not clear what is the vm you are trying to export. If you share
> the libvirt xml
> of this vm it will be more clear. You can use "sudo virsh -r dumpxml
> vm-name".
>
> RHV supports download of disks to one image per disk, which you can move
> to another system.
>
> We also have export to ova, which creates one tar file with all exported
> disks,
> if this helps.
>
> Nir
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TGBJTVT6EME4TXQ3OHY7L6YXOGZXCRC6/


Re: [Nagios-users] warning

2013-08-27 Thread KK CHN
Bharat,

A humble request: please don't scream here! There is a way to ask for
help. You are creating a bad impression of yourself and your organization.

So be a smart system admin. Tips: work hard, read, learn, implement, and
google issues first; then politely ask for help if you fail!

If you follow all these tips, I don't think you will fail.

Don't assume everyone else here has no other tasks and is only sitting
around to offer help for nothing! Don't make them hesitate to offer help.

So, first of all, read the documentation, configure the services yourself,
and check the logs for any specific errors. Google the specific issues and
errors yourself first to fix them.

No one here will hand you a configuration script for the x, y, z tasks you
are asking about.

If you have done your part, failed, and run out of options, then ask here.
Definitely everyone will listen to your problem.

Good Luck,
KK

On 8/27/13, Bruno Martins br...@bmartins.eu wrote:
 On 08/27/2013 07:51 AM, bharat Varandani wrote:
 Dear Dimitri,

 I am very thanks full for you because you help me lots of time..Now my
 Problem solved but again i facing new problem and i want help to all.
 So please help me and resolve my this issue also because i m new
  nagios and i want to perform good job for  my company so please help
 me out from problems.

 My new issue is i want to configure Email notification in nagios that
 very useful for me and my company..I want if any HOST, PRINTER,SWITCH
  is goes down i will get any notification in my email address..That is
 possible if yes so please help my configure this point ASAP.

 Thanks & Regards

 Bharat Varandani

 
 *From:* Dimitri Yioulos dyiou...@onpointfc.com
 *To:* nagios-users@lists.sourceforge.net
 *Sent:* Monday, August 26, 2013 6:03 PM
 *Subject:* Re: [Nagios-users] warning

 On Saturday 24 August 2013 6:37:37 am bharat Varandani
 wrote:
  Dear All,
 
 
  I am Facing  Some Warning in check_snmp. That i want to
  resolve so anybody please help to resolve this issue
  ASAP..
 
   Thanks & Regards
 
  Bharat Varandani


 Have you set the warning levels (-c and -w) correctly?

 Dimitri



 ___
 Nagios-users mailing list
 Nagios-users@lists.sourceforge.net
 mailto:Nagios-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/nagios-users
 ::: Please include Nagios version, plugin version (-v) and OS when
 reporting any issue.
 ::: Messages without supporting info will risk being sent to /dev/null



 Hello Bharat,

 As some guys here on the list have told, you should not enforce your
 desire to get answers as soon as possible.
 We'll gladly help you, but we do this on our spare time.

 Please take a look on how to configure Postfix.
 Which SMTP server are you using for accomplishing the task of sending
 notifications via e-mail, from Nagios?

 Take in consideration that Nagios comes with two pre-built commands for
 sending notifications. Just configure Postfix and try running those
 commands from the CLI, for testing.

 Kindly,

 Bruno Martins
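Bruno's advice is to get the notification path working outside Nagios first. The stock notify-host-by-email command essentially pipes a short text body into a mailer; as a sketch, composing such a message in Python (the addresses are placeholders, and the body layout only approximates the stock command's output):

```python
from email.message import EmailMessage

def build_notification(host, state, to_addr):
    """Compose a host-notification e-mail of the kind Nagios's stock
    notify-host-by-email command produces (layout is approximate)."""
    msg = EmailMessage()
    msg["Subject"] = "** PROBLEM Host Alert: %s is %s **" % (host, state)
    msg["To"] = to_addr
    msg.set_content(
        "***** Nagios *****\n\n"
        "Notification Type: PROBLEM\n"
        "Host: %s\nState: %s\n" % (host, state)
    )
    return msg

msg = build_notification("printer-1", "DOWN", "admin@example.com")
```

Once something like this is mailed successfully from the CLI (via Postfix, per Bruno), wiring the same command into Nagios's notification commands is the remaining step.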



Re: Retrieving a FreeBSD installation

2013-06-28 Thread KK CHN
On Thu, Jun 27, 2013 at 1:17 PM, Polytropon free...@edvax.de wrote:

 On Thu, 27 Jun 2013 07:28:49 +, KK CHN wrote:
  List,
 
I accidentally installed  a Linux variant(mint OS) on my Harddisk
  where  FreeBSD is installed( which contains my data).
 
  Is there a  possibility  to retrieve that FreeBSD Installation which
  is overwritten by Linux installation.

 In most cases: What has been overwritten is lost.

 But: What has only been disallocated (data still on disk)
 can _sometimes_ be recovered.

 So it depends on _what_ is still left.

 Anyway, do not do anything with the disk. Do not try any
 recovery on the disk itself. Make an image of the disk and
 use that image file for any further action. In case you
 damage it, make a new copy. Only work with copies. One wrong
 step can massively decrease your chances of recovery.



  Any hints  welcome!

 It will be a very hard thing. You will probably have a lot
 of trial & error experience, and you will surely learn a
 lot, for example about file systems.

 I've written about this topic on this list already, and I
 will again re-use some details from a previous post to make
 a list for what you can try.

 Boot from a live CD or USB stick or a different disk. Then
 make a copy of the disk using

 # dd if=/dev/ad0 of=disk.dd

 where /dev/ad0 is the disk you have accidentally overwritten
 your OS installation. In case the disk makes any trouble, use
 dd_rescue or ddrescue (from ports).
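The dd command above copies the raw device block by block. The same idea, sketched in Python purely for illustration: it works on any readable file (or device node, given permissions), but unlike dd_rescue it simply stops at the first read error, so it is only suitable for healthy disks.

```python
def copy_image(src_path, dst_path, block_size=64 * 1024):
    """Copy src to dst in fixed-size blocks, like dd; return bytes copied."""
    copied = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(block_size)
            if not block:          # EOF
                break
            dst.write(block)
            copied += len(block)
    return copied
```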

 You can also try this:

 # fetch -rR /dev/ad0

 Also recoverdisk could be useful. Maybe there's enough information
 left to re-instantiate the file systems? Also try testdisk.

 When no file system can be re-instantiated, but you're sure
 your data is still somewhere, you can use photorec for recovery.
 It is able to recover a lot more than just photos.

 The ports collection contains further programs that might be
 worth investigating; just in case they haven't been mentioned
 yet:

 ddrescue
 dd_rescue   - use this to make an image of the disk!
 magicrescue
 testdisk- restores content
 recoverjpeg
 foremost
 photorec

 Then also

 ffs2recov
 scan_ffs

 should be mentioned.

 And finally, the cure to everything is found in The Sleuth Kit
 (in ports: tsk):

 fls
 dls
 ils
 autopsy

 Keep in mind: Read the manpages before using the programs. It's
 very important to do so. You need to _know_ what you're dealing
 with, or you'll probably fail. There is no magical tetroplyrodon
 to click ^Z and get everything back. :-)

 Proprietary (and expensive) tools like R-Studio or UFS Explorer
 can still be considered worth a try. Their trial versions are for
 free. UFS Explorer even works using wine (I've tried it).

 If you can remember significant content of your data, you can
 even use

 # grep pattern disk.dd

 to see if it's still in there. With magicrescue, you can try
 something like this:

 # magicrescue -r /usr/local/share/magicrescue/recipes -d out
 disk.dd

 where out/ is the directory where your results will be written to.
 Keep in mind that _this_ approach will _not_ recover file _names_!
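The `grep pattern disk.dd` idea generalizes to arbitrary byte patterns. A small sketch that scans an image file in chunks, with overlap so a match straddling a chunk boundary is not missed:

```python
def find_pattern(image_path, pattern, chunk_size=1 << 20):
    """Return the file offset of the first occurrence of `pattern`
    (bytes) in the image, or -1 if not found."""
    overlap = len(pattern) - 1        # keep this much of the previous chunk
    offset = 0                        # bytes consumed so far
    tail = b""
    with open(image_path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return -1
            buf = tail + chunk        # buf starts at offset - len(tail)
            pos = buf.find(pattern)
            if pos != -1:
                return offset - len(tail) + pos
            tail = buf[-overlap:] if overlap else b""
            offset += len(chunk)
```

Like grep, this only tells you the data is still there; recovering it with names and structure is what the tools listed above are for.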




 I know how bad it feels for such a simple mistake and I
 won't make fun on you, pointing you to use your backups.

 Of course you always have the option to send your disk to a
 professional recovery company. This substitutes learning and
 trying yourself by impressive amounts of money. ;-)



 Good luck!


Thank you very much, I am going to invest my time to try the valuable tips
you shared. I admit the wrong step I made. Thanks again.


 --
 Polytropon
 Magdeburg, Germany
 Happy FreeBSD user since 4.0
 Andra moi ennepe, Mousa, ...

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Retrieving a FreeBSD installation

2013-06-27 Thread KK CHN
List,

  I accidentally installed a Linux variant (Mint OS) on my hard disk
where FreeBSD was installed (and which contains my data).

Is there any possibility of retrieving that FreeBSD installation, which
was overwritten by the Linux installation?

Any hints welcome!

Thanks
Chn


Few queries FreeBSD upgrade

2012-08-17 Thread KK CHN
List,

I observe the following error while installing any port from the ports
collection on an old FreeBSD server:

/usr/ports/Mk/bsd.options.mk, line 90: Malformed conditional
(!defined(OPTIONS_DEFINE) || empty(OPTIONS_DEFINE:Monly)))

I tried   the following

1) cvsup and "make index" throw the following error:

   Variable ALL_OPTIONS is recursive.
===> arabic/ae_fonts_mono failed
*** Error code 1
Unknown modifier 'u'


2) I ran "csup -L2 portsupfile && make fetchindex". This did not help
either.

3) "portsnap extract && portsnap fetch update". Nothing improved.

All port installation fails, this is the port I am trying to install

star# cd /usr/ports/databases/postgresql84-server/
star# make install clean
Makefile, line 115: Could not find bsd.port.options.mk
Unknown modifier 'u'

Unknown modifier 'u'

/usr/ports/Mk/bsd.options.mk, line 90: Malformed conditional
(!defined(OPTIONS_DEFINE) || empty(OPTIONS_DEFINE:Monly)))
Variable NO_OPTIONS is recursive.

star# uname -a
FreeBSD star.net 6.2-RELEASE FreeBSD 6.2-RELEASE #1: Thu May  5
15:55:38 IST 2011 r...@star.net:/usr/obj/usr/src/sys/MYKERNELSTAR
i386

I replaced the /usr/ports/Mk directory with the Mk directory downloaded
from the FreeBSD 6.2 archive. Still the error on port install remains.


1) Is there any solution for this issue other than an upgrade?

2) Please guide me: I must upgrade this old release. Can anyone tell me in
which order I need to upgrade? I mean, from 6.2-RELEASE to which next
version, and then which? Please mention the sequence to reach 9.0-RELEASE.

3) Which upgrade method should I follow: source upgrade or binary upgrade?
I am ready to do a source upgrade; please shed some light on the pros and
cons of both ways of upgrading.

4) Which system files & directories do I need to back up before doing any
of the upgrades, so I can restore the system in case anything goes wrong?

Thanks in advance
krish


Re: [SM-USERS] Single Sign On to Squirrel mail from another web application

2012-05-13 Thread KK CHN
List,

I have enabled cookie sharing in the Plone site for mydomain.com, and the
shared secret ("blah") is set there.

Now, on the server machine, I have enabled mod_auth_tkt in Apache (this
Plone version supports mod_auth_tkt-compatible systems).

The Plone site and SquirrelMail both run on the same machine, under the
same Apache where mod_auth_tkt is loaded.

Vhost entry for SquirrelMail:

<VirtualHost *:80>
    ServerAdmin webmas...@mydomain.com
    DocumentRoot /usr/local/www/SquirrelMail
    ServerName webmail.mydomain.com
    ServerAlias webmail.mydomain.com
    TKTAuthSecret blah
    <Location /src/login.php>
        TKTAuthIgnoreIP on
        TKTAuthDebug 2
        TKTAuthDomain .mydomain.com
        TKTAuthTimeout 2w
        TKTAuthCookieExpires 2w
        TKTAuthRequireSSL off
        TKTAuthCookieSecure off
    </Location>
    ErrorLog /var/log/httpd-error.log
    CustomLog /var/log/httpd-access.log combined
</VirtualHost>
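The TKTAuthSecret above is what mod_auth_tkt uses to validate the ticket cookie set by the login application. A minimal sketch of the classic MD5 ticket construction (the user name is a placeholder; double-check this against the ticket format of the mod_auth_tkt version actually deployed):

```python
import hashlib, socket, struct, time

def make_ticket(secret, user, ip="0.0.0.0", tokens="", data="", ts=None):
    """Build a mod_auth_tkt-style MD5 ticket (the classic v2 layout):
    digest + 8-hex-digit timestamp + user + '!' + tokens + '!' + data.
    Verify against the mod_auth_tkt version you deploy."""
    ts = int(time.time()) if ts is None else ts
    # 4-byte packed client IP + 4-byte packed timestamp, network order
    iptstamp = socket.inet_aton(ip) + struct.pack("!I", ts)
    raw = (iptstamp + secret.encode() + user.encode() + b"\0"
           + tokens.encode() + b"\0" + data.encode())
    digest0 = hashlib.md5(raw).hexdigest()
    digest = hashlib.md5((digest0 + secret).encode()).hexdigest()
    return digest + "%08x" % ts + user + "!" + tokens + "!" + data
```

The login application (here, Plone) would set this value in the cookie mod_auth_tkt is configured to read (auth_tkt by default), scoped to .mydomain.com; with TKTAuthIgnoreIP on, the ip argument stays "0.0.0.0".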

There is an existing IMAP account: user kk...@webmail.mydomain.com,
password mypass.

Then I created the same user, kk...@webmail.mydomain.com, in the Plone
site with the same password, mypass.

Restarted Apache.

I logged in to the Plone site (intranet.mydomain.com) with the username
kk...@webmail.mydomain.com and password mypass, and clicked the link for
webmail.mydomain.com, but it prompts me for a username and password.

Am I missing any configuration other than the above in the SquirrelMail
virtual host config, or is any additional work required?

This is my  Virtual host configuration  for Plone site.

<VirtualHost *:80>
    ServerAdmin k...@webmail.mydomain.com
    ServerName intranet.mydomain.com
    RewriteEngine On
    RewriteRule ^/(.*) http://127.0.0.1:8081/VirtualHostBase/http/intranet.mydomain.com:80/Intranet/VirtualHostRoot/$1 [L,P]
    ErrorLog /var/log/apache/intranet.mydomain.com/error_log
    CustomLog /var/log/apache/intranet.mydomain.com/access.log combined
</VirtualHost>

Please shed some light on this regard.

Thanks in advance
KKCHN

On 5/12/12, Paul Lesniewski p...@squirrelmail.org wrote:
 On Fri, May 11, 2012 at 10:54 PM, KK CHN kkchn...@gmail.com wrote:
 List,

 I have a plone 4.1.4( CMS)  installation, and a squirrel mail web client
 running on
  different machines but under the same domain.  say
  intranet.mydomain.com
  and  webmail.mydomain.com

 I am trying to implement a SSO for  this plone intranet site and Squirrel
 Mail client. ( The plone site is integrated with LDAP server.  Both Plone
 site and  Squirrel mail refers the same user credentials in this LDAP
 server)


  What configurations/additional work  I have to make for the SM instance
  for SSO to work   from plone site, so clicking a  link in the  plone
  site
  to the squirrel mail site should logged in to the squirrel mail client
 so
 users can  see their  emails, without signing again to the squirrel mail
 login page.

  Please give your guidance/workarounds  how to accomplish this SSO  for
  Squirrel Mail.

 You could try to hack something into one or the other of these two
 applications (or both) so that they understand each other's cookies,
 for example (still may present problems authenticating against the
 IMAP server - keep in mind that SquirrelMail passes authentication to
 the IMAP server, so it MUST have a username and password to
 authenticate with, and in that sense, it's far smarter to ask how to
 modify plone to understand when a user has been authenticated via
 SquirrelMail), but the more robust way to handle this is to find a SSO
 authentication implementation that both applications are compatible
 with.  Shibboleth is one popular example, but there are others.  Do
 your homework.  There is a SquirrelMail plugin that is compatible with
 some such authentication systems that will be available soon - but it
 is not trivial to set this kind of system up because you must be able
 to integrate it with your IMAP server too.


 --
 Paul Lesniewski
 SquirrelMail Team
 Please support Open Source Software by donating to SquirrelMail!
 http://squirrelmail.org/donate_paul_lesniewski.php

 squirrelmail-users mailing list
 Posting guidelines: http://squirrelmail.org/postingguidelines
 List address: squirrelmail-users@lists.sourceforge.net
 List archives: http://news.gmane.org/gmane.mail.squirrelmail.user
 List info (subscribe/unsubscribe/change options):
 https://lists.sourceforge.net/lists/listinfo/squirrelmail-users



[SM-USERS] Single Sign On to Squirrel mail from another web application

2012-05-11 Thread KK CHN
List,

I have a Plone 4.1.4 (CMS) installation and a SquirrelMail web client
running on different machines, but under the same domain: say
intranet.mydomain.com and webmail.mydomain.com.

I am trying to implement SSO for this Plone intranet site and the
SquirrelMail client. (The Plone site is integrated with an LDAP server;
both the Plone site and SquirrelMail refer to the same user credentials
in this LDAP server.)

What configuration/additional work do I have to do on the SquirrelMail
instance for SSO to work from the Plone site, so that clicking a link in
the Plone site to the SquirrelMail site logs the user in to the
SquirrelMail client, and users can see their e-mails without signing in
again at the SquirrelMail login page?

Please give your guidance/workarounds on how to accomplish this SSO for
SquirrelMail.

Any hints appreciated much.

Thanks in advance
Chen

[Zope] zope programming: few queries

2011-03-07 Thread KK CHN
List,

Let me request your patience and time; I am a Zope newbie trying to get
some light in the darkness.




I have been trying to learn Zope 2 for the last couple of months,
reading the Zope 2 documentation and doing sample file-based Python
product development.

I know Zope has advanced a lot over more than a decade (and Zope 3 /
BlueBream is there now), but I just started a few months back; sorry for
my ignorance and for my lack of opportunity to start at an earlier stage.


I have gone through the Zope 2 docs and other tutorials on Zope, and
tried a couple of file-based Zope products.

But I am unable to find or build a complete user registration module for
Zope sites, or small session-handling applications, like a shopping
cart, which use session management techniques.


1. How does zope.org do its user login/register/password-retrieval
module, and how do people like you add a user registration/login/password-retrieval
module to your Zope sites?

I searched a lot and learned the acl_users folder properties, and I can
manually add users with these code excerpts:

self.manage_addUserFolder()

self.acl_users.userFolderAddUser('user1', 'passwd',
['Member','Admin'], [] )
self.acl_users.userFolderAddUser('user2', 'passwd', ['Member',
'Managers'], [] )
self.acl_users.userFolderAddUser('user3', 'passwd', ['Member',
'Manager','Admin'], [])



etc., but I am unable to build a user registration module using
acl_users and its web interface.
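For question 1, a self-registration method would ultimately call the
same userFolderAddUser API shown above, after validating the submitted
form values. A minimal sketch (the helper name and validation rules are
my own, not a Zope API; `acl_users` is whatever user folder object the
method is bound to):

```python
def register_user(acl_users, username, password, roles=('Member',)):
    """Validate a self-registration request, then delegate to the
    user folder's userFolderAddUser (the same call used above).

    Hypothetical helper, not part of Zope itself.
    """
    if not username or not password:
        raise ValueError("username and password are required")
    if acl_users.getUser(username) is not None:
        raise ValueError("username already taken")
    acl_users.userFolderAddUser(username, password, list(roles), [])
    return username
```

A registration form (e.g. a .zpt with username/password fields) would
then post to a method that calls this and redirects to the login page.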



From reading BetaBoring and some other tutorials in the Upfront Systems
Zope course, plus the devshed.com tutorials, I think I am on the right
track with Zope 2.


2. I have done a sample application, but I don't know how to clear the
normal Zope user credentials when a sample user clicks a logout link.

here some sample code :

I added an anchor tag <a href="logout">Logout</a> for logout in my page
template, named myPage.zpt:

from AccessControl import ClassSecurityInfo
security = ClassSecurityInfo()
security.declareProtected('adminroles', 'myPage')
myPage = PageTemplateFile(
os.path.join('views', 'myPage.zpt'), globals())



and I added a few sample users, such as user1, user2, and user3, to
acl_users with the sample password 'passwd':

self.acl_users.userFolderAddUser('user1', 'passwd',
['Member','Admin'], [] )
self.acl_users.userFolderAddUser('user2', 'passwd', ['Member',
'Managers'], [] )
self.acl_users.userFolderAddUser('user3', 'passwd', ['Member',
'Manager','Admin'], [])



When I access the protected page 'myPage' it asks me for acl_users
credentials; I can enter the sample usernames and password and it allows
me to view 'myPage'.

The problem: when I click the Logout link in myPage
(<a href="logout">Logout</a>), it calls a logout.zpt page template where
I display a "you are logged out" message and return the login.zpt page.

But on the login.zpt page, if you enter anything, I mean garbage, in the
username and password fields, it still allows you to view myPage (this
happens once you have logged in the first time with real credentials).

How do I clear these Zope acl_users credentials once the logout link is
clicked, which in turn calls the logout.zpt template?
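One widely used workaround for question 2 (a sketch, not an official
Zope API pattern): the browser caches HTTP Basic credentials and resends
them with every request, so "logging out" usually means answering one
request with 401 so the browser discards them. Something like the
following method in the product, wired to the logout link:

```python
def logout(self, REQUEST):
    """Ask the browser to forget its cached Basic-auth credentials.

    Works by answering with 401 plus a WWW-Authenticate header; most
    browsers then drop the stored username/password for this realm.
    (Workaround, not guaranteed by any standard.)
    """
    response = REQUEST.RESPONSE
    response.setStatus(401)
    response.setHeader('WWW-Authenticate', 'basic realm="Zope"')
    return 'You are logged out.'
```

If you use cookie-based authentication (e.g. a CookieCrumbler) instead,
expiring the auth cookie is the equivalent step.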



If you have some sample or demo applications that show how to handle
these issues, could you share them? I am stuck at this point in Zope.
Any such application code or related work to follow would help me a lot.



Thank  You
kkchn
___
Zope maillist  -  Zope@zope.org
https://mail.zope.org/mailman/listinfo/zope
**   No cross posts or HTML encoding!  **
(Related lists - 
 https://mail.zope.org/mailman/listinfo/zope-announce
 https://mail.zope.org/mailman/listinfo/zope-dev )


[us...@httpd] apache ajp interconnection Issue

2009-07-29 Thread KK CHN
Hi  list,

I am trying the AJP connector from Apache to Tomcat, using mod_ajp with
Apache 2.2 and Tomcat 5.5 on the Debian platform.

Apache and Tomcat are on separate boxes, both on the same LAN, and the
Apache machine is bound to a public IP address at the firewall level.

This is the virtual host configuration:


<VirtualHost *:80>
    ServerName demo.mydomain.in
    ProxyPass / balancer://mycluster/
    ProxyPassReverse / balancer://mycluster/
    <Proxy balancer://mycluster>
        BalancerMember ajp://192.168.31.128:8009/MyJava min=10 max=100
        Allow from all
    </Proxy>
</VirtualHost>


When I access demo.mydomain.in, the home page of MyJava is served by my
Apache box, but when I enter the login name and password and press the
submit button, I get an error in the browser:
"the requested resource path is incorrect".

But if I access the MyJava application via the Tomcat box's IP address,
http://Ipaddress:8080/MyJava, I am able to log in and perform actions in
the Java application.

What did I do wrong? Is there an error in the VirtualHost entry?

Let me request your valuable advice and hints to solve this issue.


Thanks in advance
kk


Re: Configuring worker MPM for Tomcat

2009-07-22 Thread KK CHN
MPMs (multi-processing modules) are a feature of the Apache web server,
not of Apache Tomcat.

There are different MPM modules; the default is prefork on Linux/Unix,
but for more simultaneous connections you need to recompile the Apache
web server with the worker MPM. It is not a feature of Tomcat.
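For Apache 2.x built from source, the MPM is chosen at configure time;
a minimal sketch of the rebuild step (paths and options beyond the MPM
flag depend on your install):

```
# build Apache httpd 2.x with the worker MPM instead of prefork
./configure --with-mpm=worker
make && make install
```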


On Tue, Jul 21, 2009 at 3:15 PM, Anand Kumar Prabhakar
anand2...@gmail.comwrote:


 I'm trying to configure worker MPM for the first time. So i need the steps
 to
 configure them. Can't we implement worker MPM in tomcat server?

 --
 View this message in context:
 http://www.nabble.com/Configuring-worker-MPM-for-Tomcat-tp24582105p24584637.html
 Sent from the Tomcat - User mailing list archive at Nabble.com.


 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org




[Nagios-users] accessing a remote kannel sms gateway for a Nagios box

2009-01-09 Thread KK CHN
How can I configure my Nagios box to send SMS through a Kannel SMS
gateway set up on another PC?

Note: both the Nagios and Kannel boxes are on the same network.


From my Nagios box I can send SMS through a web browser (here
http://10.18.1.10 is the Kannel gateway box):

http://10.18.1.10:13013/cgi-bin/sendsms?username=myuser_name&password=my_PASS&to=26857&text=serverdown

I can send this message (e.g. "serverdown") from the Firefox browser on
my Nagios box to my cellphone (I am getting the message on my
cellphone).

But what do I have to write in the Nagios config files so that Nagios
itself can do this? Can anyone help me set this up for Nagios?
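Since the Kannel sendsms interface shown above is a plain HTTP GET, a
Nagios notification command can simply call it. A sketch for
commands.cfg (the command name `notify-by-kannel-sms` and the
credentials are placeholders; the notify_sms plugin from NagiosExchange
is an alternative to the curl call shown here):

```
define command{
    command_name    notify-by-kannel-sms
    command_line    /usr/bin/curl -s 'http://10.18.1.10:13013/cgi-bin/sendsms?username=myuser_name&password=my_PASS&to=$CONTACTPAGER$&text=$NOTIFICATIONTYPE$:+$HOSTNAME$+is+$HOSTSTATE$'
}
```

The command is then referenced from a contact's
host_notification_commands / service_notification_commands, with the
phone number in that contact's pager field.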



Thanks in advance
kkchn
--
Check out the new SourceForge.net Marketplace.
It is the best place to buy or sell services for
just about anything Open Source.
http://p.sf.net/sfu/Xq1LFB
___
Nagios-users mailing list
Nagios-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nagios-users
::: Please include Nagios version, plugin version (-v) and OS when reporting 
any issue. 
::: Messages without supporting info will risk being sent to /dev/null

[Nagios-users] 3 doubts on SMS alert system for Nagios : by kannel SMS gateway

2008-12-30 Thread KK CHN
Hi all ;

I followed this tutorial
http://www.nagiosexchange.org/cgi-bin/pages/Detailed/1855.html



I would like to test the SMS alert system with a GSM SMS gateway; I have
added these lines in commands.cfg:


define command{
    command_name    service_notify_with_sms
    command_line    $USER1$/notify_sms -a MY_MOBILE_NUMBER -u MY_USER_NAME -p MY_PASSWORD -m \'$NOTIFICATIONTYPE$: $HOSTNAME$ is $SERVICESTATE$ ($SERVICEOUTPUT$)\' -t $CONTACTPAGER$
}


I have 3 doubts

doubt 1) is the above configuration correct ?

 where MY_MOBILE_NUMBER = my cell phone number,
MY_USER_NAME = SMS gateway user account, and
MY_PASSWORD = SMS gateway user password.




doubt 2) Why do I have to add the following config section again? I
mean, what is the purpose of the following lines?

define command{
    command_name    host_notify_with_sms
    command_line    /usr/lib/nagios/plugins/notify_sms -a 1012345 -u myusername_for_gw -p mypassword_for_gw -m \'$NOTIFICATIONTYPE$: $HOSTNAME$ is $HOSTSTATE$ ($HOSTOUTPUT$)\' -t $CONTACTPAGER$
}



doubt 3) If I add the two define command sections in commands.cfg as on
the website, will I get an SMS alert? Is there anything else I have to
add in localhost.cfg? (That is where I have the services to check ssh,
ping, and http on the localhost.)


These are the Nagios packages I have in my box

nagios-2.9_1
nagios-plugins-1.4.9_1,1


notify_sms-1.1.tar.gz (http://www.nagiosexchange.org/cgi-bin/jump.cgi?ID=1855&view=File2;d=1)

I untarred this notify_sms-1.1.tar.gz package and copied it to
/usr/local/libexec/nagios/ (the directory where I have all the
check_ssh, check_ping, etc. commands by default).



Let me request you to clarify my doubts; sorry for my ignorance.

NOTE: The SMS gateway is on another PC, but I configured Nagios on a
local box; both machines are on the same network. So can I use this SMS
gateway? How do I make this SMS gateway service available to the Nagios
box?

Thanks in Advance
kkchn


[squid-users] squid setup in a DMZ 1 --DMZ-2 --- to give internet access to a LAN machine (where weblogic needs internet access)

2008-09-28 Thread kk CHN
People  ;


 I have 2 server boxes, one in DMZ-1 and the other in DMZ-2. The DMZ-1
machine has an internet connection; the DMZ-2 machine does not. An
application runs in a WebLogic app server on a LAN machine that can
communicate only with the DMZ-2 machine.

Note: on this LAN machine are the HTTP proxy directives (host name,
proxy port, etc.) to configure for the WebLogic server.



 How can I make the LAN machine access the internet (Internet ---
DMZ-1 --- DMZ-2 --- LAN machine with WebLogic)?

Where exactly do I have to run squid in order to let the LAN machine
access the internet?

I am a newbie to this kind of setup, sorry for my ignorance, but I can
pick it up if you are able to give a general picture of a setup with
squid that achieves this.

Let me request all of you to share your expertise and tips on achieving
this; any hints are most welcome.
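One common pattern, sketched under the assumption that squid can be
installed on both DMZ boxes (the hostname below is hypothetical): chain
two proxies with cache_peer so the DMZ-2 squid forwards everything to
the DMZ-1 squid, and point WebLogic's HTTP proxy directives at the
DMZ-2 squid:

```
# squid.conf on the DMZ-2 box (no direct internet access)
http_port 3128
cache_peer dmz1.example.internal parent 3128 0 no-query default
never_direct allow all

# squid.conf on the DMZ-1 box (has internet access)
http_port 3128
```

WebLogic's proxy host/port would then be the DMZ-2 box on port 3128.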

Thanks in advance
KK.


spamassassin and Mailman integration 3 doubts

2008-06-24 Thread kk CHN
People ;

  I have a SpamAssassin installation on my box; details as follows:

[star]$ spamassassin -V
SpamAssassin version 3.1.8
  running on Perl version 5.8.8

I want to deny spam mails to the mailing lists running on this box; how
do I do that? My MTA is Postfix, and Mailman is the mailing list
manager.

Surely many experts here use their own setups for this; I hope some of
you can help with tips and advice.

Thank You


Note: I have a procmailrc file, and in it I have rules for the 3 local
lists on my server box.

I have 3 local lists (on this machine, with user email IDs such as
[EMAIL PROTECTED], [EMAIL PROTECTED], etc.)

I am using the Postfix MTA; this is the config parameter in
postfix/main.cf:

mailbox_command = /usr/local/bin/procmail -a $EXTENSION

following is my   /usr/local/etc/procmailrc   files contents
[star ~]$ cat /usr/local/etc/procmailrc

###
PATH=$HOME/bin:/usr/bin:/bin:/usr/local/bin:.
MAILDIR=$HOME/Maildir/
DEFAULT=$MAILDIR/

:0fw
| /usr/local/bin/spamc -u spamassassin -s 256000

DROPPRIVS=YES

:0
* ^X-Spam-Flag.*YES
$MAILDIR.Junk/

:0
* ^TOlists
!devel,users,release
##



and in /etc/aliases I have lines as follows:

lists: spamassassin
# redirect all mail to "lists" (the local lists, not the Mailman lists)
# to the user spamassassin, which I defined in procmailrc above

devel: [EMAIL PROTECTED],[EMAIL PROTECTED]
users: [EMAIL PROTECTED],[EMAIL PROTECTED]
release: [EMAIL PROTECTED],[EMAIL PROTECTED]
###


Q 1) So please tell me how I can use the same setup for blocking spam to
the Mailman mailing lists (not only the local lists such as devel,
users, release as in /etc/aliases).

Q 2) There are no local user mailboxes on this box, so how do I train
SpamAssassin using sa-learn?

  Hint: by copying spam from those users' Gmail/Yahoo spam folders as
.txt files to a temporary folder, say BadMails, on that server box,

and then running  # sa-learn --spam ~/path to/BadMails  right?
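For Q 1, one commonly suggested approach (a sketch, untested; `mylist`
is a hypothetical list name): mail delivered through /etc/aliases to the
Mailman wrapper bypasses mailbox_command entirely, so the list alias can
be pointed at a small procmail rc that filters first and only then hands
the message to the real wrapper:

```
# /etc/aliases
mylist: "|/usr/local/bin/procmail /usr/local/etc/procmail-mylist.rc"

# /usr/local/etc/procmail-mylist.rc
:0fw
| /usr/local/bin/spamc -u spamassassin -s 256000

# discard anything SpamAssassin flagged
:0
* ^X-Spam-Flag:.*YES
/dev/null

# otherwise deliver to the real Mailman wrapper
:0
| /usr/local/mailman/mail/mailman post mylist
```

The wrapper path and list name must match your Mailman install; each
list alias (mylist, mylist-bounces, ...) would need the same treatment.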


any hints most appreciated  :
 Thank you


[Mailman-Users] mailman error file grows disk going out of space

2008-04-22 Thread kk CHN
People :

  I am facing a problem with my Mailman 2.1.9, which had been running
fine for a long time: last week my ~/mailman/logs/error file grew to GBs
in size (5 GB) and my server box ran out of disk space (/usr full).


~/mailman/logs/error says the following; how can I fix this? Did anyone
face this earlier? Please give me your valuable comments on fixing it.


# tail -50  error
self._parsebody(root, fp, firstbodyline)
  File /usr/local/mailman/pythonlib/email/Parser.py, line 246, in _parsebody
raise Errors.BoundaryError(
BoundaryError: multipart message with no defined boundary

Apr 22 11:51:35 2008 (35739) Ignoring unparseable message:
1186840687.951453+8a33be62beafbb6e10e55ebae32cfa03ed596313
Apr 22 11:51:35 2008 (35739) Uncaught runner exception: multipart
message with no defined boundary
Apr 22 11:51:35 2008 (35739) Traceback (most recent call last):
  File /usr/local/mailman/Mailman/Queue/Runner.py, line 100, in _oneloop
msg, msgdata = self._switchboard.dequeue(filebase)
  File /usr/local/mailman/Mailman/Queue/Switchboard.py, line 164, in dequeue
msg = email.message_from_string(msg, Message.Message)
  File /usr/local/mailman/pythonlib/email/__init__.py, line 51, in
message_from_string
return Parser(_class, strict=strict).parsestr(s)
  File /usr/local/mailman/pythonlib/email/Parser.py, line 75, in parsestr
return self.parse(StringIO(text), headersonly=headersonly)
  File /usr/local/mailman/pythonlib/email/Parser.py, line 64, in parse
self._parsebody(root, fp, firstbodyline)
  File /usr/local/mailman/pythonlib/email/Parser.py, line 246, in _parsebody
raise Errors.BoundaryError(
BoundaryError: multipart message with no defined boundary

Apr 22 11:51:35 2008 (35739) Ignoring unparseable message:
1186840687.951453+8a33be62beafbb6e10e55ebae32cfa03ed596313
Apr 22 11:51:35 2008 (35739) Uncaught runner exception: multipart
message with no defined boundary
Apr 22 11:51:35 2008 (35739) Traceback (most recent call last):




I tried to find this message:

# find / -name 1186840687.951453+8a33be62beafbb6e10e55ebae32cfa03ed596313

but got no result.
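The identifier in the log is likely the base name of a queue entry:
Mailman 2 stores queued messages as pickle files (typically
`<base>.pck`, sometimes with a `.bak` copy) under its qfiles directory,
so a `find` on the bare name matches nothing. A small sketch for
locating such an entry (the qfiles path in the usage comment is an
assumption; adjust it to your install):

```python
import os

def find_queue_entries(qfiles_dir, base):
    """Return paths under qfiles_dir whose file name starts with `base`.

    Mailman stores a queued message as <base>.pck (and sometimes
    <base>.bak), so we match on the prefix rather than the exact name.
    """
    hits = []
    for root, _dirs, files in os.walk(qfiles_dir):
        for name in files:
            if name.startswith(base):
                hits.append(os.path.join(root, name))
    return hits

# Hypothetical usage (the path depends on your install):
# find_queue_entries('/usr/local/mailman/qfiles',
#                    '1186840687.951453+8a33be62beafbb6e10e55ebae32cfa03ed596313')
```

Deleting (or moving aside) the matching .pck file is the usual way to
stop the runner from retrying the same unparseable message.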


please help me with your experience .

Thanks in advance
dhanesh
--
Mailman-Users mailing list
Mailman-Users@python.org
http://mail.python.org/mailman/listinfo/mailman-users
Mailman FAQ: http://www.python.org/cgi-bin/faqw-mm.py
Searchable Archives: http://www.mail-archive.com/mailman-users%40python.org/
Unsubscribe: 
http://mail.python.org/mailman/options/mailman-users/archive%40jab.org

Security Policy: 
http://www.python.org/cgi-bin/faqw-mm.py?req=show&file=faq01.027.htp


Re: [squid-users] Inspite squid in front of apache : direct connection from foreign IP address ? how to deny this ?

2008-03-28 Thread kk CHN
On 3/28/08, Ric [EMAIL PROTECTED] wrote:

  On Mar 27, 2008, at 9:57 PM, kk CHN wrote:

   People:  in my server box , I am using squid as http accelerator
   ;setup is as follows
  
   Flow of requests from users should be like this
  
   squid listens on public ip port:80   ---apache(127.0.0.1:80) ---
   RewriteRule for apache to---zope:8080/plonesite
  
  
  
   Important  NOTE : for the last couple of days I am experiencing
   that my  plone site on zope :8080 is become not acceesible after 5/6
   hours ,after the services I restarted :
  
   when I run the command # ` sockstat -4p 80 `
   here I can see a specific IP address (164.115.5.2 )connecting
   directly  ande  using   python2.4 as  pasted below .



 Umm... Zope is a python process.  Are you perchance connecting to the
  Zope server directly yourself?

Yes, I do, from my LAN machine, via an SSH tunnel; but this IP address
164.115.5.2 is in no way related to ours.

I know that a couple of members other than me have admin-privileged
accounts on the Zope server, but last week I changed all their account
passwords to make sure I am the only admin, so I could check how the
site goes down a few hours after a service restart.

Any more info...?


Re: [squid-users] Inspite squid in front of apache : direct connection from foreign IP address ? how to deny this ?

2008-03-28 Thread kk CHN
On 3/28/08, Ric [EMAIL PROTECTED] wrote:

  On Mar 27, 2008, at 11:37 PM, kk CHN wrote:

   On 3/28/08, Ric [EMAIL PROTECTED] wrote:
  
   On Mar 27, 2008, at 9:57 PM, kk CHN wrote:
  
   People:  in my server box , I am using squid as http accelerator
   ;setup is as follows
  
   Flow of requests from users should be like this
  
   squid listens on public ip port:80   ---apache(127.0.0.1:80) ---
   RewriteRule for apache to---zope:8080/plonesite
  
  
  
   Important  NOTE : for the last couple of days I am experiencing
   that my  plone site on zope :8080 is become not acceesible after 5/6
   hours ,after the services I restarted :
  
   when I run the command # ` sockstat -4p 80 `
   here I can see a specific IP address (164.115.5.2 )connecting
   directly  ande  using   python2.4 as  pasted below .
  
  
  
   Umm... Zope is a python process.  Are you perchance connecting to the
   Zope server directly yourself?
  
   Yes I do from my lan machine , by ssh tunnel :   but thsi IP address
   164.115.5.2   noway  related to ours :
  
   I know that a couple of members other than me has  admin privileged
   accounts in the Zopeserver;  but last week I changed all their account
   passwords to make sure only me as the admin to check  how the site
   going down after few hours a service restart.
  
   any more info ...?




 What then is on ports 65287 and 64313 on your server?

www  python2.4  44496 20 tcp4   my_Serverbox_public_IPAddress
:65287 164.115.5.2:80

Here I grepped for PID 44496:

$ ps -aux|grep 44496
www 44496  0.0 21.3 445368 442940  ??  SThu11AM 203:49.39
/usr/local/bin/python2.4 /usr/local/www/Zope28/lib/python/Zope

It is connecting to the Zope process. So does it mean something is going
wrong on my machine, that this foreign IP has access through some hole
in my Plone/Zope application, right?


Re: [squid-users] Inspite squid in front of apache : direct connection from foreign IP address ? how to deny this ?

2008-03-28 Thread kk CHN
On 3/28/08, Ric [EMAIL PROTECTED] wrote:

  On Mar 28, 2008, at 12:35 AM, kk CHN wrote:

   On 3/28/08, Ric [EMAIL PROTECTED] wrote:

  What then is on ports 65287 and 64313 on your server?
  
   www  python2.4  44496 20 tcp4   my_Serverbox_public_IPAddress
   :65287 164.115.5.2:80
  
   Here the pid 44496 I greped
  
   $ ps -aux|grep 44496
   www 44496  0.0 21.3 445368 442940  ??  SThu11AM 203:49.39
   /usr/local/bin/python2.4 /usr/local/www/Zope28/lib/python/Zope
  
   its conecting to the zope process :  So it means some thing going
   wrong with my machine? that foreign  ip  has access through some holes
   of my plone/zope application right?



 Someone connecting to the Zope server doesn't necessarily mean there
  is a hole.  Why don't you take a look at your Zope logs and see what
  that IP is doing.

  In any case, closing off ports to outside access is trivial.  Either
  throw up a firewall or configure Zope to bind only to 127.0.0.1.

I added an ipfw rule like this:

ipfw add deny tcp from 164.115.5.0/24 to me in

in my ipfw firewall script and restarted the firewall service, but the
same IP is still able to make connections, as follows. Why does this
happen?

storm# sockstat -4p 80
USER COMMANDPID   FD PROTO  LOCAL ADDRESS FOREIGN ADDRESS
www  python2.4  79874 11 tcp4   my_ipaddress :57060 164.115.5.2:80
www  python2.4  79874 17 tcp4   my_ipaddress :64305 164.115.5.2:80
www  httpd  73932 3  tcp4   127.0.0.1:80  *:*
www  httpd  849   3  tcp4   127.0.0.1:80  *:*
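One possible explanation, judging from the sockstat output (the local
side is on high ephemeral ports, the foreign side on port 80): these
look like outbound connections initiated by the Zope/python process
toward 164.115.5.2:80, which an inbound (`in`) deny rule never matches.
A hedged sketch of the corresponding outbound rule (check it against
your ruleset order before relying on it):

```
# block traffic initiated from this host toward that network
ipfw add deny tcp from me to 164.115.5.0/24 out
```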


[squid-users] Inspite squid in front of apache : direct connection from foreign IP address ? how to deny this ?

2008-03-27 Thread kk CHN
People:  in my server box , I am using squid as http accelerator
;setup is as follows

Flow of requests from users should be like this

squid listens on public ip port:80   ---apache(127.0.0.1:80) ---
RewriteRule for apache to---zope:8080/plonesite



Important NOTE: for the last couple of days my Plone site on zope:8080
has been becoming inaccessible 5/6 hours after I restart the services.

When I run the command # sockstat -4p 80, I can see a specific IP
address (164.115.5.2) connecting directly and using python2.4, as
pasted below.

(My question: is it normal for this foreign IP address to be connected
to my public IP and involving python2.4? Can I suspect this foreign IP
address is an attacker?)

Many of you may know what this is; let me request you to share your
information with me.

Thanks in advance
KK


$ sockstat -4p 80
USER COMMANDPID   FD PROTO  LOCAL ADDRESS FOREIGN ADDRESS
www  httpd  73932 3  tcp4   127.0.0.1:80  *:*
www  python2.4  44496 20 tcp4   my_Serverbox_public_IPAddress
:65287 164.115.5.2:80
www  python2.4  44496 30 tcp4
my_Serverbox_public_IPAddress:64313 164.115.5.2:80
www  httpd  849   3  tcp4   127.0.0.1:80  *:*
squidsquid  603   9  tcp4   my_box_public_IPAddress:80
203.194.194.254:43451
squidsquid  603   11 tcp4   my_Serverbox_public_IPAddress:80*:*
squidsquid  603   13 tcp4   127.0.0.1:55663   127.0.0.1:80
www  httpd  516   3  tcp4   127.0.0.1:80  *:*
www  httpd  515   3  tcp4   127.0.0.1:80  *:*
www  httpd  514   3  tcp4   127.0.0.1:80  *:*
www  httpd  514   18 tcp4   127.0.0.1:80  127.0.0.1:55663
root httpd  502   3  tcp4   127.0.0.1:80  *:*
$ su


[squid-users] squid as http accelerator : now its again became slow

2008-03-12 Thread kk CHN
People ;

 I installed squid 2.6 stable for my web server (FreeBSD 6.2, 1 GB RAM),
where I run a Plone 2.5 site on Zope 2.9, along with other applications
such as Postfix, Mailman, etc.

Squid -- Apache 2.2 -- Zope
(Previously it was Apache -- Zope, but that was too slow, so I put
squid in front as an HTTP accelerator.)

This setup worked fine for a couple of weeks, but yesterday my site
became very slow again. I restarted the squid, Apache, and Zope
servers; the site is fast for some time (nearly 20 minutes), after
which it becomes slow again.

What may be the issue? How can I improve the speed?

This is my top output:

Any hints most welcome :

Thanks in advance
KK

last pid:  1792;  load averages:  0.73,  0.50,  0.37
  up 0+01:08:08  12:59:26
145 processes: 1 running, 142 sleeping, 2 stopped
CPU states:  5.1% user,  0.0% nice, 10.3% system,  0.0% interrupt, 84.6% idle
Mem: 690M Active, 106M Inact, 133M Wired, 45M Cache, 110M Buf, 14M Free
Swap: 2048M Total, 2048M Free

  PID USERNAME  THR PRI NICE   SIZERES STATETIME   WCPU COMMAND
  731 mailman 1   80 99732K 97904K nanslp   0:43  3.56% python
  522 www 7  200   298M   295M kserel  14:37  0.78% python2.4
 1764 root1  960  2668K  1936K RUN  0:00  0.28% top
  586 root3  200 17272K  2588K kserel   1:06  0.00% gkrellmd
  516 www 1   40 19308K 17568K select   0:33  0.00% python2.4
  505 www 3  200  3212K  2024K kserel   0:32  0.00% pound
  526 www 3  200 89988K 87408K kserel   0:17  0.00% python2.4
  765 root1   40 39340K 35628K select   0:12  0.00% perl5.8.8
  727 mailman 1   80  9744K  7764K nanslp   0:09  0.00% python
  728 mailman 1   80  8868K  7032K nanslp   0:08  0.00% python
  600 squid   1   40 13072K 11800K kqread   0:08  0.00% squid
  716 mysql   5  200 65220K 26708K kserel   0:06  0.00% mysqld
  434 bind1  960  7528K  6292K select   0:06  0.00% named
  736 mailman 1   80  9468K  7632K nanslp   0:05  0.00% python
  739 mailman 1   80  9380K  7448K nanslp   0:04  0.00% python
  560 root1   80  1236K   764K nanslp   0:04  0.00% powerd
  726 mailman 1   80  9496K  7540K nanslp   0:04  0.00% python
  605 root1  960 25720K 24420K select   0:04  0.00% perl5.8.8
 1392 tesac  1  960  2684K  1820K STOP 0:04  0.00% top
  733 mailman 1   80  8388K  6512K nanslp   0:03  0.00% python
  660 root1  960  2812K  1564K select   0:02  0.00% master
  502 www 6  200 33184K 23076K kserel   0:02  0.00% httpd
  670 postfix 1  960  4316K  3052K select   0:02  0.00% qmgr
  495 root1   80 16160K  9588K nanslp   0:01  0.00% httpd
  839 www 4  200 27420K 16040K kserel   0:01  0.00% httpd
  501 www 4  200 23224K 13504K kserel   0:01  0.00% httpd
  503 www 5  200 25468K 14676K kserel   0:01  0.00% httpd
  365 root1  960  1300K   848K select   0:01  0.00% syslogd
  553 root1  960  2920K  1480K select   0:01  0.00% ntpd
 1390 tesac1  960  6080K  2524K select   0:00  0.00% sshd
 1142 postfix 1  960  2876K  1592K select   0:00  0.00%
trivial-rewrite

Suspended


This is my  df -h  Output

 df -h
Filesystem SizeUsed   Avail Capacity  Mounted on
/dev/ad4s1a496M 91M365M20%/
devfs  1.0K1.0K  0B   100%/dev
/dev/ad4s1f 19G2.3G 15G13%/home
/dev/ad4s1d3.9G 71M3.5G 2%/tmp
/dev/ad4s1e9.7G5.6G3.3G63%/usr
/dev/ad4s1g 39G 31G4.5G88%/var
devfs  1.0K1.0K  0B   100%/var/named/dev


[squid-users] Squid Doubts

2008-02-22 Thread kk CHN
People: I am using squid 2.6 on my FreeBSD 6.1 machine as an HTTP
accelerator on port 80.

I edited my startup script (/usr/local/etc/rc.d/squid) to run squid as
root, because it was not starting as user squid on port 80.

Q1: Is editing the startup script to change the user to root okay?

I have a few questions below; let me ask for your comments on them.


whenever I am doing a sockstat -4p  80   I am getting

max # sockstat -4p 80
USER COMMANDPID   FD PROTO  LOCAL ADDRESS FOREIGN ADDRESS
www  httpd  1394  3  tcp4   127.0.0.1:80  *:*
squidsquid  1392  9  tcp4   2xx.1xx.2xx.xxx:8063.42.21.53:39234
squidsquid  1392  11 tcp4  2xx.1xx.2xx.xxx:80   *:*
squidsquid  1392  14 tcp4   2xx.1xx.2xx.xxx:80 243.109.215.18:52561
squidsquid  1392  15 tcp4  2xx.1xx.2xx.xxx:80243.109.215.18:56962
squidsquid  1392  16 tcp4   127.0.0.1:58926   127.0.0.1:80
squidsquid  1392  17 tcp4   2xx.1xx.2xx.xxx:80209.131.41.48:23788
www  httpd  503   3  tcp4   127.0.0.1:80  *:*
www  httpd  503   22 tcp4   127.0.0.1:80  127.0.0.1:58926
www  httpd  502   3  tcp4   127.0.0.1:80  *:*
www  httpd  501   3  tcp4   127.0.0.1:80  *:*
root httpd  495   3  tcp4   127.0.0.1:80  *:*
max#

here squid shows
squidsquid  1392  16 tcp4   127.0.0.1:58926   127.0.0.1:80

Q 2:  why the port number 58926 here? it should be  80 right?

Q 3: How can I check whether the foreign addresses connected to port 80
are spammers or not?

Q 4: If they are spammers, how do I deny them?

Q 5: This is my squid folder, but it is not showing any cache dirs; why?

max# cd /usr/local/squid/
max# ls
cache   logs
max# cd cache/
max# ls
nonesquid.core
max#


Thanks in advance
KK


Re: [squid-users] Squid Doubts

2008-02-22 Thread kk CHN
Hi Amos ;

  Thank you very much for answering my queries; all the answers were
informative.

I found that my squid.conf says the cache is /var/squid/cache, and I
found the cache dirs in that location.

Sorry for asking that question without checking my squid.conf first.

Thanks a lot for the time you spent and the prompt answers you gave me.

warm regards
KK

On 2/22/08, Amos Jeffries [EMAIL PROTECTED] wrote:
 kk CHN wrote:
   People:  I tam using  squid 2.6 in my freeBSD-6.1 machine ,  as http
   accelerator on port :80
  
   I edited  my start up script  (/usr/loca/etc/rc.d/squid)   for
   squid-user as root ,//because its not starting asuser  squid for
   port 80. :
  
   Q1 :  is editing startup script to change useras  root okay ?


 Yes squid will reduce its user level to minimal after it has setup properly.


  
   I have a few questions below this : Let me ask  your comments on those
questions
  
  
   whenever I am doing a sockstat -4p  80   I am getting
  
   max # sockstat -4p 80
   USER COMMANDPID   FD PROTO  LOCAL ADDRESS FOREIGN ADDRESS
   www  httpd  1394  3  tcp4   127.0.0.1:80  *:*
   squidsquid  1392  9  tcp4   2xx.1xx.2xx.xxx:8063.42.21.53:39234
   squidsquid  1392  11 tcp4  2xx.1xx.2xx.xxx:80   *:*
   squidsquid  1392  14 tcp4   2xx.1xx.2xx.xxx:80 
 243.109.215.18:52561
   squidsquid  1392  15 tcp4  2xx.1xx.2xx.xxx:80
 243.109.215.18:56962
   squidsquid  1392  16 tcp4   127.0.0.1:58926   127.0.0.1:80
   squidsquid  1392  17 tcp4   2xx.1xx.2xx.xxx:80
 209.131.41.48:23788
   www  httpd  503   3  tcp4   127.0.0.1:80  *:*
   www  httpd  503   22 tcp4   127.0.0.1:80  127.0.0.1:58926
   www  httpd  502   3  tcp4   127.0.0.1:80  *:*
   www  httpd  501   3  tcp4   127.0.0.1:80  *:*
   root httpd  495   3  tcp4   127.0.0.1:80  *:*
   max#
  
   here squid shows
   squidsquid  1392  16 tcp4   127.0.0.1:58926   127.0.0.1:80
  
   Q 2:  why the port number 58926 here? it should be  80 right?



 No. That is the OUT side of squid, connecting from a large random port to
 the localhost:80 IN side of the www server.



  
   Q3: How can I check whether the foreign addresses connected to port 80
   are spammers or not?
  


 http://www.surbl.org/
  http://www.spamhaus.org/zen/


   Q4: if they are spammers, how do I deny them?
  


 Up to you.


   Q5: this is my squid folder, but it is not showing any cache dirs. Why?
  


 What does your squid.conf say about cache_dir?


   max# cd /usr/local/squid/
   max# ls
   cache   logs
   max# cd cache/
   max# ls
   none    squid.core
   max#
  


 I think from that output you have probably created a cache_dir named "none" :-(


  Amos

 --
  Please use Squid 2.6STABLE17+ or 3.0STABLE1+
  There are serious security advisories out on all earlier releases.
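For reference, a valid cache_dir line in squid.conf looks like the following (a sketch; the path and sizes are illustrative, not taken from the poster's config):

```
# cache_dir <type> <directory> <size-MB> <L1-dirs> <L2-dirs>
cache_dir ufs /usr/local/squid/cache 100 16 256
```

If a stray word such as "none" ended up in the directory position, squid would create a cache directory of that name, which would match the ls output above.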



[squid-users] squid as http accelerator (both on port 80), but logs show foreign IPs. Why?

2008-02-12 Thread kk CHN
People: I am using squid in front of my apache as an http accelerator,
but the httpd access log shows GET requests from 2 foreign addresses,
while all the other requests come from 127.0.0.1.

I want to know why it shows 2 foreign IPs in the logs. Does it mean
squid is not accepting all the requests through 127.0.0.1?

Does this mean someone is connecting directly to apache?

This is my squid setup:

http_port Public-IP-of-my-machine:80 accel vhost
cache_peer 127.0.0.1 parent 80 0 originserver default

My apache listens on port 80.

Any comments are most welcome: all requests must come from 127.0.0.1 in
this setup, right? Or is showing foreign addresses in the apache logs okay?
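If apache is meant to receive traffic only from squid, one common safeguard is to bind it to the loopback interface alone. A minimal sketch, assuming nothing else needs to reach apache directly:

```
# httpd.conf (sketch): accept connections only on localhost, so outside
# clients cannot bypass squid and show up in the apache logs
Listen 127.0.0.1:80
```

With a plain "Listen 80" (all interfaces), a client that reaches the server on an address squid is not bound to can connect straight to apache, which could explain the foreign IPs in its access log.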

Thanks in Advance
KK


[squid-users] squid as HTTP accelerator : Three questions

2008-02-06 Thread kk CHN
Hi People ;

Thanks for your reply, and thanks for Adrian Chadd's tips.
I edited the init script and changed the squid user to root; now it is
working on port 80.

I would like to clarify the following Three questions

Question 1) Are there any security issues in running squid as root?



I am setting up squid as an http accelerator, so squid will handle the
request first, then hand it over to Apache (which now listens on port 81),
and from Apache it goes to my zope (which is on port 8080).


So what I added in squid.conf is:

http_port 80 accel vhost
cache_peer 127.0.0.1 parent 81 0 originserver default
http_access allow all
##

My apache listens on 81, and the vhost entry for my site is like this:

NameVirtualHost *:81
<VirtualHost *:81>
RewriteEngine On
RewriteRule ^/(.*) http://127.0.0.1:8080/VirtualHostBase/http/demo.mysite.net:81/mysite/VirtualHostRoot/$1 [L,P]
ErrorLog /var/log/apache/demo.mysite.net/error_log
CustomLog /var/log/apache/demo.mysite.net/access.log combined
</VirtualHost>


Previously this site was too slow when I used apache in front of zope:
all requests came to apache, and then apache's Rewrite rule handed them
over to zope:8080.

Now, in the new setup, squid is in front.

question 2) How can I check to make sure that squid is handling all the
requests first, and check its performance as an http accelerator?

question 3) I have a number of name-based VirtualHost entries in my
apache; with squid running in front on port 80, will it accelerate my
vhost sites?
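On question 2, one low-tech check is to compare squid's access log with apache's: every client request should appear in squid's log, while apache should only ever see 127.0.0.1 as the client. A sketch, assuming the log location under /usr/local/squid mentioned earlier in this archive (the exact path is an assumption):

```
# squid.conf (sketch): log every request squid serves
access_log /usr/local/squid/logs/access.log squid
```

Tailing both logs while fetching a page should then show the real client IP only on the squid side.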



Looking forward to your valuable comments on this setup.

I am interested in the pros/cons of this squid--apache--applications setup.

Thanks in advance for your valuable feedback.
KK


[squid-users] squid in port 80 , not running

2008-02-05 Thread kk CHN
Hi squid-users,

I installed squid 2.6 from ports, and squid starts properly on port 3128.

I want squid to listen on port 80, so I changed http_port 3128 to
http_port 80 and restarted it, but after changing the port it is not
running.

tail /var/log/messages shows that it cannot open the HTTP port. Any
hints are most welcome.

My requirement is to have all requests handled first by squid, and then
passed to apache (cache_peer).
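Ports below 1024 are privileged on FreeBSD, so squid cannot bind port 80 while starting as the unprivileged squid user; the other messages in this thread report that starting squid as root fixed exactly this. A sketch of the related squid.conf setting (the user/group values are the usual defaults, not taken from the poster's config):

```
# squid.conf (sketch): when started as root to bind a privileged port,
# squid drops to this user/group for normal operation
cache_effective_user squid
cache_effective_group squid
```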

Thanks in advance
KK


[squid-users] How to make squid sit in front of apache and ZOPE

2008-02-04 Thread kk CHN
Hi  people:

I have a zope instance on my server on port 8080. I configured a vhost
entry with a Rewrite Rule for this instance in my httpd-vhost.conf, as
in the paste below, but my site is too slow due to a large number of
requests. I want squid to sit in front of apache, like this:
squid---apache---zope
Please see the paste; the current setup is as follows:

<VirtualHost *:80>
ServerAdmin [EMAIL PROTECTED]
ServerName  mysite.net
ServerAlias www.mysite.net

RewriteEngine On

#Main rewrite for zope#

RewriteRule ^/(.*) http://127.0.0.1:8080/VirtualHostBase/http/www.mysite.net:80/mysite/VirtualHostRoot/$1 [L,P]

ErrorLog /var/log/httpd/mysite.net/error.log
CustomLog /var/log/httpd/mysite.net/access.log combined
</VirtualHost>
This is the existing setup on my machine, so requests are satisfied
like this: apache:80 ---> zope:8080.




I installed squid on my FreeBSD-6.2 box from ports.


Can you help me with what I have to change in squid.conf, so that the
requests will first be handled by squid, like this:

SQUID--APACHE---ZOPE
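A minimal sketch of the squid.conf side, based on the accelerator config that appears elsewhere in this archive; it assumes apache is moved off port 80 (to 81, say) so squid can take its place, and that the startup script runs squid as root so it can bind port 80:

```
# squid.conf (sketch): squid answers on port 80 as an accelerator and
# forwards everything to apache on the loopback interface
http_port 80 accel vhost
cache_peer 127.0.0.1 parent 81 0 originserver default
http_access allow all
```

The apache vhost above would then be changed to listen on *:81, with the zope rewrite rule left unchanged.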

I would like to request your kind response; that will help me fix the issue.

Thanks in advance
KK


[Mailman-Users] How to migrate existing lists to a new Mailman installation

2008-01-25 Thread kk CHN
Hi People:

I am using mailman 2.1.9 and postfix. I want to migrate my old lists,
running on another server, to this new server machine. How can it be
done?

If anyone has a very simple method or a document for this, please let
me know.

Thanks in advance
kk
--
Mailman-Users mailing list
Mailman-Users@python.org
http://mail.python.org/mailman/listinfo/mailman-users
Mailman FAQ: http://www.python.org/cgi-bin/faqw-mm.py
Searchable Archives: http://www.mail-archive.com/mailman-users%40python.org/
Unsubscribe: 
http://mail.python.org/mailman/options/mailman-users/archive%40jab.org

Security Policy: 
http://www.python.org/cgi-bin/faqw-mm.py?req=show&file=faq01.027.htp


[Mailman-Users] Uncaught bounce notification: Why this message , I haven't sent Any mails

2007-12-16 Thread kk CHN
Hello everybody;

I have a few mailing lists running on Mailman (I am the list
administrator for these lists). For the last few days I have been
getting a message regularly with a subject line like this: Uncaught
bounce notification.

I haven't sent any mails to the address ([EMAIL PROTECTED]), yet I am
still receiving the Uncaught bounce notification email to my list
administration id, which is getting annoying. How can I control this,
or what is happening here with my mail system?

Can anybody here please help me understand why this happened, and what
I have to take care of so that it does not happen again.

THANKS IN ADVANCE
KK
The content of the message is as follows:





The attached message was received as a bounce, but either the bounce
format was not recognized, or no member addresses could be extracted
from it.  This mailing list has been configured to send all
unrecognized bounce messages to the list administrator(s).

For more information see:
http://lists.mytestsite.com/mailman/admin/technical/bounce
http://lists.iosn.net/mailman/admin/iosn-technical/bounce


--

This message was created automatically by mail delivery software.

A message that you sent has not yet been delivered to one or more of
its recipients after 3 days.

The message has not yet been delivered to the following addresses:

  [EMAIL PROTECTED]

host boa-graphics.com[192.160.1.1]:
connection to mail exchanger failed with timeout

No action is required on your part. Delivery attempts will continue for
some time, and this warning may be repeated at intervals if the message
remains undelivered. Eventually the mail delivery software will give up,
and when that happens, the message will be returned to you.

--- The header of the original message is following. ---

Received-SPF: none (mxeu22: 203.189.25.12 is neither permitted nor
denied by domain of mysite.com) client-ip=203.189.25.12;
[EMAIL PROTECTED]; helo=mysite.com
Received: from mysite.com (mysite.com [203.189.25.12])
by mx.kundenserver.de (node=mxeu22) with ESMTP (Nemesis)
id 0MKr6C-1J1inZ3pt2-0003YH for [EMAIL PROTECTED];
Mon, 10 Dec 2007 14:35:57 +0100
Received: from mysite.com (mysite.com [203.189.25.12])
by mysite.com (Postfix) with ESMTP id E44152EBAF0
for [EMAIL PROTECTED];
Mon, 10 Dec 2007 19:05:17 +0530 (IST)
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Subject: Your message to technical awaits moderator approval
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Message-ID: [EMAIL PROTECTED]
Date: Mon, 10 Dec 2007 19:05:16 +0530
Precedence: bulk
X-BeenThere: [EMAIL PROTECTED]
X-Mailman-Version: 2.1.9
List-Id: Technical discussions related to MyWork technical.mysite.com
X-List-Administrivia: yes
Sender: [EMAIL PROTECTED]

[Bacula-users] difficulty in starting bacula in freeBSD-6.2

2007-12-12 Thread kk CHN
Hi all ,

I followed the tutorial for a minimal setup of bacula on FreeBSD: URL:
http://www.freebsddiary.org/bacula.php

But in the ports collection there are two different ports,
bacula-server and bacula-client. I think the tutorial is quite old;
anyway, I installed both bacula-client and bacula-server through ports.

But the section dealing with "Starting the Bacula daemons" asks us to do:

/usr/local/etc/rc.d/bacula.sh start

But I found there is no such bacula.sh script; instead I have seen
bacula-dir, bacula-fd, and bacula-sd files.

Is there anything wrong with my bacula installation, or how can I start
the bacula daemons in this setup?

Any suggestions are highly appreciated.
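Since the port already installed per-daemon scripts, the likely equivalent of the tutorial's single bacula.sh is to start each daemon from its own rc script. A sketch, assuming the scripts named in the message above and the usual FreeBSD convention of enabling services in /etc/rc.conf first (the variable names follow the common port convention and are an assumption):

```
# /etc/rc.conf (sketch): enable the three bacula daemons
bacula_dir_enable="YES"
bacula_sd_enable="YES"
bacula_fd_enable="YES"
```

After that, running /usr/local/etc/rc.d/bacula-dir start (and likewise bacula-sd and bacula-fd) should bring the daemons up.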


Thanks in advance
kk
-
SF.Net email is sponsored by:
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services
for just about anything Open Source.
http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




Gnome2.18.2+FreeBSD-6.2 , issue after six months

2007-11-05 Thread kk CHN
I configured Gnome 2.18.2 on my FreeBSD 6.2 box six months ago, and it
was working fine up to last week. Suddenly it is behaving strangely: it
is not launching the login shell from the panel.

So I went through Applications -> System Tools -> New Login.

But I am getting the message: "GDM is not running. You might be using a
different display manager such as KDM or xdm. If you still wish to use
this feature, either start GDM yourself or ask your system admin to
start GDM."

I haven't altered any system configuration; in /etc/rc.conf these lines
are still there: gdm_enable="YES" gnome_enable="YES"

There are some other problems too:

I can't browse the internet (launching the Firefox web browser itself
takes a long time, and when I tried to access yahoo.com, google.co.in,
etc., it does not let me surf the net, even though I configured my
proxy server and port in the browser).

The ifconfig command (after a long interval I am able to get a shell
prompt) returns the IP address and mask of my system.

If anybody can help find what is wrong with this box and how to fix it,
please send me your suggestions ASAP.

Thanks in advance
KK
___
gnome-list mailing list
gnome-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gnome-list


[Mailman-Users] Mailman list-admin-interface : Login failed

2007-10-23 Thread kk CHN
Hi everybody;

I have had a FreeBSD 6.2 server with the postfix MTA and the mailman
software running for the last 6 months. A couple of days ago all my
mailing lists went down; whenever I try to log in to the admin
interface for any of the lists on this box, I get an error like this:

Bug in Mailman version 2.1.9. We're sorry, we hit a bug!

Please inform the webmaster for this site of this problem. Printing of
traceback and other system information has been explicitly inhibited,
but the webmaster can find this information in the Mailman error logs.
..

tail /usr/local/mailman/logs/error showed a message that there was not
enough space left on the device, so I removed some backup files from
the /usr dir, and now space is available in /usr.

Now I can run /usr/local/mailman/bin/mailmanctl start (stop)
successfully (when there was not enough space in /usr I could not
execute these commands), so mailman start/stop is working.

But there is still a problem with the login session; I get the same
error again:

Bug in Mailman version 2.1.9. We're sorry, we hit a bug! Please inform
the webmaster for this site of this problem. Printing of traceback and
other system information has been explicitly inhibited, but the
webmaster can find this information in the Mailman error logs.


This is my mailman error log; tail /usr/local/mailman/logs/error shows
only these lines:

admin(15742):   SCRIPT_URL: /mailman/admin/listx
admin(15742):   REQUEST_URI: /mailman/admin/list
admin(15742):   HTTP_ACCEPT:
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
admin(15742):   GATEWAY_INTERFACE: CGI/1.1
admin(15742):   REMOTE_PORT: 39221
admin(15742):   HTTP_ACCEPT_LANGUAGE: ta,en-us;q=0.7,en;q=0.3
admin(15742):   CONTENT_TYPE: application/x-www-form-urlencoded
admin(15742):   HTTP_ACCEPT_ENCODING: gzip,deflate
admin(15742):   UNIQUE_ID: W51T4suB-7YAADz9WIoAAALT
admin(15742):   PATH_INFO: /listx


There are lots of experts here; I request all of you to suggest tips to
solve this issue. That would help me a lot.

Thanks in advance
kk