mod_perl 1.0 to 2.0 migration

2022-11-21 Thread Xinhuan Zheng
Hi All,

We need to migrate our codebase from mod_perl 1.0 to 2.0. In our codebase we
use "use Apache::Constants". We want to change it as described in
https://perl.apache.org/docs/2.0/user/porting/compat.html#mod_perl_1_0_and_2_0_Constants_Coexistence.
Our codebase is not only used in a mod_perl environment; it is also used in
non-mod_perl environments, such as standalone programs calling the modules in
the codebase. What should we change "use Apache::Constants" to in a
non-mod_perl environment?
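
Not an authoritative answer, but one common pattern is to branch on the
MOD_PERL environment variable at compile time and define plain fallbacks when
not running under Apache. A sketch (the package name is made up, and the
numeric constant values are the usual mod_perl 2 ones and should be verified
against your server):

```perl
# Sketch: load Apache constants only when running under mod_perl;
# otherwise define plain fallbacks so standalone scripts still compile.
package MyApp::Constants;
use strict;
use warnings;

BEGIN {
    if ($ENV{MOD_PERL}) {
        # Under mod_perl 2.x (mod_perl 1.x used Apache::Constants instead).
        require Apache2::Const;
        Apache2::Const->import(-compile => qw(OK DECLINED NOT_FOUND));
    }
    else {
        # Standalone fallback: mirror the usual values ourselves.
        no strict 'refs';
        *{"Apache2::Const::OK"}        = sub { 0 };
        *{"Apache2::Const::DECLINED"}  = sub { -1 };
        *{"Apache2::Const::NOT_FOUND"} = sub { 404 };
    }
}

1;
```

Callers then use Apache2::Const::OK() everywhere, and only this one module
cares whether mod_perl is present.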

Thank you,

Xinhuan


[389-users] nsslapd-logging-backend

2022-01-11 Thread Xinhuan Zheng
We set up a few 389 Directory Server instances and configured replication
among them. Each instance has its own internal logs. We need to centralize all
the logs in one place using syslog-ng. I learned about a new configuration
attribute, nsslapd-logging-backend -
https://directory.fedoraproject.org/docs/389ds/design/logging-multiple-backends.html.
According to the Red Hat documentation -
https://access.redhat.com/solutions/3060901 - the setting was introduced in
Red Hat Enterprise Linux 7 with 389-ds-base-1.3.5.10-11.el7. Is it available
in CentOS 7 as well?
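
For reference, on versions that support it, the backend is switched with a
cn=config change. A sketch (the value list follows the design page linked
above; verify the supported values on your version):

```ldif
dn: cn=config
changetype: modify
replace: nsslapd-logging-backend
nsslapd-logging-backend: dirsrv-log,syslog
```

Applied with ldapmodify as Directory Manager, this would keep the local log
files while also sending entries to syslog, where syslog-ng can forward them.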

Xinhuan
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


Re: Custom Pricing on eCommerce Storefront

2021-10-11 Thread Xinhuan Zheng
Hello,

I followed the instructions to create a different price type:

https://demo-trunk.ofbiz.apache.org/catalog/control/EditProductPrices?productId=PIZZA

The PriceTypeId is *Special Interest Group Price*.

When I click the Product Page link, it is still showing the *Product
Aggregated Price*. Can I change the UI to show my *Special Interest Group
Price*?

Thank you,

- Xinhuan

On Wed, Oct 6, 2021 at 6:05 PM Rishi Solanki 
wrote:

> Prices with their types are already showing; you just need to allow the
> other price to show on the UI. See the list price and your price label for
> this product; you can do the same for other price types. The other
> information Aditya already shared with you is the best way to go.
>
> -
> https://demo-trunk.ofbiz.apache.org/ecommerce/micro-chrome-widget-WG--p
>
> Rishi Solanki
> *CTO, Mindpath Technology*
> Intelligent Solutions
> cell: +91-98932-87847
> LinkedIn 
>
>
> On Wed, Oct 6, 2021 at 1:11 PM Justine Nowak 
> wrote:
>
> > Thanks Aditya, is there anyway we can also show the custom price types on
> > the eCommerce storefront?
> >
> > On Wed, Oct 6, 2021 at 1:27 AM Aditya Sharma 
> > wrote:
> >
> > > Hello Justine,
> > >
> > > Here you could find list of price types available:
> > >
> > >
> >
> https://demo-trunk.ofbiz.apache.org/webtools/control/entity/find/ProductPriceType
> > >
> > > Here you could find the seed data for the same:
> > >
> > >
> >
> https://github.com/apache/ofbiz-framework/blob/75d3367d47abae604126a97a80e715a798e7fc55/applications/datamodel/data/seed/ProductSeedData.xml#L271
> > >
> > > You could create prices for specific price types:
> > >
> > >
> >
> https://demo-trunk.ofbiz.apache.org/catalog/control/EditProductPrices?productId=WG-9943
> > >
> > > For custom values, you could just create data in ProductPriceType
> entity.
> > >
> > > HTH
> > > Thanks and Regards,
> > > Aditya Sharma
> > >
> > >
> > > On Wed, Oct 6, 2021 at 11:29 AM Justine Nowak 
> > > wrote:
> > >
> > > > Hello,
> > > >
> > > > We want to add custom pricing types to our products. For example, we
> > > have a
> > > > product that will have a "Default Price" (this is what gets charged
> to
> > > the
> > > > invoice), but we also need to add other price types for example,
> Resell
> > > > Price / MSRP / MAP price. How do we create these new types and have
> > them
> > > > show up on the eCommerce site without them interfering with the
> actual
> > > > selling price?
> > > >
> > > > -Justine
> > > >
> > >
> >
>
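
Aditya's suggestion to create data in the ProductPriceType entity amounts to
a single row in the entity XML. A hypothetical example (the id and
description are made up; the format follows the seed file linked above):

```xml
<ProductPriceType productPriceTypeId="RESELL_PRICE" description="Resell Price"/>
```

Once loaded, that id becomes selectable when creating product prices in the
catalog manager.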


[389-users] Re: Insufficient Access Rights

2021-09-23 Thread Xinhuan Zheng
Hi Mark,

You are right. I figured out that the ACI to add is:

(targetattr="userPassword") (version 3.0; acl "Allow proxyagent updating their
password"; allow (write)
userdn="ldap:///cn=proxyagent,ou=profile,dc=mycompany,dc=com";)

I used an LDIF file to add the above to the aci attribute of
'ou=People,dc=mycompany,dc=com'.
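
For anyone following along, such an LDIF might look like this (a sketch
assuming the same suffix; apply with ldapmodify, and note that LDIF
continuation lines start with a single space):

```ldif
dn: ou=People,dc=mycompany,dc=com
changetype: modify
add: aci
aci: (targetattr="userPassword")(version 3.0; acl "Allow proxyagent updating
  their password"; allow (write)
  userdn="ldap:///cn=proxyagent,ou=profile,dc=mycompany,dc=com";)
```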

Thank you,

- Xinhuan


[389-users] Insufficient Access Rights

2021-09-15 Thread Xinhuan Zheng
I set up Self Service Password Tool. 
https://ltb-project.org/documentation/self-service-password. I configured a 
bind DN for password reset.

$ldap_binddn = "cn=proxyagent,ou=profile,dc=mycompany,dc=com";
$ldap_bindpw = "mypassword";

I'm getting a "Password was refused by the LDAP directory (Insufficient access
rights)" error when resetting a user's password. If I change $ldap_binddn
to "Directory Manager", it works.

I then added "cn=proxyagent,ou=profile,dc=mycompany,dc=com" to the "PD
Managers" group, which has this ACI:

ldapsearch -x -LLL -H ldap://server.mycompany.com:389 -s base -b
'ou=People,dc=mycompany,dc=com' aci
dn: ou=People,dc=mycompany,dc=com
aci: (targetattr="*")(targetfilter="(ou=People)")(version 3.0;acl "Engineering
  Group Permissions";allow (write)(groupdn = "ldap:///cn=PD Managers,ou=groups
 ,dc=mycompany,dc=com");)

ldapsearch -x -LLL -H ldap://server.mycompany.com:389 -s base -b 'cn=PD
Managers,ou=Groups,dc=mycompany,dc=com' uniquemember
dn: cn=PD Managers,ou=Groups,dc=mycompany,dc=com
uniquemember: cn=Directory Manager
uniquemember: cn=proxyagent,ou=profile,dc=mycompany,dc=com

The "PD Managers" group has an ACI allowing write on ou=People for all
attributes, and proxyagent is a member of the group. Why does binding as
proxyagent result in "Insufficient Access Rights"?


[389-users] Two Factor Authentication

2021-09-08 Thread Xinhuan Zheng
Does 389 Directory Server support two-factor authentication? Can it be
integrated with Google Authenticator?

- Xinhuan


Re: What technology do you choose for Party data model?

2021-01-08 Thread Xinhuan Zheng
Hello Mr. Pritam,

Thank you for your email. Could you point me to more resources about OFBiz
OOTB capabilities?

Sorry for the unclear question about the Party data model. Initially the
business won't have too many product SKUs, but it will definitely have a
limited number of product SKUs, suppliers, and orders. However, those types
of data won't pour in all at the same time. The very first data to be stored
is Party data, such as suppliers, including candidates who may never become
real suppliers. That's why I included the Excel spreadsheet and Exchange
database options. In my mind, an Excel spreadsheet can store the same kind of
Party data, and an Exchange database can store the communication messages,
even without more sophisticated technology like MySQL. That's just my
understanding, but I wonder whether the community shares this view.

- Xinhuan

On Fri, Jan 8, 2021 at 12:27 AM Pritam Kute 
wrote:

> Hello Xinhuan,
>
> Happy to see that you chose Apache OFBiz for your business needs. Based on
> the overall outline of the business you are thinking of, OFBiz has OOTB
> capabilities to fulfil all your requirements.
>
> The question about the party data model is a little unclear to me, but I
> recommend going with a MySQL database for storing the data. MySQL
> integration is seamless with Apache OFBiz, and a lot of live projects on
> top of Apache OFBiz have been using MySQL for years.
>
> Kind Regards,
> --
> Pritam Kute
>
>
> On Thu, Jan 7, 2021 at 10:46 PM Xinhuan Zheng 
> wrote:
>
> > I am reviewing Apache OFbiz Data Model book -
> >
> https://cwiki.apache.org/confluence/download/attachments/13271792/OFBizDatamodelBook_Combined_20171001.pdf?api=v2
> ,
> > Page 8 - 9, the Party, Contact, Communication, and Security data model.
> One
> > question just comes up into my mind.
> >
> > Suppose this is a Catalog Order business. It has products of books, DVDs,
> > supplies (for example, certain cups used in certain ceremony) that are
> all
> > not perishable. When catalogs are printed and sent to prospects, the
> > business takes the order, fulfills the order from inventoried product
> > items. Obviously, the business needs to store product items before orders
> > come in. This will involve in purchase orders from suppliers. This is an
> > outline of the business.
> >
> > The goal is to keep business process consistent, efficient, and
> obviously,
> > with limited budget of finance and human resources. So technology is
> > carefully compared, and Apache OFBiz comes out as a potential candidate.
> >
> > By analyzing the Apache OFbiz Party data model, as the link referenced
> > above, combining with the outlined business process and the goal, what
> > technology do you choose just for Party data model (including Party,
> > Contact, Communication, Security)? I am listing below potential
> > technologies:
> >
> > 1. Microsoft Excel Spreadsheet
> > 2. Microsoft Exchange Database
> > 3. MySQL Database
> > 4. Mongo Database
> > 5. Oracle Database
> >
> > Thanks!
> >
> >
>


What technology do you choose for Party data model?

2021-01-07 Thread Xinhuan Zheng
I am reviewing Apache OFbiz Data Model book - 
https://cwiki.apache.org/confluence/download/attachments/13271792/OFBizDatamodelBook_Combined_20171001.pdf?api=v2,
 Page 8 - 9, the Party, Contact, Communication, and Security data model. One 
question just comes up into my mind.

Suppose this is a catalog order business. It has products such as books, DVDs,
and supplies (for example, certain cups used in certain ceremonies) that are
all non-perishable. When catalogs are printed and sent to prospects, the
business takes orders and fulfills them from inventoried product items.
Obviously, the business needs to stock product items before orders come in.
This will involve purchase orders from suppliers. This is an outline of the
business.

The goal is to keep the business process consistent and efficient and,
obviously, within a limited budget of finance and human resources. So
technologies were carefully compared, and Apache OFBiz came out as a potential
candidate.

By analyzing the Apache OFBiz Party data model, in the link referenced above,
combined with the outlined business process and the goal, what technology
would you choose just for the Party data model (including Party, Contact,
Communication, Security)? I am listing potential technologies below:

1. Microsoft Excel Spreadsheet
2. Microsoft Exchange Database
3. MySQL Database
4. Mongo Database
5. Oracle Database

Thanks!
 


Entity Engine

2020-11-29 Thread Xinhuan Zheng
I started reading the OFBiz documentation to become familiar with it. I'm
looking at the artifact diagram here:
https://cwiki.apache.org/confluence/display/OFBIZ/Framework+Introduction+Videos+and+Diagrams?preview=/7045155/14286869/18ArtRefDia.pdf
I'm truly amazed by this diagram.

There are only five colored big boxes. Are they virtually serving as
containers? For example, does the Entity Engine serve as a container that
produces entities, which can be created by a physical SQL engine? But what is
the Entity Engine implementation? Is it a Tomcat container instance?

I imagine the Servlet Container at the top right of the diagram is implemented
by Tomcat; is that right? The request flow looks like a loop to me. The CS:
Request: Event items are fed into the Control Servlet: Request part. What are
the CS: Request: Event files, and what is the control logic that makes this
part happen?

From this document:
https://cwiki.apache.org/confluence/display/OFBIZ/Entity+Engine+Guide. The
document says: "The patterns used in the Entity Engine include: Business
Delegate, Value Object, Composite Entity (variation), Value Object Assembler,
Service Locator, and Data Access Object." The number of patterns is much
smaller than the number of entities to be managed? Is that all the patterns?
Is there another reference where I can learn more about those entity patterns?

Thank you,

- Xinhuan


ERD Diagram and Seed Data

2020-09-08 Thread Xinhuan Zheng
Good Afternoon,

I'm a developer working for a digital company. My use case is not too general
and is simpler than OFBiz; it is for a small business. I have maintained a few
Excel spreadsheets for a few years for some business assets. The spreadsheets
are not formally modeled, and from time to time we maintain and query those
sheets. The data exhibit a hierarchical structure and different attribute sets
for different kinds of types. Maintaining those data in a database may require
formal data modeling, and I am not clear how that works.

I came across an old publication about the Universal E-Catalog, and through it
I learned about the OFBiz project. I want to learn the key-value pair modeling
pattern. I downloaded the OFBiz ERD diagrams and studied them for a couple of
days; what I'm interested in are the Product, ProductType, Content, and
ContentType entities that use the Universal E-Catalog pattern. You can correct
me if that is not the case. This modeling pattern needs seed data to be
understood, and I found the best way to view the seed data is an Excel
spreadsheet. I found the project on GitHub containing the seed data; however,
all the seed data is in XML format. Is there any possibility of extracting
some XML seed data and converting it to CSV format? From the ERD diagram, I
think I need the Product, ProductType, ProductTypeAttr, Content, ContentType,
and ContentAttr seed data.
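
Since each seed row is an XML element whose attributes are the columns,
converting an entity to CSV is mechanical. A small Python sketch (the two
sample rows are made-up stand-ins for the real seed data, and the function
name is mine):

```python
# Sketch: flatten OFBiz-style seed XML rows (one element per row,
# attributes as columns) into CSV text.
import csv
import io
import xml.etree.ElementTree as ET

SEED_XML = """<entity-engine-xml>
  <ProductType productTypeId="GOOD" description="Good"/>
  <ProductType productTypeId="SERVICE" description="Service"/>
</entity-engine-xml>"""

def seed_to_csv(xml_text, entity_name):
    """Collect every <entity_name .../> element's attributes and emit CSV."""
    root = ET.fromstring(xml_text)
    rows = [dict(el.attrib) for el in root.iter(entity_name)]
    if not rows:
        return ""
    # Union of all attribute names, since rows may omit optional columns.
    fields = sorted({key for row in rows for key in row})
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(seed_to_csv(SEED_XML, "ProductType"))
```

Pointing ET.parse at a real seed file and opening the resulting CSV in Excel
would give the spreadsheet view described above.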

Thank you!

Xinhuan Zheng



[ansible-project] Generate HTML Table from Ansible Inventory File

2020-08-28 Thread Xinhuan Zheng
Hello,

I have an inventory file like below:

[group1]
server1.example.com
server2.example.com

[group2]
server3.example.com
server4.example.com

[group1:vars]
field1=a1
field2=a2

[group2:vars]
field1=a3
field2=a4

I need to generate an HTML file like below:


<html>
  <head>
    <title>Inventory</title>
  </head>
  <body>
    <table>
      <tr>
        <th>Host</th><th>Field1</th><th>Field2</th>
      </tr>
      <tr>
        <td>server1.example.com</td><td>a1</td><td>a2</td>
      </tr>
      <tr>
        <td>server2.example.com</td><td>a1</td><td>a2</td>
      </tr>
      <tr>
        <td>server3.example.com</td><td>a3</td><td>a4</td>
      </tr>
      <tr>
        <td>server4.example.com</td><td>a3</td><td>a4</td>
      </tr>
    </table>
  </body>
</html>


Basically, all the hosts and their values become an HTML table. How do I use
an Ansible playbook to accomplish it?
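
One way (a sketch; file names are made up) is to render the table on
localhost with the template module, walking groups and hostvars in a Jinja2
template such as:

```jinja
{# inventory_table.html.j2 — hypothetical template; group vars like
   field1/field2 are reachable through hostvars for each host. #}
<html>
  <head><title>Inventory</title></head>
  <body>
    <table>
      <tr><th>Host</th><th>Field1</th><th>Field2</th></tr>
{% for host in groups['all'] %}
      <tr>
        <td>{{ host }}</td>
        <td>{{ hostvars[host]['field1'] }}</td>
        <td>{{ hostvars[host]['field2'] }}</td>
      </tr>
{% endfor %}
    </table>
  </body>
</html>
```

A single task like `template: src=inventory_table.html.j2
dest=/tmp/inventory.html` on localhost would then write the HTML file.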

Thank you,

Xinhuan Zheng

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/9d14c705-e0e0-40fa-81dd-3150d376558fo%40googlegroups.com.


Re: [ansible-project] How to structure variables to distinguish different environments

2020-07-07 Thread Xinhuan Zheng
Hello,

That construct actually makes sense to me. The distinction between
Development and Production is determined by the network. We can create a
custom fact based on each host's IP address and return a custom fact
variable, then set nfs_server according to that custom fact.

Thanks for showing this tip.

On Thursday, July 2, 2020 at 6:23:27 AM UTC-4, Srinivas Naram wrote:
>
> There could be some distinction between both the environments. Can you use 
> gather_facts and get the differentiating value ?
>
> if you are able to get it using gather_facts, you can use set_facts
>
> Example
>
> - set_fact:
>     nfs_server: xyz
>   when: ansible_distribution == 'CentOS'
>
> - set_fact:
>     nfs_server: abc
>   when: ansible_distribution == 'Ubuntu'
>
> On Thu, Jul 2, 2020 at 12:26 AM Xinhuan Zheng  > wrote:
>
>> Hello,
>>
>> I want to define a variable for playbook in *group_vars/server_genre* 
>> file. However, I don't know if Ansible can support something like 
>> server_genre@environment syntax notation. Here is the detail:
>>
>> *In inventory/environment*:
>> ---
>> [server_genre]
>> myserver1.example.com
>>
>> *In group_vars/server_genre:*
>> ---
>> nfs_server: mynfsserver1.example.com
>>
>> *In server_genre.yml playbook:*
>> ---
>> - name: Playbook for server_genre
>>   hosts: server_genre
>>   gather_facts: yes
>>
>>   tasks:
>>
>>   - name: Install Nfs client
>> package:
>>   name: nfs-utils
>>   state: present
>>   - name: mount nfs
>> mount:
>>   path: /mymount
>>   src: "{{ nfs_server }}"
>>   fstype: nfs
>>   opts: ro
>>   state: mounted
>>
>> In a different environment, the variable {{ nfs_server }} will have 
>> different value, however, I can't separate the different values using one 
>> single group_var/server_genre file, unless Ansible supports something like 
>> group_vars/server_genre@enviornment.
>>
>> How do I accomplish the variable value distinction in my case?
>>
>> Thanks,
>>
>> - Xinhuan
>>
>>
>



[ansible-project] How to structure variables to distinguish different environments

2020-07-01 Thread Xinhuan Zheng
Hello,

I want to define a variable for playbook in *group_vars/server_genre* file. 
However, I don't know if Ansible can support something like 
server_genre@environment syntax notation. Here is the detail:

*In inventory/environment*:
---
[server_genre]
myserver1.example.com

*In group_vars/server_genre:*
---
nfs_server: mynfsserver1.example.com

*In server_genre.yml playbook:*
---
- name: Playbook for server_genre
  hosts: server_genre
  gather_facts: yes

  tasks:

  - name: Install Nfs client
package:
  name: nfs-utils
  state: present
  - name: mount nfs
mount:
  path: /mymount
  src: "{{ nfs_server }}"
  fstype: nfs
  opts: ro
  state: mounted

In a different environment, the variable {{ nfs_server }} will have a
different value; however, I can't separate the different values using one
single group_vars/server_genre file, unless Ansible supports something like
group_vars/server_genre@environment.

How do I accomplish the variable value distinction in my case?
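
A common layout for this (a sketch of the usual convention, not the only
option; names are made up) is one inventory directory per environment, each
with its own group_vars, selected at run time with -i:

```text
inventories/
  development/
    hosts                  # [server_genre] dev hosts
    group_vars/
      server_genre.yml     # nfs_server: dev-nfs.example.com
  production/
    hosts                  # [server_genre] prod hosts
    group_vars/
      server_genre.yml     # nfs_server: prod-nfs.example.com
```

Running `ansible-playbook -i inventories/production server_genre.yml` then
picks up the production value of nfs_server with no @environment syntax
needed.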

Thanks,

- Xinhuan




[CentOS] External Array Data Migration

2020-05-21 Thread Xinhuan Zheng
Dear All,

The question below has puzzled me for a while, and I don’t know if anyone has
experienced a similar puzzle:

You have an external array attached to physical hardware. The operating
system is CentOS 5. The file system is created on top of LVM on the external
array and mounted. CentOS 5 can see the array controller and manage all the
LVM configuration, etc. There is data stored on that external array. At some
point, you want to upgrade the operating system to a higher version, CentOS
7. Is there a fast way to migrate the external array’s data volume to new
hardware with CentOS 7 without doing a machine-to-machine rsync of the whole
volume? In other words, can I detach the external array from the old
hardware, attach it to the new hardware, and then re-configure LVM, so the
new operating system recognizes the external array’s file system?
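
This is essentially what vgexport/vgimport are for. A rough sketch, assuming
the volume group is named datavg and there are no device-name conflicts on
the new host (verify names and take backups before running anything):

```text
# On the old CentOS 5 host: quiesce and export the volume group.
umount /mnt/array
vgchange -an datavg          # deactivate all LVs in the VG
vgexport datavg              # mark the VG as exported

# Physically move the array, then on the new CentOS 7 host:
pvscan                       # let LVM discover the moved PVs
vgimport datavg              # take ownership of the exported VG
vgchange -ay datavg          # activate the LVs
mount /dev/datavg/data /mnt/array
```

The data never leaves the array; only the LVM metadata ownership changes, so
this avoids the full rsync.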

Thanks,

- Xinhuan
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] How to Load Custom CentOS 7 Image

2020-05-21 Thread Xinhuan Zheng
Dear All,

> It seems like kickstart is what you are looking for.?
> Is this what you were trying to do?

Yes, kickstart is what I’m looking for. Let me detail my provisioning process:

  *   Provision a VM with the standard CentOS 7 NetInstall ISO and my post
script. The file system is created on top of LVM.
  *   Export the entire VM as a custom ISO image to be loaded onto physical
hardware.
  *   Kickstart the physical hardware using the exported custom image and a
different grub/isolinux configuration.

Although I could just use the same CentOS 7 NetInstall ISO and run all my post
scripts on the physical HW, it may take a few hours, since the post script is
long, and I wonder if the above process will speed things up, since the VM has
everything needed.

If that process isn’t feasible, then I’ll just do the same thing for the
physical hardware.
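
If the goal is to deploy the prebuilt VM image rather than re-run the post
script, recent Anaconda kickstarts can install from an image with the liveimg
directive. A hypothetical fragment (the URL and image name are made up, and
the image must be in a format Anaconda accepts, such as a squashfs or
tarball):

```text
# ks.cfg fragment: install from a prebuilt image instead of packages
liveimg --url=http://repo.example.com/images/centos7-golden.squashfs
```

This sidesteps the long package/post-script phase on each physical machine.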

Thanks,

- Xinhuan


[CentOS] Live Patching Upgrade CentOS 7

2020-05-18 Thread Xinhuan Zheng
Dear All,

Does anyone know how to do a live patching upgrade of CentOS 7 without
rebuilding the whole boot volume? I have enough disk space in the boot
volume. I don’t want to go through creating a new VM and installing a CentOS
7 patch release.
Thanks,

- Xinhuan


Re: [CentOS] How to Load Custom CentOS 7 Image

2020-05-15 Thread Xinhuan Zheng
Dear Earl Ramirez,

>I created a custom ISO a couple of years ago [0], you can use it as
>your base of one of the following links[1-4], should be sufficient to
>get you started.

>[0] https://github.com/EarlRamirez/snipeit_iso

In the above GitHub project, in the
https://github.com/EarlRamirez/snipeit_iso/blob/master/isolinux/grub.conf file,
there is an ‘@’ symbol for splashimage, and for the kernel, @KERNELPATH@,
@ROOT@, etc.

Will those be replaced by actual values? Where do the actual values come from?
How do the actual values get substituted for those variables?

Thanks again,

- Xinhuan


[CentOS] How to Load Custom CentOS 7 Image

2020-05-14 Thread Xinhuan Zheng
Dear All,

I want to create a CentOS 7 VMware VM. I need to install various runtime
libraries, e.g., Tomcat, into the VM. Once all looks good, I want to make a
custom image of the entire VM. Later I want to load this custom image onto
physical HW, so that I don’t need to go through all kinds of runtime
installation. Does anyone know how to handle the process of creating a custom
image and loading it onto physical HW? How does the boot loader work when the
VM and the physical HW have different devices?

Thank you,

- Xinhuan


Re: [ansible-project] Ansible URI and GET_URL does not work for downloading

2020-04-03 Thread Xinhuan Zheng
This morning I found that my URL was missing the .0 in the
/Redhat_Enterprise_7.0 part. Now it is working for me too. Thanks!

- Xinhuan

On Thursday, April 2, 2020 at 5:56:16 PM UTC-4, Kai Stian Olstad wrote:
>
> On Thu, Apr 02, 2020 at 02:21:35PM -0700, Xinhuan Zheng wrote: 
> > Hello, 
> > 
> > I need to use Ansible URI and GET_URL to download a piece of software 
> > called amanda backup server. I'm getting trouble with downloading. This 
> is 
> > my playbook: 
> > 
> > - name: Create cookie for later request 
> >   uri: 
> > url: "https://cdn.zmanda.com/downloads/community/Amanda/3.5.1/Redhat_Enterprise_7.0/amanda-backup_server-3.5.1-1.rhel7.x86_64.rpm"
> >
> > follow_redirects: all 
> >   register: stuff_list 
> > 
> > - name: Debug 
> >   debug: 
> > msg: "{{ stuff_list }}" 
> > 
> > - name:  Download amanda software 
> >   get_url: 
> > url: "https://cdn.zmanda.com/downloads/community/Amanda/3.5.1/Redhat_Enterprise_7.0/amanda-backup_server-3.5.1-1.rhel7.x86_64.rpm"
> >
> > headers: 
> >   Cookie: "{{ stuff_list.cookies }}" 
> > dest: /mytest 
> >   
>
>  
>
> > Has anyone used URI and GET_URL to successfully download any software 
> from 
> > CloudFront? 
>
> get_url without the headers work for me. 
>
> $ cat test.yml 
> --- 
> - hosts: localhost 
>   tasks: 
> - get_url: 
>     url: "https://cdn.zmanda.com/downloads/community/Amanda/3.5.1/Redhat_Enterprise_7.0/amanda-backup_server-3.5.1-1.rhel7.x86_64.rpm"
>     dest: /tmp
>
> $ ansible-playbook test.yml 
>
> PLAY [localhost] ***************************************************************
>
> TASK [get_url] *****************************************************************
> changed: [localhost]
>
> PLAY RECAP *********************************************************************
> localhost                  : ok=1    changed=1    unreachable=0    failed=0    skipped=0
>
> $ file /tmp/amanda-backup_server-3.5.1-1.rhel7.x86_64.rpm 
> /tmp/amanda-backup_server-3.5.1-1.rhel7.x86_64.rpm: RPM v3.0 bin 
> i386/x86_64 
>
>
> -- 
> Kai Stian Olstad 
>



[ansible-project] Ansible URI and GET_URL does not work for downloading

2020-04-02 Thread Xinhuan Zheng
Hello,

I need to use the Ansible uri and get_url modules to download a piece of
software called Amanda backup server. I'm having trouble with the download.
This is my playbook:

- name: Create cookie for later request
  uri:
url: "https://cdn.zmanda.com/downloads/community/Amanda/3.5.1/Redhat_Enterprise_7.0/amanda-backup_server-3.5.1-1.rhel7.x86_64.rpm"
follow_redirects: all
  register: stuff_list

- name: Debug
  debug:
msg: "{{ stuff_list }}"

- name:  Download amanda software
  get_url:
url: "https://cdn.zmanda.com/downloads/community/Amanda/3.5.1/Redhat_Enterprise_7.0/amanda-backup_server-3.5.1-1.rhel7.x86_64.rpm"
headers:
  Cookie: "{{ stuff_list.cookies }}"
dest: /mytest
  
I'm getting below errors:

TASK [amanda : Create cookie for later request] ********************************
fatal: [myserver]: FAILED! => {"changed": false, "connection": "close",
"content": "<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>13AAEE01DDC4B1B6</RequestId><HostId>nc/VbplLWPwS8Z43nTSicEBc+0I7cZcdnSC7XZHUp9zV1bV6ivJhN56nqtTGNOPG95iV5yKnO1Q=</HostId></Error>",
"content_type": "application/xml", "date": "Thu, 02 Apr 2020 21:18:00 GMT",
"elapsed": 0, "msg": "Status code was 403 and not [200]: HTTP Error 403:
Forbidden", "redirected": false, "server": "AmazonS3", "status": 403,
"transfer_encoding": "chunked", "url":
"https://cdn.zmanda.com/downloads/community/Amanda/3.5.1/Redhat_Enterprise_7/amanda-backup_server-3.5.1-1.rhel7.x86_64.rpm",
"via": "1.1 b7d10369ae737ec35cf8d7faced56ef0.cloudfront.net (CloudFront)",
"x_amz_cf_id": "iOQFt0a3nEBiQp23AZEtTJVDF1WYWqCHSqxPQJjihj02ccKxvlhUNQ==",
"x_amz_cf_pop": "EWR53-C2", "x_cache": "Error from cloudfront"}

Has anyone used URI and GET_URL to successfully download any software from 
CloudFront?

Thanks,

- Xinhuan



Re: [ansible-project] "ONE" inventory question

2020-02-27 Thread Xinhuan Zheng
Hi Dick,

So how do we construct inventory files depending on the deployment process
and the people/teams involved, right? I once heard the DevOps wisdom of
people over process, process over automation, but I never understood what
"people" means in DevOps. So we first need to consider which teams/people
are responsible, and then what the process is for those teams/people, right?

As people, we are basically divided into system people and development
people. System people are responsible for building, administering, operating,
and monitoring, including but not limited to computing resources, storage,
and network, while development people are responsible for writing custom
code. If the custom code depends on any 3rd-party modules, system people are
typically responsible for installing/configuring/patching those 3rd-party
modules. Sometimes system people also have to deal with testing, as in the
case of changing from one version to another, like a database upgrade.

The development team is using GitLab as their version control system. GitLab
provides Auto DevOps, but we can't use it; our custom code isn't in those
Auto DevOps areas. The infrastructure code is also version controlled in
GitLab. GitLab has project repositories, and we try to keep the
infrastructure code in one repository, but it appears that one repository
corresponds to one pipeline. From a system perspective, everything is
infrastructure until you reach the application level, so different
applications may have different repositories and different sets of
infrastructure instances serving them.

I don't know how other people handle inventory file version control like we
do. Are you willing to share some of your insights?

Thanks,

- Xinhuan

On Wednesday, February 26, 2020 at 11:29:12 PM UTC-5, Dick Visser wrote:
>
> Hi 
> Hard to tell without knowing what and how things are deployed, and by what 
> people/teams. If you provide that context we can give it a try.
>
> On Wed, 26 Feb 2020 at 21:01, Xinhuan Zheng  > wrote:
>
>> Hello,
>>
>> We have Ansible code in a version control repository. This makes
>> maintaining "ONE" inventory file difficult. If we need to create
>> multiple repositories, for different purposes of Ansible playbook runs,
>> it breaks the "ONE" inventory file assumption. In practice, does everyone
>> maintain their inventory file in one single version control repository, or
>> in multiple version control repositories? Would multiple inventory files in
>> multiple version control repositories create consistency issues?
>>
>> Thanks,
>>
>> - Xinhuan Zheng
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Ansible Project" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to ansible...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/ansible-project/570ae61f-90e9-4143-99ee-d33c1622cbbc%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/ansible-project/570ae61f-90e9-4143-99ee-d33c1622cbbc%40googlegroups.com?utm_medium=email_source=footer>
>> .
>>
> -- 
> Sent from a mobile device - please excuse the brevity, spelling and 
> punctuation.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/05da447b-75c8-4325-9bf6-ac1c47c6b367%40googlegroups.com.


[ansible-project] Re: How to build Ansible inventory file

2020-02-26 Thread Xinhuan Zheng
Hello All,

I'm still patiently waiting for someone to give me a hint on the questions 
below. Thanks

- Xinhuan Zheng

On Thursday, February 13, 2020 at 4:46:35 PM UTC-5, Xinhuan Zheng wrote:
>
> Hello,
>
> I need to build Ansible inventory files, but somehow got stuck. Typically 
> for a service, there is development, staging and production. Initially, I 
> thought I can create 3 inventory files with those names. Later I feel it 
> may not be the case. As I look at this Ansible document: 
> https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#example-group-by-function,
>  
> it actually lists (3) grouping methods, by environment, by functions, and 
> by locations. Can I have a inventory file that is both grouped by 
> environment and by functions? It seems not possible. For example, consider 
> the following inventory file named as myservices:
>
> [development]
> testwebserver1
> testwebserver2
> testloadbalancer1
>
> [production]
> webserver1
> webserver2
> loadbalancer1
>
> [myservices:children]
> development
> production
>
> If the inventory file is development or production, that means it would 
> include all other services and it will become a big inventory file, and 
> playbook will be hard to write to manage all kinds services, for example, 
> development inventory file:
>
> [myservice1]
> testwebserver1
> testwebserver2
>
> [myservice2]
> testwebserver3
> testwebserver4
>
> [load_balancers]
> loadbalancer1
>
> [development:children]
> myservice1
> myservice2
> load_balancers
>
> Is it a good practice to break down one big inventory file containing a 
> lot of services into inventory files just for that service like the first 
> one?
>
> Thank you,
>
> - Xinhuan Zheng
>

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/d958113c-e9d9-467e-b937-34b2f0a1fb75%40googlegroups.com.


[ansible-project] Netscaler Ansible question

2020-02-26 Thread Xinhuan Zheng
Hello,

In my Netscaler Ansible setup, I encountered a problem with 
netscaler_lb_vserver. I'm trying to use a list of servicebindings to create 
a single lb vserver with two services, but instead it only creates 
"test2-service" for that lb vserver; "test1-service" isn't there. Is there 
a way to use a loop variable in netscaler servicebindings?

- hosts: netscaler
  gather_facts: no

  vars:
servicebindings:
  - servicename: "test1-service"
weight: "50"
  - servicename: "test2-service"
weight: "50"

  tasks:

- name: Create netscaler endpoint lbvservers
  delegate_to: localhost

  netscaler_lb_vserver:
nsip: "{{ nsip }}"
nitro_user: "{{ nitro_user }}"
nitro_pass: "{{ nitro_pass }}"

state: present

name: "test-lbvserver"
servicetype: "HTTP"
ipv46: "10.10.10.10"
port: "80"
lbmethod: "ROUNDROBIN"
servicebindings:
      - servicename: "{{ item.servicename }}"
weight: "{{ item.weight }}"
  with_items: "{{ servicebindings }}"
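One possible approach (a sketch, not verified against this module version) is to pass the whole list to `servicebindings` instead of looping: `servicebindings` itself takes a list of bindings, and looping with `with_items` runs the task once per binding, so each run can overwrite the previously declared binding set:

```yaml
- name: Create netscaler endpoint lbvserver with all bindings
  delegate_to: localhost
  netscaler_lb_vserver:
    nsip: "{{ nsip }}"
    nitro_user: "{{ nitro_user }}"
    nitro_pass: "{{ nitro_pass }}"
    state: present
    name: "test-lbvserver"
    servicetype: "HTTP"
    ipv46: "10.10.10.10"
    port: "80"
    lbmethod: "ROUNDROBIN"
    # pass the full list in one task run, declaring the complete binding set
    servicebindings: "{{ servicebindings }}"
```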

Thank you,

- Xinhuan Zheng

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/9bb1a439-3ad1-48bd-800c-2b9b19dff62e%40googlegroups.com.


[ansible-project] "ONE" inventory question

2020-02-26 Thread Xinhuan Zheng
Hello,

We have Ansible code in a version control repository. This makes 
maintaining "ONE" inventory file difficult. If we need to create 
multiple repositories for different Ansible playbook purposes, 
that breaks the "ONE" inventory file assumption. In practice, does everyone 
maintain their inventory file in one single version control repository, or 
in multiple version control repositories? Would multiple inventory files in 
multiple version control repositories create consistency issues?
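One common arrangement (a sketch; the repository name and path are assumptions) is to keep the inventory in a dedicated repository and point each playbook repository at it via ansible.cfg, so every repository still shares one source of truth:

```ini
# ansible.cfg in each playbook repository (path is an assumption)
[defaults]
inventory = ../inventory-repo/hosts
```

Each playbook repository then checks out the inventory repository alongside itself, and updates to the inventory happen in exactly one place.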

Thanks,

- Xinhuan Zheng

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/570ae61f-90e9-4143-99ee-d33c1622cbbc%40googlegroups.com.


[ansible-project] How to build Ansible inventory file

2020-02-13 Thread Xinhuan Zheng
Hello,

I need to build Ansible inventory files, but I somehow got stuck. Typically, 
for a service there is development, staging, and production. Initially, I 
thought I could create 3 inventory files with those names; later I felt that 
may not be the case. As I look at this Ansible document: 
https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#example-group-by-function,
it actually lists three grouping methods: by environment, by function, and 
by location. Can I have an inventory file that is grouped both by 
environment and by function? It seems not possible. For example, consider 
the following inventory file named myservices:

[development]
testwebserver1
testwebserver2
testloadbalancer1

[production]
webserver1
webserver2
loadbalancer1

[myservices:children]
development
production

If the inventory file is per environment (development or production), it 
would include all the other services and become a big inventory file, and the 
playbook will be hard to write to manage all kinds of services. For example, a 
development inventory file:

[myservice1]
testwebserver1
testwebserver2

[myservice2]
testwebserver3
testwebserver4

[load_balancers]
loadbalancer1

[development:children]
myservice1
myservice2
load_balancers

Is it a good practice to break one big inventory file containing a lot of 
services into per-service inventory files like the first one?
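For what it's worth, a host can belong to several groups at once, so a single inventory can be grouped by environment and by function simultaneously (a sketch reusing the host names above):

```ini
# one inventory file, two orthogonal groupings
[development]
testwebserver1
testwebserver2
testloadbalancer1

[production]
webserver1
webserver2
loadbalancer1

[webservers]
testwebserver1
testwebserver2
webserver1
webserver2

[load_balancers]
testloadbalancer1
loadbalancer1
```

A play can then target an intersection with a host pattern such as `hosts: webservers:&development`.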

Thank you,

- Xinhuan Zheng

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/1517c5c1-31e0-4eaa-b793-1d7fd3846cd3%40googlegroups.com.


Re: [ansible-project] How do I include another playbook in current playbook?

2020-02-11 Thread Xinhuan Zheng
Hello Alicia,

This is great. I got the idea. Thanks for your help!

- Xinhuan

On Thursday, January 23, 2020 at 12:33:57 PM UTC-5, alicia wrote:
>
> You cannot import a playbook anywhere inside a play - importing a playbook 
> is a play of its own. 
>
> If you want to run the imported playbook first, try:
>
> - name: this play runs ‘another.yml' on the hosts it defines
>   import_playbook: another.yml
>
> - name: this play runs two roles on all hosts in the mywebservers group
>   hosts: mywebservers
>   gather_facts: yes
>
>   roles:
> - role: testrole1
>   tags: testrole1
> - role: testrole2
>   tags: othertag
>
> If you want to run the roles first, reverse the order of the two plays.
>
> You can also review the general documentation on importing and including 
> at https://docs.ansible.com/ansible/devel/user_guide/playbooks_reuse.html. 
> You may want to edit “another.yml” to make it a tasks file instead of a 
> playbook for greater flexibility.
>
> Hope this helps,
> Alicia
>
> On Jan 22, 2020, at 3:24 PM, Xinhuan Zheng  > wrote:
>
>
>   Take a look again at the example in 
>>
>> https://docs.ansible.com/ansible/latest/modules/import_playbook_module.html. 
>>
>> To me it does not know where 
>>
>>
>>   roles: 
>> - role: testrole1 
>>   tags: testrole1 
>>
>> belongs. Should it be 
>>
>> - hosts: mywebservers 
>>   gather_facts: yes 
>>
>>   roles: 
>> - role: testrole1 
>>   tags: testrole1 
>>
>> #- import_playbook: another.yml 
>>
>> i.e. those roles are related to mywebservers? Or are they related to 
>> all hosts as defined in another.yml? 
>>
>>
>>
> testrole1 belongs to mywebservers. It isn't related to all hosts as 
> defined in another.yml file. However, another.yml file needs to be called 
> first. I tried using pre_tasks with import_playbook. It doesn't work 
> either. I also tried using include, still not working. Since another.yml 
> file contains a list of roles, it is supposed to be import_playbook, but 
> I'm not sure how to make import_playbook working in current_playbook.yml 
> file.
>
> Thanks again,
>
> - Xinhuan
>
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Ansible Project" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to ansible...@googlegroups.com .
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/ansible-project/ea2a316c-0d8e-4641-9e94-d10e708b3ea5%40googlegroups.com
>  
> <https://groups.google.com/d/msgid/ansible-project/ea2a316c-0d8e-4641-9e94-d10e708b3ea5%40googlegroups.com?utm_medium=email_source=footer>
> .
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/0c1fa28a-b8a6-4df2-9d20-1e0efd99789c%40googlegroups.com.


Re: [ansible-project] How do I include another playbook in current playbook?

2020-01-22 Thread Xinhuan Zheng


>   Take a look again at the example in 
> https://docs.ansible.com/ansible/latest/modules/import_playbook_module.html. 
>
> To me it does not know where 
>
>
>   roles: 
> - role: testrole1 
>   tags: testrole1 
>
> belongs. Should it be 
>
> - hosts: mywebservers 
>   gather_facts: yes 
>
>   roles: 
> - role: testrole1 
>   tags: testrole1 
>
> #- import_playbook: another.yml 
>
> i.e. those roles are related to mywebservers? Or are they related to 
> all hosts as defined in another.yml? 
>
>
>
testrole1 belongs to mywebservers. It isn't related to all hosts as defined 
in another.yml file. However, another.yml file needs to be called first. I 
tried using pre_tasks with import_playbook. It doesn't work either. I also 
tried using include, still not working. Since another.yml file contains a 
list of roles, it is supposed to be import_playbook, but I'm not sure how 
to make import_playbook working in current_playbook.yml file.

Thanks again,

- Xinhuan

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/ea2a316c-0d8e-4641-9e94-d10e708b3ea5%40googlegroups.com.


[ansible-project] How do I include another playbook in current playbook?

2020-01-22 Thread Xinhuan Zheng
Hello,

I created a playbook which needs to call another playbook. This is my 
current playbook:

---
# file: current_playbook.yml

- hosts: mywebservers
  gather_facts: yes

#- import_playbook: another.yml

  roles:
- role: testrole1
  tags: testrole1

  post_tasks:
- name: Install configuration file
  template:
src: myconf.j2
dest: /remote-path/myconf
  tags: testrole1

- name: Install cron
  cron:
 name: 'run every day'
 minute: '0'
 hour: '0'
 job: "/remote-path/job"
   tags: testrole1

I want to run the playbook like: ansible-playbook -i myinventory -l 
mywebservers current_playbook.yml --tags testrole1. But the another.yml 
playbook needs to run first. Here is the another.yml playbook:

---
# file: another.yml

- hosts: all
  gather_facts: yes

  roles:
- role: myrole1
- role: myrole2

When I uncomment the `- import_playbook: another.yml` line in 
current_playbook.yml, I get the error below:

ERROR! 'roles' is not a valid attribute for a PlaybookInclude
- import_playbook: another.yml
  ^ here

How do I call another.yml playbook in my current_playbook.yml file?

Thank you,

- Xinhuan Zheng

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/d725708b-622c-4f93-b25c-89cddc114d38%40googlegroups.com.


[ansible-project] Ansible firewalld module question

2020-01-16 Thread Xinhuan Zheng
Hello,


In the Ansible firewalld module documentation: 
https://docs.ansible.com/ansible/latest/modules/firewalld_module.html?highlight=firewalld,
the examples look like the one below:


- firewalld:
service: https
permanent: yes
state: enabled

We want to make it more descriptive in our role's tasks/main.yml file, since it 
is part of a large playbook:

- name: Firewalld open https
  firewalld:
service: https
permanent: yes
state: enabled

However, it doesn't work in the large playbook which calls that role. Does 
anybody know why?

Thank you,

- Xinhuan

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/df271954-87ac-4f01-b969-a1e0fd152dfe%40googlegroups.com.


Re: [ansible-project] Re: Does current Ansible support templates macro?

2020-01-16 Thread Xinhuan Zheng
Hello Mr. Botka,

This is exactly what I am looking for. It looks so neat with ini_file 
module instead of template. I'll play with it in my tasks.

Thank you very much! :)

- Xinhuan

On Thursday, January 16, 2020 at 10:50:25 AM UTC-5, Vladimir Botka wrote:
>
> On Thu, 16 Jan 2020 05:49:07 -0800 (PST) 
> Xinhuan Zheng > wrote: 
>
> > sssd_config: 
> >   sssd: 
> > debug_level: 1 
> > additional_key: additional_value 
> > another_addtional_key: another_additional_value 
> >   nss: 
> > reconnection_retries: 3 
> > additional_key: additional_value 
> > another_addtional_key: another_additional_value 
> >   pam: 
> > debug_level: 5 
> > additional_key: additional_value 
> > another_addtional_key: another_additional_value 
> > 
> > Because this is so difficult to manipulate in template (I spend most 
> > yesterday to figure it out), I think it is probably better just put 
> > INI-style content into template file 
>
> It also possible to use module 'ini_file' 
> https://docs.ansible.com/ansible/latest/modules/ini_file_module.html 
>
> With the configuration data transformed to this list 
>
>   sssd_config: 
> - params: 
> - additional_key: additional_value 
> - reconnection_retries: 3 
> - another_addtional_key: another_additional_value 
>   section: nss 
> - params: 
> - debug_level: 5 
> - another_addtional_key: another_additional_value 
> - additional_key: additional_value 
>   section: pam 
> - params: 
> - debug_level: 1 
> - another_addtional_key: another_additional_value 
> - additional_key: additional_value 
>   section: sssd 
>
> the task below 
>
> - ini_file: 
> path: /scratch/tmp/config.ini 
> section: "{{ item.0.section }}" 
> option: "{{ item.1.keys()|list|first }}" 
> value: "{{ item.1.values()|list|first }}" 
>   with_subelements: 
> - "{{ sssd_config }}" 
> - params 
>
> gives 
>
> $ cat /scratch/tmp/config.ini 
> [nss] 
> additional_key = additional_value 
> reconnection_retries = 3 
> another_addtional_key = another_additional_value 
> [pam] 
> debug_level = 5 
> another_addtional_key = another_additional_value 
> additional_key = additional_value 
> [sssd] 
> debug_level = 1 
> another_addtional_key = another_additional_value 
> additional_key = additional_value 
>
> -- 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/eeb236a9-a093-4f7c-b981-943f43989367%40googlegroups.com.


Re: [ansible-project] Re: Does current Ansible support templates macro?

2020-01-16 Thread Xinhuan Zheng
Hello,

Your testing looks fine with the test data model. However, the real 
sssd_config data model is like below:

sssd_config: 
  sssd: 
debug_level: 1 
additional_key: additional_value
another_addtional_key: another_additional_value
  nss: 
reconnection_retries: 3 
additional_key: additional_value
another_addtional_key: another_additional_value
  pam: 
debug_level: 5 
additional_key: additional_value
another_addtional_key: another_additional_value

The additional_key and another_additional_key aren't the same per section 
(pam, nss, sssd), and the number of additional keys per section isn't identical 
either. So the nss section may have 5 key/value pairs, pam may have 6 key/value 
pairs, and sssd may have only 3 key/value pairs. Each additional key is 
pretty much unique to its section.

Because this is so difficult to manipulate in a template (I spent most of 
yesterday trying to figure it out), I think it is probably better to just put 
the INI-style content into the template file, then fill in the values from 
variables that vary for each key/value pair. It makes the work simpler.

Thank you for providing the test case. I'll remember this lesson.

- Xinhuan


On Wednesday, January 15, 2020 at 3:45:44 PM UTC-5, Vladimir Botka wrote:
>
> On Wed, 15 Jan 2020 11:57:49 -0800 (PST) 
> Xinhuan Zheng > wrote: 
>
> > I tested the solution, it doesn't work. item.1 becomes: 
> > {u'id_provider': u'local', u'auth_provider': u'local', u'enumerate': 
> True} 
> > So I get error there is no keys on {{ item.1.keys().0 }} 
>
> Both versions works for me. Double-check the code. The playbook 
>
> - hosts: localhost 
>   vars: 
> sssd_config: 
>   sssd: 
> debug_level: 1 
>   nss: 
> reconnection_retries: 3 
>   pam: 
> debug_level: 5 
>   tasks: 
> - template: 
> src: template.j2 
> dest: config.ini 
>
> with the template 
>
> {% for item in sssd_config.items() %} 
> [{{ item.0 }}] 
> {{ item.1.keys().0 }}={{ item.1.values().0 }} 
> {% endfor %} 
> # -- 
> {% for item in sssd_config.items() %} 
> [{{ item.0 }}] 
> {% for iitem in item.1.items() %} 
> {{ iitem.0 }}={{ iitem.1 }} 
> {% endfor %} 
> {% endfor %} 
>
> gives 
>
> [nss] 
> reconnection_retries=3 
> [pam] 
> debug_level=5 
> [sssd] 
> debug_level=1 
> # -- 
> [nss] 
> reconnection_retries=3 
> [pam] 
> debug_level=5 
> [sssd] 
> debug_level=1 
>
> -- 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/2b863977-2dc5-425f-86f9-e4d7130fad70%40googlegroups.com.


Re: [ansible-project] Re: Does current Ansible support templates macro?

2020-01-15 Thread Xinhuan Zheng
Got this error:

"AnsibleUndefinedVariable: 'list object' has no attribute 'items'" for 
item.1.items()

- Xinhuan

On Wednesday, January 15, 2020 at 1:56:58 PM UTC-5, Vladimir Botka wrote:
>
> On Wed, 15 Jan 2020 19:47:23 +0100 
> Vladimir Botka > wrote: 
>
> > Fit the template to your needs. For example the template 
> > 
> > {% for item in sssd_config.items() %} 
> > [{{ item.0 }}] 
> > {{ item.1.keys().0 }}={{ item.1.values().0 }} 
> > {% endfor %} 
> > 
> > gives 
> > 
> > [nss] 
> > reconnection_retries=3 
> > [pam] 
> > debug_level=5 
> > [sssd] 
> > debug_level=1 
>
> There might be more items in the configuration sections. The template 
> below gives the same result and would include other parameters if present 
>
> {% for item in sssd_config.items() %} 
> [{{ item.0 }}] 
> {% for iitem in item.1.items() %} 
> {{ iitem.0 }}={{ iitem.1 }} 
> {% endfor %} 
> {% endfor %} 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/adc7d7b9-8e64-4b26-967a-2e74a3f03bd2%40googlegroups.com.


Re: [ansible-project] Re: Does current Ansible support templates macro?

2020-01-15 Thread Xinhuan Zheng
I tested the solution; it doesn't work. item.1 becomes:
{u'id_provider': u'local', u'auth_provider': u'local', u'enumerate': True}

So I get an error that there are no keys on {{ item.1.keys().0 }}

- Xinhuan


On Wednesday, January 15, 2020 at 1:47:42 PM UTC-5, Vladimir Botka wrote:
>
> On Wed, 15 Jan 2020 10:40:53 -0800 (PST) 
> Xinhuan Zheng > wrote: 
>
> > Tried what you said. Here is what {{ item }} look like: 
> > 
> > [(u'sssd', {u'debug_level': 5, u'reconnection_retries': 3, 
> > u'config_file_version': 2, u'sbus_timeout': 30})] 
> > [(u'services', [u'nss', u'pam', u'ssh'])] 
> > 
> > What should I do next? 
>
> Fit the template to your needs. For example the template 
>
> {% for item in sssd_config.items() %} 
> [{{ item.0 }}] 
> {{ item.1.keys().0 }}={{ item.1.values().0 }} 
> {% endfor %} 
>
> gives 
>
> [nss] 
> reconnection_retries=3 
> [pam] 
> debug_level=5 
> [sssd] 
> debug_level=1 
>
> HTH, 
>
> -vlado 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/e5bb467d-0c5c-440e-adfc-41dbcbef9e6f%40googlegroups.com.


[ansible-project] Re: Does current Ansible support templates macro?

2020-01-15 Thread Xinhuan Zheng
I tried what you said. Here is what {{ item }} looks like:

[(u'sssd', {u'debug_level': 5, u'reconnection_retries': 3, 
u'config_file_version': 2, u'sbus_timeout': 30})]

[(u'services', [u'nss', u'pam', u'ssh'])]

...

What should I do next?

Thanks again,

- Xinhuan

On Tuesday, January 14, 2020 at 4:27:45 PM UTC-5, Xinhuan Zheng wrote:
>
> Hello,
>
> I'm working on a role for system SSSD daemon. I found this piece of code 
> online:
>
> https://github.com/picotrading/ansible-sssd/blob/master/templates/sssd.conf.j2
>
> I have defined my own sssd_config variable in my role's defaults 
> directory, so I'd like to use that piece of code. That code is neat. 
> However, I don't understand what it is doing in line:
> {% from "templates/encoder/macros/ini_encode_macro.j2" import ini_encode 
> with context -%}
>
> Also does current Ansible support templates macro like above?
>
> If it doesn't, then sssd_config variable is a large dictionary map, with 
> INI-style different sections. What really needs to happen is based on that 
> variable, change it to use = symbol as delimiter for each INI section. For 
> example:
>
> sssd_config:
>   sssd:
> debug_level: 1
> config_file_version: 2
> ...
>
> Then the produced sssd.conf file would look like below:
>
> [sssd]
> debug_level=1
> config_file_version=2
>
> Thank you,
>
> Xinhuan Zheng
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/58c3be45-1e40-4c86-8421-ebfc5112d88d%40googlegroups.com.


Re: [ansible-project] Does current Ansible support templates macro?

2020-01-15 Thread Xinhuan Zheng
I still cannot figure out how to loop through my variable:

sssd_config:
  sssd:
    debug_level: 1
  nss:
    reconnection_retries: 3
  pam:
    debug_level: 5

Here is my template code:

{% for item in sssd_config %}
[{{ item }}]
{% set list = sssd_config[item] %}
{% for i in list %}
{{ i }} =
{% endfor %}
{% endfor %}

I cannot figure out what to put after {{ i }}. Please HELP!
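For reference, unpacking each section's key/value pairs directly in the inner loop avoids having to name the values at all (a sketch against the sssd_config variable above):

```jinja
{% for section, params in sssd_config.items() %}
[{{ section }}]
{% for key, value in params.items() %}
{{ key }} = {{ value }}
{% endfor %}
{% endfor %}
```

This handles sections with different keys and a different number of keys per section, since the inner loop just walks whatever each section's dictionary contains.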

Thank you,

- Xinhuan Zheng

On Tuesday, January 14, 2020 at 5:42:22 PM UTC-5, Vladimir Botka wrote:
>
> On Tue, 14 Jan 2020 13:27:45 -0800 (PST) 
> Xinhuan Zheng > wrote: 
>
> > ... However, I don't understand what it is doing in line: 
> > {% from "templates/encoder/macros/ini_encode_macro.j2" import ini_encode 
> > with context -%} 
>
> This link to the Jinja doc will help you 
> https://jinja.palletsprojects.com/en/2.10.x/templates/#import 
>
> -vlado 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/f83716c1-d475-4034-8b54-95c578503d0b%40googlegroups.com.


[ansible-project] Does current Ansible support templates macro?

2020-01-14 Thread Xinhuan Zheng
Hello,

I'm working on a role for system SSSD daemon. I found this piece of code 
online:
https://github.com/picotrading/ansible-sssd/blob/master/templates/sssd.conf.j2

I have defined my own sssd_config variable in my role's defaults directory, 
so I'd like to use that piece of code. That code is neat. However, I don't 
understand what it is doing in line:
{% from "templates/encoder/macros/ini_encode_macro.j2" import ini_encode 
with context -%}

Also, does current Ansible support template macros like the above?

If it doesn't, then the sssd_config variable is a large dictionary map with 
INI-style sections. What really needs to happen is, based on that 
variable, to use the = symbol as the delimiter within each INI section. For 
example:

sssd_config:
  sssd:
debug_level: 1
config_file_version: 2
...

Then the produced sssd.conf file would look like below:

[sssd]
debug_level=1
config_file_version=2

Thank you,

Xinhuan Zheng


-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/8604b2d6-8af5-476a-a9be-74439659f806%40googlegroups.com.


[ansible-project] How do I use Ansible loop for generalizing things in a role

2020-01-08 Thread Xinhuan Zheng
Hello,

I'm creating an Ansible role to place a set of scripts on managed hosts so 
that they can be started by cron on a defined schedule. Since it is a set of 
scripts, I want to generalize the play execution sequence using a loop. 
This is before the loop:

---

- name: Create directory
  file:
path: /mydirectory
state: directory
owner: root
group: root
mode: 0755

- name: Install script1
  template:
src: script1.sh.j2
dest: /mydirectory/script1.sh

- name: Create cronjob for script1
  cron:
name: script1 run every minute
cron_file: script1_cron
user: root
job: /mydirectory/script1.sh

- name: Install script2
  template:
src: script2.sh.j2
dest: /mydirectory/script2.sh

- name: Create cronjob for script2
  cron:
name: script2 run every 10 minutes
minute: */10
cron_file: script2_cron
user: root
job: /mydirectory/script2.sh

Since it is a set of scripts, I figured it would be possible to generalize 
using a loop, but I don't know how. In place of script1/script2 there will be a 
variable name. How do I generalize every minute, every 10 minutes, every 
hour, or a specific date & time as a variable within a loop?
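One way to generalize this (a sketch; the variable layout and names are assumptions) is to describe each script as a dictionary carrying its own schedule fields, then loop the template and cron tasks over that list, defaulting any omitted field to `*`:

```yaml
# defaults/main.yml (assumed): one entry per script, with optional schedule fields
myscripts:
  - name: script1            # every minute (cron defaults)
  - name: script2
    minute: "*/10"           # every 10 minutes
  - name: script3
    minute: "0"
    hour: "0"                # daily at midnight

# tasks/main.yml
- name: Install scripts
  template:
    src: "{{ item.name }}.sh.j2"
    dest: "/mydirectory/{{ item.name }}.sh"
  loop: "{{ myscripts }}"

- name: Create cronjobs
  cron:
    name: "{{ item.name }} schedule"
    minute: "{{ item.minute | default('*') }}"
    hour: "{{ item.hour | default('*') }}"
    cron_file: "{{ item.name }}_cron"
    user: root
    job: "/mydirectory/{{ item.name }}.sh"
  loop: "{{ myscripts }}"
```

Adding a new script then only requires a new entry in the list, not a new pair of tasks.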

Thank you,

- Xinhuan Zheng


-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/2c5294c6-d1f6-4bd9-b1c6-7bf0bfb31ca2%40googlegroups.com.


[ansible-project] Re: Running Ansible ping and getting error: Operation not permitted\r\n",

2019-12-27 Thread Xinhuan Zheng


On Thursday, December 26, 2019 at 11:49:26 AM UTC-5, gefela wrote:
>
>
> When I am running a ansible ping from a ubuntu VM to a host using the 
> command 
>
> ansible -m ping juniper 
>
> It gives me the error message ...
>
>
>
> WARNING]: Platform freebsd on host 172.16.203.122 is using the discovered 
> Python interpreter at /usr/bin/python, but future installation of another 
> Python interpreter could change this. See 
> https://docs.ansible.com/ansible/2.9/ 
> reference_appendices/interpreter_discovery.html for more information.
>
> 172.16.203.122 | FAILED! => { "ansible_facts": { 
> "discovered_interpreter_python": "/usr/bin/python" }, "changed": false, 
> "module_stderr": "Shared connection to 172.16.203.122 closed.\r\n", 
> "module_stdout": "/bin/sh: /usr/bin/python: Operation not permitted\r\n", 
> "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 126
>
> My hosts file has the following entry .. 
>
> [juniper]
>
> 172.16.203.122 ansible_ssh_user=root ansible_ssh_pass=my password 
>
> What is missing as i am running out of ideas 
>

You need to deploy the ssh private key for the user you are running as on the 
control node, the ssh public key for the same user on the managed node, and 
escalated privileges (for example, sudo) for that same user for commands that 
require them (like reboot).

- Xinhuan

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/a88fe47d-e8d0-43c0-b71d-9163b9b950d7%40googlegroups.com.


Re: [ansible-project] Force ansible-playbook to collect ansible facts

2019-12-27 Thread Xinhuan Zheng
Hello Mr. Kai,

Sorry for not being clear. What I'm working on is probably a big playbook, 
so I used the Ansible roles approach for code re-usability. The main playbook 
invokes each role as I develop, and I need to debug each role by calling the 
main playbook. That's why I figured I would use tags, since each role is 
tagged differently. Today I found that some tasks in the roles were not tagged 
at all, so they were skipped. As I added the missing tags, everything worked 
as expected.

One thing I want to share with you and others: it is probably not a good 
idea to tag the main playbook if you already tag the roles. If the main 
playbook is tagged, then invoking with both the main playbook's tag and a 
role's tag causes unwanted roles to be invoked, and invoking with only a 
role's tag causes fact gathering to be skipped. So it is better not to tag 
the main playbook. 

There is no need to create setup module. Using gather_facts: yes in main 
playbook is fine.

Just found that tags can be applied to roles this way: 
https://docs.ansible.com/ansible/latest/user_guide/playbooks_tags.html:

roles:
  - role: webserver
    tags: [ web, foo ]

- Xinhuan

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/ab8b723e-d40d-416f-b43c-732e83cb3afd%40googlegroups.com.


Re: [ansible-project] Not replacing invalid character(s) in group name warning

2019-12-27 Thread Xinhuan Zheng
Hello Mr. Kai,

I used {{ ansible_default_ipv4.network.replace('.', '_') }} and created 
group_vars/subnet_xxx_xxx_xxx_xxx. It worked perfectly. Thank you very much 
for your help!
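
For the archives, the complete working task looks like this (a sketch; the 
subnet group naming matches my earlier post):

  - name: Group system by subnet
    group_by:
      key: subnet_{{ ansible_default_ipv4.network.replace('.', '_') }}

With a host whose network is 192.168.101.0, this adds it to the group 
subnet_192_168_101_0, which matches group_vars/subnet_192_168_101_0.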

- Xinhuan

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/9c3491dd-0281-4832-833a-dd4fa3343feb%40googlegroups.com.


Re: [ansible-project] Force ansible-playbook to collect ansible facts

2019-12-26 Thread Xinhuan Zheng


> Because you have replaced the tags with the config tag. 
>
>
>
I want to configure only networking part inside config. That is, config is 
parent level tag, I only want to invoke networking tag of the parent tag. 
Can I do that? BTW, below isn't working:

ansible-playbook -i test  -l my_servers pb.yml --tags config,networking

- Xinhuan

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/963ae457-3f5d-4a67-8faf-20126b6ab700%40googlegroups.com.


[ansible-project] Force ansible-playbook to collect ansible facts

2019-12-26 Thread Xinhuan Zheng
Hello,

I ran my ansible playbook, pb.yml file, with --tags, like following:

ansible-playbook -i "192.168.100.1," pb.yml --tags "networking"

ansible-playbook does NOT collect ansible facts. However, if I run it 
without --tags, the ansible facts are collected. 

Why is that?

Here is my directory layout:

pb.yml
roles/
  networking/
    tasks/
      main.yml

Here is the pb.yml:

---

- hosts: all
  gather_facts: yes

  roles:
    - role: networking
      tags:
        - config

Here is the roles/networking/tasks/main.yml file:



  - name: install networking packages
    yum:
      name: 'NetworkManager'
      state: installed
    tags: networking

Thanks,

- Xinhuan


-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/5db3f7bc-2241-4448-9ded-0f2e26e773a6%40googlegroups.com.


Re: [ansible-project] Not replacing invalid character(s) in group name warning

2019-12-26 Thread Xinhuan Zheng
Hello,

The group names derived from subnets will contain dots. How do I change the 
key to avoid dot characters?

- Xinhuan

On Saturday, December 21, 2019 at 2:27:52 AM UTC-5, Kai Stian Olstad wrote:
>
> On 20.12.2019 22:08, Xinhuan Zheng wrote: 
> >- name: Group system by subnet 
> >  group_by: 
> >key: subnet_{{ansible_default_ipv4.network}} 
> > 
> > While I run my playbook, I got this error: 
> > 
> > TASK [os-networking : Group system by subnet] 
> > 
> *
>  
>
> > task path: os-networking/tasks/main.yml:24 
> > Not replacing invalid character(s) "set([u'.'])" in group name 
> > (subnet_192.168.101.0) 
> > [DEPRECATION WARNING]: The TRANSFORM_INVALID_GROUP_CHARS settings is set 
> to 
> > allow bad characters in group names by default, this 
> > will change, but still be user configurable on deprecation. This feature 
> > will be removed in version 2.10. Deprecation warnings can 
> >   be disabled by setting deprecation_warnings=False in ansible.cfg. 
> > [WARNING]: Invalid characters were found in group names but not 
> replaced, 
> > use - to see details 
> > 
> > How should I fix this warning? 
>
> Don't use dot in group name or set TRANSFORM_INVALID_GROUP_CHARS to a 
> value of your choice. 
>
>
> -- 
> Kai Stian Olstad 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/6636b7ef-f393-48de-bfa1-24b139580120%40googlegroups.com.


[ansible-project] Not replacing invalid character(s) in group name warning

2019-12-20 Thread Xinhuan Zheng
Hello,

I'm using Ansible group_by for networking specific information 
configuration. I need to use group_by to collect data values from my 
group_var/subnet_ file.  My group_vars/subnet_192.168.101.0 
matches group_by key. Here is my playbook:

  - name: Group system by subnet
    group_by:
      key: subnet_{{ ansible_default_ipv4.network }}

While I run my playbook, I got this error:

TASK [os-networking : Group system by subnet] 
*
task path: os-networking/tasks/main.yml:24
Not replacing invalid character(s) "set([u'.'])" in group name 
(subnet_192.168.101.0)
[DEPRECATION WARNING]: The TRANSFORM_INVALID_GROUP_CHARS settings is set to 
allow bad characters in group names by default, this
will change, but still be user configurable on deprecation. This feature 
will be removed in version 2.10. Deprecation warnings can
 be disabled by setting deprecation_warnings=False in ansible.cfg.
[WARNING]: Invalid characters were found in group names but not replaced, 
use - to see details

ok: [myserver] => {
    "add_group": "subnet_192.168.101.0",
    "changed": false,
    "parent_groups": [
        "all"
    ]
}

How should I fix this warning?

Thanks,

- Xinhuan

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/cdbb14ec-afab-4e18-b75e-e3154c235bcf%40googlegroups.com.


[ansible-project] How do I assign different variable values for different group system

2019-12-12 Thread Xinhuan Zheng
Hello,

I'm trying to create an Ansible role, networking, to automate 
/etc/resolv.conf file. I created this role like below:

production
networking.yml
group_vars/
  agroup
  bgroup
roles/
  networking/
tasks/main.yml
templates/resolv.conf.j2

In my resolv.conf.j2 file, I put variables in this file:


{% for item in nameservers %}
nameserver {{ item }}
{% endfor %}

The name servers will be different for different group systems. 
In group_vars/agroup:

---

nameservers:
  - 192.168.10.251
  - 192.168.10.252

In group_vars/bgroup:

---

nameservers:
  - 192.168.101.251
  - 192.168.101.252

In playbook networking.yml:

---

- hosts: all
  roles:
    - networking

In production (inventory) file:

[agroup]
myserver

[bgroup]
myserver2

Here is command I want to use for play:

$ ansible-playbook -i production networking.yml -l myserver
$ ansible-playbook -i production  networking.yml -l myserver2

Does above play use agroup defined nameservers for myserver, and bgroup 
defined nameservers for myserver2?

Thank you,

- Xinhuan

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/dcbd0424-dd54-4fe5-8de7-940a470c0476%40googlegroups.com.


[ansible-project] Re: Define global variables in ansible group_vars/all file

2019-12-06 Thread Xinhuan Zheng
Hello All,

I just figured out myself and I want to post it for sharing with other who 
would have similar issue like myself.

The issue is what variable names I use in group_vars/all file.

remote_user, become, and become_method are not variables that inventory 
recognizes; they are playbook keywords. They are recognized in a playbook but 
not in inventory. Although I did not put those variables in the inventory file 
itself, I believe files under group_vars/ count as inventory variables, which 
is why those names were ignored there.

The correct names when putting in group_vars/all file are:

---
# group_vars/all

ansible_user: ansible
ansible_become: true
ansible_become_method: sudo
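
For comparison, remote_user/become/become_method are still valid as play 
keywords, so the same effect can also be had in the playbook itself (a sketch; 
the os-issue role is from my earlier post):

---
- hosts: all
  remote_user: ansible
  become: yes
  become_method: sudo
  roles:
    - role: os-issue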

- Xinhuan Zheng

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/865c9020-02b8-4e0a-a798-57fb9e01adfb%40googlegroups.com.


Re: [ansible-project] Define global variables in ansible group_vars/all file

2019-12-06 Thread Xinhuan Zheng
Hello Dick,

Yes, I added -b option. That worked fine.

$ ansible all -i production  -u ansible -l mygroup -a "uptime" -b
myserver | CHANGED | rc=0 >>  
  12:26:39 up 11 days,  2:40,  2 users,  load average: 0.00, 0.01, 0.05

On Friday, December 6, 2019 at 12:19:50 PM UTC-5, Dick Visser wrote:
>
> And if you add the ‘-b’ option to that?
>
> On Fri, 6 Dec 2019 at 18:15, Xinhuan Zheng  > wrote:
>
>> Hello Alicia,
>>
>> I just ran ad-hoc command with -u ansible parameter like below:
>>
>> $ ansible all -i production  -u ansible -l mygroup -a "uptime"
>> myserver | CHANGED | rc=0 >>
>>  12:13:22 up 11 days,  2:26,  2 users,  load average: 0.00, 0.02, 0.05
>>
>> ansible user is defined on myserver, and it is in sudoers file in wheel 
>> group without password required.
>>
>> On Friday, December 6, 2019 at 12:10:05 PM UTC-5, alicia wrote:
>>>
>>> I don’t think the failure is related to using or not using “sudo”. The 
>>> playbook failed because Ansible could not connect to the remote machine. 
>>>
>>> The error message:
>>>
>>> fatal: [myserver]: UNREACHABLE! => {"changed": false, "msg": "*Failed 
>>> to connect to the host via ssh*: 
>>> \n|Permission 
>>> denied (publickey,password,keyboard-
>>> interactive).", "unreachable": true}
>>>
>>> tells you that Ansible cannot connect to ‘myserver’ over SSH. If you try 
>>> to connect to the target machine using SSH and the user ‘ansible’ from the 
>>> command line, does that work? Do you have to type in a password? If you’re 
>>> using SSH keys, does the ‘ansible’ user have permission to access the 
>>> correct key?
>>>
>>> See 
>>> https://docs.ansible.com/ansible/latest/user_guide/connection_details.html#ssh-key-setup
>>>  for 
>>> information on setting up SSH keys. 
>>>
>>> Hope this helps point you in the right direction.
>>>
>>> Alicia
>>>
>>> On Dec 6, 2019, at 10:44 AM, Xinhuan Zheng  wrote:
>>>
>>> Hello,
>>>
>>>
>>>> It's not necessary to use the "vars:" directive in the files. 
>>>> See "Organizing host and group variables" 
>>>>
>>>> https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#organizing-host-and-group-variables
>>>>  
>>>>
>>>> # group_vars/all 
>>>> remote_user: ansible 
>>>> become: yes 
>>>> become_method: sudo 
>>>>
>>>> Cheers, 
>>>>
>>>> -vlado 
>>>>
>>>
>>> I changed per advice.  Here is my changed file:
>>>
>>> ---
>>> # group_vars/all
>>>
>>> remote_user: ansible
>>> become: yes
>>> become_method: sudo
>>>
>>> However, this doesn't work either. I got same Permission Denied error 
>>> like my previous run
>>>
>>> - Xinhuan Zheng
>>>
>>> -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "Ansible Project" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to ansible...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/ansible-project/c0f93f0f-315a-47f5-a645-17a35bd7ae82%40googlegroups.com
>>>  
>>> <https://groups.google.com/d/msgid/ansible-project/c0f93f0f-315a-47f5-a645-17a35bd7ae82%40googlegroups.com?utm_medium=email_source=footer>
>>> .
>>>
>>>
>>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Ansible Project" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to ansible...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/ansible-project/44684370-2c51-468b-9165-feb6ec743eca%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/ansible-project/44684370-2c51-468b-9165-feb6ec743eca%40googlegroups.com?utm_medium=email_source=footer>
>> .
>>
> -- 
> Sent from a mobile device - please excuse the brevity, spelling and 
> punctuation.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/595e0e94-ee0a-4f81-b7cb-9de6c34b366f%40googlegroups.com.


Re: [ansible-project] Define global variables in ansible group_vars/all file

2019-12-06 Thread Xinhuan Zheng
Hello Alicia,

I just ran ad-hoc command with -u ansible parameter like below:

$ ansible all -i production  -u ansible -l mygroup -a "uptime"
myserver | CHANGED | rc=0 >>
 12:13:22 up 11 days,  2:26,  2 users,  load average: 0.00, 0.02, 0.05

ansible user is defined on myserver, and it is in sudoers file in wheel 
group without password required.

On Friday, December 6, 2019 at 12:10:05 PM UTC-5, alicia wrote:
>
> I don’t think the failure is related to using or not using “sudo”. The 
> playbook failed because Ansible could not connect to the remote machine. 
>
> The error message:
>
> fatal: [myserver]: UNREACHABLE! => {"changed": false, "msg": "*Failed to 
> connect to the host via ssh*: 
> \n|Permission 
> denied (publickey,password,keyboard-
> interactive).", "unreachable": true}
>
> tells you that Ansible cannot connect to ‘myserver’ over SSH. If you try 
> to connect to the target machine using SSH and the user ‘ansible’ from the 
> command line, does that work? Do you have to type in a password? If you’re 
> using SSH keys, does the ‘ansible’ user have permission to access the 
> correct key?
>
> See 
> https://docs.ansible.com/ansible/latest/user_guide/connection_details.html#ssh-key-setup
>  for 
> information on setting up SSH keys. 
>
> Hope this helps point you in the right direction.
>
> Alicia
>
> On Dec 6, 2019, at 10:44 AM, Xinhuan Zheng  > wrote:
>
> Hello,
>
>
>> It's not necessary to use the "vars:" directive in the files. 
>> See "Organizing host and group variables" 
>>
>> https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#organizing-host-and-group-variables
>>  
>>
>> # group_vars/all 
>> remote_user: ansible 
>> become: yes 
>> become_method: sudo 
>>
>> Cheers, 
>>
>>     -vlado 
>>
>
> I changed per advice.  Here is my changed file:
>
> ---
> # group_vars/all
>
> remote_user: ansible
> become: yes
> become_method: sudo
>
> However, this doesn't work either. I got same Permission Denied error like 
> my previous run
>
> - Xinhuan Zheng
>
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Ansible Project" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to ansible...@googlegroups.com .
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/ansible-project/c0f93f0f-315a-47f5-a645-17a35bd7ae82%40googlegroups.com
>  
> <https://groups.google.com/d/msgid/ansible-project/c0f93f0f-315a-47f5-a645-17a35bd7ae82%40googlegroups.com?utm_medium=email_source=footer>
> .
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/44684370-2c51-468b-9165-feb6ec743eca%40googlegroups.com.


Re: [ansible-project] Define global variables in ansible group_vars/all file

2019-12-06 Thread Xinhuan Zheng
Hello,


> It's not necessary to use the "vars:" directive in the files. 
> See "Organizing host and group variables" 
>
> https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#organizing-host-and-group-variables
>  
>
> # group_vars/all 
> remote_user: ansible 
> become: yes 
> become_method: sudo 
>
> Cheers, 
>
> -vlado 
>

I changed per advice.  Here is my changed file:

---
# group_vars/all

remote_user: ansible
become: yes
become_method: sudo

However, this doesn't work either. I got the same Permission Denied error as 
in my previous run.

- Xinhuan Zheng

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/c0f93f0f-315a-47f5-a645-17a35bd7ae82%40googlegroups.com.


[ansible-project] Define global variables in ansible group_vars/all file

2019-12-06 Thread Xinhuan Zheng
Hello,

I followed Ansible best practices to create my ansible automation directory 
structure like the following:

group_vars
  group_vars/all
host_vars
os.yml
production
roles
  roles/os-issue

Since I'm using the ansible user as the remote user, with the sudo method to 
escalate its privileges globally, I want to define this in the group_vars/all 
file. Here is what I define in the group_vars/all file:

---
# group_vars/all

vars:
  - remote_user: ansible
  - become: yes
  - become_method: sudo

However, when I ran my playbook, I'm still getting Permission Denied error:

ansible-playbook -i production os.yml -l Cluster1 -v

Using /etc/ansible/ansible.cfg as config file

PLAY [all] 


TASK [Gathering Facts] 

fatal: [myserver]: UNREACHABLE! => {"changed": false, "msg": "Failed to 
connect to the host via ssh: 
\n|Permission 
denied (publickey,password,keyboard-interactive).", "unreachable": true}

PLAY RECAP 

myserver   : ok=0changed=0unreachable=1failed=0
skipped=0rescued=0ignored=0

Here is my os.yml playbook:

---
# file: os.yml
# This playbook file is to configure operating system after kickstarting

- hosts: all
  roles:
- role: os-issue


It is to set /etc/issue file for remote myserver.

Please advise me how I can define global variables in group_vars/all file.

Thank you,

- Xinhuan Zheng

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/43c05320-4326-4a0a-9fef-fd54f1d9bb3f%40googlegroups.com.


[CentOS] How to dump/restore a CentOS 7 system

2019-09-25 Thread Xinhuan Zheng
Hello All,

I guess it is very common, for administrative purposes, to dump and restore a 
CentOS 7 system. I usually use the dump/restore commands. However, I'm having 
trouble installing the bootloader and creating the initramfs for a C7 system. 
Does anyone know a good document source that details those procedures?

Thank you,

Xinhuan Zheng
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] CentOS 5 file system read only issue

2019-08-21 Thread Xinhuan Zheng
Hello Everyone,

We are using CentOS 5 systems for a certain application. They are VM guests
running in VMware. Occasionally there is a datastore issue, causing all file
systems to become read-only. The application then stops working, and
open files cannot be written either. We cannot even ssh into the
system; typically we have to power-cycle the VM. We are trying to add
reliability to the application so that if files cannot be written, the
application should time out. We are trying to use IO::Select to handle the
timeout. Per investigation, we found that the code below does not work as expected:

my $s = IO::Select->new( $fh );
if ( $s->can_write( 10 ) ) {
  # print to the file
}


It seems that can_write returns true even when we manually made the file
system read-only in our test case.

Is this something we can accomplish using the select system call with a
timeout value?

Thanks in advance,

- Xinhuan Zheng 

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[389-users] Re: How to invalidate local cache after user changed their password

2019-02-28 Thread xinhuan zheng
Hello Mr. Brown,

Thanks for your input.

Here is my config, trimmed for public email:

[domain/MYLDAP]
auth_provider = ldap
cache_credentials = true
dns_resolver_timeout = 5
entry_cache_timeout = 300
enumerate = true
id_provider = ldap
ldap_id_use_start_tls = true

[domain/LOCAL]
auth_provider = local
enumerate = true
id_provider = local

[nss]
entry_cache_nowait_percentage = 75
entry_cache_timeout = 300
reconnection_retries = 3

[pam]
debug_level = 5
offline_credentials_expiration = 2
offline_failed_login_attempts = 3
offline_failed_login_delay = 5
reconnection_retries = 3

[sssd]
config_file_version = 2
debug_level = 5
domains = LOCAL,MYLDAP
reconnection_retries = 3
sbus_timeout = 30
services = nss,pam,ssh
- Xinhuan
On Wednesday, February 27, 2019, 7:42:38 PM EST, William Brown 
 wrote:  
 
 

> On 28 Feb 2019, at 05:22, xinhuan zheng  wrote:
> 
> Hello,
> 
> I have been struggling with this problem for a while. When a user changed 
> their password, our 389 directory servers received new password and saved 
> into directory server. However, when user tries to login to a server whose 
> authentication is using 389 directory server, their new password won't work 
> for the first few minutes. There is a local cache process, sssd, running on 
> the server the user tries to login. Apparently sssd is still using old 
> password information, and does not know password has changed on directory 
> servers. I have set sssd to keep cache information for 5 minutes only, and do 
> pre-fetch prior to cache information expiring. But I don't know how to tell 
> sssd to ignore cache completely when information has changed on 389 directory 
> server side. 
> 
> Is there a way to completely disable sssd local cache, and only use it when 
> 389 directory servers are not available?

I’ve never seen SSSD behave like this - but I would also believe it to be true.

My SSSD configuration has an extremely low cache timeout for avoiding this 
issue. 


[domain/blackhats.net.au]
ignore_group_members = False
entry_cache_group_timeout = 60

cache_credentials = True
id_provider = ldap
auth_provider = ldap
access_provider = ldap
chpass_provider = ldap

ldap_referrals = False

ldap_access_order = filter, expire

[nss]
memcache_timeout = 60


I’ve trimmed the config (obviously) for email. 

Can you provide your SSSD config to me to examine to see if I can spot any 
issues? 


> 
> Thank you,
> 
> - Xinhuan
> ___
> 389-users mailing list -- 389-users@lists.fedoraproject.org
> To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
> Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives: 
> https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org

—
Sincerely,

William Brown
Software Engineer, 389 Directory Server
SUSE Labs
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
  ___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org


[389-users] How to invalidate local cache after user changed their password

2019-02-27 Thread xinhuan zheng
Hello,
I have been struggling with this problem for a while. When a user changed their 
password, our 389 directory servers received new password and saved into 
directory server. However, when user tries to login to a server whose 
authentication is using 389 directory server, their new password won't work for 
the first few minutes. There is a local cache process, sssd, running on the 
server the user tries to login. Apparently sssd is still using old password 
information, and does not know password has changed on directory servers. I 
have set sssd to keep cache information for 5 minutes only, and do pre-fetch 
prior to cache information expiring. But I don't know how to tell sssd to 
ignore cache completely when information has changed on 389 directory server 
side. 
Is there a way to completely disable sssd local cache, and only use it when 389 
directory servers are not available?
Thank you,
- Xinhuan
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org


Re: [Openvas-discuss] OpenVAS HTTP test OPTIONS requests

2018-08-22 Thread Xinhuan Zheng
Hi Christian,

For some reason, our target host returns content as if it were receiving
GET requests, instead of returning an Allow: header. I thought a redirect
might cause that. I have to figure out how to change the target host
configuration to disable OPTIONS requests.
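
In case it helps others: assuming the target host runs Apache httpd (an
assumption on my part; adjust for your web server), OPTIONS requests can be
rejected with a mod_rewrite rule like the following sketch:

  RewriteEngine On
  # Return 403 Forbidden for any HTTP OPTIONS request
  RewriteCond %{REQUEST_METHOD} ^OPTIONS$
  RewriteRule .* - [F,L]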
Thanks,

- xinhuan

On 8/22/18, 12:43 PM, "Christian Fischer"
 wrote:

>Hi,
>
>On 17.08.2018 18:08, Xinhuan Zheng wrote:
>> Hello,
>> 
>> In our recent OpenVAS scan, our host has HTTP service running so the
>> scanning software tests a lot of URLs. However, in the target host
>>access
>> log, we saw tons of OPTIONS requests being issued by scanning software.
>> Per some research, OPTIONS is a type of HTTP request that is pre-flight
>>in
>> Cross-origin resource. The normal GET request would return a document
>>with
>> bunch of objects, like json, images, etc. Can I limit OpenVAS not
>>issuing
>> OPTIONS requests?
>> Thank you,
>
>there is no such possibility included in OpenVAS besides excluding the
>NVT(s) doing those OPTIONS requests from your scan configuration.
>
>Could you elaborate why you want to limit OpenVAS not issuing OPTIONS
>requests?
>
>Regards,
>
>--
>
>Christian Fischer | PGP Key: 0x54F3CE5B76C597AD
>Greenbone Networks GmbH | https://www.greenbone.net
>Neumarkt 12, 49074 Osnabrück, Germany | AG Osnabrück, HR B 202460
>Geschäftsführer: Lukas Grunwald, Dr. Jan-Oliver Wagner

___
Openvas-discuss mailing list
Openvas-discuss@wald.intevation.org
https://lists.wald.intevation.org/cgi-bin/mailman/listinfo/openvas-discuss

[Openvas-discuss] OpenVAS HTTP test OPTIONS requests

2018-08-17 Thread Xinhuan Zheng
Hello,

In our recent OpenVAS scan, our host had an HTTP service running, so the
scanning software tested a lot of URLs. However, in the target host's access
log, we saw tons of OPTIONS requests being issued by the scanning software.
Per some research, OPTIONS is a type of HTTP request used as a pre-flight
check in cross-origin resource sharing. A normal GET request would return a
document with a bunch of objects, like JSON, images, etc. Can I stop OpenVAS
from issuing OPTIONS requests?
Thank you,

- Xinhuan Zheng

___
Openvas-discuss mailing list
Openvas-discuss@wald.intevation.org
https://lists.wald.intevation.org/cgi-bin/mailman/listinfo/openvas-discuss

Re: [Openvas-discuss] openvasmd 100% CPU utilization

2018-06-13 Thread Xinhuan Zheng
Daniel,

Since you run nightly scans, check the openvas scheduled cron jobs in /etc/cron.d. 
They probably need to be re-scheduled for a time when the system is quiet, i.e., no scan 
running. If you re-schedule the cron jobs and leave the scans running nightly, 
do you still see the same thing happen?

- Xinhuan

From: Openvas-discuss 
mailto:openvas-discuss-boun...@wald.intevation.org>>
 on behalf of Daniel Bray 
mailto:db...@satcomdirect.com>>
Date: Wednesday, June 13, 2018 at 10:23 AM
To: 
"openvas-discuss@wald.intevation.org<mailto:openvas-discuss@wald.intevation.org>"
 
mailto:openvas-discuss@wald.intevation.org>>
Subject: Re: [Openvas-discuss] openvasmd 100% CPU utilization

Xinhuan,

Thanks for the reply. I’ve done that, daily, and every day after our nightly 
scans run the same thing happens. The scans finish, I come in the next morning 
to review, and I notice the CPU is back up to 100% utilization, and it’s 
openvasmd.


Daniel Bray
Office: +1 321-525-8081
Mobile: +1 321-213-8360

From: Xinhuan Zheng mailto:xzh...@christianbook.com>>
Sent: Wednesday, June 13, 2018 10:21 AM
To: Daniel Bray mailto:db...@satcomdirect.com>>; 
openvas-discuss@wald.intevation.org<mailto:openvas-discuss@wald.intevation.org>
Subject: Re: [Openvas-discuss] openvasmd 100% CPU utilization

Hello Daniel,

It appears the openvasmd process is stuck: it has been placed into the CPU run queue but is 
not able to proceed. Because your system's overall CPU idle is 87.2%, you 
have enough CPU capacity on the system. I think you should kill the current 
openvasmd process and restart the openvas service.

- Xinhuan

From: Openvas-discuss 
mailto:openvas-discuss-boun...@wald.intevation.org>>
 on behalf of Daniel Bray 
mailto:db...@satcomdirect.com>>
Date: Tuesday, June 12, 2018 at 3:56 PM
To: 
"openvas-discuss@wald.intevation.org<mailto:openvas-discuss@wald.intevation.org>"
 
mailto:openvas-discuss@wald.intevation.org>>
Subject: [Openvas-discuss] openvasmd 100% CPU utilization

Recently, I’ve noticed the web interface a bit sluggish. Upon examination of 
the server, I noticed the openvasmd process stuck at 100% CPU. There was no 
active scan going on, and I’m the only one that uses this server. Nothing 
should have been utilizing the CPU like that. Here is some of the specifics I 
noticed:

CentOS 7 (latest patches)
8x vCPU and 16 GB RAM

Results of top:

Tasks: 254 total, 2 running, 252 sleeping, 0 stopped, 0 zombie
%Cpu(s): 5.6 us, 7.2 sy, 0.0 ni, 87.2 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 16249820 total, 8476724 free, 1105044 used, 6668052 buff/cache
KiB Swap: 4063228 total, 4063228 free, 0 used. 14679544 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
69700 root 20 0 445832 152984 1820 R 100.0 0.9 5329:38 openvasmd: Updating


Output from :sudo /usr/bin/openvas-check-setup --v9

Step 2: Checking OpenVAS Manager ...
OK: OpenVAS Manager is present in version 7.0.2.
OK: OpenVAS Manager database found in /var/lib/openvas/mgr/tasks.db.
OK: Access rights for the OpenVAS Manager database are correct.
OK: sqlite3 found, extended checks of the OpenVAS Manager installation enabled.
OK: OpenVAS Manager database is at revision 184.
OK: OpenVAS Manager expects database at revision 184.
OK: Database schema is up to date.
OK: OpenVAS Manager database contains information about 45368 NVTs.
OK: At least one user exists.
OK: OpenVAS SCAP database found in /var/lib/openvas/scap-data/scap.db.
OK: OpenVAS CERT database found in /var/lib/openvas/cert-data/cert.db.
OK: xsltproc found.


/var/log/openvas/openvasmd.log

md omp:WARNING:2018-06-12 02h56.43 utc:37560: Authentication failure for 
'sadmin' from ::
md omp:WARNING:2018-06-12 02h56.46 utc:37567: Authentication failure for 
'admin' from ::
md omp:WARNING:2018-06-12 02h56.47 utc:37583: Authentication failure for 
'admin' from ::
md main:MESSAGE:2018-06-12 16h47.05 utc:68214: OpenVAS Manager version 7.0.2 
(DB revision 184)
md manage: INFO:2018-06-12 16h47.05 utc:68214: Getting users.


Database size:
-rw--- 1 root root 217M Jun  9 01:39 /var/lib/openvas/mgr/tasks.db


I was curious whether openvas-migrate-to-postgres would be a path to fix this 
issue. Nothing in the log files shows any problem, so I’m not really sure 
what openvasmd is stuck “Updating”.

Any suggestions?


Daniel Bray
Office: +1 321-525-8081
Mobile: +1 321-213-8360

___
Openvas-discuss mailing list
Openvas-discuss@wald.intevation.org
https://lists.wald.intevation.org/cgi-bin/mailman/listinfo/openvas-discuss


Re: [Openvas-discuss] No OpenVAS SCAP database found

2018-05-17 Thread Xinhuan Zheng
I created this directory under /var/lib/openvas:
mkdir -p scap-download
I ran greenbone-scapdata-sync and am still getting the same error.
Where is the /scap-download directory supposed to be created?
Thanks,
- xinhuan
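Note that the path in the error ("/scap-download" in the scap-data module) lives on the rsync feed server's side, so the failure may be transient on the mirror; the sketch below (paths assumed from the error output) only rules out problems with the local feed directory:

```shell
# Sketch: confirm the local SCAP feed directory exists and is writable by the
# user running the sync script (normally root).
check_writable() {
  d=$1
  if [ -d "$d" ] && [ -w "$d" ]; then
    echo "ok: $d is writable"
  else
    echo "fix: $d is missing or not writable"
  fi
}

check_writable /var/lib/openvas/scap-data

# If SELinux is enforcing, mislabeled files can also break the sync;
# restoring default contexts is a common fix (assumed policy labels):
#   restorecon -Rv /var/lib/openvas
```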

On 5/17/18, 3:50 PM, "Openvas-discuss on behalf of Reindl Harald"
<openvas-discuss-boun...@wald.intevation.org on behalf of
h.rei...@thelounge.net> wrote:

>
>receiving incremental file list
>rsync: opendir "/scap-download" (in scap-data) failed: Permission denied
>(13)
>IO error encountered -- skipping file deletion
>
>Am 17.05.2018 um 21:48 schrieb Xinhuan Zheng:
>> Hello,
>> 
>> Today when I set up a brand new OpenVAS server on CentOS 7 system, after
>> running openvas-setup, I received below error when logging into GUI:
>> 
>> Warning: SecInfo Database Missing
>> 
>> I ran openvas-check-setup -v9. There is errors:
>> 
>> Step 2: Checking OpenVAS Manager ...
>> OK: OpenVAS Manager is present in version 7.0.2.
>> OK: OpenVAS Manager database found in
>>/var/lib/openvas/mgr/tasks.db.
>> OK: Access rights for the OpenVAS Manager database are correct.
>> OK: sqlite3 found, extended checks of the OpenVAS Manager
>> installation enabled.
>> OK: OpenVAS Manager database is at revision 184.
>> OK: OpenVAS Manager expects database at revision 184.
>> OK: Database schema is up to date.
>> OK: OpenVAS Manager database contains information about 45004
>>NVTs.
>> OK: At least one user exists.
>> ERROR: No OpenVAS SCAP database found. (Tried:
>> /var/lib/openvas/scap-data/scap.db)
>> FIX: Run a SCAP synchronization script like
>>greenbone-scapdata-sync.
>> 
>>  ERROR: Your OpenVAS-9 installation is not yet complete!
>> 
>> However, when I tried to run /usr/sbin/greenbone-scapdata-sync, I
>> received below errors:
>> 
>> #  /usr/sbin/greenbone-scapdata-sync
>> OpenVAS community feed server - http://www.openvas.org/
>> This service is hosted by Greenbone Networks - http://www.greenbone.net/
>> 
>> All transactions are logged.
>> 
>> If you have any questions, please use the OpenVAS mailing lists
>> or the OpenVAS IRC chat. See http://www.openvas.org/ for details.
>> 
>> By using this service you agree to our terms and conditions.
>> 
>> Only one sync per time, otherwise the source ip will be blocked.
>> 
>> receiving incremental file list
>> timestamp
>>  13 100%   12.70kB/s0:00:00 (xfr#1, to-chk=0/1)
>> 
>> sent 43 bytes  received 105 bytes  98.67 bytes/sec
>> total size is 13  speedup is 0.09
>> OpenVAS community feed server - http://www.openvas.org/
>> This service is hosted by Greenbone Networks - http://www.greenbone.net/
>> 
>> All transactions are logged.
>> 
>> If you have any questions, please use the OpenVAS mailing lists
>> or the OpenVAS IRC chat. See http://www.openvas.org/ for details.
>> 
>> By using this service you agree to our terms and conditions.
>> 
>> Only one sync per time, otherwise the source ip will be blocked.
>> 
>> receiving incremental file list
>> rsync: opendir "/scap-download" (in scap-data) failed: Permission denied
>> (13)
>> IO error encountered -- skipping file deletion
>> ./
>> timestamp
>>  13 100%   12.70kB/s0:00:00 (xfr#1, to-chk=102/162)
>> 
>> sent 99 bytes  received 4,236 bytes  2,890.00 bytes/sec
>> total size is 1,866,433,683  speedup is 430,549.87
>> rsync error: some files/attrs were not transferred (see previous errors)
>> (code 23) at main.c(1650) [generator=3.1.2]
>> 
>> Can someone please help?



[Openvas-discuss] No OpenVAS SCAP database found

2018-05-17 Thread Xinhuan Zheng
Hello,

Today when I set up a brand new OpenVAS server on CentOS 7 system, after 
running openvas-setup, I received below error when logging into GUI:

Warning: SecInfo Database Missing

I ran openvas-check-setup -v9. There is errors:

Step 2: Checking OpenVAS Manager ...
OK: OpenVAS Manager is present in version 7.0.2.
OK: OpenVAS Manager database found in /var/lib/openvas/mgr/tasks.db.
OK: Access rights for the OpenVAS Manager database are correct.
OK: sqlite3 found, extended checks of the OpenVAS Manager installation 
enabled.
OK: OpenVAS Manager database is at revision 184.
OK: OpenVAS Manager expects database at revision 184.
OK: Database schema is up to date.
OK: OpenVAS Manager database contains information about 45004 NVTs.
OK: At least one user exists.
ERROR: No OpenVAS SCAP database found. (Tried: 
/var/lib/openvas/scap-data/scap.db)
FIX: Run a SCAP synchronization script like greenbone-scapdata-sync.

 ERROR: Your OpenVAS-9 installation is not yet complete!

However, when I tried to run /usr/sbin/greenbone-scapdata-sync, I received 
below errors:

#  /usr/sbin/greenbone-scapdata-sync
OpenVAS community feed server - http://www.openvas.org/
This service is hosted by Greenbone Networks - http://www.greenbone.net/

All transactions are logged.

If you have any questions, please use the OpenVAS mailing lists
or the OpenVAS IRC chat. See http://www.openvas.org/ for details.

By using this service you agree to our terms and conditions.

Only one sync per time, otherwise the source ip will be blocked.

receiving incremental file list
timestamp
 13 100%   12.70kB/s0:00:00 (xfr#1, to-chk=0/1)

sent 43 bytes  received 105 bytes  98.67 bytes/sec
total size is 13  speedup is 0.09
OpenVAS community feed server - http://www.openvas.org/
This service is hosted by Greenbone Networks - http://www.greenbone.net/

All transactions are logged.

If you have any questions, please use the OpenVAS mailing lists
or the OpenVAS IRC chat. See http://www.openvas.org/ for details.

By using this service you agree to our terms and conditions.

Only one sync per time, otherwise the source ip will be blocked.

receiving incremental file list
rsync: opendir "/scap-download" (in scap-data) failed: Permission denied (13)
IO error encountered -- skipping file deletion
./
timestamp
 13 100%   12.70kB/s0:00:00 (xfr#1, to-chk=102/162)

sent 99 bytes  received 4,236 bytes  2,890.00 bytes/sec
total size is 1,866,433,683  speedup is 430,549.87
rsync error: some files/attrs were not transferred (see previous errors) (code 
23) at main.c(1650) [generator=3.1.2]

Can someone please help?

- xinhuan

[Openvas-discuss] openvas plugin update problem

2018-05-01 Thread Xinhuan Zheng
Hello,

Good morning everyone.

I was trying to update our openvas plugins, i.e., the NVT feeds. I ran the
openvas-nvt-sync command successfully. Then I ran openvasmd --update and
openvasmd --rebuild. Both commands completed without any errors. Then I
restarted the openvas-administrator, openvas-manager, openvas-scanner and gsad
daemons. The plugins appear to be loaded successfully: ~42K plugins are
loaded. However, when I look at my admin GUI, under SecInfo Management ->
NVTs, it only shows ~6K NVTs loaded. Then I tried to scan a host, and the
report shows below:

Total: 0 0 0 0 0 0
Vendor security updates are not trusted.

Overrides are on. When a result has an override, this report uses the
threat of the override.
Notes are included in the report.
This report might not show details of all issues that were found.
It only lists hosts that produced issues.
Issues with the threat level "Debug" are not shown.
This report contains 0 results.


There isn't anything shown in the report. I feel something is wrong and don't
know what it is. Can this be caused by an old version of the software or
something else? I plan to rebuild onto a new scanning server, but would like
to hear your advice about which OS and which current stable version I should
go with?

Thanks,

- xinhuan

 




Re: How to manage various CPAN modules as a package for distribution purpose?

2018-02-01 Thread Xinhuan Zheng
Hello Gonzalo,

Good morning. Thanks for your reply. I just found that perlbrew is a great tool: 
http://search.cpan.org/~gugod/App-perlbrew-0.82/bin/perlbrew. Does it have the 
ability to create a package of the local Perl interpreter and all its 
associated modules for easy distribution? I don't see such a command option.
Thanks,

- xinhuan

From: Gonzalo Barco <gbarc...@gmail.com>
Date: Wednesday, January 31, 2018 at 3:46 PM
To: Xinhuan Zheng <xzh...@christianbook.com>
Cc: cpan-discuss@perl.org
Subject: Re: How to manage various CPAN modules as a package for distribution 
purpose?

I suggest perlbrew.
I know a 300+ line project with 400+ servers to deploy using perlbrew.
Never "misses" the deploy path for libs.

Also check out local::lib and Carton for some specific scenarios you might want 
to support.

Regards,
gonzalo
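For the bundling question, Carton (mentioned above) is the piece that produces a distributable package of module dependencies. A minimal sketch, with illustrative module names and commands taken from Carton's usual workflow:

```shell
# Declare the modules the project needs in a cpanfile (names illustrative).
cat > cpanfile <<'EOF'
requires 'LWP::UserAgent';
requires 'JSON', '>= 2.00';
EOF

# On a build host with network access:
#   carton install      # resolves deps into ./local and writes cpanfile.snapshot
#   carton bundle       # caches the dist tarballs under ./vendor
# Ship the project directory (with vendor/ and cpanfile.snapshot) to targets, then:
#   carton install --deployment --cached   # offline, reproducible install
```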

On Jan 31, 2018 1:18 PM, "Xinhuan Zheng" <xzh...@christianbook.com> wrote:
Dear CPAN,

We are using Perl CPAN modules a lot. From time to time we need to build 
various CPAN modules for our various projects. However, we want a way to 
bundle them into one package so that the package can be easily distributed 
into various environments. What is the best way to manage CPAN modules as a 
package? Note we are not using ActivePerl. We use the Perl distributed with 
our Unix/Linux platforms.

Another related question: if a particular CPAN module we need requires a 
specific Perl version, we normally need to install a local version of Perl 
instead of using the Perl from the OS distribution. What is the best way of 
managing two versions of Perl on the same host? Having two versions would mean 
there might be two cpan paths, and Perl modules being installed could go to 
the wrong Perl library path.

Thanks,

- xinhuan



[CentOS] What is best way of managing isolated network environment?

2018-01-31 Thread Xinhuan Zheng
Hello,

We need to manage isolated network environments: even though a host name and 
IP address could be the same in two environments, the environments are 
isolated from each other, so that is not a problem. However, it is very 
challenging to build a server in such an isolated environment. For example, we 
can kickstart a server in the non-isolated network environment, but the 
isolated one has no kickstart server, and adding one would mean consuming more 
resources just for building servers. What is the best way of managing such 
isolated environments in terms of easily building servers?
Thank you,

- xinhuan
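One way to avoid standing up a kickstart server inside each enclave is to carry the kickstart file on the boot media itself. A minimal sketch (file contents illustrative; the boot option shown is the EL7 syntax, while EL6 uses ks= instead of inst.ks=):

```shell
# Write a minimal kickstart file onto a USB stick or a second virtual disk.
cat > ks.cfg <<'EOF'
install
lang en_US.UTF-8
rootpw --plaintext changeme
reboot
EOF

# Then boot the stock installer ISO and point it at the local file at the
# boot prompt, so no network kickstart server is needed:
#   inst.ks=hd:sdb1:/ks.cfg
```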
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] Sendmail is considered deprecated

2017-03-31 Thread Xinhuan Zheng
Hello,

Today I searched the Red Hat official portal and learned that Sendmail is 
considered deprecated. By default, CentOS 7 uses Postfix as the MTA. I need 
good advice on what this means for us. We are CentOS users and have used that 
operating system for quite a few years. We have relied on Sendmail for years 
to relay large quantities of email to our customers for marketing purposes. We 
also build additional fallback servers for fallback relays, and we maintain 
our own customized Sendmail configuration. I really need help figuring out 
whether we can continue using Sendmail (even deprecated) for the long term, 
and what the implications of doing so would be.
Thanks,

- xinhuan
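"Deprecated" in RHEL/CentOS 7 means Sendmail is no longer the default, not that it is gone: it is still packaged, and the alternatives system selects which MTA owns /usr/sbin/sendmail. A sketch of switching back (package and unit names as shipped in CentOS 7):

```shell
# Install sendmail and make it the active MTA instead of postfix:
#   yum install sendmail sendmail-cf
#   systemctl stop postfix && systemctl disable postfix
#   alternatives --set mta /usr/sbin/sendmail.sendmail
#   systemctl enable sendmail && systemctl start sendmail

# Show which MTA the alternatives system currently points at:
alternatives --display mta 2>/dev/null || echo "alternatives not available on this host"
```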


Postfix for bulk email and TLS

2017-03-31 Thread Xinhuan Zheng
Hello,

Does anyone on the postfix mailing list have experience using Postfix for 
sending bulk email with TLS encryption? Can you share your experience with me? 
The amount of bulk email is quite large, normally for marketing purposes. In 
the past we have been using the sendmail bundled with CentOS 6 for a few years 
already. But we need to upgrade our system, and one new requirement is to use 
TLS. So I’m planning on using the recent CentOS 7 operating system. But as I 
look at its repository, postfix and openssl appear to be old versions.
Thanks,

- xinhuan
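For outbound TLS, recent Postfix only needs a few main.cf parameters. The sketch below writes them to a scratch file to merge into /etc/postfix/main.cf; parameter names are from postconf(5), and the CA bundle path is the usual CentOS location and may differ on your system:

```shell
# Opportunistic TLS: encrypt when the receiving server offers STARTTLS and
# fall back to cleartext otherwise -- usually the right choice for bulk mail,
# since a mandatory "encrypt" level would bounce mail to domains without TLS.
cat > main.cf.tls <<'EOF'
smtp_tls_security_level = may
smtp_tls_CAfile = /etc/pki/tls/certs/ca-bundle.crt
smtp_tls_loglevel = 1
EOF

# Apply on the relay, then reload:
#   while IFS= read -r line; do postconf -e "$line"; done < main.cf.tls
#   systemctl reload postfix
```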


[389-users] Re: passwordexpirationtime question

2017-02-28 Thread xinhuan zheng
passwordMaxAge can be expressed in days. I set it to 60 (days) before and it 
did work as expected. The only thing that blocks me is when a password needs 
to change. My hope was that, upon the user being prompted to change the 
password and doing so, passwordexpirationtime would be changed accordingly to 
the current time + passwordMaxAge, but that didn't happen automatically. I 
have found that I must set passwordmustchange to off and set 
passwordexpirationtime to 1970010100Z (time 0). Once that step is done, the 
next time the user logs in, passwordexpirationtime is set to the new, correct 
time.

That would mean every user changing a password would need administrative 
intervention. That seems not right. What would be a better way to handle 
passwordexpirationtime?
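The reset described above can be sketched with ldapmodify; the DN is illustrative, and 19700101000000Z is the epoch written as a full LDAP generalized time:

```shell
# Build the modify that forces the expiration back to time 0 so the next
# successful bind recomputes it from the policy.
cat > reset-exp.ldif <<'EOF'
dn: uid=someuser,ou=people,dc=example,dc=com
changetype: modify
replace: passwordexpirationtime
passwordexpirationtime: 19700101000000Z
EOF

# Apply as the directory manager:
#   ldapmodify -x -D "cn=directory manager" -W -f reset-exp.ldif
```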
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org


[389-users] passwordexpirationtime question

2017-02-28 Thread xinhuan zheng
Hello,

I have setup password policy for user account to enforce a few things:

passwordchange: on
passwordchecksyntax: on
passwordexp: on
passwordlockout: on
passwordlockoutduration: 180
passwordmaxage: 7
passwordmaxfailure: 3
passwordmustchange: on
passwordwarning: 518400

With that policy on a user account, I changed one user's password from the 389 
console. That basically resets the user's password.

When the user logs in, the user gets the "Password expired. Change your 
password now." prompt. The user goes through the prompt to change the 
password, gets a login shell successfully, and then logs out.

The next time the user logs in, the user still gets the "Password expired. 
Change your password now." prompt. It appears the 'passwordexpirationtime' 
attribute is set to the very first time the user changed the password, but is 
never set to the password change time + 7 days, as the policy is configured.

What went wrong in my procedure? How do I get passwordexpirationtime set to 
the correct time when users change their password after an administrative 
reset?

Thanks,
- xinhuan


[389-users] Re: "Manage Certificate" task gives an error

2016-09-30 Thread xinhuan zheng
I found that the admin console is using wrong host ip. I must use ldapmodify 
command to change admin config then restart admin-server


[389-users] "Manage Certificate" task gives an error

2016-09-30 Thread xinhuan zheng
Hi All,

Today I just found when I click "Manage Certificate" in administration console, 
I got an error. Below is the error message:

An error has occured.

Could not open file (null). File does not exist or filename is invalid. A 
filename that exists in the server security directory must be specified. 
Absolute or relative paths should not be specified.

Does anybody know what's going on?

- xinhuan


[389-users] 389-ds-base upgrade

2016-08-22 Thread xinhuan zheng
Hello,

I received the announcement on Friday about the 389-ds-base upgrade. Below is 
a short excerpt from the email:

---
389 Directory Server 1.3.5.13

The 389 Directory Server team is proud to announce 389-ds-base version 1.3.5.13.

Fedora packages are available from the Fedora 24, 25 and Rawhide repositories.

The new packages and versions are:

389-ds-base-1.3.5.13-1
---

However, since I am using CentOS 6, I don't see the latest package available 
in EPEL. I tried 'yum upgrade 389-ds-base' but I just get the same version as 
my previous installation, not the newer one. What is the right way to upgrade 
on CentOS 6?

Thank you,
- xinhuan
--
389-users mailing list
389-users@lists.fedoraproject.org
https://lists.fedoraproject.org/admin/lists/389-users@lists.fedoraproject.org


[389-users] Re: Managing user password policy problem

2016-06-15 Thread xinhuan zheng
I finally found my problem. Our UIDs start at a lower number, so I had to 
change the minimum UID in system-auth and password-auth from 500 to ours. The 
password policy then worked as expected.


[389-users] Re: Managing user password policy problem

2016-06-15 Thread xinhuan zheng
I found more information today.

First -

I found 
https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/9.0/html/Administration_Guide/account-usability.html,
 so I have added an aci to the oid entry.

dn: oid=1.3.6.1.4.1.42.2.27.9.5.8,cn=features,cn=config
changetype: modify
add: aci
aci: (targetattr != "aci")(version 3.0; acl "Account Usable"; allow (read, 
search, compare, proxy)(groupdn = 
"ldap:///cn=groupname,ou=group,dc=christianbook,dc=com;);)

Next -

I can set my password from the 389 console. Once it's set, passwordexpiration 
becomes '1970...', which means it has expired.

Then I do a bind as myself from the client:

ldapsearch -x -Z  -D "uid=xinhuan,ou=people,dc=christianbook,dc=com"  -W - -b 
'dc=christianbook,dc=com'  pwdpolicysubentry

Below is the response:

# search result
search: 3
result: 53 Server is unwilling to perform
control: 2.16.840.1.113730.3.4.4 false MA==

It appears ldapsearch sees my password has expired, so the server is unwilling 
to respond. However, I can proceed to log in using ssh, despite the 
/var/log/secure messages mentioned before:

Jun 15 12:11:48 dclientdev1 sshd[7894]: pam_sss(sshd:auth): authentication 
failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=localhost user=xinhuan
Jun 15 12:11:48 dclientdev1 sshd[7894]: pam_sss(sshd:auth): received for user 
xinhuan: 12 (Authentication token is no longer valid; new one required)  < 
pam_sss(sshd:auth) got password invalid response from directory server already
Jun 15 12:11:48 dclientdev1 sshd[7894]: Accepted password for xinhuan from ::1 
port 41588 ssh2 < proceed login

Next -

I changed passwordMaxAge to 1 in the policy. Once I logged in, I used the 
'passwd' command to change my password:

$ passwd
Changing password for user xinhuan.
Current Password:
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Since passwordMaxAge is set to 1, the next time I logged in, I got the prompt:

Your password has expired. You have 2 grace login(s) remaining.

below is from /var/log/secure:

Jun 15 12:28:07 dclientdev1 sshd[8000]: pam_sss(sshd:auth): User info message: 
Your password has expired. You have 2 grace login(s) remaining.
Jun 15 12:28:07 dclientdev1 sshd[8000]: pam_sss(sshd:auth): authentication 
success; logname= uid=0 euid=0 tty=ssh ruser= rhost=localhost user=xinhuan

After consuming all grace login(s), I am still able to log in using the 
expired password. /var/log/secure still shows it as an expired password, but I 
can log in.


[389-users] Re: 389 directory server wildcard certificate

2016-06-14 Thread xinhuan zheng
Good Afternoon William,

Yes, it does help a lot. Thanks.

- xinhuan


[389-users] Re: Managing user password policy problem

2016-06-13 Thread xinhuan zheng
Later on I used the command:

/usr/lib64/dirsrv/slapd-cbdds1/ns-newpwpolicy.pl -D "cn=directory manager" -w - 
-U "uid=xinhuan,ou=people,dc=christianbook,dc=com"

The script works fine with below output:

adding new entry "cn=nsPwPolicyContainer,ou=people,dc=christianbook,dc=com"

adding new entry 
"cn=cn\=nsPwPolicyEntry\,uid\=xinhuan\,ou\=people\,dc\=christianbook\,dc\=com,cn=nsPwPolicyContainer,ou=people,dc=christianbook,dc=com"

modifying entry "uid=xinhuan,ou=people,dc=christianbook,dc=com"

modifying entry "cn=config"

However, none of the password policy settings I put into nsPwPolicyEntry worked.

- xinhuan


[389-users] Managing user password policy problem

2016-06-13 Thread xinhuan zheng
Hi All,

I am having difficulty making the managed user password policy work. I want to 
use a local, per-user password policy. Here is the configuration I use:

container configuration -
dn: cn=nsPwPolicyContainer,ou=people,dc=christianbook,dc=com
objectClass: top
objectClass: nsContainer
cn: nsPwPolicyContainer

entry configuration -
dn: 
cn=userPasswordPolicy,cn=nsPwPolicyContainer,ou=people,dc=christianbook,dc=com
cn: userPasswordPolicy
objectclass: top
objectclass: extensibleObject
objectclass: ldapsubentry
objectclass: passwordpolicy
passwordGraceLimit: 3
passwordMustChange: on
passwordChange: on
passwordExp: on
passwordMaxAge: 2
passwordHistory: on
passwordCheckSyntax: on

nsslapd-pwpolicy-local -
dn: cn=config
changetype: modify
replace: nsslapd-pwpolicy-local
nsslapd-pwpolicy-local: on

per-user password policy configuration -
dn: uid=xinhuan,ou=people,dc=christianbook,dc=com
changetype: modify
add: pwdpolicysubentry
pwdpolicysubentry: 
cn=userPasswordPolicy,cn=nsPwPolicyContainer,ou=people,dc=christianbook,dc=com

However, when I did my userpassword reset using the ldapmodify command, I am 
able to log in from the remote client that authenticates against my 389 
directory server without being prompted to change my password the first time I 
log in, which is against the 'passwordMustChange' policy.

The second thing is that I tried to expire my password so I could test 
'passwordExp'. However, when I ran 'passwd -e xinhuan' on the LDAP client, I 
got an error:

Expiring password for user xinhuan.
passwd: Error

What's going on?

Thanks,
- xinhuan


[389-users] Re: 389 directory server wildcard certificate

2016-06-13 Thread xinhuan zheng
Hello William,
Thanks for your valuable information. For SubjectAlternativeNames, the 
alternative names you have shown in the example contain a '-' symbol, like 
'nss-alt.dev.example.com'. Is the '-' symbol required in the server's 
hostname? We don't use that hostname naming convention; we use something like 
'nssdev1.example.com' and 'nssdev2.example.com'. So if I purchase a 
'nssdev.example.com' SubjectAlternativeNames certificate, would it work for 
'nssdev1.example.com' and 'nssdev2.example.com'?
- xinhuan
From: William Brown <wibr...@redhat.com>
To: General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org>
Sent: Sunday, June 12, 2016 5:22 PM
Subject: [389-users] Re: 389 directory server wildcard certificate
On Sun, 2016-06-12 at 16:39 +0000, xinhuan zheng wrote:
> I need to deploy multiple 389 directory server instances into production 
> environment. I want to know if 389 directory server
> supports wildcard server certificate. Currently the subject for my instance 
> is:
> 
> Subject: "CN=dmdev1.christianbook.com,OU=389 Directory Server"
> 
> When using wildcard, it will be:
> 
> Subject: "CN=*.christianbook.com,OU=389 Directory Server"

Yes.

> 
> Is it possible?
> 
> I guess GoDaddy might be able to support wildcard certificate but I am not 
> sure. Does anyone know about it?

No sorry. Wild cards cost a lot.


I would recommend a better approach. NSS supports SAN (SubjectAlternativeNames) 
on certs.

So you make a cert with:

certutil -R -f pwdfile.txt -d . -t "C,," -x -n "Server-Cert" -g 2048\
-s "CN=nss.dev.example.com,O=Testing,L=example,ST=Queensland,C=AU" \
-8 "nss.dev.example.com,nss-alt.dev.example.com" -o nss.dev.example.com.csr

This certificate once signed would be useable with:

* nss.dev.example.com
* nss-alt.dev.example.com

There's no real limit to how many alternative names you can have, but it's a 
good idea to plan your deployment so you don't have
to keep re-issuing these when you request more certs.

Remember, this still needs signing so you would need to send the .csr to your CA
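Before sending the .csr off, it is worth confirming the SAN extension actually made it into the request. As a sketch, the check below generates a throwaway CSR with openssl purely to have something to inspect (the -addext flag needs OpenSSL 1.1.1 or later); a certutil-generated CSR can be inspected the same way:

```shell
# Generate a throwaway key + CSR carrying the two SAN names from the example.
openssl req -new -newkey rsa:2048 -nodes -keyout san.key -out san.csr \
  -subj "/O=Testing/CN=nss.dev.example.com" \
  -addext "subjectAltName=DNS:nss.dev.example.com,DNS:nss-alt.dev.example.com"

# Print the SAN section of the request to verify both names are present.
openssl req -in san.csr -noout -text | grep -A1 'Subject Alternative Name'
```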


I hope that helps you,

> 
> Thanks,
> - xinhuan
> --
> 389-users mailing list
> 389-users@lists.fedoraproject.org
> https://lists.fedoraproject.org/admin/lists/389-users@lists.fedoraproject.org

-- 
Sincerely,

William Brown
Software Engineer
Red Hat, Brisbane





[389-users] 389 directory server wildcard certificate

2016-06-12 Thread xinhuan zheng
I need to deploy multiple 389 directory server instances into production 
environment. I want to know if 389 directory server supports wildcard server 
certificate. Currently the subject for my instance is:

Subject: "CN=dmdev1.christianbook.com,OU=389 Directory Server"

When using wildcard, it will be:

Subject: "CN=*.christianbook.com,OU=389 Directory Server"

Is it possible?

I guess GoDaddy might be able to support wildcard certificate but I am not 
sure. Does anyone know about it?

Thanks,
- xinhuan


[389-users] Create Certificate Signing Request File

2016-04-20 Thread xinhuan zheng
Hello,

I need to create a certificate signing request file that can be sent to 
certificate authority vendors like GoDaddy. I have two questions:

1) The certutil command line outputs a CSR file which has a different format 
from the CSR file generated using the 389-console GUI. The main difference is 
that certutil generates something like:

Certificate request generated by Netscape certutil
Phone: xxx-xxx-

Common Name: 
Email: (not specified)
Organization: my organization
State: ...
Country: US

Following the above is the "BEGIN NEW CERTIFICATE" section.

However, with the GUI, only the "BEGIN NEW CERTIFICATE" section is there.

Why do the two methods generate different output files? Will it be OK to send 
the vendor just the certutil output's "BEGIN NEW CERTIFICATE" section?

2) Do I also need to create a certificate signing request file for each admin 
server? Is the procedure the same as for the directory server instance?

Thanks,
- xinhuan
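On question 1: the extra header is human-readable commentary that certutil prepends; CAs parse only the PEM block, so stripping everything outside the BEGIN/END markers is safe. A sketch (marker text assumed to be certutil's usual "NEW CERTIFICATE REQUEST" form; check your file's exact markers):

```shell
# Keep only the PEM request block from a certutil-generated CSR file.
extract_pem() {
  sed -n '/^-----BEGIN NEW CERTIFICATE REQUEST-----$/,/^-----END NEW CERTIFICATE REQUEST-----$/p' "$1"
}

# Demonstrated on a stand-in file shaped like the certutil output quoted above:
cat > sample.csr <<'EOF'
Certificate request generated by Netscape certutil
Common Name: example
-----BEGIN NEW CERTIFICATE REQUEST-----
MIIB...base64 payload...
-----END NEW CERTIFICATE REQUEST-----
EOF
extract_pem sample.csr
```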


[389-users] Re: 389 directory server console and httpd.worker process

2016-04-13 Thread xinhuan zheng
With that explanation: if I install the console on a different server from the one 
the directory server instance runs on, while the Admin Server (http) is installed 
on the same server as the directory server, is that possible? In other words, can 
the console be installed separately on another server, so that one console can 
control multiple directory server instances?

- xinhuan

[389-users] 389 directory server console and httpd.worker process

2016-04-13 Thread xinhuan zheng
I want to understand more about 389 directory server. There is an administrative 
console, 389-console, which appears to be a complete GUI written in Java. There is 
also another process, httpd.worker. When I launch 389-console, I need to type in 
three pieces of information: the administrative cn, the bind password, and the URL that 
httpd.worker is listening on. How does the GUI console interact with 
httpd.worker? Who submits the requests to the directory server instance, 
the 389 GUI console or httpd.worker? Why does it need two separate processes to 
interact with the directory server? Is there a diagram describing this interaction 
so I can visualize it?

- xinhuan

[389-users] Re: Create 389 directory server secure connections

2016-04-12 Thread xinhuan zheng
Hello Mr. Brown,

I found that the procedure you gave me addressed part of my problem. I ended up 
running the remove-ds-admin.pl command and then re-creating my directory server instance 
and admin server instance. Luckily I had kept the answers file and the ldif data. I also 
re-ran the setupssl2.sh script; since this time I didn't mess up the 
environment variables, it went through successfully. I also went through 
your procedure and successfully ran the ldapsearch command. However, how do I 
know whether "ldapsearch -x -L -b 'dc=christianbook,dc=com'" uses a secure connection or 
not? I tailed my access log, and it shows something like below:

[12/Apr/2016:21:24:47 -0400] conn=46 op=0 EXT oid="1.3.6.1.4.1.1466.20037" 
name="startTLS"
[12/Apr/2016:21:24:47 -0400] conn=46 op=0 RESULT err=0 tag=120 nentries=0 
etime=0
[12/Apr/2016:21:24:47 -0400] conn=46 op=-1 fd=64 closed - Peer does not 
recognize and trust the CA that issued your certificate.
[12/Apr/2016:21:26:46 -0400] conn=47 fd=64 slot=64 connection from ::1 to ::1
[12/Apr/2016:21:26:46 -0400] conn=47 op=0 BIND dn="" method=128 version=3
[12/Apr/2016:21:26:46 -0400] conn=47 op=0 RESULT err=0 tag=97 nentries=0 
etime=0 dn=""
[12/Apr/2016:21:26:46 -0400] conn=47 op=1 SRCH base="dc=christianbook,dc=com" 
scope=2 filter="(objectClass=*)" attrs=ALL
[12/Apr/2016:21:26:46 -0400] conn=47 op=1 RESULT err=0 tag=101 nentries=103 
etime=0 notes=U
[12/Apr/2016:21:26:46 -0400] conn=47 op=2 UNBIND
[12/Apr/2016:21:26:46 -0400] conn=47 op=2 fd=64 closed - U1

It did say "Peer does not recognize and trust the CA that issued your 
certificate." However, it also returned 103 entries for the ldapsearch, which I don't 
understand. Was the secure connection successfully established, or did it 
actually fall back to a non-secure connection?

Thanks,
- xinhuan

[389-users] Re: Create 389 directory server secure connections

2016-04-11 Thread xinhuan zheng
Hello Mr. Brown,

I used the ldapsearch command below:

ldapsearch  -d 5 -H ldaps://labd1.christianbook.com -x -D "cn=Directory 
Manager" -w** -s base -b "" objectclass=*

I got the result below:

ldap_url_parse_ext(ldaps://labd1.christianbook.com)
ldap_create
ldap_url_parse_ext(ldaps://labd1.christianbook.com:636/??base)
ldap_sasl_bind
ldap_send_initial_request
ldap_new_connection 1 1 0
ldap_int_open_connection
ldap_connect_to_host: TCP labd1.christianbook.com:636
ldap_new_socket: 3
ldap_prepare_socket: 3
ldap_connect_to_host: Trying 192.168.13.26:636
ldap_pvt_connect: fd: 3 tm: -1 async: 0
TLS: certdb config: configDir='/etc/openldap/certs' tokenDescription='ldap(0)' 
certPrefix='' keyPrefix='' flags=readOnly
TLS: cannot open certdb '/etc/openldap/certs', error -8018:Unknown PKCS #11 
error.
TLS: certificate [CN=CAcert] is not valid - error -8172:Peer's certificate 
issuer has been marked as not trusted by the user..
TLS: error: connect - force handshake failure: errno 0 - moznss error -8172
TLS: can't connect: TLS error -8172:Peer's certificate issuer has been marked 
as not trusted by the user..
ldap_err2string
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

I ran setupssl2.sh twice; the first time there were errors in the 
error log, but the second time there didn't appear to be any:

[07/Apr/2016:13:21:37 -0400] - Warning: Adding configuration attribute 
"nsslapd-security"
[07/Apr/2016:13:21:37 -0400] - The change of nsslapd-secureport will not take 
effect until the server is restarted
[07/Apr/2016:13:23:55 -0400] - slapd shutting down - signaling operation threads
[07/Apr/2016:13:23:55 -0400] - slapd shutting down - closing down internal 
subsystems and plugins
[07/Apr/2016:13:23:55 -0400] - Waiting for 4 database threads to stop
[07/Apr/2016:13:23:56 -0400] - All database threads now stopped
[07/Apr/2016:13:23:56 -0400] - slapd stopped.
[07/Apr/2016:13:24:17 -0400] - SSL alert: Security Initialization: Can't find 
certificate (Server-Cert) for family cn=RSA,cn=encryption,cn=config (Netscape 
Portable Runtime error -8174 - security library: bad database.)
[07/Apr/2016:13:24:17 -0400] - SSL alert: Security Initialization: Unable to 
retrieve private key for cert Server-Cert of family 
cn=RSA,cn=encryption,cn=config (Netscape Portable Runtime error -8174 - 
security library: bad database.)
[07/Apr/2016:13:24:17 -0400] - SSL failure: None of the cipher are valid
[07/Apr/2016:13:24:17 -0400] - ERROR: SSL Initialization phase 2 Failed.
[07/Apr/2016:13:33:11 -0400] - SSL alert: Security Initialization: Can't find 
certificate (Server-Cert) for family cn=RSA,cn=encryption,cn=config (Netscape 
Portable Runtime error -8174 - security library: bad database.)
[07/Apr/2016:13:33:11 -0400] - SSL alert: Security Initialization: Unable to 
retrieve private key for cert Server-Cert of family 
cn=RSA,cn=encryption,cn=config (Netscape Portable Runtime error -8174 - 
security library: bad database.)
[07/Apr/2016:13:33:11 -0400] - SSL failure: None of the cipher are valid
[07/Apr/2016:13:33:11 -0400] - ERROR: SSL Initialization phase 2 Failed.
[07/Apr/2016:13:35:07 -0400] - 389-Directory/1.2.11.15 B2016.082.1529 starting 
up
[07/Apr/2016:13:35:07 -0400] - Db home directory is not set. Possibly 
nsslapd-directory (optinally nsslapd-db-home-directory) is missing in the 
config file.
[07/Apr/2016:13:35:08 -0400] - slapd started.  Listening on All Interfaces port 
389 for LDAP requests
[07/Apr/2016:13:35:23 -0400] - Warning: Adding configuration attribute 
"nsslapd-security"
[07/Apr/2016:13:35:23 -0400] - The change of nsslapd-secureport will not take 
effect until the server is restarted
[07/Apr/2016:13:36:20 -0400] - slapd shutting down - signaling operation threads
[07/Apr/2016:13:36:20 -0400] - slapd shutting down - waiting for 27 threads to 
terminate
[07/Apr/2016:13:36:20 -0400] - slapd shutting down - closing down internal 
subsystems and plugins
[07/Apr/2016:13:36:20 -0400] - Waiting for 4 database threads to stop
[07/Apr/2016:13:36:21 -0400] - All database threads now stopped
[07/Apr/2016:13:36:21 -0400] - slapd stopped.
[07/Apr/2016:13:36:33 -0400] - 389-Directory/1.2.11.15 B2016.082.1529 starting 
up
[07/Apr/2016:13:36:33 -0400] attrcrypt - No symmetric key found for cipher AES 
in backend userRoot, attempting to create one...
[07/Apr/2016:13:36:33 -0400] attrcrypt - Key for cipher AES successfully 
generated and stored
[07/Apr/2016:13:36:33 -0400] attrcrypt - No symmetric key found for cipher 3DES 
in backend userRoot, attempting to create one...
[07/Apr/2016:13:36:33 -0400] attrcrypt - Key for cipher 3DES successfully 
generated and stored
[07/Apr/2016:13:36:33 -0400] - slapd started.  Listening on All Interfaces port 
389 for LDAP requests
[07/Apr/2016:13:36:33 -0400] - Listening on All Interfaces port 636 for LDAPS 
requests
[07/Apr/2016:14:06:12 -0400] - slapd shutting down - signaling operation threads
[07/Apr/2016:14:06:12 -0400] - slapd shutting down - waiting for 28 threads to 
terminate

[389-users] Re: Create 389 directory server secure connections

2016-04-10 Thread xinhuan zheng
Hello,

I can't get my 389 directory server secure connection to work. The process is 
started, but I can't do any ldapsearch, nor get the 389 console to work. Can I get 
my non-secure connection working again and then start over from scratch?

- xinhuan

[389-users] Re: Create 389 directory server secure connections

2016-04-07 Thread xinhuan zheng
Hello William,

The slapd went down by itself for some reason. Below is from the error file this 
afternoon.

[07/Apr/2016:13:36:33 -0400] - slapd started.  Listening on All Interfaces port 
389 for LDAP requests
[07/Apr/2016:13:36:33 -0400] - Listening on All Interfaces port 636 for LDAPS 
requests
[07/Apr/2016:14:06:12 -0400] - slapd shutting down - signaling operation threads
[07/Apr/2016:14:06:12 -0400] - slapd shutting down - waiting for 28 threads to 
terminate
[07/Apr/2016:14:06:12 -0400] - slapd shutting down - closing down internal 
subsystems and plugins
[07/Apr/2016:14:06:12 -0400] - Waiting for 4 database threads to stop
[07/Apr/2016:14:06:12 -0400] - All database threads now stopped
[07/Apr/2016:14:06:12 -0400] - slapd stopped.

So I started it with 'service dirsrv restart'. The error file shows 
the process started:

[07/Apr/2016:21:25:49 -0400] - 389-Directory/1.2.11.15 B2016.082.1529 starting 
up
[07/Apr/2016:21:25:49 -0400] - slapd started.  Listening on All Interfaces port 
389 for LDAP requests
[07/Apr/2016:21:25:49 -0400] - Listening on All Interfaces port 636 for LDAPS 
requests

'service dirsrv status' shows it is running. However, I still get the same 
authorization error as before when using 389-console, and I can't do an ldapsearch 
either:

ldapsearch -p389 -hlabd1.christianbook.com -x -bdc=christianbook,dc=com
# extended LDIF
#
# LDAPv3
# base 

[389-users] Create 389 directory server secure connections

2016-04-07 Thread xinhuan zheng
Hello All,

I screwed up my 389 directory server console authentication today while 
setting up TLS secure connections. I first started reading this document: 
http://directory.fedoraproject.org/docs/389ds/howto/howto-ssl.html. The 
document refers to a nice shell script on github: 
https://raw.githubusercontent.com/richm/scripts/master/setupssl2.sh. I 
downloaded the script and read it through. The script allows a couple of environment 
variables to be set, one of them being the REMOTE variable. I plan to have another 
directory server for replication, so I thought it would be nice to generate 
its certificate, etc., beforehand, and I set that environment variable. Then I 
ran the commands below:

REMOTE=labd2.christianbook.com; export REMOTE
./setupssl2.sh /etc/dirsrv/slapd-userauth1

The very first time I got an error because the remote host labd2 doesn't exist yet; 
the script cannot generate the certificate for it because it cannot connect to 
it. But I had typed in the "Directory Manager" password, so it changed the dse.ldif file. I 
tried to restart dirsrv-admin and dirsrv; only dirsrv-admin restarted 
successfully, and the userauth1 instance failed to restart. I then manually copied 
the dse.ldif.startOK file back to dse.ldif and restarted the userauth1 instance, which 
succeeded. Then I unset REMOTE and re-ran the setupssl2 script. 
Once it finished, I restarted both dirsrv-admin and dirsrv, and both 
restarted successfully. However, when I ran the /usr/bin/389-console command, I got the 
error below:

Cannot logon because of an incorrect User ID, Incorrect password or Directory 
problem.

HttpException:
HTTP/1.1 401 Authorization Required
Status: 401
URL: https://labd1.christianbook.com:9830/admin-serv/authenticate

I also tried to do ldapsearch but wasn't successful either:

# ldapsearch -d 5 -x -L -b 'dc=christianbook,dc=com'
ldap_create
ldap_sasl_bind
ldap_send_initial_request
ldap_new_connection 1 1 0
ldap_int_open_connection
ldap_connect_to_host: TCP localhost:389
ldap_new_socket: 3
ldap_prepare_socket: 3
ldap_connect_to_host: Trying ::1 389
ldap_pvt_connect: fd: 3 tm: -1 async: 0
ldap_close_socket: 3
ldap_new_socket: 3
ldap_prepare_socket: 3
ldap_connect_to_host: Trying 127.0.0.1:389
ldap_pvt_connect: fd: 3 tm: -1 async: 0
ldap_close_socket: 3
ldap_err2string
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

It appears that the admin server's TLS change took effect while the instance's 
TLS change did not, so the admin server cannot reconnect to the instance directory 
server. I don't know how to fix that; please help. Note this is 389 directory 
server 1.2.2 and 389 console 1.1.7, recent versions running on CentOS 
6.7.

Thanks,
- xinhuan

[389-users] How do I import data to 389 Directory Server?

2016-03-30 Thread xinhuan zheng
I have an old directory server running Sun ONE Java System Directory Server. 
Yesterday I created the top dcObject, dc=christianbook,dc=com; however, I don't 
know the best way to import data from my old Sun Directory Server into 
389 Directory Server. The object structure appears to be different. Below is an 
example of my old dcObject:

dn: dc=christianbook,dc=com
aci: (target ="ldap:///dc=christianbook,dc=com")(targetattr 
!="userPassword")(version 3.0;acl "Anonymous read-search access";allow (read, 
search, compare)(userdn = "ldap:///anyone");)
aci: (target="ldap:///dc=christianbook,dc=com") (targetattr = "*")(version 3.0; 
acl "allow all Admin group"; allow(all) groupdn = "ldap:///cn=Directory 
Administrators,ou=Groups,dc=christianbook,dc=com";)
aci: (targetattr = 
"cn||uid||uidNumber||gidNumber||homeDirectory||shadowLastChange||shadowMin||shadowMax||shadowWarning||shadowInactive||shadowExpire||shadowFlag||memberUid")(version
 3.0; acl LDAP_Naming_Services_deny_write_access; deny (write) userdn = 
"ldap:///self";)
aci: 
(target="ldap:///dc=christianbook,dc=com")(targetattr="userPassword")(version 
3.0; acl LDAP_Naming_Services_proxy_password_read; allow (compare,search) 
userdn = "ldap:///cn=proxyagent,ou=profile,dc=christianbook,dc=com";)
dc: christianbook
nisdomain: christianbook.com
objectclass: top
objectclass: nisDomainObject
objectclass: domain

This is another object in my old Sun Directory Server:

dn: ou=people,dc=christianbook,dc=com
objectclass: top
objectclass: organizationalUnit
ou: people

What is the best way to convert or import the data from my old Sun Directory Server 
into the new one?
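One common route is to export the Sun suffix to LDIF (for example with db2ldif or ldapsearch), strip the Sun-specific attributes such as the aci and nisdomain values shown above, and feed the result to the 389 import tools. A hedged sketch of the stripping step only, using awk on a made-up sample LDIF (the attribute to drop and all entry names here are illustrative):

```shell
# Sample exported LDIF; note the aci value folded across lines, which
# LDIF marks with a leading space on the continuation line.
cat > sun.ldif <<'EOF'
dn: dc=example,dc=com
aci: (target ="ldap:///dc=example,dc=com")(version 3.0;acl "demo";
 allow (read) userdn = "ldap:///anyone";)
dc: example
objectclass: top
objectclass: domain
EOF

# Drop the aci attribute and its continuation lines; keep everything else.
awk '
  /^aci:/      { skip = 1; next }   # start skipping at an aci: attribute
  /^ / && skip { next }             # skip its folded continuation lines too
               { skip = 0; print }  # anything else resets the flag and prints
' sun.ldif > clean.ldif
```

The same pattern works for any other attribute you do not want carried over; the cleaned file can then be imported on the 389 side.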

[389-users] Re: 389 directory server console crash and core dump

2016-03-29 Thread xinhuan zheng
I just upgraded Java to 1.7; that appears to be working. However, I still don't 
know how to delete the root suffix that isn't shown in the console. How do I 
delete a root suffix that the console doesn't display?
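For reference, a suffix the console no longer shows can usually be removed over LDAP by deleting its mapping-tree entry and its backend entry under cn=config. The entry names below assume the dc=test2,dc=com / test2 example from the original post in this thread; verify the exact DNs with an ldapsearch against cn=config first:

```ldif
dn: cn="dc=test2,dc=com",cn=mapping tree,cn=config
changetype: delete

dn: cn=test2,cn=ldbm database,cn=plugins,cn=config
changetype: delete
```

Fed to something like `ldapmodify -x -D "cn=Directory Manager" -W -f delete-suffix.ldif`, this removes the suffix without involving the console.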

[389-users] 389 directory server console crash and core dump

2016-03-25 Thread xinhuan zheng
Today I installed the 389 Directory Server and Directory Console. However, the 
console keeps crashing and dumping core. It fails to write the core dump, 
though, leaving a log file in the root directory.

The first time it dumped core:
1. Launch the 389-console
2. Log in
3. Select the Directory Server instance, in my case userauth1, and open it
4. Choose the Configuration tab, expand Data
5. Right-click on it, choose New Suffix
6. Type in the new suffix name: dc=test2,dc=com
7. Type in the database name: test2
8. Click OK.

Then it dumped core. I re-launched the console, but I could not find the newly added 
suffix.

So I decided to add it again, but the console spit out a "suffix 
already exists" error. The console does not display the newly added suffix, yet it 
thinks the suffix exists.

I repeated this with another new suffix with a different name. That worked without 
a core dump.

I repeated adding and deleting new suffixes a few more times. It appears the core dump 
happens randomly, either when adding or when deleting a root suffix.

I need help figuring out:
1) How do I delete the root suffix that cannot be displayed by the console?
2) How do I make the console stable?

The java package, java-1.6.0-openjdk-1.6.0.38-1.13.10.0.el6_7.x86_64, is 
installed and used on my CentOS 6.7 x86_64 machine.

Thanks,

Re: Needs some clarification

2015-10-16 Thread Xinhuan Zheng
Thank you. Is there a way I can use t/TEST (Apache-Test) to see the difference 
between cgi-bin scripts and handlers?

Also, I noticed that in startup.pl, the mod_perl documentation says to do

use foo();

What's the difference compared to "use foo;"?

- xinhuan
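For what it's worth, the difference is only in whether import() is called at load time; a short illustration (the module name Foo is arbitrary):

```perl
# "use Foo;" is shorthand for:
BEGIN { require Foo; Foo->import; }   # loads Foo.pm AND calls Foo->import

# "use Foo ();" is shorthand for:
BEGIN { require Foo; }                # loads Foo.pm; import() is never called
```

In a startup.pl this matters because `use Foo ()` preloads the module without exporting any symbols into the startup script's namespace, which is usually what you want when you are only preloading code.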

From: James Smith <j...@sanger.ac.uk>
Date: Friday, October 16, 2015 at 10:36 AM
To: Xinhuan Zheng <xzh...@christianbook.com>
Subject: Re: Needs some clarification



  1.  cgi-bin scripts vs mod_perl handlers - Do cgi-bin scripts get compiled 
per request or just once? What about mod_perl handler scripts? Is there a 
difference between mod_perl 1 and mod_perl 2?

Traditional cgi-bin scripts are compiled each request.

Under mod_perl, whether Registry scripts or handlers, code is compiled once per child, 
and code pre-loaded before the child is forked is compiled only once (you can 
force code to be compiled in the parent process!)

This is where you get performance gains over traditional CGI scripts.
Thanks,
- xinhuan


-- The Wellcome Trust Sanger Institute is operated by Genome Research Limited, 
a charity registered in England with number 1021457 and a company registered in 
England with number 2742969, whose registered office is 215 Euston Road, 
London, NW1 2BE.


Re: Large File Download

2015-03-30 Thread Xinhuan Zheng
Hello,
I have encountered large file downloads before. I recall that when the
sendfile directive is turned on in httpd.conf, Apache uses the sendfile(2) system
call. It is more efficient than a combination of read(2) and write(2), since
sendfile operates within the kernel.
However, if the large file is served from an NFS mount, I am
not sure whether sendfile(2) works the same way as on a local file system.
We need to route the file download requests through a kind of load
balancer. Keepalive can be defined at the load balancer and in httpd.conf.
By default we have keepalive set to 2 minutes on the load balancer,
meaning the connection between the load balancer and the backend server is
kept alive for 2 minutes for the same client's requests. Does that affect
overall download performance? Should we turn keepalive off?
- xinhuan

On 3/28/15, 3:54 PM, Issac Goldstand mar...@beamartyr.net wrote:

sendfile is much more efficient than that.  At the most basic level,
sendfile allows a file to be streamed directly from the block device (or
OS cache) to the network, all in kernel-space (see sendfile(2)).

What you describe below is less effective, since you need to ask the
kernel to read the data, chunk-by-chunk, send it to userspace, and then
from userspace back to kernel space to be sent to the net.

Beyond that, the Apache output filter stack is also spending time
examining your data, possibly buffering it differently than you are (for
example to make HTTP chunked-encoding) - by using sendfile, you'll be
bypassing the output filter chain (for the request, at least;
connection/protocol filters, such as HTTPS encryption will still get in
the way, but you probably want that to happen :)) further optimizing the
output.

If you're manipulating data, you need to stream yourself, but if you
have data on the disk and can serve it as-is, sendfile will almost
always perform much, much, much better.

  Issac

On 3/28/2015 7:40 PM, Dr James Smith wrote:
 You can effectively stream a file byte by byte - you just need to print
 a chunk at a time and mod_perl and apache will handle it
 appropriately... I do this all the time to handle large data downloads
 (the systems I manage are backed by peta bytes of data)...
 
 The art is often not in the output - but in the way you get and process
 data before sending it - I have code that will upload/download arbitrary
 large files (using HTML5's file objects) without using excessive amounts
 of memory... (all data is stored in chunks in a MySQL database)
 
 Streaming has other advantages with large data - if you wait till you
 generate all the data then you will find that you often get a time out -
 I have a script which can take up to 2 hours to generate all the output
 - but it never times out as it is sending a line of data at a time
 and do data is sent every 5-10 seconds... and the memory footprint is
 trivial - as only data for one line of output is in memory at a time..
 
 
 On 28/03/2015 16:25, John Dunlap wrote:
 sendfile sounds like its exactly what I'm looking for. I see it in the
 API documentation for Apache2::RequestIO but how do I get a reference
 to it from the reference to Apache2::RequestRec which is passed to my
 handler?

 On Sat, Mar 28, 2015 at 9:54 AM, Perrin Harkins phark...@gmail.com
 mailto:phark...@gmail.com wrote:

 Yeah, sendfile() is how I've done this in the past, although I was
 using mod_perl 1.x for it.

 On Sat, Mar 28, 2015 at 5:55 AM, André Warnier a...@ice-sa.com
 mailto:a...@ice-sa.com wrote:

 Randolf Richardson wrote:

 I know that it's possible(and arguably best practice)
 to use Apache to
 download large files efficiently and quickly, without
 passing them through
 mod_perl. However, the data I need to download from my
 application is both
 dynamically generated and sensitive so I cannot expose
 it to the internet
 for anonymous download via Apache. So, I'm wondering
 if mod_perl has a
 capability similar to the output stream of a java
 servlet. Specifically, I
 want to return bits and pieces of the file at a time
 over the wire so that
 I can avoid loading the entire file into memory prior
 to sending it to the
 browser. Currently, I'm loading the entire file into
 memory before sending
 it and

 Is this possible with mod_perl and, if so, how should
 I go about
 implementing it?


 Yes, it is possible -- instead of loading the
 entire contents of a file into RAM, just read blocks in a
 loop and keep sending them until you reach EoF (End of
File).

 You can also use $r->flush along the way if you
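A minimal mod_perl 2 handler along the lines described above, reading and flushing fixed-size chunks so only one buffer is ever in memory. The package name, file path, and chunk size are illustrative, not from the thread:

```perl
package My::Download;

use strict;
use warnings;
use Apache2::RequestRec ();
use Apache2::RequestIO ();
use Apache2::Const -compile => qw(OK SERVER_ERROR);

sub handler {
    my $r = shift;
    my $path = '/data/exports/large.bin';   # hypothetical file

    open my $fh, '<:raw', $path
        or return Apache2::Const::SERVER_ERROR;

    $r->content_type('application/octet-stream');

    # Stream 64 KB at a time; only one chunk is held in memory.
    while (my $read = read($fh, my $buf, 64 * 1024)) {
        $r->print($buf);
        $r->rflush;    # push the chunk out rather than buffering the response
    }
    close $fh;
    return Apache2::Const::OK;
}

1;
```

With sendfile available (local filesystem, EnableSendfile on), `$r->sendfile($path)` would be preferable; the loop above is the fallback when the data is generated or filtered on the fly.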
 

Re: [CentOS] CentOS Digest, Vol 119, Issue 19

2014-12-22 Thread Xinhuan Zheng
Hello Brian,
GPG is really what you want to be using for this.  OpenSSL is a general
toolkit that provide a lot of good functions, but you need to cobble some
things together yourself.  GPG is meant to handle all of the other parts
of
dealing with files.

I will expand on what someone else mentioned -- asymmetric encryption is
not meant for, and has very poor performance for encrypting data, and also
has a lot of limitations.  The correct way to handle this is to create a
symmetric key and use that to encrypt the data, then use asymmetric
encryption to encrypt only the symmetric key.

GPG takes care of this all internally, so that's what you should be using.


Do you have any resources showing examples of how GPG handles the 
symmetric/asymmetric encryption internally?

Thanks,
- xinhuan
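For the archives: `gpg -e` already does the hybrid scheme internally; it generates a random session key, encrypts the data symmetrically with it, and encrypts only the session key with the recipient's public key. A self-contained sketch using a throwaway, passphrase-less key (the key parameters, names, and %no-protection choice are for illustration only, not a production setup):

```shell
# Work in a throwaway GnuPG home so nothing touches the real keyring.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Generate a passphrase-less RSA key pair non-interactively.
cat > keyparams <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 2048
Subkey-Type: RSA
Subkey-Length: 2048
Name-Real: backup
Name-Email: backup@example.com
Expire-Date: 0
%commit
EOF
gpg --batch --gen-key keyparams

# Encrypt: gpg picks a random session key, encrypts the data with it
# (symmetric), and wraps only the session key with the public key.
printf 'nightly backup payload\n' > backup.tar
gpg --batch --yes -e -r backup@example.com -o backup.tar.gpg backup.tar

# Decrypt with the private key to prove the round trip.
gpg --batch --quiet -d backup.tar.gpg > restored.tar
```

So there is no separate symmetric key to manage: the session key lives only inside the .gpg file, wrapped by the public key.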


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Asymmetric encryption for very large tar file

2014-12-19 Thread Xinhuan Zheng
Hello,
Thanks for all the feedback I got. I am pretty sure that if I used the "openssl 
enc" method, it could handle a file over 250G in size perfectly; I 
think the openssl installed on the system is capable of large file 
support. However, when using "openssl smime", it is not. 
Apparently that is a limitation of the smime method, not of openssl. Other than 
smime and enc, what other methods can I use for large file support that 
would use asymmetric public/private keys?
- xinhuan



[CentOS] Asymmetric encryption for very large tar file

2014-12-17 Thread Xinhuan Zheng
Hello CentOS list,
I have a requirement to use encryption technology to encrypt a
very large tar file on a daily basis. The tar file is over 250G in size;
it is a data backup. Every night the server generates a 250G data backup,
which is tar'ed into one tarball file, and I want to encrypt this big tarball
file. So far I have tried two technologies with no success.
1) Generating an RSA 2048 public/private key pair via "openssl req -x509
-nodes -newkey rsa:2048 -keyout private.pem -out public.pem" and
using the public key to encrypt the big tar file. The encryption command I
used is "openssl smime -encrypt -aes256 -in backup.tar -binary -outform
DER -out backup.tar.ssl public.pem". The resulting backup.tar.ssl file is
only 2G; the encryption process stops there and refuses to do more. I cannot
get past 2G.
2) Generating a GPG public/private key pair via "gpg --gen-key", then encrypting
with "gpg -e -u backup -r backup backup.tar". However, the gpg
encryption stops at a file size of 50G and refuses to do more, and the gpg
process took over 48 hours.
The server is very capable: 8 Intel 2.33 GHz CPUs and 16G of RAM, running the
latest RHEL 5.11 (CentOS 5 is pretty much release-compatible
with RHEL 5).
I have searched Google and found a technique that uses
symmetric encryption: generate a symmetric key every day
and use the public/private key pair to encrypt the symmetric key. However, the
drawback is that we don't know how to manage the symmetric key securely.
We can't leave the unencrypted symmetric key on the server, but we
have to use the unencrypted symmetric key for the encryption process. Plus
we would need to manage three things securely: the symmetric encryption key and
the public and private key pair.
Has anyone had experience managing asymmetric encryption for very
large files, and what is the best practice for that?
Thanks.
- xinhuan
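The hybrid approach sketched in the last paragraph can be done with openssl alone: `enc` streams the bulk data (no 2G limit), and the public key encrypts only the small session key. All file names and the tiny sample payload below are illustrative:

```shell
# One-time: generate an RSA key pair (the private key is only needed
# on the machine that will decrypt).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem

# Nightly: make a fresh random session key and encrypt the tarball with it.
printf 'pretend this is a 250G tarball\n' > backup.tar
openssl rand -hex 32 > session.key
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in backup.tar -out backup.tar.enc -pass file:session.key

# Wrap the session key with the public key, then delete the plaintext copy,
# so only the wrapped key ever sits on disk alongside the backup.
openssl pkeyutl -encrypt -pubin -inkey public.pem \
    -in session.key -out session.key.enc
rm -f session.key

# To restore: unwrap the session key with the private key, then decrypt.
openssl pkeyutl -decrypt -inkey private.pem \
    -in session.key.enc -out session.key
openssl enc -d -aes-256-cbc -pbkdf2 \
    -in backup.tar.enc -out restored.tar -pass file:session.key
```

This answers the key-management worry: the per-day symmetric key is disposable, and only session.key.enc (useless without the private key) is stored with the backup.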
 



Re: Disconnect database connection after idle timeout

2014-11-14 Thread Xinhuan Zheng
Hi Parrin,

 the huge mod_perl-enabled server process (with all of its system resources) 
 will be tied up until the response is completely written to the client. While 
 it might take a few milliseconds for your script to complete the request, 
 there is a chance it will be still busy for some number of seconds or even 
 minutes if the request is from a slow connection client.

Are you implying that performance will suffer when using 
mod_perl-enabled server processes as the front-tier servers?

- xinhuan

From: Perrin Harkins <phark...@gmail.com>
Date: Thursday, November 13, 2014 at 5:49 PM
To: Xinhuan Zheng <xzh...@christianbook.com>
Cc: mod_perl list <modperl@perl.apache.org>
Subject: Re: Disconnect database connection after idle timeout

On Thu, Nov 13, 2014 at 5:38 PM, Xinhuan Zheng <xzh...@christianbook.com> wrote:
We have a load balancer cache that can cache images and JavaScript. This 
function seems a bit duplicative.

It's not about caching. Here's a quote from that link I sent earlier:

Another drawback of this approach is that when serving output to a client with 
a slow connection, the huge mod_perl-enabled server process (with all of its 
system resources) will be tied up until the response is completely written to 
the client. While it might take a few milliseconds for your script to complete 
the request, there is a chance it will be still busy for some number of seconds 
or even minutes if the request is from a slow connection client.

You might think everyone has fast connections now so this won't matter, but it 
does. It's especially bad if you have keep-alive on for the apache server 
running mod_perl, since that means your large mod_perl processes sit around for 
some extra time doing nothing, while holding onto their database connections. 
Install a front-end proxy, turn off keep-alive  on your mod_perl and reduce 
your max idle servers, and watch what happens.

- Perrin



Re: Disconnect database connection after idle timeout

2014-11-13 Thread Xinhuan Zheng
Your understanding is correct; that is what I am looking for. However, due to the 
forking nature of Apache children, I don't feel comfortable using SIGALRM.

We use Apache::DBI, and I would prefer an enhancement in this module. Currently 
the module implements an Apache-process-wide global cache for db connections, 
which we already use. But one missing piece in this module is handling a TTL 
(time-to-live) for a cached db connection. If a TTL were implemented, the module 
would disconnect the db connection after a pre-defined timeout, so the 
Oracle server process could be shut down more gracefully. Would it be possible 
to implement that? Or is it too hard to implement?

- xinhuan

From: Paul Silevitch <p...@silevitch.com>
Date: Wednesday, November 12, 2014 at 11:53 PM
To: Xinhuan Zheng <xzh...@christianbook.com>, 
modperl <modperl@perl.apache.org>
Subject: Re: Disconnect database connection after idle timeout

I don't fully understand your need here.  I'm going to give my best.

You could set an alarm in the cleanup handler that calls the disconnect after a 
specified amount of time.  If a new request comes in, you could cancel the 
alarm in a postreadrequest handler (or something early in the cycle).  To cover 
the race condition where the disconnect happens right before the cancel, you 
could check to make sure the database connection is active right after the 
cancel is called.

HTH,

Paul
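A rough sketch of this alarm-based suggestion as mod_perl 2 handlers. The package and handler names and the 300-second TTL are assumptions, and the actual disconnect is left as a placeholder, since Apache::DBI deliberately makes disconnect a no-op for cached handles; SIGALRM in a worker also needs care, as noted elsewhere in the thread:

```perl
package My::IdleDisconnect;

use strict;
use warnings;
use Apache2::Const -compile => qw(OK);

# Illustrative TTL: consider the child idle after 5 minutes with no requests.
my $IDLE_TTL = 300;

$SIG{ALRM} = sub {
    # Placeholder: Apache::DBI turns $dbh->disconnect into a no-op for
    # cached handles, so real TTL support needs cooperation from the
    # module itself -- which is exactly what this thread is asking for.
};

# PerlPostReadRequestHandler: a new request arrived, cancel the pending timer.
sub start_request { alarm 0; return Apache2::Const::OK }

# PerlCleanupHandler: the response is done, start counting idle time.
sub end_request { alarm $IDLE_TTL; return Apache2::Const::OK }

1;
```

Configured with `PerlPostReadRequestHandler My::IdleDisconnect::start_request` and `PerlCleanupHandler My::IdleDisconnect::end_request`, each child arms the timer after serving a request and disarms it when the next one arrives.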

On Wed, Nov 12, 2014 at 9:36 PM, Xinhuan Zheng <xzh...@christianbook.com> wrote:
Hello,

I have a database connection management question. Our apache+mod_perl 
application initiates a database connection when it needs one and then does its 
data processing. Afterwards, if no more requests come to that apache 
process, the database connection basically sits there idle for quite a while 
until the OS TCP/IP idle timeout is reached. At that point 
the database writes a message to its alert log saying a connection 
timeout occurred, and the server process is cleaned up. I would like to 
figure out whether a mod_perl application can implement a keep-alive timeout mechanism: 
mod_perl would keep the database connection open after it finishes some 
processing, until an idle timeout defined in the application (for 
example, 5 minutes) is reached, and then initiate a database disconnect 
so the server process can be cleaned up more gracefully. We are using an 
Oracle 11g database. I know that in 11g Oracle has implemented connection 
pooling, and I would think Oracle-side connection pooling means the server 
side maintains the connection idle timeout. Would it be possible for the 
client side, mod_perl, to implement something like that? I just don't know which 
side is more appropriate, and how the client side could implement something 
like that.
Thanks,
- xinhuan



Re: Disconnect database connection after idle timeout

2014-11-13 Thread Xinhuan Zheng
We don’t have a front-end proxy, and we don’t use DBD::Gofer or PGBouncer. 
We do, however, use Apache::DBI. The mod_perl applications on our servers 
connect to the database when they need to. A database connection can be idle 
for a long time if no more requests arrive, and then we get a TCP/IP timeout; 
we are seeing a lot of TNS timeout errors in the Oracle alert trace log. When 
that happens, the corresponding httpd process is still there. If a request 
comes in and happens to hit that httpd process (whose Oracle server process 
may have timed out already), the customer gets a “cannot connect to Oracle” 
error. Beyond that, keeping many idle Oracle server processes around for a 
long time wastes system resources such as RAM. To address those issues, I 
would think implementing an idle TTL is appropriate. Oracle implemented 
connection pooling in 11g, which I believe addresses the issue on their side. 
If the client-side implementation is too hard, then I guess the appropriate 
solution is to use Oracle 11g connection pooling. I just want to solicit 
other people’s opinions on how best to address the issue.

- xinhuan

From: John Dunlap j...@lariat.co
Date: Thursday, November 13, 2014 at 10:01 AM
To: Perrin Harkins phark...@gmail.com
Cc: Xinhuan Zheng xzh...@christianbook.com, Paul Silevitch p...@silevitch.com, 
modperl modperl@perl.apache.org
Subject: Re: Disconnect database connection after idle timeout

We use PGBouncer on the web server (which handles keep-alive to the database) 
and then we use Apache::DBI across localhost to talk to PGBouncer.

On Thu, Nov 13, 2014 at 9:56 AM, Perrin Harkins phark...@gmail.com wrote:
Hi,

Can you explain what problem you're trying to solve? Apache processes don't 
have the option of doing things when there is no request to serve, so you can't 
easily have them disconnect. It may be possible with alarms or cron jobs or 
something, but it's probably not a good idea.

If you tune your configuration to avoid leaving large numbers of servers idle, 
you should not have problems with unused connections. Also, make sure you are 
using a front-end proxy of some kind and not serving static HTTP requests from 
your mod_perl server.

If your problem is that you need more active connections than your server can 
handle, you could look at DBD::Gofer:
http://www.slideshare.net/Tim.Bunce/dbdgofer-200809

- Perrin

On Thu, Nov 13, 2014 at 9:39 AM, Xinhuan Zheng xzh...@christianbook.com wrote:
Your understanding is correct; it’s what I am looking for. However, given 
Apache’s forking-child nature, I don’t feel comfortable using SIGALRM.

We use Apache::DBI, and I would prefer an enhancement in that module. 
Currently the module implements a per-Apache-process global cache for 
database connections, which we already use. The missing piece is handling a 
TTL (time-to-live) for a cached connection: with a TTL, the module would 
disconnect a cached connection after a pre-defined idle timeout so the Oracle 
server process could be shut down more gracefully. Would it be possible to 
implement that, or is it too hard?
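The TTL idea could be layered on top of a per-process connection cache along these lines. Apache::DBI itself exposes no such hook, so this is only a hedged sketch with hypothetical names (My::CachedDB, the 300-second TTL), not a patch to the real module:

```perl
package My::CachedDB;
use strict;
use warnings;
use DBI;

my ($dbh, $last_used);
my $ttl = 300;    # seconds a cached handle may sit idle

sub connect {
    my ($dsn, $user, $pass) = @_;
    my $now = time;
    # Expire the cached handle if it has been idle longer than the TTL.
    if ($dbh && defined $last_used && $now - $last_used > $ttl) {
        $dbh->disconnect;
        undef $dbh;
    }
    # Reconnect when there is no live handle; ping() guards against
    # handles that died on the server side.
    $dbh = DBI->connect($dsn, $user, $pass, { RaiseError => 1 })
        unless $dbh && $dbh->ping;
    $last_used = $now;
    return $dbh;
}

1;
```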

- xinhuan

From: Paul Silevitch p...@silevitch.com
Date: Wednesday, November 12, 2014 at 11:53 PM
To: Xinhuan Zheng xzh...@christianbook.com, modperl modperl@perl.apache.org
Subject: Re: Disconnect database connection after idle timeout

I don't fully understand your need here, so I'm going to give it my best shot.

You could set an alarm in the cleanup handler that calls the disconnect after a 
specified amount of time.  If a new request comes in, you could cancel the 
alarm in a postreadrequest handler (or something early in the cycle).  To cover 
the race condition where the disconnect happens right before the cancel, you 
could check to make sure the database connection is active right after the 
cancel is called.

HTH,

Paul

On Wed, Nov 12, 2014 at 9:36 PM, Xinhuan Zheng xzh...@christianbook.com wrote:
Hello,

I have a database connection management question. Our Apache+mod_perl 
application initiates a database connection when it needs one and then does 
its data processing. Afterwards, if no more requests come to that Apache 
process, the database connection sits idle until the OS TCP/IP idle timeout 
is reached. At that point the database writes a message to its alert log 
saying that a connection timeout occurred and that the server process will be 
cleaned up. I would like to figure out whether a mod_perl application can 
implement a keep-alive timeout mechanism: mod_perl would keep the database 
connection open after

Re: Disconnect database connection after idle timeout

2014-11-13 Thread Xinhuan Zheng
I guess we do need connection caching and persistent connections; they work 
well in our situation. But I feel Oracle 11g connection pooling might be the 
more appropriate option for handling the idle-connection timeout issue. 
Adding another tier (like DBD::Gofer) looks really messy infrastructure-wise, 
and it’s not certain who would maintain that module’s quality.

- xinhuan

From: Perrin Harkins phark...@gmail.com
Date: Thursday, November 13, 2014 at 11:42 AM
To: Dr James Smith j...@sanger.ac.uk
Cc: mod_perl list modperl@perl.apache.org
Subject: Re: Disconnect database connection after idle timeout

On Thu, Nov 13, 2014 at 11:29 AM, Dr James Smith j...@sanger.ac.uk wrote:
From experience, and having chatted with our DBAs at work: with modern Oracle 
and with MySQL, keeping persistent connections around brings no real gain and 
usually lots of risk.

It's certainly good to know how long it takes to get a fresh connection and 
consider whether you need persistent connections or not. Connecting tends to be 
fast on MySQL and caching is probably not needed unless you're running a very 
performance-sensitive site. The last time I worked with Oracle, connections 
were too slow to run without caching them. That was years ago though, and the 
situation may have improved.

- Perrin


Re: Disconnect database connection after idle timeout

2014-11-13 Thread Xinhuan Zheng
Hi Perrin,

I don’t quite understand what you mean by setting up a front-end proxy. What 
would you expect this “proxy” to do? Does it take HTTP requests?

Thanks,
- xinhuan

From: Perrin Harkins phark...@gmail.com
Date: Thursday, November 13, 2014 at 12:50 PM
To: Xinhuan Zheng xzh...@christianbook.com
Cc: Dr James Smith j...@sanger.ac.uk, mod_perl list modperl@perl.apache.org
Subject: Re: Disconnect database connection after idle timeout

On Thu, Nov 13, 2014 at 12:19 PM, Xinhuan Zheng xzh...@christianbook.com wrote:
Adding another tier (like DBD::Gofer) looks really messy infrastructure-wise, 
and it’s not certain who would maintain that module’s quality.

I'd only recommend trying it after you set up a front-end proxy, tune your 
mod_perl configuration, and use any Oracle tools available to you.

- Perrin


Re: Disconnect database connection after idle timeout

2014-11-13 Thread Xinhuan Zheng
Hi Perrin,

Thanks for pointing out the document. We use mod_perl-enabled Apache servers 
for dynamic content. From the document’s description, the “proxy” server acts 
much like a memcache, except that the proxy understands the HTTP protocol 
while memcache does not. We have a load balancer in front of the front-end 
servers that can cache some static content, but the load balancer has a RAM 
limit and we cannot cache beyond it. memcache can cache dynamic content, but 
it does not respond to HTTP requests directly. Would this “proxy” server sit 
in front of the front-end servers but behind the load balancer?

- xinhuan

From: Perrin Harkins phark...@gmail.com
Date: Thursday, November 13, 2014 at 2:23 PM
To: Xinhuan Zheng xzh...@christianbook.com
Cc: mod_perl list modperl@perl.apache.org
Subject: Re: Disconnect database connection after idle timeout

Yes, it's an HTTP proxy. It handles sending out the bytes to remote clients, so 
that your mod_perl server doesn't have to. A popular high-performance choice 
these days is nginx.

There's some discussion of why to use a front-end proxy here:
http://perl.apache.org/docs/1.0/guide/strategy.html

- Perrin
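The front-end-proxy setup Perrin describes usually looks something like the nginx configuration below. This is a generic sketch, not a configuration from this thread; the listen port, document root, and backend address are placeholders.

```nginx
# Front-end proxy sketch: nginx serves static files itself and forwards
# only dynamic requests to the mod_perl Apache on a backend port, so
# heavyweight mod_perl children never spend time spoon-feeding bytes to
# slow remote clients.
server {
    listen 80;

    # Static content never touches a mod_perl child.
    location /static/ {
        root /var/www;
    }

    # Everything else goes to the backend Apache.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```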

On Thu, Nov 13, 2014 at 2:12 PM, Xinhuan Zheng xzh...@christianbook.com wrote:
Hi Perrin,

I don’t quite understand what you mean by setting up a front-end proxy. What 
would you expect this “proxy” to do? Does it take HTTP requests?

Thanks,
- xinhuan

From: Perrin Harkins phark...@gmail.com
Date: Thursday, November 13, 2014 at 12:50 PM
To: Xinhuan Zheng xzh...@christianbook.com
Cc: Dr James Smith j...@sanger.ac.uk, mod_perl list modperl@perl.apache.org
Subject: Re: Disconnect database connection after idle timeout

On Thu, Nov 13, 2014 at 12:19 PM, Xinhuan Zheng xzh...@christianbook.com wrote:
Adding another tier (like DBD::Gofer) looks really messy infrastructure-wise, 
and it’s not certain who would maintain that module’s quality.

I'd only recommend trying it after you set up a front-end proxy, tune your 
mod_perl configuration, and use any Oracle tools available to you.

- Perrin



Disconnect database connection after idle timeout

2014-11-12 Thread Xinhuan Zheng
Hello,

I have a database connection management question. Our Apache+mod_perl 
application initiates a database connection when it needs one and then does 
its data processing. Afterwards, if no more requests come to that Apache 
process, the database connection sits idle until the OS TCP/IP idle timeout 
is reached. At that point the database writes a message to its alert log 
saying that a connection timeout occurred and that the server process will be 
cleaned up. I would like to figure out whether a mod_perl application can 
implement a keep-alive timeout mechanism: mod_perl would keep the database 
connection open after it finishes processing until an idle timeout defined in 
the application is reached, for example 5 minutes, and then initiate a 
disconnect so the server process can be shut down more gracefully. We are 
using an Oracle 11g database. I know that Oracle 11g implements connection 
pooling, and I would think that Oracle-side pooling means the server side 
maintains the connection idle timeout. Would it be possible for mod_perl to 
implement something like that on the client side? I just don’t know which 
side is more appropriate, or how the client side would implement it.

Thanks,
- xinhuan
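For reference, the Oracle 11g server-side pooling discussed in this thread is DRCP (Database Resident Connection Pooling); a client opts into it by requesting a pooled server in the connect descriptor. A minimal, untested DBI sketch, with placeholder host, service name, and credentials:

```perl
use strict;
use warnings;
use DBI;

# SERVER=POOLED asks the listener for a DRCP pooled server process
# instead of a dedicated one. Host, port, service, and credentials
# below are placeholders.
my $dsn = 'dbi:Oracle:(DESCRIPTION='
        . '(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521))'
        . '(CONNECT_DATA=(SERVICE_NAME=orcl)(SERVER=POOLED)))';

my $dbh = DBI->connect($dsn, 'scott', 'tiger', { RaiseError => 1 });
```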
