ZFS send dedup [PSARC/2009/557 FastTrack timeout 10/21/2009]
Garrett D'Amore wrote: In any case, I think it's safe to conclude that SHA-256 is more than adequate for filesystem block equality comparisons. That's true today. At what point will Moore's law catch up though? (In other words, how long will it take for storage densities to reach the point where the risk of a collision becomes significant?) Start from a petabyte (probably about the largest practical filesystem size in use today), and double every 12 months. (I think storage has been outpacing Moore somewhat.) Which is why ZFS uses an extensible system for specifying checksum, compression, and encryption algorithms. The NIST competition for the SHA-3 set of digests is running now, and a SHA-3 is expected to be defined by 2012. http://csrc.nist.gov/groups/ST/hash/timeline.html -- Darren J Moffat
Increase the maximum value of NGROUPS_MAX to 1024 [PSARC/2009/542 FastTrack timeout 10/14/2009]
[ Resend as it didn't make the PSARC case log ] On Thu, Oct 08, 2009 at 02:23:11AM -0700, Casper Dik wrote: NGROUPS_MAX as defined by different Unix versions is as follows (http://www.j3e.de/ngroups.html): Linux Kernel >= 2.6.4: 65536; Linux Kernel <= 2.6.3: 32; Tru64 / OSF/1: 32; IBM AIX 5.2: 64; IBM AIX 5.3 ... 6.1: 128; OpenBSD, NetBSD, FreeBSD, Darwin (Mac OS X): 16. This article is a bit outdated (and apache claims the document date is 13/Jul/09). This is no longer true for FreeBSD 8+, as it was bumped there to 1023. Thanks for that correction. See early discussion and proposed change. http://unix.derkeiler.com/Mailing-Lists/FreeBSD/current/2005-05/1086.html http://lists.freebsd.org/pipermail/freebsd-hackers/2009-June/028939.html Thanks; I'm not expecting everyone to change the default configuration, so we're not, for now, changing how credentials are handled in the kernel. Since I've seen two +1s and no -1 and the timer has run out, I'm marking this fasttrack as approved. Casper
Tomcat6 example package and PID file [PSARC/2009/563 FastTrack timeout 10/22/2009]
Danek Duvall napsal(a): Peter Dennis wrote: Currently, starting Tomcat on Solaris does not create a pid file. This makes it impossible for future applications, such as Web Stack Enterprise Manager, to monitor Tomcat in the same manner as they monitor other servers. ... which on Solaris should be via contracts and SMF, right? Correct. The Tomcat pid file should be useful for applications which are not primarily developed just for Solaris or which come from other systems. Note also that Tomcat already supports pid files; we are just enabling this feature. Allow Tomcat to create a PID file at a standard location: /var/run/tomcat6/pid. The tomcat6 directory is owned by webservd so that Tomcat can write into it. Note that nothing below /var/run can be delivered as part of a package, since the directory disappears at reboot, breaking the package. It's not clear from these materials if the path is simply being exported as a committed interface for other projects to use if it exists, or whether it's actually being delivered by the tomcat project when that software is installed, but not running. The original idea was to deliver the /var/run/tomcat6 directory via a package so that tomcat could write into it. That was wrong: as tomcat6 is started with the 'webservd' credential, it's not able to write into /var/run. Therefore I'm proposing to change the pid file location to: /var/tomcat6/logs/pid (Committed, PID file) Petr
Tomcat6 example package and PID file [PSARC/2009/563 FastTrack timeout 10/22/2009]
Petr Sumbera wrote: The original idea was to deliver the /var/run/tomcat6 directory via a package so that tomcat could write into it. That was wrong: as tomcat6 is started with the 'webservd' credential, it's not able to write into /var/run. Therefore I'm proposing to change the pid file location to: /var/tomcat6/logs/pid (Committed, PID file) There's no way to get tomcat to start as root and setuid to webservd and/or drop all unnecessary privileges? Perhaps have the start method do the work? If not, then yeah, this is fine. Danek
Tomcat6 example package and PID file [PSARC/2009/563 FastTrack timeout 10/22/2009]
Petr Sumbera wrote: Danek Duvall napsal(a): Peter Dennis wrote: Currently, starting Tomcat on Solaris does not create a pid file. This makes it impossible for future applications, such as Web Stack Enterprise Manager, to monitor Tomcat in the same manner as they monitor other servers. ... which on Solaris should be via contracts and SMF, right? Correct. The Tomcat pid file should be useful for applications which are not primarily developed just for Solaris or which come from other systems. Note also that Tomcat already supports pid files; we are just enabling this feature. Could you please expand on this? I couldn't find any example, outside of startup scripts, of consumers of this file.
Tomcat6 example package and PID file [PSARC/2009/563 FastTrack timeout 10/22/2009]
Petr Sumbera wrote: Mark Martin napsal(a): Petr Sumbera wrote: Danek Duvall napsal(a): Peter Dennis wrote: Currently, starting Tomcat on Solaris does not create a pid file. This makes it impossible for future applications, such as Web Stack Enterprise Manager, to monitor Tomcat in the same manner as they monitor other servers. ... which on Solaris should be via contracts and SMF, right? Correct. The Tomcat pid file should be useful for applications which are not primarily developed just for Solaris or which come from other systems. Note also that Tomcat already supports pid files; we are just enabling this feature. Could you please expand on this? I couldn't find any example, outside of startup scripts, of consumers of this file. Sun GlassFish Web Stack (http://www.sun.com/software/webstack/index.xml) is also supported on RHEL. The new planned tool - Web Stack Enterprise Manager - therefore should be able to work on both Solaris and Linux. Has the team considered writing a wrapper for that portion (instrumenting tomcat process status)? It is regrettable that although Solaris has its own architecture for process/service management and instrumentation, the choice is to use the archaic Linux one. If the known consumers were all upstream porting efforts I could easily buy the (oft lofted) cost argument for using the lowest common denominator. But one of them is Sun proprietary, which seems like an odd choice from this external viewpoint.
ZFS send dedup [PSARC/2009/557 FastTrack timeout 10/21/2009]
On Thu, Oct 15, 2009 at 07:27:07PM -0700, Garrett D'Amore wrote: Scott Rotondo wrote: Perhaps it's worth pointing out that both statements above are correct, but they are answers to different questions. 10^-77 is the probability of a hash collision for a particular pair of blocks. For ZFS, we care if there is a collision between *any* pair of unequal blocks. That probability depends on the number of blocks, as Krishna points out. Finally, both of these calculations rely upon the implicit assumption that the 2^256 possible hash values are uniformly distributed; that assumption is widely accepted to be at least approximately true, but I'm not aware of a mathematical proof. In any case, I think it's safe to conclude that SHA-256 is more than adequate for filesystem block equality comparisons. That's true today. At what point will Moore's law catch up though? (In other words, how long will it take for storage densities to reach the point where the risk of a collision becomes significant?) Start from a petabyte (probably about the largest practical filesystem size in use today), and double every 12 months. (I think storage has been outpacing Moore somewhat.) It's not. Brute forcing a security system with 128 bits of security, and storing 2^128 bits, runs into fundamental physical limits. Still, if you have 2^48 bits of storage the likelihood of pair-wise conflicts with a 256-bit hash is going to be more than 2^-128: ~2^-97 if we assume a block size of 128KB. 2^-97 is still extremely unlikely. If we up the storage amount to 2^64 and block sizes to 1MB we have a 2^-88 probability of collisions. Still comfortable, but if SHA-256 turns out to have weaknesses, then 2^-88 begins to get uncomfortable. Of course, by the time anyone has 2^64 bits of storage we'll have switched to a larger hash function for zfs send streams. The problem for me is not that 128 bits is not enough -- it sure seems like enough.
One problem is that we don't know that SHA-256 has a uniform distribution of outputs for any random set of inputs, but let's assume that SHA-256 does. The bigger problem for me is that ZFS had never before used checksums for equality comparison, and I just wanted to make sure that the fact that ZFS would now have one use case of checksums for equality comparison didn't happen by accident. Since the i-team has indicated that this design point is purposeful, I'm done. Nico --
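The collision arithmetic traded back and forth in this thread follows from the standard birthday bound. The sketch below is a back-of-the-envelope illustration, assuming uniformly distributed hash outputs; the block counts are illustrative examples, not the figures from the case materials:

```python
from math import log2

def collision_log2_prob(n_blocks: int, hash_bits: int = 256) -> float:
    """log2 of the approximate birthday-bound probability that any two
    of n_blocks distinct blocks collide under a hash_bits-bit hash.
    Uses p ~= n*(n-1) / 2^(hash_bits + 1), valid while p << 1."""
    return log2(n_blocks) + log2(n_blocks - 1) - (hash_bits + 1)

# One petabyte of 128 KB blocks: 2^50 bytes / 2^17 bytes per block = 2^33 blocks.
print(round(collision_log2_prob(2**33)))   # -191: astronomically unlikely
# Even at 2^60 blocks the exponent only climbs to about -137.
print(round(collision_log2_prob(2**60)))   # -137
```

Doubling the storage every year, as Garrett suggests, adds only about 2 to the exponent per year, which is why the discussion concludes that the hash width, not storage growth, is the controlling factor.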
LSARC/2009/545 - Gutenprint update
Still waiting on a +1. Thanks, John John Fischer wrote: All, I am sponsoring this case for Gowtham who is delivering the project into the SFW consolidation. I have set the time out for Thursday, October 15th, 2009, although I do not believe that this update needs that much time as the project team is simply supplying what is already in a FOSS project. The project team would like to make a delivery on Monday. The case directory contains this proposal and the FOSS check list. This project is an update to an earlier case (LSARC/2009/469) and incorporates 2 additional binaries and the required modules. The project will be delivering into a Minor release of Solaris. The additional utilities help generate PPD files for use with the CUPS framework. The modules supplied are necessary to work with various printer models. Thanks, John [Proposal attachment: http://mail.opensolaris.org/pipermail/opensolaris-arc/attachments/20091016/f06cb0d9/attachment-0001.ksh]
Update Apache HTTP Server to 2.2.14 [LSARC/2009/565 Self Review]
I am sponsoring this case for Seema Alevoor and closing it as approved automatic. It's a straightforward Apache version update. Template Version: @(#)sac_nextcase 1.68 02/23/09 SMI This information is Copyright 2009 Sun Microsystems 1. Introduction 1.1. Project/Component Working Name: Update Apache HTTP Server to 2.2.14 1.2. Name of Document Author/Supplier: Author: Seema Alevoor (seema.alevoor at sun.com) 1.3 Date of This Document: 16 October, 2009 2. Summary 2.1. Update Apache to 2.2.14 Apache has released the next stable version, 2.2.14; this project updates the integrated Apache version to 2.2.14. [1][2] This version includes a number of general bug fixes and a few security fixes. It also includes a new module, mod_proxy_scgi [4], which provides support for the SCGI protocol, version 1. The Apache 2.2.14 ChangeLog can be found here: http://www.apache.org/dist/httpd/CHANGES_2.2.14 3. Technical Description 3.1 New module, mod_proxy_scgi Apache HTTP Server version 2.2.14 includes a new module, mod_proxy_scgi [4]. It provides support for the SCGI protocol, version 1, and depends on the mod_proxy module. This module will not be enabled by default. This project will provide a sample configuration file, mod_proxy_scgi.conf, within the /etc/apache2/[version]/samples-conf.d directory. To enable this module, the user copies this sample file to the /etc/apache2/[version]/conf.d directory and updates the directives to specify the address of the SCGI application and the web server URI where it should be accessed. 3.2. New Configuration Directives The mod_proxy_scgi apache module introduces two new configuration directives, as listed below.
ProxySCGIInternalRedirect Directive: This directive enables the backend to internally redirect the gateway to a different URL. ProxySCGISendfile Directive: This directive enables the SCGI backend to let files be served directly by the gateway. More information on these directives can be found here: http://httpd.apache.org/docs/2.2/mod/mod_proxy_scgi.html 4. Interfaces 4.1. Exported Interfaces
NAME                       STABILITY    NOTES
ProxySCGIInternalRedirect  Uncommitted  mod_proxy_scgi configuration directive
ProxySCGISendfile          Uncommitted  mod_proxy_scgi configuration directive
5. References [1] http://httpd.apache.org/ [2] http://httpd.apache.org/download.cgi [3] http://www.apache.org/dist/httpd/CHANGES_2.2.14 [4] http://httpd.apache.org/docs/2.2/mod/mod_proxy_scgi.html [5] http://arc.opensolaris.org/caselog/LSARC/2009/020/ 6. Resources and Schedule 6.4. Steering Committee requested information 6.4.1. Consolidation C-team Name: sfw 6.5. ARC review type: Automatic 6.6. ARC Exposure: open
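The delivered sample file might look something like the following hypothetical sketch (the module paths, backend address, and URI prefix are placeholders for illustration, not taken from the actual deliverable):

```apache
# mod_proxy_scgi depends on mod_proxy; both must be loaded.
LoadModule proxy_module       libexec/mod_proxy.so
LoadModule proxy_scgi_module  libexec/mod_proxy_scgi.so

# Forward requests under /app/ to an SCGI application
# listening on localhost port 4000.
ProxyPass /app/ scgi://localhost:4000/
```

After copying the sample into /etc/apache2/[version]/conf.d, the administrator adjusts the host, port, and URI prefix to match the deployed SCGI application.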
ZFS send dedup [PSARC/2009/557 FastTrack timeout 10/21/2009]
+1, although it may be implied from other responses. On 10/13/09 01:25 PM, Matthew Ahrens wrote: Template Version: @(#)sac_nextcase 1.68 02/23/09 SMI This information is Copyright 2009 Sun Microsystems 1. Introduction 1.1. Project/Component Working Name: ZFS send dedup 1.2. Name of Document Author/Supplier: Author: Lori Alt 1.3 Date of This Document: 13 October, 2009 4. Technical Description This case requests micro/patch binding; new interfaces are Committed. -- - Rick Matthews email: Rick.Matthews at sun.com Sun Microsystems, Inc. phone: +1 (651) 554-1518 1270 Eagan Industrial Road phone (internal): 54418 Suite 160 fax: +1 (651) 554-1540 Eagan, MN 55121-1231 USA main: +1 (651) 554-1500 -
Tomcat6 example package and PID file [PSARC/2009/563 FastTrack timeout 10/22/2009]
On Fri, Oct 16, 2009 at 03:31:00PM +0200, Petr Sumbera wrote: Danek Duvall napsal(a): There's no way to get tomcat to start as root and setuid to webservd and/or drop all unnecessary privileges? Perhaps have the start method do the work? If not, then yeah, this is fine. Currently the Tomcat SMF manifest takes care of setting the 'webservd' credentials and adding the extra privilege 'net_privaddr'. I think it's not possible to do this later in the start method, i.e. to combine the 'su' command with 'ppriv'. Don't use su(1M) -- SMF does not log services in as their method_context users, which su(1M) would do for you here, rather inappropriately. Use pcred(1) and ppriv(1). Or better yet, keep things the way they are, don't bother with the PID file, and modify the PID file consumers to use SMF interfaces to find the service process contract and its members' PIDs. Nico --
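For reference, the credential handling Petr describes is the kind of thing an SMF manifest expresses in its method context, along these lines (a hypothetical fragment; the exec path and timeout are placeholders, not the actual Tomcat manifest):

```xml
<exec_method type='method' name='start'
    exec='/usr/tomcat6/bin/startup.sh' timeout_seconds='60'>
  <method_context>
    <method_credential user='webservd' group='webservd'
        privileges='basic,net_privaddr'/>
  </method_context>
</exec_method>
```

Because SMF applies the credential and privilege set before the start method runs, the method itself cannot easily regain root to do setup work first, which is the constraint Petr is pointing at.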
Provide minor private interface modifications to support mntfs [PSARC/2009/566 FastTrack timeout 10/23/2009]
Template Version: @(#)sac_nextcase 1.68 02/23/09 SMI This information is Copyright 2009 Sun Microsystems 1. Introduction 1.1. Project/Component Working Name: Provide minor private interface modifications to support mntfs 1.2. Name of Document Author/Supplier: Author: Robert Harris 1.3 Date of This Document: 16 October, 2009 4. Technical Description 1. Proposal: Provide minor private interface modifications to support mntfs. 2. The Problem: The contents of /etc/mnttab are created by mntfs on demand. mntfs parses the in-kernel mnttab structures to create a snapshot that can be used to satisfy subsequent calls to read() or ioctl(). The snapshot is stored by the kernel within the address space of the process that made the first call to read() or ioctl(). The enclosing mapping is removed from the calling process's address space by mntfs on last close(). The snapshot-in-userland design has a flaw: the kernel cannot determine whether or not a close() is a specific process's last if the vnode count is greater than 1. This is because there is no way to determine whether a count that is greater than one has originated from dup(), from fork() or from both. This means that mntfs is unable to ensure that every insertion of a mapping into a process's address space is paired with a corresponding deletion. Two specific manifestations are 6394241, in which a newly-execed process has an arbitrary range of its address space unmapped by mntfs, and 6813502, in which a process address space is entirely consumed by orphaned mappings left behind by mntfs. 3.
Solutions: The most obvious solution seemed, at first, to involve storing the snapshot data within the corresponding vnode, thereby allowing the existing file system infrastructure to free the resources when no longer required. This, however, was rejected on account of complications inherent in the unprivileged user's resulting ability to allocate and retain kernel memory. It was previously believed that there remained no alternative other than to abandon the use of snapshots in their current form. That approach would have necessitated a change to the behaviour of /etc/mnttab and its API, and resulted in an earlier PSARC case, 2009/352. Although case 2009/352 was approved, comments exchanged during its review have led to the design of a solution that retains all of the existing documented behaviour and yet has minimal consumption of kernel memory. This solution has been adopted as the preferred approach. Very briefly, the new proposal effects a snapshot by constructing a per-zone database that encapsulates the different states of the in-kernel mnttab that are visible to existing consumers. The database takes the form of a linked list in which every element represents an entry in /etc/mnttab and has a time of birth and a time of death. By providing appropriate time stamps to each element, a consumer need remember only the time at which its own view was created. This view, i.e. snapshot, can be generated on demand by walking through the database and extracting all elements that were born before, but that died after, the snapshot creation time. Elements are removed when they are no longer referenced by any existing consumer, and so, when there are no consumers, the database need not exist at all. 4. Impact: 4.1 Overview: This solution has some modest requirements. The database is maintained on a per-zone basis, and so the zone_t will acquire two new fields: a pointer to the database and a lock.
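The timestamped-database idea above can be illustrated with a small user-level model (illustrative names only, not the actual kernel code): each element records when its mount entry became visible and when it disappeared, and a snapshot at time t selects exactly the elements alive at t.

```python
ALIVE = float("inf")  # death time of an entry that is still mounted

class MntElem:
    """One /etc/mnttab entry with its birth and death timestamps."""
    def __init__(self, birth, mntpt, death=ALIVE):
        self.birth, self.death, self.mntpt = birth, death, mntpt

def snapshot(db, t):
    """Return the mount points visible to a consumer whose view was
    created at time t: born no later than t, died strictly after t."""
    return [e.mntpt for e in db if e.birth <= t < e.death]

db = [
    MntElem(10, "/"),
    MntElem(20, "/tmp", death=25),  # unmounted at t=25
    MntElem(30, "/var"),
]
print(snapshot(db, 22))  # ['/', '/tmp']
print(snapshot(db, 40))  # ['/', '/var']
```

A consumer remembers only its snapshot time, so two consumers with different times can walk the same list and each see a consistent, distinct view, which is what lets mntfs drop the per-process userland mappings entirely.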
Two new private ioctl() commands will be added, MNTIOC_GETEXTMNTENT and MNTIOC_GETMNTANY, to ensure that the getmntent(3C) family of functions can be serviced as efficiently as possible. More delicate is the need for every vfs_t present in the in-kernel mnttab to have a high-resolution time stamp indicating its time of creation (not its mount time). The vfs_t is unusual in that it is exposed to unbundled file systems, and is therefore considered dangerous to modify. To this end, following PSARC 2006/270, there now exists a vfs_impl_t, referenced by a vfs_t's vfs_implp, that is designed to accommodate additional fields that would otherwise occupy the vfs_t. As part of this change, the vfs_impl_t will acquire a new field: a high resolution time stamp. The new time stamp in the vfs_t's vfs_impl_t will be initialised in vfs_list_add(), a private function that inserts a
Tomcat6 example package and PID file [PSARC/2009/563 FastTrack timeout 10/22/2009]
On Fri, Oct 16, 2009 at 2:31 PM, Petr Sumbera Petr.Sumbera at sun.com wrote: The original idea was to deliver /var/run/tomcat6 directory via package so that tomcat could write into it. It was wrong.. As tomcat6 is started with 'webservd' credential it's not able to write into /var/run. Therefore I'm proposing to change pid file location to: /var/tomcat6/logs/pid (Committed, PID file) What if there are multiple instances of tomcat running? Don't you need per-instance locations? (On my systems, the pid file ends up in $CATALINA_BASE/logs, which seems the obvious place to put it. That's the same as above if CATALINA_BASE is /var/tomcat6, but allows for additional instances.) -- -Peter Tribble http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
acpihpd ACPI Hotplug Daemon [PSARC/2009/551 fast-track timeout 10/19/2009]
Mike, I'm working with Intel to answer your questions. Essentially we want to provide the least amount of access possible for this daemon to do its job. IIRC, my initial question had 3 parts: How does the project meet the SMF requirement for authorizations to manage? What is the Method Context used to start the service? The service is to be enabled only on the xxx platform - how is this done? I'd like to clarify the first part about authorizations. When we talked I may not have been complete. If there are no properties that configure the service, as in a property group of type application, there is no need for value authorizations to manage them. If the service is never intended to be enabled/disabled by the administrator (but always enabled/started automatically at boot time and never disabled), there is no need for action/value authorizations to manage the service. If both are true and there is no need for defining authorizations for the service, there is no need for a service-related Rights Profile. HTH, Gary.. For starting the daemon, I'm guessing that we'll have to create something similar to usr/src/cmd/svc/profile/platform_SUNW,SPARC-Enterprise.xml for these x86 machines, since we want the service to be enabled by default on the platforms that support it. Does anyone have any recommendation about who to talk to about how to get this done? Thanks, Mike On Tue, 2009-10-13 at 13:29 -0700, Gary Winiger wrote: The acpihpd is started and stopped using the standard Solaris service management facility. The acpihpd is an smf service, and will only be enabled on the platforms which support IOH/CPU/memory hot plug. How is the SMF usage policy met? http://opensolaris.org/os/community/arc/policies/SMF-policy Specifically the authorizations, what Rights Profile the authorizations will be contained in, method context, ... How will this be enabled? Is it enabled from platform.xml? Gary..
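A platform profile along the lines Mike mentions would enable the service only on matching platforms. The fragment below is a hypothetical sketch modeled on the SPARC-Enterprise profile cited above; the service name and profile name are placeholders, not the project's actual deliverable:

```xml
<service_bundle type='profile' name='platform'>
  <service name='system/acpihpd' version='1' type='service'>
    <instance name='default' enabled='true'/>
  </service>
</service_bundle>
```

At boot, SMF applies the profile matching the machine's platform name, so the daemon stays disabled everywhere the profile is not delivered.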
Update to PHP 5.2 to deliver additional features [LSARC/2009/564 FastTrack timeout 11/03/2009]
Jyri Virkki wrote: I am sponsoring this case for Sriram Natarajan; timeout set to 11/03/2009. FYI I won't be here for the next two ARC meetings, if other ARC members have a chance to review and +1 this it can get closed earlier. If it doesn't happen, let the case run to its timeout of 11/3. -- Jyri J. Virkki - jyri.virkki at sun.com - Sun Microsystems
OpenSSL RSA keys by reference in PKCS#11 keystores through the PKCS11 engine [PSARC/2009/555 FastTrack timeout 10/20/2009]
+0.75 a couple of questions below + OpenSSL can access RSA keys in PKCS#11 keystores using the + following functions of the ENGINE API: + + EVP_PKEY *ENGINE_load_private_key(ENGINE *e, + const char *key_id, UI_METHOD *ui_method, + void *callback_data) + + EVP_PKEY *ENGINE_load_public_key(ENGINE *e, + const char *key_id, UI_METHOD *ui_method, + void *callback_data) Given the semantics described in the case, these functions can fail for multiple reasons: bad argument, key not found, bad internal state (the engine hasn't initialized or hasn't authenticated to the token). Yet the return value can only be either NULL (failure) or non-NULL (a matching key was retrieved). It would be more helpful to give app developers some info as to the reason for the failure, so that they know what to do when the load function returns NULL. Possibly Missing: -- 1. Need to mention somewhere that the caller of the load functions is responsible for calling EVP_PKEY_free(). 2. Since the private parts of the on-token keys are never read by the engine, there is an implication for all OpenSSL access routines, like EVP_PKEY_copy_parameters(), EVP_PKEY_get1_RSA(), etc. They're all going to fail when the pkey arg comes from a token. Rather than chasing the dozens of functions that use RSA private keys in openssl, maybe it suffices to document that EVP_Decrypt() and EVP_PKEY_free() are the only routines that can use an RSA private key by reference. Kais.
Pass-through iconv code conversion [PSARC/2009/561 FastTrack timeout 10/21/2009]
+1 Kais