Re: Hurd Involvement

2014-05-12 Thread Thomas Schwinge
Hi Erick!

Posting my answers to your questions to the public mailing list, so that
others can join the discussions and/or learn from the answers given.

On Wed, 07 May 2014 23:49:19 -0500, Erick Mendoza wrote:
> [requirements for involvement]

There are no formal requirements other than an adequate copyright/licensing
status of your changes.

> [detailed knowledge needed to get involved]

That totally depends on what you'd like to do.

> [programming language]

Mostly C.

> [testing]

Other than some rare exceptions, all contributions have to be tested.
You can, for example, use a virtual machine for running a Hurd system,
such as QEMU/KVM or VirtualBox.
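For reference, a typical QEMU/KVM invocation for such a test setup might look like the following command-line sketch; the disk image file name is a placeholder, not a real artifact:

```shell
# Boot a GNU/Hurd disk image under QEMU with KVM acceleration.
# "hurd-test.img" is a hypothetical file name -- substitute your own image.
qemu-system-i386 -enable-kvm -m 1G \
  -drive file=hurd-test.img,format=raw,cache=writeback \
  -net nic -net user
```

With user-mode networking as above, the guest needs no special host configuration; adjust memory and drive options to taste.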


Regards,
 Thomas




[PATCH 05/11] libdiskfs: lock-less reference counting for peropen objects

2014-05-12 Thread Justus Winter
* libdiskfs/diskfs.h (struct peropen): Use refcount_t for field refcnt.
* libdiskfs/peropen-make.c (diskfs_make_peropen): Initialize refcnt.
* libdiskfs/peropen-rele.c (diskfs_release_peropen): Adjust accordingly.
* libdiskfs/protid-make.c (diskfs_start_protid): Likewise.  Also, the
node must no longer be locked, adjust comment accordingly.
(diskfs_create_protid): Likewise.
---
 libdiskfs/diskfs.h   |  7 ---
 libdiskfs/peropen-make.c |  2 +-
 libdiskfs/peropen-rele.c | 21 ++---
 libdiskfs/protid-make.c  |  8 
 4 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/libdiskfs/diskfs.h b/libdiskfs/diskfs.h
index 8151ddc..ae1a150 100644
--- a/libdiskfs/diskfs.h
+++ b/libdiskfs/diskfs.h
@@ -28,6 +28,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #ifdef DISKFS_DEFINE_EXTERN_INLINE
 #define DISKFS_EXTERN_INLINE
@@ -57,7 +58,7 @@ struct peropen
 {
   int filepointer;
   int lock_status;
-  int refcnt;
+  refcount_t refcnt;
   int openstat;
 
   struct node *np;
@@ -792,12 +793,12 @@ diskfs_create_node (struct node *dir, const char *name, 
mode_t mode,
struct dirstat *ds);
 
 /* Create and return a protid for an existing peropen PO in CRED,
-   referring to user USER.  The node PO->np must be locked. */
+   referring to user USER.  */
 error_t diskfs_create_protid (struct peropen *po, struct iouser *user,
  struct protid **cred);
 
 /* Build and return in CRED a protid which has no user identification, for
-   peropen PO.  The node PO->np must be locked.  */
+   peropen PO.  */
 error_t diskfs_start_protid (struct peropen *po, struct protid **cred);
 
 /* Finish building protid CRED started with diskfs_start_protid;
diff --git a/libdiskfs/peropen-make.c b/libdiskfs/peropen-make.c
index eba037f..6d5ca01 100644
--- a/libdiskfs/peropen-make.c
+++ b/libdiskfs/peropen-make.c
@@ -31,7 +31,7 @@ diskfs_make_peropen (struct node *np, int flags, struct 
peropen *context,
 
   po->filepointer = 0;
   po->lock_status = LOCK_UN;
-  po->refcnt = 0;
+  refcount_init (&po->refcnt, 0);
   po->openstat = flags;
   po->np = np;
   po->path = NULL;
diff --git a/libdiskfs/peropen-rele.c b/libdiskfs/peropen-rele.c
index d3f7492..877137b 100644
--- a/libdiskfs/peropen-rele.c
+++ b/libdiskfs/peropen-rele.c
@@ -22,13 +22,8 @@
 void
 diskfs_release_peropen (struct peropen *po)
 {
-  pthread_mutex_lock (&po->np->lock);
-
-  if (--po->refcnt)
-{
-  pthread_mutex_unlock (&po->np->lock);
-  return;
-}
+  if (refcount_deref (&po->refcnt) > 0)
+    return;
 
   if (po->root_parent)
 mach_port_deallocate (mach_task_self (), po->root_parent);
@@ -40,10 +35,14 @@ diskfs_release_peropen (struct peropen *po)
 mach_port_deallocate (mach_task_self (), po->shadow_root_parent);
 
   if (po->lock_status != LOCK_UN)
-fshelp_acquire_lock (&po->np->userlock, &po->lock_status,
-&po->np->lock, LOCK_UN);
-
-  diskfs_nput (po->np);
+    {
+      pthread_mutex_lock (&po->np->lock);
+      fshelp_acquire_lock (&po->np->userlock, &po->lock_status,
+                           &po->np->lock, LOCK_UN);
+      diskfs_nput (po->np);
+    }
+  else
+    diskfs_nrele (po->np);
 
   free (po->path);
   free (po);
diff --git a/libdiskfs/protid-make.c b/libdiskfs/protid-make.c
index b39b92a..22aaa2e 100644
--- a/libdiskfs/protid-make.c
+++ b/libdiskfs/protid-make.c
@@ -20,7 +20,7 @@
 #include 
 
 /* Build and return in CRED a protid which has no user identification, for
-   peropen PO.  The node PO->np must be locked.  */
+   peropen PO.  */
 error_t
 diskfs_start_protid (struct peropen *po, struct protid **cred)
 {
@@ -29,7 +29,7 @@ diskfs_start_protid (struct peropen *po, struct protid **cred)
 sizeof (struct protid), cred);
   if (! err)
 {
-  po->refcnt++;
+  refcount_ref (&po->refcnt);
   (*cred)->po = po;
   (*cred)->shared_object = MACH_PORT_NULL;
   (*cred)->mapped = 0;
@@ -55,8 +55,8 @@ diskfs_finish_protid (struct protid *cred, struct iouser 
*user)
   assert_perror (err);
 }
 
-/* Create and return a protid for an existing peropen PO in CRED for USER.
-   The node PO->np must be locked. */
+/* Create and return a protid for an existing peropen PO in CRED for
+   USER.  */
 error_t
 diskfs_create_protid (struct peropen *po, struct iouser *user,
  struct protid **cred)
-- 
2.0.0.rc0
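The essence of the change above can be modelled outside the Hurd tree. Below is a minimal, self-contained sketch of the pattern the patch introduces: the per-open reference count is maintained with atomic operations, no lock is taken just to drop a reference, and only the holder of the last reference runs the tear-down path. The types and the `freed` flag are stand-ins for illustration, not the real libdiskfs structures.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified counterparts of the refcount primitives from the series.  */
typedef unsigned int refcount_t;

static void
refcount_init (refcount_t *ref, unsigned int references)
{
  *ref = references;
}

static unsigned int
refcount_deref (refcount_t *ref)
{
  /* Atomically decrement and return the new value.  */
  return __atomic_sub_fetch (ref, 1, __ATOMIC_RELAXED);
}

/* A stand-in for struct peropen; FREED lets us observe destruction.  */
struct peropen
{
  refcount_t refcnt;
  int *freed;
};

/* Mirrors the shape of the patched diskfs_release_peropen: dropping a
   reference takes no lock; cleanup runs only for the last holder.  */
static void
release_peropen (struct peropen *po)
{
  if (refcount_deref (&po->refcnt) > 0)
    return;                     /* Someone else still holds a reference.  */

  *po->freed = 1;               /* Tear-down path (node locking elided).  */
  free (po);
}
```

The first release merely decrements; the second one, seeing the count hit zero, destroys the object.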




[PATCH 06/11] libtrivfs: lock-less reference counting for trivfs_peropen objects

2014-05-12 Thread Justus Winter
* libtrivfs/trivfs.h (struct trivfs_peropen): Use refcount_t for field
refcnt.
(struct trivfs_control): Remove unused field lock.
* libtrivfs/cntl-create.c (trivfs_create_control): Drop the mutex
initialization.
* libtrivfs/io-reauthenticate.c (trivfs_S_io_reauthenticate): Adjust
accordingly.
* libtrivfs/io-restrict-auth.c (trivfs_S_io_restrict_auth): Likewise.
* libtrivfs/open.c (trivfs_open): Initialize refcnt.
* libtrivfs/protid-clean.c (trivfs_clean_protid): Likewise.
* libtrivfs/protid-dup.c (trivfs_protid_dup): Likewise.
---
 libtrivfs/cntl-create.c   |  1 -
 libtrivfs/io-reauthenticate.c |  5 +
 libtrivfs/io-restrict-auth.c  |  4 +---
 libtrivfs/open.c  |  2 +-
 libtrivfs/protid-clean.c  | 26 +++---
 libtrivfs/protid-dup.c|  5 +
 libtrivfs/trivfs.h|  4 ++--
 7 files changed, 21 insertions(+), 26 deletions(-)

diff --git a/libtrivfs/cntl-create.c b/libtrivfs/cntl-create.c
index 910daf3..eb9a834 100644
--- a/libtrivfs/cntl-create.c
+++ b/libtrivfs/cntl-create.c
@@ -85,7 +85,6 @@ trivfs_create_control (mach_port_t underlying,
}
 
   (*control)->hook = 0;
-  pthread_mutex_init (&(*control)->lock, NULL);
 }
 
 out:
diff --git a/libtrivfs/io-reauthenticate.c b/libtrivfs/io-reauthenticate.c
index 7677697..df0ed2e 100644
--- a/libtrivfs/io-reauthenticate.c
+++ b/libtrivfs/io-reauthenticate.c
@@ -62,11 +62,8 @@ trivfs_S_io_reauthenticate (struct trivfs_protid *cred,
 newcred->isroot = 1;
 
   newcred->hook = cred->hook;
-
-  pthread_mutex_lock (&cred->po->cntl->lock);
   newcred->po = cred->po;
-  newcred->po->refcnt++;
-  pthread_mutex_unlock (&cred->po->cntl->lock);
+  refcount_ref (&newcred->po->refcnt);
 
   do
 err = io_restrict_auth (newcred->po->cntl->underlying, &newcred->realnode,
diff --git a/libtrivfs/io-restrict-auth.c b/libtrivfs/io-restrict-auth.c
index 65b4fd6..39670fe 100644
--- a/libtrivfs/io-restrict-auth.c
+++ b/libtrivfs/io-restrict-auth.c
@@ -110,10 +110,8 @@ trivfs_S_io_restrict_auth (struct trivfs_protid *cred,
 }
 
   newcred->isroot = 0;
-  pthread_mutex_lock (&cred->po->cntl->lock);
   newcred->po = cred->po;
-  newcred->po->refcnt++;
-  pthread_mutex_unlock (&cred->po->cntl->lock);
+  refcount_ref (&newcred->po->refcnt);
   if (cred->isroot && idvec_contains (user->uids, 0))
 newcred->isroot = 1;
   newcred->user = user;
diff --git a/libtrivfs/open.c b/libtrivfs/open.c
index f64d2ff..97e70a1 100644
--- a/libtrivfs/open.c
+++ b/libtrivfs/open.c
@@ -40,7 +40,7 @@ trivfs_open (struct trivfs_control *cntl,
 
   ports_port_ref (cntl);
 
-  po->refcnt = 1;
+  refcount_init (&po->refcnt, 1);
   po->cntl = cntl;
   po->openmodes = flags;
   po->hook = 0;
diff --git a/libtrivfs/protid-clean.c b/libtrivfs/protid-clean.c
index f98da6a..cce736d 100644
--- a/libtrivfs/protid-clean.c
+++ b/libtrivfs/protid-clean.c
@@ -31,19 +31,23 @@ trivfs_clean_protid (void *arg)
 (*trivfs_protid_destroy_hook) (cred);
 
   /* If we hold the only reference to the peropen, try to get rid of it. */
-  pthread_mutex_lock (&cntl->lock);
-  if (cred->po->refcnt == 1 && trivfs_peropen_destroy_hook)
+  if (refcount_deref (&cred->po->refcnt) == 0)
 {
-  pthread_mutex_unlock (&cntl->lock);
-  (*trivfs_peropen_destroy_hook) (cred->po);
-  pthread_mutex_lock (&cntl->lock);
+      if (trivfs_peropen_destroy_hook)
+        {
+          /* Reacquire a reference while we call the hook.  */
+          refcount_ref (&cred->po->refcnt);
+          (*trivfs_peropen_destroy_hook) (cred->po);
+          if (refcount_deref (&cred->po->refcnt) == 0)
+            goto free_po;
+        }
+      else
+        {
+        free_po:
+          ports_port_deref (cntl);
+          free (cred->po);
+        }
 }
-  if (--cred->po->refcnt == 0)
-{
-  ports_port_deref (cntl);
-  free (cred->po);
-}
-  pthread_mutex_unlock (&cntl->lock);
 
   iohelp_free_iouser (cred->user);
 
diff --git a/libtrivfs/protid-dup.c b/libtrivfs/protid-dup.c
index 6169603..75f3ca8 100644
--- a/libtrivfs/protid-dup.c
+++ b/libtrivfs/protid-dup.c
@@ -35,11 +35,8 @@ trivfs_protid_dup (struct trivfs_protid *cred, struct 
trivfs_protid **dup)
 
   if (! err)
 {
-  pthread_mutex_lock (&cred->po->cntl->lock);
   new->po = cred->po;
-  new->po->refcnt++;
-  pthread_mutex_unlock (&cred->po->cntl->lock);
-
+  refcount_ref (&new->po->refcnt);
   new->isroot = cred->isroot;
 
   err = iohelp_dup_iouser (&new->user, cred->user);
diff --git a/libtrivfs/trivfs.h b/libtrivfs/trivfs.h
index bb456ff..8902338 100644
--- a/libtrivfs/trivfs.h
+++ b/libtrivfs/trivfs.h
@@ -24,6 +24,7 @@
 #include 
 #include 
 #include 
+#include 
 
 struct trivfs_protid
 {
@@ -41,14 +42,13 @@ struct trivfs_peropen
 {
   void *hook;  /* for user use */
   int openmodes;
-  int refcnt;
+  refcount_t refcnt;
   struct trivfs_control *cntl;
 };
 
 struct trivfs_control
 {
   struct port_info pi;
-  pthread_mutex_t lock;
  

[PATCH 04/11] libports: lock-less reference counting for port_info objects

2014-05-12 Thread Justus Winter
* libports/ports.h (struct port_info): Use the new type.
* libports/lookup-port.c: No need to lock _ports_lock anymore.
* libports/bucket-iterate.c: Likewise.
* libports/complete-deallocate.c: Check if someone reacquired a
reference through a hash table lookup.
* libports/create-internal.c: Use the new reference counting primitives.
* libports/get-right.c: Likewise.
* libports/import-port.c: Likewise.
* libports/port-deref-weak.c: Likewise.
* libports/port-deref.c: Likewise.
* libports/port-ref-weak.c: Likewise.
* libports/port-ref.c: Likewise.
* libports/reallocate-from-external.c: Likewise.
* libports/transfer-right.c: Likewise.
* utils/rpctrace.c: Likewise.
---
 libports/bucket-iterate.c   |  4 +---
 libports/complete-deallocate.c  | 15 +++
 libports/create-internal.c  |  3 +--
 libports/get-right.c|  2 +-
 libports/import-port.c  |  3 +--
 libports/lookup-port.c  |  6 ++
 libports/port-deref-weak.c  | 10 +++---
 libports/port-deref.c   | 34 --
 libports/port-ref-weak.c|  8 +++-
 libports/port-ref.c |  8 +++-
 libports/ports.h|  4 ++--
 libports/reallocate-from-external.c |  2 +-
 libports/transfer-right.c   |  2 +-
 utils/rpctrace.c| 10 --
 14 files changed, 58 insertions(+), 53 deletions(-)

diff --git a/libports/bucket-iterate.c b/libports/bucket-iterate.c
index d5f590e..2a2b2b6 100644
--- a/libports/bucket-iterate.c
+++ b/libports/bucket-iterate.c
@@ -35,7 +35,6 @@ _ports_bucket_class_iterate (struct port_bucket *bucket,
   size_t i, n, nr_items;
   error_t err;
 
-  pthread_mutex_lock (&_ports_lock);
   pthread_mutex_lock (&_ports_htable_lock);
 
   if (_ports_htable.nr_items == 0)
@@ -60,13 +59,12 @@ _ports_bucket_class_iterate (struct port_bucket *bucket,
   if ((bucket == NULL || pi->bucket == bucket)
   && (class == NULL || pi->class == class))
{
- pi->refcnt++;
+ refcounts_ref (&pi->refcounts, NULL);
  p[n] = pi;
  n++;
}
 }
   pthread_mutex_unlock (&_ports_htable_lock);
-  pthread_mutex_unlock (&_ports_lock);
 
   if (n == 0)
 {
diff --git a/libports/complete-deallocate.c b/libports/complete-deallocate.c
index 1371d92..6c9074a 100644
--- a/libports/complete-deallocate.c
+++ b/libports/complete-deallocate.c
@@ -29,14 +29,29 @@ _ports_complete_deallocate (struct port_info *pi)
 
   if (pi->port_right)
 {
+      struct references result;
+
       pthread_mutex_lock (&_ports_htable_lock);
+
+      refcounts_references (&pi->refcounts, &result);
+      if (result.hard > 0 || result.weak > 0)
+        {
+          /* A reference was reacquired through a hash table lookup.
+             It's fine, we didn't touch anything yet.  */
+          pthread_mutex_unlock (&_ports_htable_lock);
+          return;
+        }
+
   hurd_ihash_locp_remove (&_ports_htable, pi->hentry);
   pthread_mutex_unlock (&_ports_htable_lock);
+
   mach_port_mod_refs (mach_task_self (), pi->port_right,
  MACH_PORT_RIGHT_RECEIVE, -1);
   pi->port_right = MACH_PORT_NULL;
 }
 
+  pthread_mutex_lock (&_ports_lock);
+
   pi->bucket->count--;
   pi->class->count--;
 
diff --git a/libports/create-internal.c b/libports/create-internal.c
index e773dd6..9e3824d 100644
--- a/libports/create-internal.c
+++ b/libports/create-internal.c
@@ -54,8 +54,7 @@ _ports_create_port_internal (struct port_class *class,
 }
 
   pi->class = class;
-  pi->refcnt = 1;
-  pi->weakrefcnt = 0;
+  refcounts_init (&pi->refcounts, 1, 0);
   pi->cancel_threshold = 0;
   pi->mscount = 0;
   pi->flags = 0;
diff --git a/libports/get-right.c b/libports/get-right.c
index 89050c6..8681f46 100644
--- a/libports/get-right.c
+++ b/libports/get-right.c
@@ -41,7 +41,7 @@ ports_get_right (void *port)
   if ((pi->flags & PORT_HAS_SENDRIGHTS) == 0)
 {
   pi->flags |= PORT_HAS_SENDRIGHTS;
-  pi->refcnt++;
+  refcounts_ref (&pi->refcounts, NULL);
   err = mach_port_request_notification (mach_task_self (),
pi->port_right,
MACH_NOTIFY_NO_SENDERS,
diff --git a/libports/import-port.c b/libports/import-port.c
index d0b2ea4..e250b0e 100644
--- a/libports/import-port.c
+++ b/libports/import-port.c
@@ -48,8 +48,7 @@ ports_import_port (struct port_class *class, struct 
port_bucket *bucket,
 return ENOMEM;
   
   pi->class = class;
-  pi->refcnt = 1 + !!stat.mps_srights;
-  pi->weakrefcnt = 0;
+  refcounts_init (&pi->refcounts, 1 + !!stat.mps_srights, 0);
   pi->cancel_threshold = 0;
   pi->mscount = stat.mps_mscount;
   pi->flags = stat.mps_srights ? PORT_HAS_SENDRIGHTS : 0;
diff --git a/libports/lookup-port.c b/libports/lookup-port.c
index fbb13ef..1bf012f 100644
--- a/libports/lookup-port.c
+++ b/libports/lookup-port.c
@@ -28,7 +28,6 @@ 

[PATCH 01/11] ext2fs: cache the superblock

2014-05-12 Thread Justus Winter
Previously, the superblock was mmaped and a pointer stored in sblock
by map_hypermetadata.  This memory is backed by our disk pager.

This is rather unfortunate, as this means that whenever we read a
value from that location, we might generate a request to our disk pager.
This amplifies the so-called thread-storm problem.

Rather than relying on a mmaped region of memory, just use the data
loaded by get_hypermetadata.

* ext2fs/hyper.c (get_hypermetadata): Do not free sblock.
(mapped_sblock): New variable.
(map_hypermetadata): Map the superblock to mapped_sblock instead.
(diskfs_set_hypermetadata): Copy the superblock into mapped_sblock.
---
 ext2fs/hyper.c | 19 +++
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/ext2fs/hyper.c b/ext2fs/hyper.c
index 5bcc2ab..5f288bf 100644
--- a/ext2fs/hyper.c
+++ b/ext2fs/hyper.c
@@ -61,7 +61,9 @@ get_hypermetadata (void)
   error_t err;
   size_t read = 0;
 
-  assert (! sblock);
+  if (sblock != NULL)
+    munmap (sblock, SBLOCK_SIZE);
+
   err = store_read (store, SBLOCK_OFFS >> store->log2_block_size,
SBLOCK_SIZE, (void **)&sblock, &read);
   if (err || read != SBLOCK_SIZE)
@@ -161,19 +163,19 @@ get_hypermetadata (void)
   zeroblock = (vm_address_t) mmap (0, block_size, PROT_READ, MAP_ANON, 0, 
0);
   assert (zeroblock != (vm_address_t) MAP_FAILED);
 }
-
-  munmap (sblock, SBLOCK_SIZE);
-  sblock = NULL;
 }
 
+static struct ext2_super_block *mapped_sblock;
+
 void
 map_hypermetadata (void)
 {
-  sblock = (struct ext2_super_block *) boffs_ptr (SBLOCK_OFFS);
+  mapped_sblock = (struct ext2_super_block *) boffs_ptr (SBLOCK_OFFS);
 
   /* Cache a convenient pointer to the block group descriptors for allocation.
  These are stored in the filesystem blocks following the superblock.  */
-  group_desc_image = (struct ext2_group_desc *) bptr (bptr_block (sblock) + 1);
+  group_desc_image =
+    (struct ext2_group_desc *) bptr (bptr_block (mapped_sblock) + 1);
 }
 
 error_t
@@ -196,8 +198,9 @@ diskfs_set_hypermetadata (int wait, int clean)
  if (sblock_dirty)
{
  sblock_dirty = 0;
- disk_cache_block_ref_ptr (sblock);
- record_global_poke (sblock);
+ memcpy (mapped_sblock, sblock, SBLOCK_SIZE);
+ disk_cache_block_ref_ptr (mapped_sblock);
+ record_global_poke (mapped_sblock);
}
 
   sync_global (wait);
-- 
2.0.0.rc0
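The caching scheme is easy to model in isolation: all reads go to a private copy of the superblock, and the pager-backed mapping is only touched when dirty data is written back at sync time. A hypothetical miniature of that split follows; the names and the one-byte "mutation" are invented for illustration and are not the ext2fs code.

```c
#include <assert.h>
#include <string.h>

#define SBLOCK_SIZE 16          /* Toy size; the real superblock is larger.  */

/* Private copy used for all reads; reading it can never fault.  */
static unsigned char cached_sblock[SBLOCK_SIZE];
/* Stand-in for the mmap'd, disk-pager-backed superblock.  */
static unsigned char mapped_sblock[SBLOCK_SIZE];
static int sblock_dirty;

/* Mutate only the cached copy and mark it dirty.  */
static void
set_sblock_byte (unsigned char b)
{
  cached_sblock[0] = b;
  sblock_dirty = 1;
}

/* Mirrors the shape of diskfs_set_hypermetadata: propagate the cached
   copy to the mapped region only when syncing.  */
static void
sync_sblock (void)
{
  if (sblock_dirty)
    {
      sblock_dirty = 0;
      memcpy (mapped_sblock, cached_sblock, SBLOCK_SIZE);
    }
}
```

Between syncs, no access ever goes through the mapped region, which is exactly what keeps the disk pager out of the read path.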




[PATCH 10/11] ext2fs: improve {enable,disable}_caching

2014-05-12 Thread Justus Winter
* ext2fs/pager.c (enable_caching, disable_caching): Iterate over the
pager class instead of over both pager buckets.
---
 ext2fs/pager.c | 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/ext2fs/pager.c b/ext2fs/pager.c
index 017efcc..6328f3b 100644
--- a/ext2fs/pager.c
+++ b/ext2fs/pager.c
@@ -1409,8 +1409,7 @@ disable_caching ()
 
   /* Loop through the pagers and turn off caching one by one,
  synchronously.  That should cause termination of each pager. */
-  ports_bucket_iterate (disk_pager_bucket, block_cache);
-  ports_bucket_iterate (file_pager_bucket, block_cache);
+  ports_class_iterate (_pager_class, block_cache);
 }
 
 static void
@@ -1438,8 +1437,7 @@ enable_caching ()
   return 0;
 }
 
-  ports_bucket_iterate (disk_pager_bucket, enable_cache);
-  ports_bucket_iterate (file_pager_bucket, enable_cache);
+  ports_class_iterate (_pager_class, enable_cache);
 }
 
 /* Tell diskfs if there are pagers exported, and if none, then
-- 
2.0.0.rc0




[PATCH 02/11] libports: use a single hash table

2014-05-12 Thread Justus Winter
Previously, libports used a hash table per port bucket.  This makes
looking up a port difficult if one does not know the port bucket, as
one has to iterate over all buckets and do a hash table lookup in each.

Having to iterate over the buckets makes it necessary to keep a list
of all buckets, which has to be updated and protected by a lock as
well.

Also, the current code in _ports_bucket_class_iterate iterates over
the hash table associated with the bucket given.  When
ports_class_iterate calls this common function, it obtains a reference
to the bucket from one of the ports in the given class.  This will not
work if a class contains ports in different port buckets.  This
limitation is not documented as far as I can see.  Again, having to
maintain this list has its cost and requires serialization.

Use a single hash table instead.  Furthermore, serialize access to it
using a separate lock.  Remove the linked lists of all buckets and all
ports in a class.

* libports/bucket-iterate.c (ports_bucket_iterate): Acquire
_ports_htable_lock.  Also, generalize ports_bucket_iterate to treat
the bucket argument the same way as the class argument.
* libports/class-iterate.c (ports_class_iterate): Just call the
generalized _ports_bucket_class_iterate with NULL as class.
* libports/ports.h (struct port_info): Remove the port class links.
(struct port_bucket): Remove the hash table, and the all buckets link.
(_ports_all_buckets): Remove declaration.
(_ports_htable): New global hash table.
(_ports_htable_lock): Protected by this lock.
* libports/claim-right.c: Adjust accordingly.
* libports/complete-deallocate.c: Likewise.
* libports/create-bucket.c: Likewise.
* libports/create-class.c: Likewise.
* libports/create-internal.c: Likewise.
* libports/destroy-right.c: Likewise.
* libports/import-port.c: Likewise.
* libports/lookup-port.c: Likewise.
* libports/reallocate-from-external.c: Likewise.
* libports/reallocate-port.c: Likewise.
* libports/transfer-right.c: Likewise.
* libports/inhibit-all-rpcs.c: Iterate over the hash table.
* libports/inhibit-bucket-rpcs.c: Likewise, but filter using bucket.
* libports/inhibit-class-rpcs.c: Likewise, but filter using class.
* libports/init.c (_ports_htable): Initialize.
(_ports_htable_lock): Likewise.
---
 libports/bucket-iterate.c   | 21 +++--
 libports/claim-right.c  |  6 --
 libports/class-iterate.c| 10 +-
 libports/complete-deallocate.c  |  8 +++-
 libports/create-bucket.c|  7 ---
 libports/create-class.c |  1 -
 libports/create-internal.c  |  9 +++--
 libports/destroy-right.c|  6 +++---
 libports/import-port.c  |  9 +++--
 libports/inhibit-all-rpcs.c | 27 +--
 libports/inhibit-bucket-rpcs.c  |  7 +--
 libports/inhibit-class-rpcs.c   | 27 ++-
 libports/init.c |  7 ++-
 libports/lookup-port.c  | 21 -
 libports/ports.h| 11 ++-
 libports/reallocate-from-external.c | 16 ++--
 libports/reallocate-port.c  | 10 +++---
 libports/transfer-right.c   | 20 +---
 18 files changed, 118 insertions(+), 105 deletions(-)

diff --git a/libports/bucket-iterate.c b/libports/bucket-iterate.c
index babc204..d5f590e 100644
--- a/libports/bucket-iterate.c
+++ b/libports/bucket-iterate.c
@@ -36,35 +36,44 @@ _ports_bucket_class_iterate (struct port_bucket *bucket,
   error_t err;
 
   pthread_mutex_lock (&_ports_lock);
+  pthread_mutex_lock (&_ports_htable_lock);
 
-  if (bucket->htable.nr_items == 0)
+  if (_ports_htable.nr_items == 0)
 {
-  pthread_mutex_unlock (&_ports_lock);
+  pthread_mutex_unlock (&_ports_htable_lock);
   return 0;
 }
 
-  nr_items = bucket->htable.nr_items;
+  nr_items = _ports_htable.nr_items;
   p = malloc (nr_items * sizeof *p);
   if (p == NULL)
 {
-  pthread_mutex_unlock (&_ports_lock);
+  pthread_mutex_unlock (&_ports_htable_lock);
   return ENOMEM;
 }
 
   n = 0;
-  HURD_IHASH_ITERATE (&bucket->htable, arg)
+  HURD_IHASH_ITERATE (&_ports_htable, arg)
 {
   struct port_info *const pi = arg;
 
-  if (class == 0 || pi->class == class)
+  if ((bucket == NULL || pi->bucket == bucket)
+  && (class == NULL || pi->class == class))
{
  pi->refcnt++;
  p[n] = pi;
  n++;
}
 }
+  pthread_mutex_unlock (&_ports_htable_lock);
   pthread_mutex_unlock (&_ports_lock);
 
+  if (n == 0)
+    {
+      free (p);
+      return 0;
+    }
+
   if (n != nr_items)
 {
   /* We allocated too much.  Release unused memory.  */
diff --git a/libports/claim-right.c b/libports/claim-right.c
index 4851ea3..f453a82 100644
--- a/libports/claim-right.c
+++ b/libports/claim-right.c
@@ -34,10 +34,12 @@ ports_claim_right (void *portstruct)
   if (ret == MACH_PORT_NULL)
 re
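The generalized filter that lets one function serve both `ports_bucket_iterate` and `ports_class_iterate` boils down to two wildcard tests. Here is a sketch with a plain array standing in for the single hash table; the struct definitions are simplified stand-ins, not the libports types.

```c
#include <assert.h>
#include <stddef.h>

struct port_bucket { int unused; };
struct port_class  { int unused; };

struct port_info
{
  struct port_bucket *bucket;
  struct port_class *class;
};

/* Count the entries of TABLE (N items) matching the filter used by the
   patched _ports_bucket_class_iterate: a NULL bucket or class acts as a
   wildcard, so the same walk serves bucket iteration, class iteration,
   and any combination of the two.  */
static size_t
count_matching (const struct port_info *table, size_t n,
                const struct port_bucket *bucket,
                const struct port_class *class)
{
  size_t i, matches = 0;

  for (i = 0; i < n; i++)
    if ((bucket == NULL || table[i].bucket == bucket)
        && (class == NULL || table[i].class == class))
      matches++;
  return matches;
}
```

In the real code each match additionally takes a reference on the port before the table lock is released, so the caller can safely use the collected ports afterwards.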

[PATCH 07/11] libihash: use an integer hash function on the keys

2014-05-12 Thread Justus Winter
Use an integer hash function to derive the index from the key.  This
should reduce the number of collisions.

* libihash/ihash.c (hash_int32): New function.
(find_index): Use hash_int32 on the key to derive the index.
(add_one): Likewise.
---
 libihash/ihash.c | 23 +--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/libihash/ihash.c b/libihash/ihash.c
index d670fee..1de4c35 100644
--- a/libihash/ihash.c
+++ b/libihash/ihash.c
@@ -81,6 +81,25 @@ static const unsigned int ihash_nsizes = (sizeof ihash_sizes
  / sizeof ihash_sizes[0]);
 
 
+/* Integer hashing follows Thomas Wang's paper about his 32/64-bits
+   mix functions :
+   -  http://www.concentric.net/~Ttwang/tech/inthash.htm  */
+static inline uint32_t
+hash_int32(uint32_t n, unsigned int bits)
+{
+  uint32_t hash;
+
+  hash = n;
+  hash = ~hash + (hash << 15);
+  hash ^= (hash >> 12);
+  hash += (hash << 2);
+  hash ^= (hash >> 4);
+  hash += (hash << 3) + (hash << 11);
+  hash ^= (hash >> 16);
+
+  return hash >> (32 - bits);
+}
+
 /* Return 1 if the slot with the index IDX in the hash table HT is
empty, and 0 otherwise.  */
 static inline int
@@ -111,7 +130,7 @@ find_index (hurd_ihash_t ht, hurd_ihash_key_t key)
   unsigned int up_idx;
   unsigned int down_idx;
 
-  idx = key % ht->size;
+  idx = hash_int32 (key, 32) % ht->size;
 
   if (ht->items[idx].value == _HURD_IHASH_EMPTY || ht->items[idx].key == key)
 return idx;
@@ -264,7 +283,7 @@ add_one (hurd_ihash_t ht, hurd_ihash_key_t key, 
hurd_ihash_value_t value)
   unsigned int idx;
   unsigned int first_free;
 
-  idx = key % ht->size;
+  idx = hash_int32 (key, 32) % ht->size;
   first_free = idx;
 
   if (ht->items[idx].value != _HURD_IHASH_EMPTY && ht->items[idx].key != key)
-- 
2.0.0.rc0
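The mix function from the patch can be lifted out and exercised on its own. The copy below is self-contained; it follows the diff above, with BITS assumed to be between 1 and 32.

```c
#include <assert.h>
#include <stdint.h>

/* Thomas Wang's 32-bit integer mix function, as added to libihash.
   BITS (1..32) selects how many of the mixed high-order bits to keep;
   the shift-and-add/xor cascade spreads changes in any input bit over
   the whole word.  */
static inline uint32_t
hash_int32 (uint32_t n, unsigned int bits)
{
  uint32_t hash = n;

  hash = ~hash + (hash << 15);
  hash ^= (hash >> 12);
  hash += (hash << 2);
  hash ^= (hash >> 4);
  hash += (hash << 3) + (hash << 11);
  hash ^= (hash >> 16);

  return hash >> (32 - bits);
}
```

Note that with bits == 32 the final shift is by zero, so the full mixed word is returned, which is how the patch calls it before the `% ht->size` reduction.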




[PATCH 11/11] fatfs: improve {enable,disable}_caching

2014-05-12 Thread Justus Winter
* fatfs/pager.c (enable_caching, disable_caching): Iterate over the
pager class instead of over both pager buckets.
---
 fatfs/pager.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/fatfs/pager.c b/fatfs/pager.c
index f855ecf..7aa5c5e 100644
--- a/fatfs/pager.c
+++ b/fatfs/pager.c
@@ -23,6 +23,9 @@
 #include 
 #include "fatfs.h"
 
+/* XXX */
+#include "../libpager/priv.h"
+
 /* A ports bucket to hold disk pager ports.  */
 struct port_bucket *disk_pager_bucket;
 
@@ -963,8 +966,7 @@ disable_caching ()
 
   /* Loop through the pagers and turn off caching one by one,
  synchronously.  That should cause termination of each pager.  */
-  ports_bucket_iterate (disk_pager_bucket, block_cache);
-  ports_bucket_iterate (file_pager_bucket, block_cache);
+  ports_class_iterate (_pager_class, block_cache);
 }
  
 static void
@@ -992,8 +994,7 @@ enable_caching ()
   return 0;
 }
 
-  ports_bucket_iterate (disk_pager_bucket, enable_cache);
-  ports_bucket_iterate (file_pager_bucket, enable_cache);
+  ports_class_iterate (_pager_class, enable_cache);
 }

 /* Tell diskfs if there are pagers exported, and if none, then
-- 
2.0.0.rc0




[PATCH 09/11] libihash: use linear probing and fast modulo operation

2014-05-12 Thread Justus Winter
libihash uses open addressing.  Previously, quadratic probing in both
directions was used to resolve collisions.  Quadratic probing might
result in a less efficient use of caches.

Also, prime numbers of the form 4 * i + 3 were used as array sizes.
This was used in combination with the integer modulo operation for
hashing.  It has been known for some time that libihash suffers from
collisions, so an integer hash function is now applied to the keys to
derive the index.

Use linear probing instead.  Also, use powers of two for the array
sizes, so a bit mask can be used for the modulo operation.

* libihash/ihash.c (ihash_sizes, ihash_nsizes): Remove.
(find_index): Use linear probing and fast modulo operation.
(add_one): Likewise.
* libihash/ihash.h (HURD_IHASH_MIN_SIZE): New macro.
---
 libihash/ihash.c | 121 ++-
 libihash/ihash.h |   4 ++
 2 files changed, 17 insertions(+), 108 deletions(-)

diff --git a/libihash/ihash.c b/libihash/ihash.c
index 1de4c35..e74a2c5 100644
--- a/libihash/ihash.c
+++ b/libihash/ihash.c
@@ -31,55 +31,6 @@
 #include 
 
 #include "ihash.h"
-
-
-/* The prime numbers of the form 4 * i + 3 for some i, all greater
-   than twice the previous one and smaller than 2^40 (for now).  */
-static const uint64_t ihash_sizes[] =
-{
-  3,
-  7,
-  19,
-  43,
-  103,
-  211,
-  431,
-  863,
-  1747,
-  3499,
-  7019,
-  14051,
-  28111,
-  56239,
-  112507,
-  225023,
-  450067,
-  900139,
-  1800311,
-  3600659,
-  7201351,
-  14402743,
-  28805519,
-  57611039,
-  115222091,
-  230444239,
-  460888499,
-  921777067,
-  1843554151,
-  UINT64_C (3687108307),
-  UINT64_C (7374216631),
-  UINT64_C (14748433279),
-  UINT64_C (29496866579),
-  UINT64_C (58993733159),
-  UINT64_C (117987466379),
-  UINT64_C (235974932759),
-  UINT64_C (471949865531),
-  UINT64_C (943899731087)
-};
-
-static const unsigned int ihash_nsizes = (sizeof ihash_sizes
- / sizeof ihash_sizes[0]);
-
 
 /* Integer hashing follows Thomas Wang's paper about his 32/64-bits
mix functions :
@@ -126,40 +77,24 @@ static inline int
 find_index (hurd_ihash_t ht, hurd_ihash_key_t key)
 {
   unsigned int idx;
-  unsigned int i;
   unsigned int up_idx;
-  unsigned int down_idx;
+  unsigned int mask = ht->size - 1;
 
-  idx = hash_int32 (key, 32) % ht->size;
+  idx = hash_int32 (key, 32) & mask;
 
   if (ht->items[idx].value == _HURD_IHASH_EMPTY || ht->items[idx].key == key)
 return idx;
 
-  /* Instead of calculating idx + 1, idx + 4, idx + 9, ..., idx + i^2,
- we add 1, 3, 5, 7, etc to the previous index.  We do this in both
- directions separately.  */
-  i = 1;
   up_idx = idx;
-  down_idx = idx;
 
   do
 {
-  up_idx = (up_idx + i) % ht->size;
+  up_idx = (up_idx + 1) & mask;
   if (ht->items[up_idx].value == _HURD_IHASH_EMPTY
  || ht->items[up_idx].key == key)
return up_idx;
-
-  if (down_idx < i)
-   down_idx += ht->size;
-  down_idx = (down_idx - i) % ht->size;
-  if (ht->items[down_idx].value == _HURD_IHASH_EMPTY
- || ht->items[down_idx].key == key)
-   return down_idx;
-
-  /* After (ht->size - 1) / 2 iterations, this will be 0.  */
-  i = (i + 2) % ht->size;
 }
-  while (i);
+  while (up_idx != idx);
 
   /* If we end up here, the item could not be found.  Return any
  invalid index.  */
@@ -282,53 +217,26 @@ add_one (hurd_ihash_t ht, hurd_ihash_key_t key, 
hurd_ihash_value_t value)
 {
   unsigned int idx;
   unsigned int first_free;
+  unsigned int mask = ht->size - 1;
 
-  idx = hash_int32 (key, 32) % ht->size;
+  idx = hash_int32 (key, 32) & mask;
   first_free = idx;
 
   if (ht->items[idx].value != _HURD_IHASH_EMPTY && ht->items[idx].key != key)
 {
-  /* Instead of calculating idx + 1, idx + 4, idx + 9, ..., idx +
- i^2, we add 1, 3, 5, 7, ... 2 * i - 1 to the previous index.
- We do this in both directions separately.  */
-  unsigned int i = 1;
   unsigned int up_idx = idx;
-  unsigned int down_idx = idx;
  
   do
{
- up_idx = (up_idx + i) % ht->size;
+up_idx = (up_idx + 1) & mask;
  if (ht->items[up_idx].value == _HURD_IHASH_EMPTY
  || ht->items[up_idx].key == key)
{
  idx = up_idx;
  break;
}
- if (first_free == idx
- && ht->items[up_idx].value == _HURD_IHASH_DELETED)
-   first_free = up_idx;
-
- if (down_idx < i)
-   down_idx += ht->size;
- down_idx = (down_idx - i) % ht->size;
- if (down_idx < 0)
-   down_idx += ht->size;
- else
-   down_idx %= ht->size;
- if (ht->items[down_idx].value == _HURD_IHASH_EMPTY
- || ht->items[down_idx].key == key)
-   {
- idx = down_idx;
- break;
-   }
- if (first_free == idx
- && ht->items[down_idx].value == _HURD_IHASH_
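Under the patch's assumptions (a power-of-two table size, so `& (size - 1)` replaces the modulo), the probe loop reduces to stepping one slot at a time with a mask. The following toy table illustrates that; it is a sketch with an identity "hash" and an invented empty-slot convention, not the libihash API, and it assumes the table is never completely full.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define TABLE_SIZE 8            /* Must be a power of two.  */
#define EMPTY 0                 /* Toy convention: value 0 means free.  */

struct item { uint32_t key; uint32_t value; };

/* Toy identity "hash"; libihash feeds the key through hash_int32 first.  */
static uint32_t toy_hash (uint32_t key) { return key; }

/* Linear probing: start at the hashed slot, step by one, wrap with a
   mask.  Returns the slot holding KEY, or the first free slot where it
   could be stored.  Successive probes touch adjacent memory, which is
   the cache-friendliness argument made above.  */
static unsigned int
probe (const struct item *table, uint32_t key)
{
  unsigned int mask = TABLE_SIZE - 1;
  unsigned int idx = toy_hash (key) & mask;

  while (table[idx].value != EMPTY && table[idx].key != key)
    idx = (idx + 1) & mask;     /* The successor slot, not idx + i^2.  */
  return idx;
}
```

Keys 9 and 17 collide in an 8-slot table (both map to slot 1), so the second one lands in the next free slot and is still found afterwards.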

[PATCH 08/11] libihash: reduce the default maximum load factor to 75%

2014-05-12 Thread Justus Winter
* libihash/ihash.h (HURD_IHASH_MAX_LOAD_DEFAULT): Set to 75.
---
 libihash/ihash.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libihash/ihash.h b/libihash/ihash.h
index 3ca5ec3..6bdc925 100644
--- a/libihash/ihash.h
+++ b/libihash/ihash.h
@@ -94,7 +94,7 @@ typedef struct hurd_ihash *hurd_ihash_t;
 /* Construction and destruction of hash tables.  */
 
 /* The default value for the maximum load factor in percent.  */
-#define HURD_IHASH_MAX_LOAD_DEFAULT 80
+#define HURD_IHASH_MAX_LOAD_DEFAULT 75
 
 /* The LOCP_OFFS to use if no location pointer is available.  */
 #define HURD_IHASH_NO_LOCP INTPTR_MIN
-- 
2.0.0.rc0
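Some motivation for 75 rather than 80: with linear probing (introduced elsewhere in this series) the expected probe counts grow steeply with the load factor α. Knuth's classic estimates are roughly ½(1 + 1/(1−α)) probes for a successful search and ½(1 + 1/(1−α)²) for an unsuccessful one. These are textbook formulas, not code from the patch; the sketch below just evaluates them.

```c
#include <assert.h>

/* Expected probes for a successful lookup under linear probing,
   per Knuth's approximation: (1 + 1/(1 - alpha)) / 2.  */
static double
probes_success (double alpha)
{
  return 0.5 * (1.0 + 1.0 / (1.0 - alpha));
}

/* Expected probes for an unsuccessful lookup (i.e. an insertion),
   per Knuth: (1 + 1/(1 - alpha)^2) / 2.  */
static double
probes_fail (double alpha)
{
  double d = 1.0 - alpha;

  return 0.5 * (1.0 + 1.0 / (d * d));
}
```

At α = 0.80 an unsuccessful search averages about 13 probes, while at α = 0.75 it is about 8.5, so the slightly lower ceiling buys a noticeably shorter worst-ish case at the cost of a somewhat larger table.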




[PATCH 03/11] include: add lock-less reference counting primitives

2014-05-12 Thread Justus Winter
* include/refcount.h: New file.
---
 include/refcount.h | 174 +
 1 file changed, 174 insertions(+)
 create mode 100644 include/refcount.h

diff --git a/include/refcount.h b/include/refcount.h
new file mode 100644
index 000..0a9ed8e
--- /dev/null
+++ b/include/refcount.h
@@ -0,0 +1,174 @@
+/* Lock-less reference counting primitives
+
+   Copyright (C) 2014 Free Software Foundation, Inc.
+
+   Written by Justus Winter <4win...@informatik.uni-hamburg.de>
+
+   This file is part of the GNU Hurd.
+
+   The GNU Hurd is free software; you can redistribute it and/or
+   modify it under the terms of the GNU General Public License as
+   published by the Free Software Foundation; either version 2, or (at
+   your option) any later version.
+
+   The GNU Hurd is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with the GNU Hurd.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#ifndef _HURD_REFCOUNT_H_
+#define _HURD_REFCOUNT_H_
+
+#include <stdint.h>
+
+/* Simple reference counting.  */
+
+/* An opaque type.  You must not access these values directly.  */
+typedef unsigned int refcount_t;
+
+/* Initialize REF with REFERENCES.  */
+static inline void
+refcount_init (refcount_t *ref, unsigned int references)
+{
+  *ref = references;
+}
+
+/* Increment REF.  Return the result of the operation.  This function
+   uses atomic operations.  It is not required to serialize calls to
+   this function.  */
+static inline unsigned int
+refcount_ref (refcount_t *ref)
+{
+  return __atomic_add_fetch (ref, 1, __ATOMIC_RELAXED);
+}
+
+/* Decrement REF.  Return the result of the operation.  This function
+   uses atomic operations.  It is not required to serialize calls to
+   this function.  */
+static inline unsigned int
+refcount_deref (refcount_t *ref)
+{
+  return __atomic_sub_fetch (ref, 1, __ATOMIC_RELAXED);
+}
+
+/* Return REF.  This function uses atomic operations.  It is not
+   required to serialize calls to this function.  */
+static inline unsigned int
+refcount_references (refcount_t *ref)
+{
+  return __atomic_load_n (ref, __ATOMIC_RELAXED);
+}
+
+/* Reference counting with weak references.  */
+
+/* An opaque type.  You must not access these values directly.  */
+typedef uint64_t refcounts_t;
+
+/* Instead, the functions manipulating refcounts_t values write the
+   results into this kind of objects.  */
+struct references {
+  uint32_t hard;
+  uint32_t weak;
+};
+
+union _references {
+  struct references refs;
+  refcounts_t rc;
+};
+
+/* Initialize REF with HARD and WEAK references.  */
+static inline void
+refcounts_init (refcounts_t *ref, uint32_t hard, uint32_t weak)
+{
+  ((union _references *) ref)->refs =
+(struct references) { .hard = hard, .weak = weak };
+}
+
+/* Increment the hard reference count of REF.  If RESULT is not NULL,
+   the result of the operation is written there.  This function uses
+   atomic operations.  It is not required to serialize calls to this
+   function.  */
+static inline void
+refcounts_ref (refcounts_t *ref, struct references *result)
+{
+  const union _references op = { .refs = { .hard = 1 } };
+  refcounts_t r = __atomic_add_fetch (ref, op.rc, __ATOMIC_RELAXED);
+  if (result)
+((union _references *) result)->rc = r;
+}
+
+/* Decrement the hard reference count of REF.  If RESULT is not NULL,
+   the result of the operation is written there.  This function uses
+   atomic operations.  It is not required to serialize calls to this
+   function.  */
+static inline void
+refcounts_deref (refcounts_t *ref, struct references *result)
+{
+  const union _references op = { .refs = { .hard = 1 } };
+  refcounts_t r = __atomic_sub_fetch (ref, op.rc, __ATOMIC_RELAXED);
+  if (result)
+((union _references *) result)->rc = r;
+}
+
+/* Increment the weak reference count of REF.  If RESULT is not NULL,
+   the result of the operation is written there.  This function uses
+   atomic operations.  It is not required to serialize calls to this
+   function.  */
+static inline void
+refcounts_ref_weak (refcounts_t *ref, struct references *result)
+{
+  const union _references op = { .refs = { .weak = 1 } };
+  refcounts_t r = __atomic_add_fetch (ref, op.rc, __ATOMIC_RELAXED);
+  if (result)
+((union _references *) result)->rc = r;
+}
+
+/* Decrement the weak reference count of REF.  If RESULT is not NULL,
+   the result of the operation is written there.  This function uses
+   atomic operations.  It is not required to serialize calls to this
+   function.  */
+static inline void
+refcounts_deref_weak (refcounts_t *ref, struct references *result)
+{
+  const union _references op = { .refs = { .weak = 1 } };
+  refcounts_t r = __atomic_sub_fetch (ref, op.rc, __ATOMIC_RELAXED);

Re: [PATCH 08/11] libihash: reduce the default maximum load factor to 75%

2014-05-12 Thread Samuel Thibault
Well, why? :)

Justus Winter, on Mon 12 May 2014 12:05:46 +0200, wrote:
> * libihash/ihash.h (HURD_IHASH_MAX_LOAD_DEFAULT): Set to 75.
> ---
>  libihash/ihash.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/libihash/ihash.h b/libihash/ihash.h
> index 3ca5ec3..6bdc925 100644
> --- a/libihash/ihash.h
> +++ b/libihash/ihash.h
> @@ -94,7 +94,7 @@ typedef struct hurd_ihash *hurd_ihash_t;
>  /* Construction and destruction of hash tables.  */
>  
>  /* The default value for the maximum load factor in percent.  */
> -#define HURD_IHASH_MAX_LOAD_DEFAULT 80
> +#define HURD_IHASH_MAX_LOAD_DEFAULT 75
>  
>  /* The LOCP_OFFS to use if no location pointer is available.  */
>  #define HURD_IHASH_NO_LOCP   INTPTR_MIN
> -- 
> 2.0.0.rc0
> 

-- 
Samuel
* y gets up in 2h10



[PATCH 08/11] libihash: reduce the default maximum load factor to 75%

2014-05-12 Thread Justus Winter
The performance of hash tables depends critically on a low number of
hash collisions.  As the table fills up, the chance of collisions
necessarily rises.

Previously, libihash resized the hash table when the load exceeded
80%.  This seems a bit optimistic (e.g., java.util.Hashtable uses 75%
by default).

* libihash/ihash.h (HURD_IHASH_MAX_LOAD_DEFAULT): Set to 75.
---
 libihash/ihash.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libihash/ihash.h b/libihash/ihash.h
index 3ca5ec3..6bdc925 100644
--- a/libihash/ihash.h
+++ b/libihash/ihash.h
@@ -94,7 +94,7 @@ typedef struct hurd_ihash *hurd_ihash_t;
 /* Construction and destruction of hash tables.  */
 
 /* The default value for the maximum load factor in percent.  */
-#define HURD_IHASH_MAX_LOAD_DEFAULT 80
+#define HURD_IHASH_MAX_LOAD_DEFAULT 75
 
 /* The LOCP_OFFS to use if no location pointer is available.  */
 #define HURD_IHASH_NO_LOCP INTPTR_MIN
-- 
2.0.0.rc0




Re: [PATCH 08/11] libihash: reduce the default maximum load factor to 75%

2014-05-12 Thread Samuel Thibault
Justus Winter, on Mon 12 May 2014 12:35:08 +0200, wrote:
> The performance of hash tables depends critically on a low number of
> hash collisions.  As the table fills up, the chance of collisions
> necessarily rises.
> 
> Previously, libihash resized the hash table when the load exceeded
> 80%.  This seems a bit optimistic (e.g., java.util.Hashtable uses 75%
> by default).

Ack.

> * libihash/ihash.h (HURD_IHASH_MAX_LOAD_DEFAULT): Set to 75.
> ---
>  libihash/ihash.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/libihash/ihash.h b/libihash/ihash.h
> index 3ca5ec3..6bdc925 100644
> --- a/libihash/ihash.h
> +++ b/libihash/ihash.h
> @@ -94,7 +94,7 @@ typedef struct hurd_ihash *hurd_ihash_t;
>  /* Construction and destruction of hash tables.  */
>  
>  /* The default value for the maximum load factor in percent.  */
> -#define HURD_IHASH_MAX_LOAD_DEFAULT 80
> +#define HURD_IHASH_MAX_LOAD_DEFAULT 75
>  
>  /* The LOCP_OFFS to use if no location pointer is available.  */
>  #define HURD_IHASH_NO_LOCP   INTPTR_MIN
> -- 
> 2.0.0.rc0
> 

-- 
Samuel
***e finds a .xls
***e spends a moment wondering which version of XML that is, .xls
e: so I ran file
 -+- #sos - we don't all necessarily share the same references -+-



Re: [PATCH 01/11] ext2fs: cache the superblock

2014-05-12 Thread Samuel Thibault
Justus Winter, on Mon 12 May 2014 12:05:39 +0200, wrote:
> Previously, the superblock was mmaped and a pointer stored in sblock
> by map_hypermetadata.  This memory is backed by our disk pager.
> 
> This is rather unfortunate, as this means that whenever we read a
> value from that location, we might generate a request our disk pager.
> This amplifies the so-called thread-storm problem.
> 
> Rather than relying on a mmaped region of memory, just use the data
> loaded by get_hypermetadata.

Please also fix the comment in the declaration of map_hypermetadata in
ext2fs.h, then Ack.

> * ext2fs/hyper.c (get_hypermetadata): Do not free sblock.
> (mapped_sblock): New variable.
> (map_hypermetadata): Map the superblock to mapped_sblock instead.
> (diskfs_set_hypermetadata): Copy superblock into mapped_superblock.
> ---
>  ext2fs/hyper.c | 19 +++
>  1 file changed, 11 insertions(+), 8 deletions(-)
> 
> diff --git a/ext2fs/hyper.c b/ext2fs/hyper.c
> index 5bcc2ab..5f288bf 100644
> --- a/ext2fs/hyper.c
> +++ b/ext2fs/hyper.c
> @@ -61,7 +61,9 @@ get_hypermetadata (void)
>error_t err;
>size_t read = 0;
>  
> -  assert (! sblock);
> +  if (sblock != NULL)
> +munmap (sblock, SBLOCK_SIZE);
> +
>err = store_read (store, SBLOCK_OFFS >> store->log2_block_size,
>   SBLOCK_SIZE, (void **)&sblock, &read);
>if (err || read != SBLOCK_SIZE)
> @@ -161,19 +163,19 @@ get_hypermetadata (void)
>zeroblock = (vm_address_t) mmap (0, block_size, PROT_READ, MAP_ANON, 
> 0, 0);
>assert (zeroblock != (vm_address_t) MAP_FAILED);
>  }
> -
> -  munmap (sblock, SBLOCK_SIZE);
> -  sblock = NULL;
>  }
>  
> +static struct ext2_super_block *mapped_sblock;
> +
>  void
>  map_hypermetadata (void)
>  {
> -  sblock = (struct ext2_super_block *) boffs_ptr (SBLOCK_OFFS);
> +  mapped_sblock = (struct ext2_super_block *) boffs_ptr (SBLOCK_OFFS);
>  
>/* Cache a convenient pointer to the block group descriptors for 
> allocation.
>   These are stored in the filesystem blocks following the superblock.  */
> -  group_desc_image = (struct ext2_group_desc *) bptr (bptr_block (sblock) + 
> 1);
> +  group_desc_image =
> +(struct ext2_group_desc *) bptr (bptr_block (mapped_sblock) + 1);
>  }
>  
>  error_t
> @@ -196,8 +198,9 @@ diskfs_set_hypermetadata (int wait, int clean)
>   if (sblock_dirty)
> {
>   sblock_dirty = 0;
> - disk_cache_block_ref_ptr (sblock);
> - record_global_poke (sblock);
> + memcpy (mapped_sblock, sblock, SBLOCK_SIZE);
> + disk_cache_block_ref_ptr (mapped_sblock);
> + record_global_poke (mapped_sblock);
> }
>  
>sync_global (wait);
> -- 
> 2.0.0.rc0
> 

-- 
Samuel
Progress (n.): The process through which the Internet has evolved from
smart people in front of dumb terminals to dumb people in front of smart
terminals.



Re: [PATCH 03/11] include: add lock-less reference counting primitives

2014-05-12 Thread Samuel Thibault
Justus Winter, on Mon 12 May 2014 12:05:41 +0200, wrote:
> +  const union _references op = { .refs = { .hard = 1 } };
> +  refcounts_t r = __atomic_add_fetch (ref, op.rc, __ATOMIC_RELAXED);

Mmm, I don't think C allows writing into one union field and then
reading from another.  The legacy Hurd code used to do this in some
places, but it breaks with smart compilers.

Samuel



Re: [PATCH 04/11] libports: lock-less reference counting for port_info objects

2014-05-12 Thread Samuel Thibault
Justus Winter, on Mon 12 May 2014 12:05:42 +0200, wrote:
> -  pthread_mutex_lock (&_ports_lock);
>pthread_mutex_lock (&_ports_htable_lock);
>  
>if (_ports_htable.nr_items == 0)
> @@ -60,13 +59,12 @@ _ports_bucket_class_iterate (struct port_bucket *bucket,
>if ((bucket == NULL || pi->bucket == bucket)
>&& (class == NULL || pi->class == class))
>   {
> -   pi->refcnt++;
> +   refcounts_ref (&pi->refcounts, NULL);
> p[n] = pi;
> n++;
>   }
>  }
>pthread_mutex_unlock (&_ports_htable_lock);
> -  pthread_mutex_unlock (&_ports_lock);

Mmm, just a warning (I'm not sure I'll get the time to review these
changes soon): this kind of change needs a lot of care to make sure we
see the whole picture: quite often the mutex is taken not only to
protect the reference count, but also to keep it coherent with other
reads/writes, so an atomic operation alone is not enough.  As an
example, here we would want to make sure other threads see that we
have acquired a reference before we record the pointer and do
something else.  Keeping the mutex is more than we need, but we have
to make sure we still synchronize enough with the rest of the code.

Samuel



Re: [PATCH 03/11] include: add lock-less reference counting primitives

2014-05-12 Thread Richard Braun
On Tue, May 13, 2014 at 12:23:48AM +0200, Samuel Thibault wrote:
> Justus Winter, on Mon 12 May 2014 12:05:41 +0200, wrote:
> > +  const union _references op = { .refs = { .hard = 1 } };
> > +  refcounts_t r = __atomic_add_fetch (ref, op.rc, __ATOMIC_RELAXED);
> 
> Mmm, I don't think C allows writing into one union field and then
> reading from another.  The legacy Hurd code used to do this in some
> places, but it breaks with smart compilers.

It's not defined by C, but all major compilers support it, as it is
currently the cleanest way, when done properly, to avoid violating
strict aliasing.

-- 
Richard Braun



Re: [PATCH 10/11] ext2fs: improve {enable,disable}_caching

2014-05-12 Thread Samuel Thibault
Justus Winter, on Mon 12 May 2014 12:05:48 +0200, wrote:
> * ext2fs/pager.c (enable_caching, disable_caching): Iterate over the
> pager class instead of over both pager buckets.

Mmm, did you check the actual implementation of ports_class_iterate?
It seems that it just iterates over one bucket, the bucket of the last
created port of the class.

> diff --git a/ext2fs/pager.c b/ext2fs/pager.c
> index 017efcc..6328f3b 100644
> --- a/ext2fs/pager.c
> +++ b/ext2fs/pager.c
> @@ -1409,8 +1409,7 @@ disable_caching ()
>  
>/* Loop through the pagers and turn off caching one by one,
>   synchronously.  That should cause termination of each pager. */
> -  ports_bucket_iterate (disk_pager_bucket, block_cache);
> -  ports_bucket_iterate (file_pager_bucket, block_cache);
> +  ports_class_iterate (_pager_class, block_cache);
>  }
>  
>  static void
> @@ -1438,8 +1437,7 @@ enable_caching ()
>return 0;
>  }
>  
> -  ports_bucket_iterate (disk_pager_bucket, enable_cache);
> -  ports_bucket_iterate (file_pager_bucket, enable_cache);
> +  ports_class_iterate (_pager_class, enable_cache);
>  }
>  
>  /* Tell diskfs if there are pagers exported, and if none, then
> -- 
> 2.0.0.rc0
> 

-- 
Samuel
 let's read some assembler c
 -+- #sos - CrisC forever -+-



Re: [PATCH 11/11] fatfs: improve {enable,disable}_caching

2014-05-12 Thread Samuel Thibault
Justus Winter, on Mon 12 May 2014 12:05:49 +0200, wrote:
> * fatfs/pager.c (enable_caching, disable_caching): Iterate over the
> pager class instead of over both pager buckets.

Ditto.

> ---
>  fatfs/pager.c | 9 +
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/fatfs/pager.c b/fatfs/pager.c
> index f855ecf..7aa5c5e 100644
> --- a/fatfs/pager.c
> +++ b/fatfs/pager.c
> @@ -23,6 +23,9 @@
>  #include 
>  #include "fatfs.h"
>  
> +/* XXX */
> +#include "../libpager/priv.h"
> +
>  /* A ports bucket to hold disk pager ports.  */
>  struct port_bucket *disk_pager_bucket;
>  
> @@ -963,8 +966,7 @@ disable_caching ()
>  
>/* Loop through the pagers and turn off caching one by one,
>   synchronously.  That should cause termination of each pager.  */
> -  ports_bucket_iterate (disk_pager_bucket, block_cache);
> -  ports_bucket_iterate (file_pager_bucket, block_cache);
> +  ports_class_iterate (_pager_class, block_cache);
>  }
> 
>  static void
> @@ -992,8 +994,7 @@ enable_caching ()
>return 0;
>  }
>  
> -  ports_bucket_iterate (disk_pager_bucket, enable_cache);
> -  ports_bucket_iterate (file_pager_bucket, enable_cache);
> +  ports_class_iterate (_pager_class, enable_cache);
>  }
>   
>  /* Tell diskfs if there are pagers exported, and if none, then
> -- 
> 2.0.0.rc0
> 

-- 
Samuel
 boohoohoo, b has abandoned me.  All because I was looking at my 
emacs more than at him!
 s/him/his irc messages/
 -+- #ens-mim lonely -+-



Re: [PATCH 03/11] include: add lock-less reference counting primitives

2014-05-12 Thread Samuel Thibault
Richard Braun, on Tue 13 May 2014 00:36:44 +0200, wrote:
> On Tue, May 13, 2014 at 12:23:48AM +0200, Samuel Thibault wrote:
> > Justus Winter, on Mon 12 May 2014 12:05:41 +0200, wrote:
> > > +  const union _references op = { .refs = { .hard = 1 } };
> > > +  refcounts_t r = __atomic_add_fetch (ref, op.rc, __ATOMIC_RELAXED);
> > 
> > Mmm, I don't think C allows writing into one union field and then
> > reading from another.  The legacy Hurd code used to do this in some
> > places, but it breaks with smart compilers.
> 
> It's not defined by C, but all major compilers support it, as it is
> currently the cleanest way, when done properly, to avoid violating
> strict aliasing.

Right, we only care about gcc.  We probably want to make sure we have
this extension, though, by rejecting any other compiler than gcc, for
instance.

Samuel



Re: [PATCH 10/11] ext2fs: improve {enable,disable}_caching

2014-05-12 Thread Samuel Thibault
Samuel Thibault, on Tue 13 May 2014 00:37:37 +0200, wrote:
> Justus Winter, on Mon 12 May 2014 12:05:48 +0200, wrote:
> > * ext2fs/pager.c (enable_caching, disable_caching): Iterate over the
> > pager class instead of over both pager buckets.
> 
> Mmm, did you check the actual implementation of ports_class_iterate?

Ah, you change it in a previous patch, so it's just a patch dependency.

Samuel



Re: [PATCH 02/11] libports: use a single hash table

2014-05-12 Thread Samuel Thibault
Justus Winter, on Mon 12 May 2014 12:05:40 +0200, wrote:
> Previously, libports used a hash table per port bucket.  This makes
> looking up a port difficult if one does not know the port bucket, as
> one has to iterate over all buckets and do a hash table lookup in each.

But conversely, this means having to iterate over all ports when one
may want to iterate over just one bucket.  I'd rather not drop that
feature.

_ports_bucket_class_iterate is clearly bogus ATM, so we at least need
something.  Would it be too costly to maintain both a global and a
per-bucket hash table?  It would at the very worst be only twice as
expensive as the global hash table alone, while still allowing the
performance gain of iterating over only one bucket.

Samuel



Re: [PATCH 03/11] include: add lock-less reference counting primitives

2014-05-12 Thread Samuel Thibault
Justus Winter, on Mon 12 May 2014 12:05:41 +0200, wrote:
> +/* An opaque type.  You must not access these values directly.  */
> +typedef uint64_t refcounts_t;
> +
> +/* Instead, the functions manipulating refcounts_t values write the
> +   results into this kind of objects.  */
> +struct references {
> +  uint32_t hard;
> +  uint32_t weak;
> +};
> +
> +union _references {
> +  struct references refs;
> +  refcounts_t rc;
> +};
> +
> +/* Initialize REF with HARD and WEAK references.  */
> +static inline void
> +refcounts_init (refcounts_t *ref, uint32_t hard, uint32_t weak)
> +{
> +  ((union _references *) ref)->refs =

AIUI this cast is a case of type-punning.  Why not make refcounts_t
the union itself?  That way it would clearly be safe with gcc's
extension.

Samuel