Re: expanding amd64 past the 1TB limit

2013-07-11 Thread Neel Natu
Hi Chris,

On Sun, Jul 7, 2013 at 11:42 PM, Chris Torek to...@torek.net wrote:
 Here is a final (I hope) version of the patch.  I dropped the
 config option, but I added code to limit the real size of the
 direct map PDEs.  The end result is that on small systems, this
 ties up 14 more pages (15 from increasing NKPML4E, but one
 regained because the new static variable ndmpdpphys is 1 instead
 of 2).


The patch looks good. I have a couple of comments inline:

 (I fixed the comment errors I spotted earlier, too.)

 Chris

  amd64/amd64/pmap.c      | 100 +---
  amd64/include/pmap.h    |  36 +
  amd64/include/vmparam.h |  13 ---
  3 files changed, 97 insertions(+), 52 deletions(-)

 Author: Chris Torek chris.to...@gmail.com
 Date:   Thu Jun 27 18:49:29 2013 -0600

 increase physical and virtual memory limits

 Increase kernel VM space: go from .5 TB of KVA and 1 TB of direct
 map, to 8 TB of KVA and 16 TB of direct map.  However, we allocate
 less direct map space for small physical-memory systems.  Also, if
 Maxmem is so large that there is not enough direct map space,
 reduce Maxmem to fit, so that the system can boot unassisted.

 diff --git a/amd64/amd64/pmap.c b/amd64/amd64/pmap.c
 index 8dcf232..7368c96 100644
 --- a/amd64/amd64/pmap.c
 +++ b/amd64/amd64/pmap.c
 @@ -232,6 +232,7 @@ u_int64_t   KPML4phys;  /* phys addr of kernel level 4 */

  static u_int64_t   DMPDphys;   /* phys addr of direct mapped level 2 */
  static u_int64_t   DMPDPphys;  /* phys addr of direct mapped level 3 */
 +static int ndmpdpphys; /* number of DMPDPphys pages */

  static struct rwlock_padalign pvh_global_lock;

 @@ -531,12 +532,27 @@ static void
  create_pagetables(vm_paddr_t *firstaddr)
  {
 int i, j, ndm1g, nkpdpe;
 +   pt_entry_t *pt_p;
 +   pd_entry_t *pd_p;
 +   pdp_entry_t *pdp_p;
 +   pml4_entry_t *p4_p;

The changes associated with pt_p, pd_p and p4_p are cosmetic and IMHO
detract from the meat of the change.

My preference would be for the cosmetic changes to be committed
separately from the changes that rearrange the KVA.


 /* Allocate page table pages for the direct map */
 ndmpdp = (ptoa(Maxmem) + NBPDP - 1) >> PDPSHIFT;
 if (ndmpdp < 4) /* Minimum 4GB of dirmap */
 ndmpdp = 4;
 -   DMPDPphys = allocpages(firstaddr, NDMPML4E);
 +   ndmpdpphys = howmany(ndmpdp, NPML4EPG);

NPDPEPG should be used here instead of NPML4EPG even though they are
numerically identical.

 +   if (ndmpdpphys > NDMPML4E) {
 +   /*
 +* Each NDMPML4E allows 512 GB, so limit to that,
 +* and then readjust ndmpdp and ndmpdpphys.
 +*/
 +   printf("NDMPML4E limits system to %d GB\n", NDMPML4E * 512);
 +   Maxmem = atop(NDMPML4E * NBPML4);
 +   ndmpdpphys = NDMPML4E;
 +   ndmpdp = NDMPML4E * NPDEPG;
 +   }
 +   DMPDPphys = allocpages(firstaddr, ndmpdpphys);
 ndm1g = 0;
 if ((amd_feature & AMDID_PAGE1GB) != 0)
 ndm1g = ptoa(Maxmem) >> PDPSHIFT;
 @@ -553,6 +569,10 @@ create_pagetables(vm_paddr_t *firstaddr)
  * bootstrap.  We defer this until after all memory-size dependent
  * allocations are done (e.g. direct map), so that we don't have to
  * build in too much slop in our estimate.
 +*
 +* Note that when NKPML4E > 1, we have an empty page underneath
 +* all but the KPML4I'th one, so we need NKPML4E-1 extra (zeroed)
 +* pages.  (pmap_enter requires a PD page to exist for each KPML4E.)
  */
 nkpt_init(*firstaddr);
 nkpdpe = NKPDPE(nkpt);
 @@ -561,32 +581,26 @@ create_pagetables(vm_paddr_t *firstaddr)
 KPDphys = allocpages(firstaddr, nkpdpe);

 /* Fill in the underlying page table pages */
 -   /* Read-only from zero to physfree */
 +   /* Nominally read-only (but really R/W) from zero to physfree */
 /* XXX not fully used, underneath 2M pages */
 -   for (i = 0; (i << PAGE_SHIFT) < *firstaddr; i++) {
 -   ((pt_entry_t *)KPTphys)[i] = i << PAGE_SHIFT;
 -   ((pt_entry_t *)KPTphys)[i] |= PG_RW | PG_V | PG_G;
 -   }
 +   pt_p = (pt_entry_t *)KPTphys;
 +   for (i = 0; ptoa(i) < *firstaddr; i++)
 +   pt_p[i] = ptoa(i) | PG_RW | PG_V | PG_G;

 /* Now map the page tables at their location within PTmap */
 -   for (i = 0; i < nkpt; i++) {
 -   ((pd_entry_t *)KPDphys)[i] = KPTphys + (i << PAGE_SHIFT);
 -   ((pd_entry_t *)KPDphys)[i] |= PG_RW | PG_V;
 -   }
 +   pd_p = (pd_entry_t *)KPDphys;
 +   for (i = 0; i < nkpt; i++)
 +   pd_p[i] = (KPTphys + ptoa(i)) | PG_RW | PG_V;

 /* Map from zero to end of allocations under 2M pages */
 /* This replaces some of the KPTphys 

Re: Kernel dumps [was Re: possible changes from Panzura]

2013-07-11 Thread Lars Engels
On Wed, Jul 10, 2013 at 02:04:17PM -0600, asom...@gmail.com wrote:
 On Wed, Jul 10, 2013 at 12:57 PM, Jordan Hubbard j...@mail.turbofuzz.com 
 wrote:
 
  On Jul 10, 2013, at 11:16 AM, Julian Elischer jul...@elischer.org wrote:
 
  My first  candidates are:
 
  Those sound useful.   Just out of curiosity, however, since we're on
  the topic of kernel dumps:  Has anyone even looked into the notion
  of an emergency fall-back network stack to enable remote kernel
  panic (or system hang) debugging, the way OS X lets you do?  I can't
  tell you the number of times I've NMI'd a Mac and connected to it
  remotely in a scenario where everything was totally wedged and just
  a couple of minutes in kgdb (or now lldb) quickly showed that
  everything was waiting on a specific lock and the problem became
  manifestly clear.
 
  The feature also lets you scrape a panic'd machine with automation,
  running some kgdb scripts against it to glean useful information for
  later analysis vs having to have someone schlep the dump image
  manually to triage.  It's going to be damn hard to live without this
  now, and if someone else isn't working on it, that's good to know
  too!
 
 I don't doubt that it would be useful to have an emergency network
 stack.  But have you ever looked into debugging over firewire?  We've
 had success with it.  All of our development machines are connected to
 a single firewire bus.  When one panics, we can remotely debug it with
 both kdb and ddb.  It's not ethernet, but it's still much faster than
 a serial port.
 https://wiki.freebsd.org/DebugWithDcons

Debugging over Firewire may be very nice to use, but Firewire is dead,
while nearly every device nowadays has a network interface, admittedly
often a wireless one.




Re: Kernel dumps [was Re: possible changes from Panzura]

2013-07-11 Thread Julian Elischer

On 7/11/13 6:09 AM, Kevin Day wrote:


Those sound useful.   Just out of curiosity, however, since we're on the topic 
of kernel dumps:  Has anyone even looked into the notion of an emergency 
fall-back network stack to enable remote kernel panic (or system hang) 
debugging, the way OS X lets you do?  I can't tell you the number of times I've 
NMI'd a Mac and connected to it remotely in a scenario where everything was 
totally wedged and just a couple of minutes in kgdb (or now lldb) quickly 
showed that everything was waiting on a specific lock and the problem became 
manifestly clear.

The feature also lets you scrape a panic'd machine with automation, running 
some kgdb scripts against it to glean useful information for later analysis vs 
having to have someone schlep the dump image manually to triage.  It's going to 
be damn hard to live without this now, and if someone else isn't working on it, 
that's good to know too!


I could imagine that we could stash away a vimage stack just for this 
purpose.

You'd set it up on boot and leave it detached until you need it.

you just need to switch the interfaces over to the new stack on panic 
and put them into 'poll' mode.


Or maybe you'd need more (like pre-allocating mbufs for it to use).

Just an idea.




At a previous employer, we had a system where on a panic it had a totally 
separate stack capable of just IP/UDP/TFTP and would save its core via TFTP to 
a server. This isn’t as nice as full remote debugging, but it was a whole lot 
easier to develop. The caveats I remember were:

1) We didn’t want to implement ARP, so you had to write the mac address of the 
“dump server” to the kernel via sysctl before crashing.
2) We also didn’t want to have to deal with routing tables, so you had to 
manually specify what interface to blast packets out to, also via sysctl.
3) After a panic we didn’t want to rely on interrupt processing working, so it 
polled the network interface and blocked whenever it needed to. Since this was 
an embedded system, it wasn’t too big of a deal - only one network driver had 
to be hacked to support this. Basically a flag that would switch to “disable 
normal processing, switch to polled fifos for input and output” until reboot.
4) The whole system used only preallocated buffers and its own stack (carved 
out from memory on boot) so even if the kernel’s malloc was trashed, we could 
still dump.

I’m not sure this really would scratch your itch, but I believe this took me no 
more than a day or two to implement. Parts #1 and #2 would be pretty easy, but 
I’m not sure how generically the kernel could support an emergency network mode 
that doesn’t require interrupts, across every network card out there. Maybe that 
isn’t as important to you as it was to us.

The whole exercise is much easier if you don’t use TFTP but a custom protocol 
that doesn’t require the crashing system to receive any packets; if it can just 
blast away at some random host, oblivious to whether it’s working or not, it’s a 
lot less code to write.


___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org






Attempting to roll back zfs transactions on a disk to recover a destroyed ZFS filesystem

2013-07-11 Thread Reid Linnemann
So recently I was trying to transfer a root-on-ZFS zpool from one pair of
disks to a single, larger disk. As I am wont to do, I botched the transfer
up and decided to destroy the ZFS filesystems on the destination and start
again. Naturally I was up late working on this, being sloppy and drowsy
without any coffee, and lo and behold I issued my 'zfs destroy -R' and
immediately realized after pressing [ENTER] that I had given it the
source's zpool name. oops. Fortunately I was able to interrupt the
procedure with only /usr being destroyed from the pool and I was able to
send/receive the truly vital data in my /var partition to the new disk and
re-deploy the base system to /usr on the new disk. The only thing I'm
really missing at this point is all of the third-party software
configuration I had in /usr/local/etc and my apache data in /usr/local/www.

After a few minutes on Google I came across this wonderful page:

http://www.solarisinternals.com/wiki/index.php/ZFS_forensics_scrollback_script

where the author has published information about his python script which
locates the uberblocks on the raw disk and shows the user the most recent
transaction IDs, prompts the user for a transaction ID to roll back to, and
zeroes out all uberblocks beyond that point. Theoretically, I should be
able to use this script to get back to the transaction prior to my dreaded
'zfs destroy -R', then be able to recover the data I need (since no further
writes have been done to the source disks).

First, I know there's a problem in the script on FreeBSD in which the grep
pattern for the od output expects a single space between the output
elements. I've attached a patch that allows the output to be properly
grepped in FreeBSD, so we can actually get to the transaction log.

But now we are to my current problem. When attempting to roll back with
this script, it tries to dd zero'd bytes to offsets into the disk device
(/dev/ada1p3 in my case) where the uberblocks are located. But even
with kern.geom.debugflags
set to 0x10 (and I am running this as root) I get 'Operation not permitted'
when the script tries to zero out the unwanted transactions. I'm fairly
certain this is because the geom is in use by the ZFS subsystem, as it is
still recognized as a part of the original pool. I'm hesitant to zfs export
the pool, as I don't know if that wipes the transaction history on the
pool. Does anyone have any ideas?

Thanks,
-Reid


zfs_revert-0.1.py.patch
Description: Binary data

Re: Attempting to roll back zfs transactions on a disk to recover a destroyed ZFS filesystem

2013-07-11 Thread Alan Somers
zpool export does not wipe the transaction history.  It does,
however, write new labels and some metadata, so there is a very slight
chance that it might overwrite some of the blocks that you're trying
to recover.  But it's probably safe.  An alternative, much more
complicated, solution would be to have ZFS open the device
non-exclusively.  This patch will do that.  Caveat programmer: I
haven't tested this patch in isolation.

Change 624068 by willa@willa_SpectraBSD on 2012/08/09 09:28:38

Allow multiple opens of geoms used by vdev_geom.
Also ignore the pool guid for spares when checking to decide whether
it's ok to attach a vdev.

This enables using hotspares to replace other devices, as well as
using a given hotspare in multiple pools.

We need to investigate alternative solutions in order to allow
opening the geoms exclusive.

Affected files ...

... 
//SpectraBSD/stable/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c#2
edit

Differences ...

==== //SpectraBSD/stable/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c#2 (text) ====

@@ -179,49 +179,23 @@
gp = g_new_geomf(&zfs_vdev_class, "zfs::vdev");
gp->orphan = vdev_geom_orphan;
gp->attrchanged = vdev_geom_attrchanged;
-   cp = g_new_consumer(gp);
-   error = g_attach(cp, pp);
-   if (error != 0) {
-   printf("%s(%d): g_attach failed: %d\n", __func__,
-  __LINE__, error);
-   g_wither_geom(gp, ENXIO);
-   return (NULL);
-   }
-   error = g_access(cp, 1, 0, 1);
-   if (error != 0) {
-   printf("%s(%d): g_access failed: %d\n", __func__,
-  __LINE__, error);
-   g_wither_geom(gp, ENXIO);
-   return (NULL);
-   }
-   ZFS_LOG(1, "Created geom and consumer for %s.", pp->name);
-   } else {
-   /* Check if we are already connected to this provider. */
-   LIST_FOREACH(cp, &gp->consumer, consumer) {
-   if (cp->provider == pp) {
-   ZFS_LOG(1, "Provider %s already in use by ZFS. "
-   "Failing attach.", pp->name);
-   return (NULL);
-   }
-   }
-   cp = g_new_consumer(gp);
-   error = g_attach(cp, pp);
-   if (error != 0) {
-   printf("%s(%d): g_attach failed: %d\n",
-  __func__, __LINE__, error);
-   g_destroy_consumer(cp);
-   return (NULL);
-   }
-   error = g_access(cp, 1, 0, 1);
-   if (error != 0) {
-   printf("%s(%d): g_access failed: %d\n",
-  __func__, __LINE__, error);
-   g_detach(cp);
-   g_destroy_consumer(cp);
-   return (NULL);
-   }
-   ZFS_LOG(1, "Created consumer for %s.", pp->name);
+   }
+   cp = g_new_consumer(gp);
+   error = g_attach(cp, pp);
+   if (error != 0) {
+   printf("%s(%d): g_attach failed: %d\n", __func__,
+  __LINE__, error);
+   g_wither_geom(gp, ENXIO);
+   return (NULL);
+   }
+   error = g_access(cp, /*r*/1, /*w*/0, /*e*/0);
+   if (error != 0) {
+   printf("%s(%d): g_access failed: %d\n", __func__,
+  __LINE__, error);
+   g_wither_geom(gp, ENXIO);
+   return (NULL);
+   }
    }
+   ZFS_LOG(1, "Created consumer for %s.", pp->name);

cp->private = vd;
vd->vdev_tsd = cp;
@@ -251,7 +225,7 @@
cp->private = NULL;

gp = cp->geom;
-   g_access(cp, -1, 0, -1);
+   g_access(cp, -1, 0, 0);
/* Destroy consumer on last close. */
if (cp->acr == 0 && cp->ace == 0) {
ZFS_LOG(1, "Destroyed consumer to %s.", cp->provider->name);
@@ -384,6 +358,18 @@
cp->provider->name);
 }

+static inline boolean_t
+vdev_attach_ok(vdev_t *vd, uint64_t pool_guid, uint64_t vdev_guid)
+{
+   boolean_t pool_ok;
+   boolean_t vdev_ok;
+
+   /* Spares can be assigned to multiple pools. */
+   pool_ok = vd->vdev_isspare || pool_guid == spa_guid(vd->vdev_spa);
+   vdev_ok = vdev_guid == vd->vdev_guid;
+   return (pool_ok && vdev_ok);
+}
+
 static struct g_consumer *
 vdev_geom_attach_by_guids(vdev_t *vd)
 {
@@ -420,8 +406,7 @@
g_topology_lock();
g_access(zcp, -1, 0, 0);
g_detach(zcp);
-   if (pguid != spa_guid(vd->vdev_spa) ||
-   vguid != vd->vdev_guid)
+  

Re: Attempting to roll back zfs transactions on a disk to recover a destroyed ZFS filesystem

2013-07-11 Thread Will Andrews
On Thu, Jul 11, 2013 at 9:04 AM, Alan Somers asom...@freebsd.org wrote:
 zpool export does not wipe the transaction history.  It does,
 however, write new labels and some metadata, so there is a very slight
 chance that it might overwrite some of the blocks that you're trying
 to recover.  But it's probably safe.  An alternative, much more
 complicated, solution would be to have ZFS open the device
 non-exclusively.  This patch will do that.  Caveat programmer: I
 haven't tested this patch in isolation.

This change is quite a bit more than necessary, and probably wouldn't
apply to FreeBSD given the other changes in the code.  Really, to make
non-exclusive opens you just have to change the g_access() calls in
vdev_geom.c so the third argument is always 0.

However, see below.

 On Thu, Jul 11, 2013 at 8:43 AM, Reid Linnemann linnema...@gmail.com wrote:
 But now we are to my current problem. When attempting to roll back with
 this script, it tries to dd zero'd bytes to offsets into the disk device
 (/dev/ada1p3 in my case) where the uberblocks are located. But even
 with kern.geom.debugflags
 set to 0x10 (and I am running this as root) I get 'Operation not permitted'
 when the script tries to zero out the unwanted transactions. I'm fairly
 certain this is because the geom is in use by the ZFS subsystem, as it is
 still recognized as a part of the original pool. I'm hesitant to zfs export
 the pool, as I don't know if that wipes the transaction history on the
 pool. Does anyone have any ideas?

You do not have a choice.  Changing the on-disk state does not mean
the in-core state will update to match, and the pool could get into a
really bad state if you try to modify the transactions on disk while
it's online, since it may write additional transactions (which rely on
state you're about to destroy), before you export.

Also, rolling back transactions in this manner assumes that the
original blocks (that were COW'd) are still in their original state.
If you're using TRIM or have a pretty full pool, the odds are not in
your favor.  It's a roll of the dice, in any case.

--Will.


Re: Attempting to roll back zfs transactions on a disk to recover a destroyed ZFS filesystem

2013-07-11 Thread Reid Linnemann
Will,

Thanks, that makes sense. I know this is all a crap shoot, but I've really
got nothing to lose at this point, so this is just a good opportunity to
rummage around the internals of ZFS and learn a few things. I might even
get lucky and recover some data!


On Thu, Jul 11, 2013 at 10:59 AM, Will Andrews w...@firepipe.net wrote:

 On Thu, Jul 11, 2013 at 9:04 AM, Alan Somers asom...@freebsd.org wrote:
  zpool export does not wipe the transaction history.  It does,
  however, write new labels and some metadata, so there is a very slight
  chance that it might overwrite some of the blocks that you're trying
  to recover.  But it's probably safe.  An alternative, much more
  complicated, solution would be to have ZFS open the device
  non-exclusively.  This patch will do that.  Caveat programmer: I
  haven't tested this patch in isolation.

 This change is quite a bit more than necessary, and probably wouldn't
 apply to FreeBSD given the other changes in the code.  Really, to make
 non-exclusive opens you just have to change the g_access() calls in
 vdev_geom.c so the third argument is always 0.

 However, see below.

  On Thu, Jul 11, 2013 at 8:43 AM, Reid Linnemann linnema...@gmail.com
 wrote:
  But now we are to my current problem. When attempting to roll back with
  this script, it tries to dd zero'd bytes to offsets into the disk device
  (/dev/ada1p3 in my case) where the uberblocks are located. But even
  with kern.geom.debugflags
  set to 0x10 (and I am running this as root) I get 'Operation not
 permitted'
  when the script tries to zero out the unwanted transactions. I'm fairly
  certain this is because the geom is in use by the ZFS subsystem, as it
 is
  still recognized as a part of the original pool. I'm hesitant to zfs
 export
  the pool, as I don't know if that wipes the transaction history on the
  pool. Does anyone have any ideas?

 You do not have a choice.  Changing the on-disk state does not mean
 the in-core state will update to match, and the pool could get into a
 really bad state if you try to modify the transactions on disk while
 it's online, since it may write additional transactions (which rely on
 state you're about to destroy), before you export.

 Also, rolling back transactions in this manner assumes that the
 original blocks (that were COW'd) are still in their original state.
 If you're using TRIM or have a pretty full pool, the odds are not in
 your favor.  It's a roll of the dice, in any case.

 --Will.


Re: Intel D2500CC serial ports

2013-07-11 Thread John Baldwin
On Sunday, June 30, 2013 1:24:27 pm Robert Ames wrote:
 I just picked up an Intel D2500CCE motherboard and was disappointed
 to find the serial ports didn't work.  There has been discussion
 about this problem here:
 
 http://lists.freebsd.org/pipermail/freebsd-current/2013-April/040897.html
 http://lists.freebsd.org/pipermail/freebsd-current/2013-May/042088.html
 
 As seen in the second link, Juergen Weiss was able to work around
 the problem.  This patch (for 8.4-RELEASE amd64) makes all 4 serial
 ports functional.
 
 --- /usr/src/sys/amd64/amd64/io_apic.c.orig 2013-06-02 13:23:05.0 -0500
 +++ /usr/src/sys/amd64/amd64/io_apic.c  2013-06-28 18:52:03.0 -0500
 @@ -452,6 +452,10 @@
 KASSERT(!(trig == INTR_TRIGGER_CONFORM || pol == INTR_POLARITY_CONFORM),
 ("%s: Conforming trigger or polarity\n", __func__));
  
 +   if (trig == INTR_TRIGGER_EDGE && pol == INTR_POLARITY_LOW) {
 +   pol = INTR_POLARITY_HIGH;
 +   }
 +

Hmm, so this is your BIOS doing the wrong thing in its ASL.

Maybe try this:

--- //depot/user/jhb/acpipci/dev/acpica/acpi_resource.c 2011-07-22 17:59:31.0
+++ /home/jhb/work/p4/acpipci/dev/acpica/acpi_resource.c 2011-07-22 17:59:31.0
@@ -141,6 +141,10 @@
 default:
panic("%s: bad resource type %u", __func__, res->Type);
 }
+#if defined(__amd64__) || defined(__i386__)
+if (irq < 16 && trig == ACPI_EDGE_SENSITIVE && pol == ACPI_ACTIVE_LOW)
+   pol = ACPI_ACTIVE_HIGH;
+#endif
 BUS_CONFIG_INTR(dev, irq, (trig == ACPI_EDGE_SENSITIVE) ?
INTR_TRIGGER_EDGE : INTR_TRIGGER_LEVEL, (pol == ACPI_ACTIVE_HIGH) ?
INTR_POLARITY_HIGH : INTR_POLARITY_LOW);

-- 
John Baldwin


Re: memmap in FreeBSD

2013-07-11 Thread John Baldwin
On Sunday, July 07, 2013 7:41:43 am mangesh chitnis wrote:
 Hi,
 
 What is the memmap equivalent of Linux in FreeBSD?
 
 In Linux memmap is used to reserve a portion of physical memory. This is 
used as a kernel boot argument. E.g.: memmap=2G$1G will reserve 1GB memory 
above 2GB, in case I have 3GB RAM. This 1GB reserved memory is not visible 
to the OS, however this 1GB can be used using ioremap. 
 How can I reserve memory in FreeBSD and later use 
it i.e memmap and ioremap equivalent?
 
 I have tried using hw.physmem loader parameter.
 I have 3 GB system memory and I have set hw.physmem=2G. 
 
 
 sysctl -a shows:
 hw.physmem: 2.12G

Note that 'hw.physmem=2G' is using power of 2 units (so 2 * 2^30),
not power of 10.
 
 hw.usermem: 1.9G
 hw.realmem: 2.15G
 
 devinfo -rv shows:
 ram0: 
 
 0x00-0x9f3ff 
 0x1000-0xbfed 
 0xbff0-0xbfff
 
 Here, looks like it is showing the full 3 GB mapping.

ram0 is reserving address space, so it always claims all of the memory 
installed.

 Now, how do I know which is that 1 GB available memory (In Linux, this 
memory is shown as reserved in /proc/iomem under System RAM) ? Also, which 
function(similar to ioremap) should I call to map the physical address to 
virtual address?

There is currently no way to see the memory above the cap you set.  In the 
kernel you could perhaps fetch the SMAP metadata and walk the list to see if
there is memory above Maxmem (and if so it is presumably available for use).

However, to map it you would need to use pmap_*() routines directly.

Alternatively, you could abuse OBJT_SG by creating an sglist that describes
the unused memory range and then creating an OBJT_SG VM object backed by
that sglist.  You could then insert that VM object into the kernel's address
space to map it into the kernel, or even make it available to userland via
d_mmap_single(), or direct manipulation of a process' address space via an
ioctl, etc.
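The sglist/OBJT_SG approach described above, condensed into a non-runnable kernel-code sketch. The KPIs named here (sglist_alloc(), sglist_append_phys(), vm_pager_allocate()) are real FreeBSD interfaces, but error handling, locking, and the final mapping step are elided; treat this as an outline under those assumptions, not tested code:

```c
/* Kernel-only sketch: back a VM object with physical RAM above Maxmem. */
#include <sys/param.h>
#include <sys/sglist.h>
#include <vm/vm.h>
#include <vm/vm_object.h>
#include <vm/vm_pager.h>

static vm_object_t
map_hidden_ram(vm_paddr_t start, vm_size_t len)
{
	struct sglist *sg;
	vm_object_t obj;

	sg = sglist_alloc(1, M_WAITOK);         /* one physical segment */
	sglist_append_phys(sg, start, len);     /* the RAM above the cap */

	/* Create an OBJT_SG object backed by that scatter/gather list. */
	obj = vm_pager_allocate(OBJT_SG, sg, len,
	    VM_PROT_READ | VM_PROT_WRITE, 0, NULL);
	return (obj);
}
```

The returned object could then be inserted into the kernel map, or handed to userland via d_mmap_single(), as the text suggests.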

-- 
John Baldwin


Re: Kernel dumps [was Re: possible changes from Panzura]

2013-07-11 Thread John Baldwin
 Speaking of Apple solutions, I've recently used Apple's kgdb with the
 kernel debug kit  kdp remote debugging, to debug a panic'd OS X host.
  It's really quite nice, because the debug kit comes with a ton of
 macros, similar to kdb, and you also get the benefit of source
 debugging.  I think FreeBSD would benefit massively from finding some
 way to share macros between kdb and kgdb, in addition to having an
 emergency network stack like you suggest.

I have a set of macros I maintain that implement many ddb commands in
kgdb including 'sleepchain' and 'lockchain'.

http://www.freebsd.org/~jhb/gdb/

-- 
John Baldwin


Re: Kernel dumps [was Re: possible changes from Panzura]

2013-07-11 Thread Jordan K. Hubbard

On Jul 11, 2013, at 7:27 AM, Julian Elischer jul...@freebsd.org wrote:

 I could imagine that we could stash away a vimage stack just for this purpose.
 you'd set it up on boot and leave it detached until you need it.
 
 you just need to switch the interfaces over to the new stack on panic and put 
 them into 'poll' mode.

That sounds like a rather clever solution to this problem (OS X doesn't support 
vimage, despite repeated attempts on my part to change that).

How much work do you think it would take to bang out a proof of concept?  Is 
anyone up to the challenge?  Any incentives I can provide?  This would be 
really useful. :-)

- Jordan



Re: Kernel dumps [was Re: possible changes from Panzura]

2013-07-11 Thread Artem Belevich
On Thu, Jul 11, 2013 at 12:52 PM, Jordan K. Hubbard 
jordan.hubb...@gmail.com wrote:


 On Jul 11, 2013, at 7:27 AM, Julian Elischer jul...@freebsd.org wrote:

  I could imagine that we could stash away a vimage stack just for this
 purpose.
  you'd set it up on boot and leave it detached until you need it.
 
  you just need to switch the interfaces over to the new stack on panic
 and put them into 'poll' mode.

 That sounds like a rather clever solution to this problem (OS X doesn't
 support vimage, despite repeated attempts on my part to change that).


It would probably work for most of the crashes, but will not work in a few
interesting classes of failure. Using in-kernel stack implicitly assumes
that your memory allocator still works as both the stack and the interface
driver will need to get their mbufs and other data somewhere. Alas it's
those unusual cases that are hardest to debug and where you really do want
debugger or coredump to work.

Back at my previous work we did it the 'embedded system way'. The interface
driver
provided dumb functions to re-init device, send a frame and poll for
received frame. All that without using system malloc. There was a dumb
malloc that gave a few chunks of memory from a static buffer to gzip, but the
rest of the code was independent of any kernel facilities. We had simple
ARP/IP/UDP/TFTP(+gzip) implementation to upload compressed image of
physical memory to a specified server. Overall it worked pretty well.

Considering that this approach pretty much puts core dump outside of
kernel, I wonder if we could start some sort of reverse BTX loader on
crash. Instead of downloading the kernel it would upload the core. This way
we should be able to produce the core in a fairly generic way on any system
where we can use PXE for network I/O. The idea may be a non-starter as I
have no clue whether it's possible to use PXE once kernel had booted and
took control of NIC hardware.

--Artem


Re: Kernel dumps [was Re: possible changes from Panzura]

2013-07-11 Thread Kevin Day

On Jul 11, 2013, at 4:05 PM, Artem Belevich a...@freebsd.org wrote:
  
 It would probably work for most of the crashes, but will not work in a few 
 interesting classes of failure. Using in-kernel stack implicitly assumes that 
 your memory allocator still works as both the stack and the interface driver 
 will need to get their mbufs and other data somewhere. Alas it's those 
 unusual cases that are hardest to debug and where you really do want debugger 
 or coredump to work.
 
 Back at my previous work we did it 'embedded system way'. Interface driver 
 provided dumb functions to re-init device, send a frame and poll for received 
 frame. All that without using system malloc. There was a dumb malloc that 
 gave few chunks of memory from static buffer to gzip, but the rest of the 
 code was independent of any kernel facilities. We had simple 
 ARP/IP/UDP/TFTP(+gzip) implementation to upload compressed image of physical 
 memory to a specified server. Overall it worked pretty well.

That's the exact reason why we invented our own mini stack and hooks into the 
network driver. After many failure cases, you can no longer rely on malloc, 
interrupts, routing tables or other goodies to be working correctly. It's too 
easy for the rest of the system to be broken enough that touching any of those 
pieces was enough to crash again.

It really depends on the scope of problem you're trying to debug, but at 
minimum I think you need to revert to polled networking, disable all 
interrupts, and use your own stack/memory pool. Even then it's still not 
foolproof, but at least then you spend less time trying to debug your debugger.







Error on building cross-gcc

2013-07-11 Thread Otacílio
Dears

I'm trying to build cross-gcc with this command line:

make TGTARCH=arm TGTABI=freebsd10

or

make TGTARCH=arm TGTABI=freebsd8

on a

FreeBSD squitch 8.4-RELEASE FreeBSD 8.4-RELEASE #27: Mon Jun 10 08:52:47
BRT 2013 ota@squitch:/usr/obj/usr/src/sys/SQUITCH  i386


but all times I got

/usr/ports/devel/cross-gcc/work/build/./gcc/xgcc
-B/usr/ports/devel/cross-gcc/work/build/./gcc/
-B/usr/local/arm-freebsd10/bin/ -B/usr/local/arm-freebsd10/lib/ -isystem
/usr/ports/devel/cross-gcc/work/build/./gcc -isystem
/usr/local/arm-freebsd10/include -isystem
/usr/local/arm-freebsd10/sys-include -g -O2 -pipe
-fno-strict-aliasing -mbig-endian -O2  -g -O2 -pipe -fno-strict-aliasing
-DIN_GCC -DCROSS_DIRECTORY_STRUCTURE  -W -Wall -Wwrite-strings
-Wcast-qual -Wstrict-prototypes -Wmissing-prototypes
-Wold-style-definition  -isystem ./include  -fno-inline -g
-DHAVE_GTHR_DEFAULT -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED -Dinhibit_libc
 -I. -I. -I../../.././gcc -I../../.././../gcc-4.5.4/libgcc
-I../../.././../gcc-4.5.4/libgcc/.
-I../../.././../gcc-4.5.4/libgcc/../gcc
-I../../.././../gcc-4.5.4/libgcc/../include  -DHAVE_CC_TLS  -o _muldi3.o
-MT _muldi3.o -MD -MP -MF _muldi3.dep -DL_muldi3 -c
../../.././../gcc-4.5.4/libgcc/../gcc/libgcc2.c \

In file included from ../../.././../gcc-4.5.4/libgcc/../gcc/tsystem.h:44:0,
 from ../../.././../gcc-4.5.4/libgcc/../gcc/libgcc2.c:29:
/usr/ports/devel/cross-gcc/work/build/./gcc/include/stddef.h:59:24:
fatal error: sys/_types.h: No such file or directory
compilation terminated.
gmake[4]: *** [_muldi3.o] Erro 1



Can someone give me a hint about what is happening?

Thanks a lot
-Otacilio