Message ID:    20090206.232352.74134222.davem@davemloft.net
State:         RFC
Delegated to:  David Miller
On Fri, Feb 06, 2009 at 11:23:52PM -0800, David Miller wrote:
> > Any chance that one of the last memory related patches will fix
> > this problem? Thanks.
> 
> It is possible.
> 
> Here are two separate things you can try:
> 
> 1) Boot with "mem=512m" on the kernel boot command line.
> 
> 2) Try booting with the patch below.

Unfortunately the patch applied to 2.6.27.13 didn't change anything,
still hanging - see below. Full output is available on the web.

Adding mem=512k also hung with a 2.6.28 series kernel. Any further ideas?

Doing kobject_set_name()
kset_register()
kset_register: kset_init()
kset_register: kset_add_internal()
kset_register: kobject_uevent()
kobject_uevent_env: [of] fffff8a1fe00c6d8
kobject_uevent_env: Checking uevent_ops->filter
kobject_uevent_env: Allocating and filling env buffer.
kobject_uevent_env: Checking uevent_ops->uevent
kobject_uevent_env: Invoking uevent_helper[/sbin/hotplug]
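For context, the mem= option is simply appended to the kernel command line by the boot loader. A minimal sketch of how that could look in a SILO configuration, assuming the box boots via SILO (the image path and label below are placeholders, not taken from the report):

    # /etc/silo.conf (hypothetical test entry)
    image=/boot/vmlinux-2.6.27.13
            label=linux-test
            append="mem=512m"

The same option can usually also be typed by hand after the image label at the SILO boot prompt.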
On Mon, Feb 09, 2009 at 11:21:01PM +0100, Hermann Lauer wrote:
> 
> Unfortunately the patch applied to 2.6.27.13 didn't change anything,
> still hanging - see below. Full output is available on the web.
> 
> Adding mem=512k also hung with a 2.6.28 series kernel. Any further ideas?
> 
> Doing kobject_set_name()
> kset_register()
> kset_register: kset_init()
> kset_register: kset_add_internal()
> kset_register: kobject_uevent()
> kobject_uevent_env: [of] fffff8a1fe00c6d8
> kobject_uevent_env: Checking uevent_ops->filter
> kobject_uevent_env: Allocating and filling env buffer.
> kobject_uevent_env: Checking uevent_ops->uevent
> kobject_uevent_env: Invoking uevent_helper[/sbin/hotplug]

Tried today "# CONFIG_CGROUPS is not set", which David mentioned
some time ago, but still the same hang.
From: Hermann Lauer <Hermann.Lauer@iwr.uni-heidelberg.de>
Date: Wed, 11 Feb 2009 22:25:04 +0100

> On Mon, Feb 09, 2009 at 11:21:01PM +0100, Hermann Lauer wrote:
> > 
> > Unfortunately the patch applied to 2.6.27.13 didn't change anything,
> > still hanging - see below. Full output is available on the web.
> > 
> > Adding mem=512k also hung with a 2.6.28 series kernel. Any further ideas?
> > 
> > Doing kobject_set_name()
> > kset_register()
> > kset_register: kset_init()
> > kset_register: kset_add_internal()
> > kset_register: kobject_uevent()
> > kobject_uevent_env: [of] fffff8a1fe00c6d8
> > kobject_uevent_env: Checking uevent_ops->filter
> > kobject_uevent_env: Allocating and filling env buffer.
> > kobject_uevent_env: Checking uevent_ops->uevent
> > kobject_uevent_env: Invoking uevent_helper[/sbin/hotplug]
> 
> Tried today "# CONFIG_CGROUPS is not set", which David mentioned
> some time ago, but still the same hang.

Yes, I didn't expect that to help.

I'm real busy, but will get back to you with some new debugging patches,
hopefully in the next few days.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 51daae5..c9ab51a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -46,6 +46,7 @@
 #include <linux/page-isolation.h>
 #include <linux/memcontrol.h>
 #include <linux/debugobjects.h>
+#include <linux/nmi.h>
 
 #include <asm/tlbflush.h>
 #include <asm/div64.h>
@@ -707,7 +708,26 @@ static int move_freepages(struct zone *zone,
 	 * Remove at a later date when no bug reports exist related to
 	 * grouping pages by mobility
 	 */
-	BUG_ON(page_zone(start_page) != page_zone(end_page));
+	if (unlikely(page_zone(start_page) != page_zone(end_page))) {
+		printk(KERN_ERR "move_freepages: Bogus zones: "
+		       "start_page[%p] end_page[%p] zone[%p]\n",
+		       start_page, end_page, zone);
+		printk(KERN_ERR "move_freepages: "
+		       "start_zone[%p] end_zone[%p]\n",
+		       page_zone(start_page), page_zone(end_page));
+		printk(KERN_ERR "move_freepages: "
+		       "start_pfn[0x%lx] end_pfn[0x%lx]\n",
+		       page_to_pfn(start_page), page_to_pfn(end_page));
+		printk(KERN_ERR "move_freepages: "
+		       "start_nid[%d] end_nid[%d]\n",
+		       page_to_nid(start_page), page_to_nid(end_page));
+		spin_unlock(&zone->lock);
+		local_irq_enable();
+		while (1) {
+			barrier();
+			touch_nmi_watchdog();
+		}
+	}
 #endif
 
 	for (page = start_page; page <= end_page;) {
@@ -2583,6 +2603,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 	unsigned long end_pfn = start_pfn + size;
 	unsigned long pfn;
 	struct zone *z;
+	int tmp;
 
 	z = &NODE_DATA(nid)->node_zones[zone];
 	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
@@ -2594,7 +2615,8 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		if (context == MEMMAP_EARLY) {
 			if (!early_pfn_valid(pfn))
 				continue;
-			if (!early_pfn_in_nid(pfn, nid))
+			tmp = early_pfn_to_nid(pfn);
+			if (tmp > -1 && tmp != nid)
 				continue;
 		}
 		page = pfn_to_page(pfn);
@@ -2961,8 +2983,9 @@ int __meminit early_pfn_to_nid(unsigned long pfn)
 			return early_node_map[i].nid;
 	}
 
-	return 0;
+	return -1;
 }
+
 #endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */
 
 /* Basic iterator support to walk early_node_map[] */
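Two things are going on in this patch. The first hunk replaces the BUG_ON() in move_freepages() with diagnostic printk()s and then spins while feeding the NMI watchdog, so the zone/pfn/nid information reaches the console instead of being lost in a panic. The second part makes early_pfn_to_nid() return -1 for pfns not covered by any early_node_map[] entry, and memmap_init_zone() then skips a page only when a valid node id actually disagrees with the node being initialised. Below is a minimal user-space sketch of that lookup logic; the early_node_map[] contents, the pfn values and the driver in main() are made up purely for illustration, this is a model of the idea and not the kernel code itself:

#include <stdio.h>

/* Simplified model of the kernel's early_node_map[]: each entry says
 * which node a range of page frame numbers (pfns) belongs to. */
struct node_range { unsigned long start_pfn, end_pfn; int nid; };

static struct node_range early_node_map[] = {
	{ 0x000000, 0x040000, 0 },	/* hypothetical node 0 range */
	{ 0x080000, 0x0c0000, 1 },	/* hypothetical node 1 range */
};

/* After the patch: return -1 for pfns not covered by any entry,
 * instead of silently claiming they belong to node 0. */
static int early_pfn_to_nid(unsigned long pfn)
{
	unsigned int i;

	for (i = 0; i < sizeof(early_node_map) / sizeof(early_node_map[0]); i++)
		if (pfn >= early_node_map[i].start_pfn &&
		    pfn < early_node_map[i].end_pfn)
			return early_node_map[i].nid;
	return -1;
}

int main(void)
{
	unsigned long pfns[] = { 0x010000, 0x050000, 0x090000 };
	int nid = 0;	/* node whose memmap is being initialised */
	unsigned int i;

	for (i = 0; i < 3; i++) {
		int tmp = early_pfn_to_nid(pfns[i]);

		/* memmap_init_zone() condition after the patch: skip the
		 * page only when it positively belongs to another node. */
		if (tmp > -1 && tmp != nid)
			printf("pfn 0x%lx: belongs to node %d, skipped\n",
			       pfns[i], tmp);
		else
			printf("pfn 0x%lx: initialised for node %d\n",
			       pfns[i], nid);
	}
	return 0;
}

Run as an ordinary C program, pfns inside another node's range are skipped, while pfns falling into a hole in the map (which the old code attributed to node 0) are now initialised for the node being set up.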