From patchwork Wed Aug 3 09:43:23 2016
From: "Xulei (Stone)"
To: qemu-devel, kvm
Cc: Paolo Bonzini, "guangrong.xiao"
Date: Wed, 3 Aug 2016 09:43:23 +0000
Message-ID: <8E78D212B8C25246BE4CE7EA0E645FE53EFD8C@SZXEMI504-MBS.china.huawei.com>
Subject: [Qemu-devel] [QUESTION]stuck in SeaBIOS because of losing a SMI

Hi, all:

Recently I used a shell script to continuously reset a VM to see what might happen. After one day the VM got stuck. Judging from the SeaBIOS log and the KVM trace below, it looks like an SMI was lost, or SeaBIOS could not handle the SMI. The problem is reproducible on my machine (SeaBIOS 1.9.1, QEMU 2.6.0, kmod 4.4.11).

2016-08-03 16:23:15init smm
2016-08-03 16:23:15before SMI====
2016-08-03 16:23:15after SMI=====      <---- always stuck here, unless I destroy the VM

As shown above, if the BIOS never runs handle_smi(), PORT_SMI_STATUS stays 0x01, so smm_relocate_and_restore() spins in its while loop forever.
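For reference, this is my understanding of the handshake, as a simplified sketch based on my reading of src/fw/smm.c (not the exact upstream code; error paths and the 64-bit save-state case are omitted):

    /* Simplified sketch of the SMI relocation handshake in SeaBIOS. */

    void smm_relocate_and_restore(void)
    {
        outb(0x01, PORT_SMI_STATUS);          /* init APM status port */
        outb(0x00, PORT_SMI_CMD);             /* raise an SMI */
        while (inb(PORT_SMI_STATUS) != 0x00)  /* spins forever if the SMI never arrives */
            ;
        /* ... restore the SMI command port to its normal APM use ... */
    }

    /* Runs in SMM only if the vCPU actually received the SMI. */
    void handle_smi(u16 cs)
    {
        struct smm_layout *smm = MAKE_FLATPTR(cs, 0);
        if (smm == (void*)BUILD_SMM_INIT_ADDR) {
            /* relocate SMBASE to 0xa0000, then ... */
            outb(0x00, PORT_SMI_STATUS);      /* ... this write is what ends the wait loop above */
        }
    }

If that reading is right, only handle_smi() ever clears PORT_SMI_STATUS, which matches the trace below: the vCPU keeps reading port 0xb3 and always gets 0x1.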
Why doesn't the BIOS handle the SMI at this point? Is this a KVM bug, or a SeaBIOS bug?

--------------
Xulei (Stone)

==================my shell script===
#!/bin/bash
while((1))
do
    virsh reset VMNAME
    sleep 1
done

==================kvm trace log===
CPU 0/KVM-13843 [020] d... 1025056.813494: kvm_entry: vcpu 0
CPU 0/KVM-13843 [020] .... 1025056.813495: kvm_exit: reason IO_INSTRUCTION rip 0xef10e info b30048 0
CPU 0/KVM-13843 [020] .... 1025056.813495: kvm_emulate_insn: 0:ef10e:e4 b3 (prot32)
CPU 0/KVM-13843 [020] .... 1025056.813496: kvm_userspace_exit: reason KVM_EXIT_IO (2)
CPU 0/KVM-13843 [020] .... 1025056.813496: kvm_fpu: unload
CPU 0/KVM-13843 [020] .... 1025056.813497: kvm_pio: pio_read at 0xb3 size 1 count 1 val 0x1
CPU 0/KVM-13843 [020] .... 1025056.813497: kvm_fpu: load
CPU 0/KVM-13843 [020] d... 1025056.813497: kvm_entry: vcpu 0
CPU 0/KVM-13843 [020] .... 1025056.813498: kvm_exit: reason IO_INSTRUCTION rip 0xef10e info b30048 0
CPU 0/KVM-13843 [020] .... 1025056.813498: kvm_emulate_insn: 0:ef10e:e4 b3 (prot32)
CPU 0/KVM-13843 [020] .... 1025056.813499: kvm_userspace_exit: reason KVM_EXIT_IO (2)
CPU 0/KVM-13843 [020] .... 1025056.813499: kvm_fpu: unload
CPU 0/KVM-13843 [020] .... 1025056.813500: kvm_pio: pio_read at 0xb3 size 1 count 1 val 0x1
CPU 0/KVM-13843 [020] .... 1025056.813500: kvm_fpu: load

==================my seabios debug patch and log===
--- a/roms/seabios/src/fw/smm.c
+++ b/roms/seabios/src/fw/smm.c
@@ -65,7 +65,8 @@ handle_smi(u16 cs)
     u8 cmd = inb(PORT_SMI_CMD);
     struct smm_layout *smm = MAKE_FLATPTR(cs, 0);
     u32 rev = smm->cpu.i32.smm_rev & SMM_REV_MASK;
-    dprintf(DEBUG_HDL_smi, "handle_smi cmd=%x smbase=%p\n", cmd, smm);
+    if(cmd == 0x00) {
+        dprintf(1, "handle_smi cmd=%x smbase=%p\n", cmd, smm);
+    }
     if (smm == (void*)BUILD_SMM_INIT_ADDR) {
         // relocate SMBASE to 0xa0000
@@ -147,14 +148,14 @@ smm_relocate_and_restore(void)
 {
     /* init APM status port */
     outb(0x01, PORT_SMI_STATUS);
+    dprintf(1,"before SMI====\n");
     /* raise an SMI interrupt */
     outb(0x00, PORT_SMI_CMD);
+    dprintf(1,"after SMI=====\n");
     /* wait until SMM code executed */
     while (inb(PORT_SMI_STATUS) != 0x00)
         ;
+    dprintf(1,"smm code executes complete====\n");

And the log output looks like this:

2016-08-03 16:23:15PCI: Using 00:02.0 for primary VGA
2016-08-03 16:23:15smm_device_setup start
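In case it is useful for debugging, the silent wait loop in the patch above could be replaced by something like the following (an untested sketch, not part of my patch), so that a lost SMI shows up in the SeaBIOS log instead of hanging silently:

    /* wait until SMM code executed -- untested debugging variant that logs
     * progress instead of spinning silently */
    u32 spins = 0;
    while (inb(PORT_SMI_STATUS) != 0x00) {
        if (++spins % 1000000 == 0)
            dprintf(1, "still waiting for handle_smi, PORT_SMI_STATUS=0x%x\n",
                    inb(PORT_SMI_STATUS));
    }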