From patchwork Mon Jun 15 11:51:23 2015
X-Patchwork-Submitter: Alvise Rigo
X-Patchwork-Id: 484227
From: Alvise Rigo
To: qemu-devel@nongnu.org
Cc: mttcg@listserver.greensocs.com, claudio.fontana@huawei.com, cota@braap.org,
    jani.kokkonen@huawei.com, tech@virtualopensystems.com, alex.bennee@linaro.org,
    rth@twiddle.net
Subject: [Qemu-devel] [RFC v2 2/7] exec: Add new exclusive bitmap to ram_list
Date: Mon, 15 Jun 2015 13:51:23 +0200
Message-Id: <1434369088-15076-3-git-send-email-a.rigo@virtualopensystems.com>
In-Reply-To: <1434369088-15076-1-git-send-email-a.rigo@virtualopensystems.com>
References: <1434369088-15076-1-git-send-email-a.rigo@virtualopensystems.com>
X-Mailer: git-send-email 2.4.3

The purpose of this new bitmap is to flag the memory pages that are in
the middle of LL/SC operations (after a LL, before the corresponding SC).
For all these pages, the corresponding TLB entries will be generated in
such a way as to force the slow path.

When the system starts, the whole memory is dirty (the whole bitmap is
set). After being marked as exclusively-clean, a page is *not* restored
as dirty right after the SC; the cputlb code takes care of that, lazily
setting the page as dirty again when the TLB EXCL entry is about to be
overwritten.

The accessors to this bitmap are currently not atomic; they will have to
become atomic in a real multi-threaded TCG.

Also fix one bracket alignment.

Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
 exec.c                  |  7 +++++--
 include/exec/memory.h   |  3 ++-
 include/exec/ram_addr.h | 19 +++++++++++++++++++
 3 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/exec.c b/exec.c
index 487583b..79b2571 100644
--- a/exec.c
+++ b/exec.c
@@ -1430,11 +1430,14 @@ static ram_addr_t ram_block_add(RAMBlock *new_block, Error **errp)
         int i;
 
         /* ram_list.dirty_memory[] is protected by the iothread lock.  */
-        for (i = 0; i < DIRTY_MEMORY_NUM; i++) {
+        for (i = 0; i < DIRTY_MEMORY_NUM - 1; i++) {
             ram_list.dirty_memory[i] =
                 bitmap_zero_extend(ram_list.dirty_memory[i],
                                    old_ram_size, new_ram_size);
-       }
+        }
+        ram_list.dirty_memory[DIRTY_MEMORY_EXCLUSIVE] =
+                bitmap_one_extend(ram_list.dirty_memory[i],
+                                  old_ram_size, new_ram_size);
     }
     cpu_physical_memory_set_dirty_range(new_block->offset,
                                         new_block->used_length,
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 8ae004e..5ad6f20 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -19,7 +19,8 @@
 #define DIRTY_MEMORY_VGA       0
 #define DIRTY_MEMORY_CODE      1
 #define DIRTY_MEMORY_MIGRATION 2
-#define DIRTY_MEMORY_NUM       3        /* num of dirty bits */
+#define DIRTY_MEMORY_EXCLUSIVE 3
+#define DIRTY_MEMORY_NUM       4        /* num of dirty bits */
 
 #include <stdint.h>
 #include <stdbool.h>
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index c113f21..f097f1b 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -21,6 +21,7 @@
 
 #ifndef CONFIG_USER_ONLY
 #include "hw/xen/xen.h"
+#include "qemu/bitmap.h"
 
 ram_addr_t qemu_ram_alloc_from_file(ram_addr_t size, MemoryRegion *mr,
                                     bool share, const char *mem_path,
@@ -249,5 +250,23 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(unsigned long *dest,
     return num_dirty;
 }
 
+/* Exclusive bitmap accessors. */
+static inline void cpu_physical_memory_set_excl_dirty(ram_addr_t addr)
+{
+    set_bit(addr >> TARGET_PAGE_BITS,
+            ram_list.dirty_memory[DIRTY_MEMORY_EXCLUSIVE]);
+}
+
+static inline int cpu_physical_memory_excl_is_dirty(ram_addr_t addr)
+{
+    return test_bit(addr >> TARGET_PAGE_BITS,
+                    ram_list.dirty_memory[DIRTY_MEMORY_EXCLUSIVE]);
+}
+
+static inline void cpu_physical_memory_clear_excl_dirty(ram_addr_t addr)
+{
+    clear_bit(addr >> TARGET_PAGE_BITS,
+              ram_list.dirty_memory[DIRTY_MEMORY_EXCLUSIVE]);
+}
 #endif
 #endif
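
For illustration only (not part of this patch): a tiny, self-contained C model
of the protocol described in the commit message. The bitmap starts with every
bit set (the whole memory is dirty), a LL clears the bit for its page so that
stores to it are forced through the slow path, and the bit is lazily set again
when the exclusive TLB entry is about to be overwritten. All names below are
invented for the example; the real helpers are the
cpu_physical_memory_*_excl_dirty() accessors added above.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_BITS     12
#define NUM_PAGES     64
#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* One bit per page, mirroring ram_list.dirty_memory[DIRTY_MEMORY_EXCLUSIVE]. */
static unsigned long excl_bitmap[(NUM_PAGES + BITS_PER_LONG - 1) / BITS_PER_LONG];

static void excl_set_dirty(uint64_t addr)
{
    uint64_t page = addr >> PAGE_BITS;
    excl_bitmap[page / BITS_PER_LONG] |= 1UL << (page % BITS_PER_LONG);
}

static void excl_clear_dirty(uint64_t addr)
{
    uint64_t page = addr >> PAGE_BITS;
    excl_bitmap[page / BITS_PER_LONG] &= ~(1UL << (page % BITS_PER_LONG));
}

static bool excl_is_dirty(uint64_t addr)
{
    uint64_t page = addr >> PAGE_BITS;
    return excl_bitmap[page / BITS_PER_LONG] & (1UL << (page % BITS_PER_LONG));
}

int main(void)
{
    uint64_t addr = (uint64_t)5 << PAGE_BITS;   /* some guest-physical address */

    /* At startup the whole memory is dirty: every bit is set. */
    memset(excl_bitmap, 0xff, sizeof(excl_bitmap));

    /* LL: the page becomes exclusively-clean. */
    excl_clear_dirty(addr);
    printf("after LL:           dirty=%d\n", excl_is_dirty(addr));   /* 0 */

    /* The page is not set back to dirty right after the SC; it happens
     * lazily, when the TLB EXCL entry is about to be overwritten. */
    if (!excl_is_dirty(addr)) {
        excl_set_dirty(addr);
    }
    printf("after TLB eviction: dirty=%d\n", excl_is_dirty(addr));   /* 1 */
    return 0;
}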