From patchwork Mon Jan 11 13:48:42 2016
X-Patchwork-Submitter: "Wangnan (F)"
X-Patchwork-Id: 565830
X-Patchwork-Delegate: davem@davemloft.net
From: Wang Nan
Cc: Wang Nan, He Kuang, Arnaldo Carvalho de Melo, Jiri Olsa, Masami Hiramatsu, Namhyung Kim
Subject: [PATCH 51/53] perf record: Read from tailsize ring buffer
Date: Mon, 11 Jan 2016 13:48:42 +0000
Message-ID: <1452520124-2073-52-git-send-email-wangnan0@huawei.com>
In-Reply-To: <1452520124-2073-1-git-send-email-wangnan0@huawei.com>
References: <1452520124-2073-1-git-send-email-wangnan0@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

tailsize_rb_find_start() is introduced to find the first available event
in a tailsize ring buffer by walking the tail sizes backwards from the
write head, so that events with the '/overwrite/' setting can be read.
record__mmap_read() is changed accordingly.

Reading an active tailsize ring buffer is unsafe, so a global indicator,
tailsize_evt_stopped, is introduced into 'struct record':
record__mmap_should_read() returns true for a tailsize channel only when
tailsize_evt_stopped is true. A following patch will turn off the events
attached to the tailsize ring buffer and toggle this indicator.
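For reference, the backward walk can be sketched in plain user-space C as
below. This is only an illustration of the idea (find_start, BUF_SIZE and
BUF_MASK are made-up names for this sketch; the real tailsize_rb_find_start()
in the patch additionally cross-checks each record's perf_event_header size
against its trailer):

/*
 * Illustration only, not part of the patch: every record in a
 * "tailsize" ring buffer is followed by a u64 trailer holding the
 * record's total size, so a reader that only knows the write head
 * can step backwards, record by record, until going further would
 * leave the window (BUF_SIZE bytes behind the head) that has not
 * been overwritten yet.
 */
#include <stdint.h>
#include <string.h>

#define BUF_SIZE 4096			/* hypothetical, must be a power of two */
#define BUF_MASK (BUF_SIZE - 1)

/* Return the offset of the oldest record that is still fully valid. */
static int find_start(const void *buf, uint64_t head, uint64_t *start)
{
	uint64_t pos = head;

	while (pos) {
		uint64_t size;

		/* The u64 just below 'pos' is the size of the record ending there. */
		memcpy(&size, (const char *)buf + ((pos - sizeof(size)) & BUF_MASK),
		       sizeof(size));

		if (!size || size % sizeof(uint64_t) || size > pos)
			return -1;	/* corrupted trailer */

		if (pos - size + BUF_SIZE < head)
			break;		/* that record is partly overwritten: stop at 'pos' */

		pos -= size;		/* step back to the start of that record */
	}

	*start = pos;			/* pos == 0 means the buffer never wrapped */
	return 0;
}

The loop stops either at offset 0 (the buffer never wrapped) or right after
the newest record whose start has already been overwritten.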
Signed-off-by: Wang Nan
Signed-off-by: He Kuang
Cc: Arnaldo Carvalho de Melo
Cc: Jiri Olsa
Cc: Masami Hiramatsu
Cc: Namhyung Kim
Cc: Zefan Li
Cc: pi3orama@163.com
---
 tools/perf/builtin-record.c | 69 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 68 insertions(+), 1 deletion(-)

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 8e56f92..6c8905b 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -56,6 +56,7 @@ struct record {
 	bool no_buildid_cache_set;
 	bool timestamp_filename;
 	bool switch_output;
+	bool tailsize_evt_stopped;
 	unsigned long long samples;
 };

@@ -79,6 +80,63 @@ static int process_synthesized_event(struct perf_tool *tool,
 	return record__write(rec, event, event->header.size);
 }

+static int
+tailsize_rb_find_start(void *buf, u64 head, int mask, u64 *p_evt_head)
+{
+	int buf_size = mask + 1;
+	u64 evt_head = head;
+	u64 *pevt_size;
+
+	pr_debug("start reading tailsize, head=%"PRId64"\n", head);
+	while (true) {
+		struct perf_event_header *pheader;
+
+		pevt_size = buf + ((evt_head - sizeof(*pevt_size)) & mask);
+		pr_debug4("read tailsize: size: %"PRId64"\n", *pevt_size);
+
+		if (*pevt_size % sizeof(u64) != 0) {
+			pr_warning("Tailsize ring buffer corrupted: unaligned\n");
+			return -1;
+		}
+
+		if (!*pevt_size) {
+			if (evt_head) {
+				pr_warning("Tailsize ring buffer corrupted: size is 0 but evt_head (0x%"PRIx64") is not 0\n",
+					   (unsigned long)evt_head);
+				return -1;
+			}
+			*p_evt_head = evt_head;
+			return 0;
+		}
+
+		if (evt_head < *pevt_size) {
+			pr_warning("Tailsize ring buffer corrupted: head (%"PRId64") < size (%"PRId64")\n",
+				   evt_head, *pevt_size);
+			return -1;
+		}
+
+		evt_head -= *pevt_size;
+
+		if (evt_head + buf_size < head) {
+			evt_head += *pevt_size;
+			pr_debug("Finish reading tailsize buffer, evt_head=%"PRIx64", head=%"PRIx64"\n",
+				 evt_head, head);
+			*p_evt_head = evt_head;
+			return 0;
+		}
+
+		pheader = (struct perf_event_header *)(buf + (evt_head & mask));
+		if (pheader->size != *pevt_size) {
+			pr_warning("Tailsize ring buffer corrupted: found size mismatch: %d vs %"PRId64"\n",
+				   pheader->size, *pevt_size);
+			return -1;
+		}
+	}
+
+	pr_warning("ERROR: shouldn't get there\n");
+	return -1;
+}
+
 static int record__mmap_read(struct record *rec, int idx)
 {
 	struct perf_mmap *md = &rec->evlist->mmap[idx];
@@ -88,10 +146,17 @@ static int record__mmap_read(struct record *rec, int idx)
 	unsigned long size;
 	void *buf;
 	int rc = 0;
+	int channel;

 	if (old == head)
 		return 0;

+	channel = perf_evlist__idx_channel(rec->evlist, idx);
+	if (perf_evlist__channel_check(rec->evlist, channel, TAILSIZE)) {
+		if (tailsize_rb_find_start(data, head, md->mask, &old))
+			return -1;
+	}
+
 	rec->samples++;

 	size = head - old;
@@ -462,7 +527,8 @@ static bool record__mmap_should_read(struct record *rec, int idx)
 	if (perf_evlist__channel_idx(rec->evlist, &channel, &idx))
 		return false;
 	if (perf_evlist__channel_check(rec->evlist, channel, RDONLY))
-		return false;
+		if (perf_evlist__channel_check(rec->evlist, channel, TAILSIZE))
+			return rec->tailsize_evt_stopped;

 	return true;
 }
@@ -1226,6 +1292,7 @@ static struct record record = {
 		.mmap2		= perf_event__process_mmap2,
 		.ordered_events	= true,
 	},
+	.tailsize_evt_stopped = false,
 };

 const char record_callchain_help[] = CALLCHAIN_RECORD_HELP
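For completeness, a toy user-space producer is sketched below. It is not part
of this patch and not the kernel's actual writer (struct toy_header, rb_write()
and emit_record() are simplified stand-ins); it only shows the layout the
reader relies on: every record is followed by a u64 trailer holding the
record's total size, so the eight bytes just below the write head always
describe the newest record.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RB_SIZE 256			/* toy ring buffer, power of two */
#define RB_MASK (RB_SIZE - 1)

/* Simplified stand-in for struct perf_event_header. */
struct toy_header {
	uint32_t type;
	uint16_t misc;
	uint16_t size;			/* total record size, incl. trailer */
};

static unsigned char rb[RB_SIZE];
static uint64_t rb_head;		/* monotonically growing write head */

static void rb_write(const void *data, size_t len)
{
	for (size_t i = 0; i < len; i++)
		rb[(rb_head + i) & RB_MASK] = ((const unsigned char *)data)[i];
	rb_head += len;
}

/* Emit one record followed by its u64 size trailer. */
static void emit_record(uint32_t type, size_t payload)
{
	uint64_t total = sizeof(struct toy_header) + payload + sizeof(uint64_t);
	struct toy_header h = { .type = type, .misc = 0, .size = (uint16_t)total };
	unsigned char pad[64] = { 0 };

	rb_write(&h, sizeof(h));
	rb_write(pad, payload);
	rb_write(&total, sizeof(total));	/* the "tailsize" trailer */
}

int main(void)
{
	uint64_t last;

	emit_record(1, 8);
	emit_record(2, 24);
	emit_record(3, 0);

	/* The u64 right below the head is the size of the newest record. */
	memcpy(&last, &rb[(rb_head - sizeof(last)) & RB_MASK], sizeof(last));
	printf("head=%llu, newest record is %llu bytes\n",
	       (unsigned long long)rb_head, (unsigned long long)last);
	return 0;
}

Run, it prints the head offset and the size recovered from the trailer of the
last record, which is the first value a backward reader such as
tailsize_rb_find_start() looks at.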