From patchwork Thu Oct 25 21:10:33 2018
X-Patchwork-Id: 989368
From: Jens Axboe <axboe@kernel.dk>
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-ide@vger.kernel.org
Cc: Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 22/28] block: kill legacy parts of timeout handling
Date: Thu, 25 Oct 2018 15:10:33 -0600
Message-Id: <20181025211039.11559-23-axboe@kernel.dk>
In-Reply-To: <20181025211039.11559-1-axboe@kernel.dk>
References: <20181025211039.11559-1-axboe@kernel.dk>

The only user of legacy timing now is BSG, which is invoked from the
mq timeout handler. Kill the legacy code, and rename the
q->rq_timed_out_fn to q->bsg_job_timeout_fn.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Hannes Reinecke
---
 block/blk-core.c       |  1 -
 block/blk-settings.c   |  7 ---
 block/blk-timeout.c    | 99 +++---------------------------------------
 block/blk.h            |  1 -
 block/bsg-lib.c        |  6 +--
 include/linux/blkdev.h |  4 +-
 6 files changed, 11 insertions(+), 107 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index ad79e8b09801..4c39c7865f9c 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -653,7 +653,6 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id,
 		    laptop_mode_timer_fn, 0);
 	timer_setup(&q->timeout, blk_rq_timed_out_timer, 0);
 	INIT_WORK(&q->timeout_work, NULL);
-	INIT_LIST_HEAD(&q->timeout_list);
 	INIT_LIST_HEAD(&q->icq_list);
 #ifdef CONFIG_BLK_CGROUP
 	INIT_LIST_HEAD(&q->blkg_list);
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 764d0e9e092f..3c5da75c2def 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -32,13 +32,6 @@ void blk_queue_rq_timeout(struct request_queue *q, unsigned int timeout)
 }
 EXPORT_SYMBOL_GPL(blk_queue_rq_timeout);
 
-void blk_queue_rq_timed_out(struct request_queue *q, rq_timed_out_fn *fn)
-{
-	WARN_ON_ONCE(q->mq_ops);
-	q->rq_timed_out_fn = fn;
-}
-EXPORT_SYMBOL_GPL(blk_queue_rq_timed_out);
-
 void blk_queue_lld_busy(struct request_queue *q, lld_busy_fn *fn)
 {
 	q->lld_busy_fn = fn;
diff --git a/block/blk-timeout.c b/block/blk-timeout.c
index f2cfd56e1606..6428d458072a 100644
--- a/block/blk-timeout.c
+++ b/block/blk-timeout.c
@@ -78,70 +78,6 @@ void blk_delete_timer(struct request *req)
 	list_del_init(&req->timeout_list);
 }
 
-static void blk_rq_timed_out(struct request *req)
-{
-	struct request_queue *q = req->q;
-	enum blk_eh_timer_return ret = BLK_EH_RESET_TIMER;
-
-	if (q->rq_timed_out_fn)
-		ret = q->rq_timed_out_fn(req);
-	switch (ret) {
-	case BLK_EH_RESET_TIMER:
-		blk_add_timer(req);
-		blk_clear_rq_complete(req);
-		break;
-	case BLK_EH_DONE:
-		/*
-		 * LLD handles this for now but in the future
-		 * we can send a request msg to abort the command
-		 * and we can move more of the generic scsi eh code to
-		 * the blk layer.
-		 */
-		break;
-	default:
-		printk(KERN_ERR "block: bad eh return: %d\n", ret);
-		break;
-	}
-}
-
-static void blk_rq_check_expired(struct request *rq, unsigned long *next_timeout,
-				 unsigned int *next_set)
-{
-	const unsigned long deadline = blk_rq_deadline(rq);
-
-	if (time_after_eq(jiffies, deadline)) {
-		list_del_init(&rq->timeout_list);
-
-		/*
-		 * Check if we raced with end io completion
-		 */
-		if (!blk_mark_rq_complete(rq))
-			blk_rq_timed_out(rq);
-	} else if (!*next_set || time_after(*next_timeout, deadline)) {
-		*next_timeout = deadline;
-		*next_set = 1;
-	}
-}
-
-void blk_timeout_work(struct work_struct *work)
-{
-	struct request_queue *q =
-		container_of(work, struct request_queue, timeout_work);
-	unsigned long flags, next = 0;
-	struct request *rq, *tmp;
-	int next_set = 0;
-
-	spin_lock_irqsave(q->queue_lock, flags);
-
-	list_for_each_entry_safe(rq, tmp, &q->timeout_list, timeout_list)
-		blk_rq_check_expired(rq, &next, &next_set);
-
-	if (next_set)
-		mod_timer(&q->timeout, round_jiffies_up(next));
-
-	spin_unlock_irqrestore(q->queue_lock, flags);
-}
-
 /**
  * blk_abort_request -- Request request recovery for the specified command
  * @req:	pointer to the request of interest
@@ -153,20 +89,13 @@ void blk_timeout_work(struct work_struct *work)
  */
 void blk_abort_request(struct request *req)
 {
-	if (req->q->mq_ops) {
-		/*
-		 * All we need to ensure is that timeout scan takes place
-		 * immediately and that scan sees the new timeout value.
-		 * No need for fancy synchronizations.
-		 */
-		blk_rq_set_deadline(req, jiffies);
-		kblockd_schedule_work(&req->q->timeout_work);
-	} else {
-		if (blk_mark_rq_complete(req))
-			return;
-		blk_delete_timer(req);
-		blk_rq_timed_out(req);
-	}
+	/*
+	 * All we need to ensure is that timeout scan takes place
+	 * immediately and that scan sees the new timeout value.
+	 * No need for fancy synchronizations.
+	 */
+	blk_rq_set_deadline(req, jiffies);
+	kblockd_schedule_work(&req->q->timeout_work);
 }
 EXPORT_SYMBOL_GPL(blk_abort_request);
 
@@ -194,13 +123,6 @@ void blk_add_timer(struct request *req)
 	struct request_queue *q = req->q;
 	unsigned long expiry;
 
-	if (!q->mq_ops)
-		lockdep_assert_held(q->queue_lock);
-
-	/* blk-mq has its own handler, so we don't need ->rq_timed_out_fn */
-	if (!q->mq_ops && !q->rq_timed_out_fn)
-		return;
-
 	BUG_ON(!list_empty(&req->timeout_list));
 
 	/*
@@ -213,13 +135,6 @@ void blk_add_timer(struct request *req)
 	req->rq_flags &= ~RQF_TIMED_OUT;
 	blk_rq_set_deadline(req, jiffies + req->timeout);
 
-	/*
-	 * Only the non-mq case needs to add the request to a protected list.
-	 * For the mq case we simply scan the tag map.
-	 */
-	if (!q->mq_ops)
-		list_add_tail(&req->timeout_list, &req->q->timeout_list);
-
 	/*
 	 * If the timer isn't already pending or this timeout is earlier
 	 * than an existing one, modify the timer. Round up to next nearest
diff --git a/block/blk.h b/block/blk.h
index e2604ae7ddfa..4ae6cacb4548 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -224,7 +224,6 @@ static inline bool bio_integrity_endio(struct bio *bio)
 }
 #endif /* CONFIG_BLK_DEV_INTEGRITY */
 
-void blk_timeout_work(struct work_struct *work);
 unsigned long blk_rq_timeout(unsigned long timeout);
 void blk_add_timer(struct request *req);
 void blk_delete_timer(struct request *);
diff --git a/block/bsg-lib.c b/block/bsg-lib.c
index ef176b472914..3a5938205825 100644
--- a/block/bsg-lib.c
+++ b/block/bsg-lib.c
@@ -305,8 +305,8 @@ static enum blk_eh_timer_return bsg_timeout(struct request *rq, bool reserved)
 	enum blk_eh_timer_return ret = BLK_EH_DONE;
 	struct request_queue *q = rq->q;
 
-	if (q->rq_timed_out_fn)
-		ret = q->rq_timed_out_fn(rq);
+	if (q->bsg_job_timeout_fn)
+		ret = q->bsg_job_timeout_fn(rq);
 
 	return ret;
 }
@@ -355,9 +355,9 @@ struct request_queue *bsg_setup_queue(struct device *dev, const char *name,
 
 	q->queuedata = dev;
 	q->bsg_job_fn = job_fn;
+	q->bsg_job_timeout_fn = timeout;
 	blk_queue_flag_set(QUEUE_FLAG_BIDI, q);
 	blk_queue_rq_timeout(q, BLK_DEFAULT_SG_TIMEOUT);
-	q->rq_timed_out_fn = timeout;
 
 	ret = bsg_register_queue(q, dev, name, &bsg_transport_ops);
 	if (ret) {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 33a81cfde6a6..bfe40de81c19 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -442,7 +442,6 @@ struct request_queue {
 	make_request_fn		*make_request_fn;
 	poll_q_fn		*poll_fn;
 	softirq_done_fn		*softirq_done_fn;
-	rq_timed_out_fn		*rq_timed_out_fn;
 	dma_drain_needed_fn	*dma_drain_needed;
 	lld_busy_fn		*lld_busy_fn;
 	/* Called just after a request is allocated */
@@ -543,7 +542,6 @@ struct request_queue {
 
 	struct timer_list	timeout;
 	struct work_struct	timeout_work;
-	struct list_head	timeout_list;
 
 	struct list_head	icq_list;
 #ifdef CONFIG_BLK_CGROUP
@@ -603,6 +601,7 @@ struct request_queue {
 
 #if defined(CONFIG_BLK_DEV_BSG)
 	bsg_job_fn		*bsg_job_fn;
+	rq_timed_out_fn		*bsg_job_timeout_fn;
 	struct bsg_class_device bsg_dev;
 #endif
 
@@ -1159,7 +1158,6 @@ extern void blk_queue_virt_boundary(struct request_queue *, unsigned long);
 extern void blk_queue_dma_alignment(struct request_queue *, int);
 extern void blk_queue_update_dma_alignment(struct request_queue *, int);
 extern void blk_queue_softirq_done(struct request_queue *, softirq_done_fn *);
-extern void blk_queue_rq_timed_out(struct request_queue *, rq_timed_out_fn *);
 extern void blk_queue_rq_timeout(struct request_queue *, unsigned int);
 extern void blk_queue_flush_queueable(struct request_queue *q, bool queueable);
 extern void blk_queue_write_cache(struct request_queue *q, bool enabled, bool fua);
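
A note on the driver-facing side, since the rename could be mistaken for an
API change: the contract for a bsg timeout handler is untouched. It keeps
the rq_timed_out_fn signature and still returns BLK_EH_RESET_TIMER or
BLK_EH_DONE; only its storage moves from q->rq_timed_out_fn to
q->bsg_job_timeout_fn, from where bsg_timeout() invokes it on the blk-mq
timeout path. Below is a minimal sketch of a transport wiring one up. It
assumes the bsg_setup_queue() prototype from earlier in this series (timeout
handler passed alongside the job handler, before dd_job_size), and every
example_* identifier is hypothetical:

#include <linux/blkdev.h>
#include <linux/bsg-lib.h>
#include <linux/device.h>
#include <linux/err.h>

/* Hypothetical hardware poll; a real transport would query its firmware. */
static bool example_hw_still_busy(struct request *rq)
{
	return false;
}

/* Stub job handler so bsg_setup_queue() has something to dispatch to. */
static int example_bsg_job_fn(struct bsg_job *job)
{
	bsg_job_done(job, 0, 0);
	return 0;
}

/*
 * Timeout handler with the rq_timed_out_fn signature that this patch stores
 * in q->bsg_job_timeout_fn; bsg_timeout() forwards here when blk-mq times
 * out a bsg request.
 */
static enum blk_eh_timer_return example_bsg_job_timeout(struct request *rq)
{
	if (example_hw_still_busy(rq))
		return BLK_EH_RESET_TIMER;	/* re-arm the timer */

	return BLK_EH_DONE;	/* let error handling finish the request */
}

static int example_attach_bsg(struct device *dev)
{
	struct request_queue *q;

	q = bsg_setup_queue(dev, dev_name(dev), example_bsg_job_fn,
			    example_bsg_job_timeout, 0);
	return PTR_ERR_OR_ZERO(q);
}

Returning BLK_EH_RESET_TIMER feeds back into blk_add_timer(), which after
this patch is mq-only and no longer maintains a per-queue timeout_list.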