From patchwork Thu Jun 16 12:42:49 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bernard Metzler
X-Patchwork-Id: 100638
X-Patchwork-Delegate: davem@davemloft.net
Return-Path:
X-Original-To: patchwork-incoming@ozlabs.org
Delivered-To: patchwork-incoming@ozlabs.org
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
	by ozlabs.org (Postfix) with ESMTP id 712E6B6F8F
	for ; Thu, 16 Jun 2011 22:43:08 +1000 (EST)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753037Ab1FPMmy (ORCPT ); Thu, 16 Jun 2011 08:42:54 -0400
Received: from mtagate3.uk.ibm.com ([194.196.100.163]:57994 "EHLO
	mtagate3.uk.ibm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752722Ab1FPMmv (ORCPT );
	Thu, 16 Jun 2011 08:42:51 -0400
Received: from d06nrmr1307.portsmouth.uk.ibm.com
	(d06nrmr1307.portsmouth.uk.ibm.com [9.149.38.129])
	by mtagate3.uk.ibm.com (8.13.1/8.13.1) with ESMTP id p5GCgoJW014788;
	Thu, 16 Jun 2011 12:42:50 GMT
Received: from d06av08.portsmouth.uk.ibm.com (d06av08.portsmouth.uk.ibm.com
	[9.149.37.249]) by d06nrmr1307.portsmouth.uk.ibm.com
	(8.13.8/8.13.8/NCO v10.0) with ESMTP id p5GCgoiP2117688;
	Thu, 16 Jun 2011 13:42:50 +0100
Received: from d06av08.portsmouth.uk.ibm.com (loopback [127.0.0.1])
	by d06av08.portsmouth.uk.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout)
	with ESMTP id p5GCgoNW015366; Thu, 16 Jun 2011 13:42:50 +0100
Received: from aare.zurich.ibm.com (aare.zurich.ibm.com [9.4.2.232])
	by d06av08.portsmouth.uk.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin)
	with ESMTP id p5GCgnUO015353; Thu, 16 Jun 2011 13:42:50 +0100
Received: from localhost.localdomain (achilles.zurich.ibm.com [9.4.243.2])
	by aare.zurich.ibm.com (AIX6.1/8.13.4/8.13.4) with ESMTP id p5GCgnAZ6226256;
	Thu, 16 Jun 2011 14:42:49 +0200
From: Bernard Metzler
To: netdev@vger.kernel.org
Cc: linux-rdma@vger.kernel.org, Bernard Metzler
Subject: [PATCH 13/14] SIWv2: Memory management: siw_mem.c
Date: Thu, 16 Jun 2011 14:42:49 +0200
Message-Id: <1308228169-22771-1-git-send-email-bmt@zurich.ibm.com>
X-Mailer: git-send-email 1.5.4.3
Sender: netdev-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org

---
 drivers/infiniband/hw/siw/siw_mem.c | 180 +++++++++++++++++++++++++++++++++++
 1 files changed, 180 insertions(+), 0 deletions(-)
 create mode 100644 drivers/infiniband/hw/siw/siw_mem.c

diff --git a/drivers/infiniband/hw/siw/siw_mem.c b/drivers/infiniband/hw/siw/siw_mem.c
new file mode 100644
index 0000000..8a3d65d
--- /dev/null
+++ b/drivers/infiniband/hw/siw/siw_mem.c
@@ -0,0 +1,180 @@
+/*
+ * Software iWARP device driver for Linux
+ *
+ * Authors: Animesh Trivedi
+ *          Bernard Metzler
+ *
+ * Copyright (c) 2008-2011, IBM Corporation
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above copyright notice,
+ *   this list of conditions and the following disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above copyright
+ *   notice, this list of conditions and the following disclaimer in the
+ *   documentation and/or other materials provided with the distribution.
+ *
+ * - Neither the name of IBM nor the names of its contributors may be
+ *   used to endorse or promote products derived from this software without
+ *   specific prior written permission.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+#include <linux/dma-mapping.h>
+#include <linux/mm.h>
+#include <linux/scatterlist.h>
+
+#include "siw.h"
+#include "siw_verbs.h"
+
+/*
+ * DMA mapping/address translation functions.
+ * Used to populate siw private DMA mapping functions of
+ * struct ib_dma_mapping_ops in struct ib_dev - see rdma/ib_verbs.h
+ */
+
+static int siw_mapping_error(struct ib_device *dev, u64 dma_addr)
+{
+	return dma_addr == 0;
+}
+
+static u64 siw_dma_map_single(struct ib_device *dev, void *kva, size_t size,
+			      enum dma_data_direction dir)
+{
+	/* siw uses kernel virtual addresses for data transfer */
+	return (u64) kva;
+}
+
+static void siw_dma_unmap_single(struct ib_device *dev,
+				 u64 addr, size_t size,
+				 enum dma_data_direction dir)
+{
+	/* NOP */
+}
+
+static u64 siw_dma_map_page(struct ib_device *dev, struct page *page,
+			    unsigned long offset, size_t size,
+			    enum dma_data_direction dir)
+{
+	u64 kva = 0;
+
+	BUG_ON(!valid_dma_direction(dir));
+
+	if (offset + size <= PAGE_SIZE) {
+		kva = (u64) page_address(page);
+		if (kva)
+			kva += offset;
+	}
+	return kva;
+}
+
+static void siw_dma_unmap_page(struct ib_device *dev,
+			       u64 addr, size_t size,
+			       enum dma_data_direction dir)
+{
+	/* NOP */
+}
+
+static int siw_map_sg(struct ib_device *dev, struct scatterlist *sgl,
+		      int n_sge, enum dma_data_direction dir)
+{
+	struct scatterlist *sg;
+	int i;
+
+	BUG_ON(!valid_dma_direction(dir));
+
+	/* This is just a validity check */
+	for_each_sg(sgl, sg, n_sge, i)
+		if (page_address(sg_page(sg)) == NULL)
+			return 0;
+
+	return n_sge;
+}
+
+static void siw_unmap_sg(struct ib_device *dev, struct scatterlist *sgl,
+			 int n_sge, enum dma_data_direction dir)
+{
+	/* NOP */
+}
+
+static u64 siw_dma_address(struct ib_device *dev, struct scatterlist *sg)
+{
+	u64 kva = (u64) page_address(sg_page(sg));
+
+	if (kva)
+		kva += sg->offset;
+
+	return kva;
+}
+
+static unsigned int siw_dma_len(struct ib_device *dev,
+				struct scatterlist *sg)
+{
+	return sg_dma_len(sg);
+}
+
+static void siw_sync_single_for_cpu(struct ib_device *dev, u64 addr,
+				    size_t size, enum dma_data_direction dir)
+{
+	/* NOP */
+}
+
+static void siw_sync_single_for_device(struct ib_device *dev, u64 addr,
+				       size_t size,
+				       enum dma_data_direction dir)
+{
+	/* NOP */
+}
+
+static void *siw_dma_alloc_coherent(struct ib_device *dev, size_t size,
+				    u64 *dma_addr, gfp_t flag)
+{
+	struct page *page;
+	void *kva = NULL;
+
+	page = alloc_pages(flag, get_order(size));
+	if (page)
+		kva = page_address(page);
+	if (dma_addr)
+		*dma_addr = (u64)kva;
+
+	return kva;
+}
+
+static void siw_dma_free_coherent(struct ib_device *dev, size_t size,
+				  void *kva, u64 dma_addr)
+{
+	free_pages((unsigned long) kva, get_order(size));
+}
+
+struct ib_dma_mapping_ops siw_dma_mapping_ops = {
+	.mapping_error		= siw_mapping_error,
+	.map_single		= siw_dma_map_single,
+	.unmap_single		= siw_dma_unmap_single,
+	.map_page		= siw_dma_map_page,
+	.unmap_page		= siw_dma_unmap_page,
+	.map_sg			= siw_map_sg,
+	.unmap_sg		= siw_unmap_sg,
+	.dma_address		= siw_dma_address,
+	.dma_len		= siw_dma_len,
+	.sync_single_for_cpu	= siw_sync_single_for_cpu,
+	.sync_single_for_device	= siw_sync_single_for_device,
+	.alloc_coherent		= siw_dma_alloc_coherent,
+	.free_coherent		= siw_dma_free_coherent
+};
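
Editor's note: for context, here is a minimal sketch (not part of this patch) of
how a private ops table like the one above is typically hooked up. In kernels of
this period, struct ib_device carries a dma_ops pointer of type
struct ib_dma_mapping_ops *, and a software RDMA provider points it at its own
table before calling ib_register_device(). The struct siw_device layout, the
ofa_dev member, and the helper name below are assumptions for illustration only.

/*
 * Illustrative sketch only -- not part of siw_mem.c.
 * Assumed: struct siw_device embeds its struct ib_device as 'ofa_dev'.
 */
static int siw_register_device_sketch(struct siw_device *sdev)
{
	struct ib_device *ibdev = &sdev->ofa_dev;

	/* Route all ib_dma_*() helpers through the kva-based ops above */
	ibdev->dma_ops = &siw_dma_mapping_ops;

	return ib_register_device(ibdev, NULL);
}

Because the driver moves data with the CPU rather than with a DMA engine, the ops
above simply hand back kernel virtual addresses and turn the sync/unmap calls into
no-ops; the table exists only so that the common ib_dma_*() wrappers keep working
for ULPs running on top of siw.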