From patchwork Fri Oct 18 19:42:08 2013
X-Patchwork-Submitter: Tom Musta
X-Patchwork-Id: 284709
Message-ID: <1382125328.2206.26.camel@tmusta-sc.rchland.ibm.com>
Subject: [PATCH 2/3] powerpc: Fix Unaligned Loads and Stores
From: Tom Musta
To: linuxppc-dev
Date: Fri, 18 Oct 2013 14:42:08 -0500
In-Reply-To: <1382125125.2206.22.camel@tmusta-sc.rchland.ibm.com>
References: <1382125125.2206.22.camel@tmusta-sc.rchland.ibm.com>
Cc: tmusta@gmail.com

This patch modifies the unaligned access routines of the sstep.c module so that they properly reverse the bytes of storage operands when running on a little endian kernel. This is implemented by breaking an unaligned little endian access into a combination of single byte accesses plus an overall byte reversal operation.
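To illustrate the read direction, here is a minimal user-space sketch (not kernel code; demo_byterev_4() is a hypothetical stand-in for the kernel's byterev_4() helper, and the buffer contents are invented for the example):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the kernel's byterev_4() helper. */
static uint32_t demo_byterev_4(uint32_t x)
{
	return ((x & 0xff) << 24) | ((x & 0xff00) << 8) |
	       ((x >> 8) & 0xff00) | ((x >> 24) & 0xff);
}

int main(void)
{
	/* Memory holding the little endian value 0x04030201 at an
	 * unaligned address (buf + 1). */
	uint8_t buf[8] = { 0xee, 0x01, 0x02, 0x03, 0x04, 0xee };
	uint8_t *ea = buf + 1;
	uint32_t x = 0;
	int nb;

	/* Single-byte accesses, accumulating exactly as the sstep.c
	 * loop does: each new byte shifts the earlier ones left. */
	for (nb = 0; nb < 4; nb++)
		x = (x << 8) + ea[nb];

	/* x is now 0x01020304; one overall reversal recovers the
	 * little endian operand 0x04030201. */
	printf("raw=0x%08x reversed=0x%08x\n", x, demo_byterev_4(x));
	return 0;
}

Accumulating with left shifts matches the existing read loop, so the little endian case only needs to force single-byte accesses (c = 1) and apply one reversal at the end, rather than reordering bytes inside the loop.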
Signed-off-by: Tom Musta
---
 arch/powerpc/lib/sstep.c |   45 +++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 45 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index 5e0d0e9..570f2af 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -212,11 +212,19 @@ static int __kprobes read_mem_unaligned(unsigned long *dest, unsigned long ea,
 {
 	int err;
 	unsigned long x, b, c;
+#ifdef __LITTLE_ENDIAN__
+	int len = nb; /* save a copy of the length for byte reversal */
+#endif
 
 	/* unaligned, do this in pieces */
 	x = 0;
 	for (; nb > 0; nb -= c) {
+#ifdef __LITTLE_ENDIAN__
+		c = 1;
+#endif
+#ifdef __BIG_ENDIAN__
 		c = max_align(ea);
+#endif
 		if (c > nb)
 			c = max_align(nb);
 		err = read_mem_aligned(&b, ea, c);
@@ -225,7 +233,24 @@ static int __kprobes read_mem_unaligned(unsigned long *dest, unsigned long ea,
 		x = (x << (8 * c)) + b;
 		ea += c;
 	}
+#ifdef __LITTLE_ENDIAN__
+	switch (len) {
+	case 2:
+		*dest = byterev_2(x);
+		break;
+	case 4:
+		*dest = byterev_4(x);
+		break;
+#ifdef __powerpc64__
+	case 8:
+		*dest = byterev_8(x);
+		break;
+#endif
+	}
+#endif
+#ifdef __BIG_ENDIAN__
 	*dest = x;
+#endif
 	return 0;
 }
 
@@ -273,9 +298,29 @@ static int __kprobes write_mem_unaligned(unsigned long val, unsigned long ea,
 	int err;
 	unsigned long c;
 
+#ifdef __LITTLE_ENDIAN__
+	switch (nb) {
+	case 2:
+		val = byterev_2(val);
+		break;
+	case 4:
+		val = byterev_4(val);
+		break;
+#ifdef __powerpc64__
+	case 8:
+		val = byterev_8(val);
+		break;
+#endif
+	}
+#endif
 	/* unaligned or little-endian, do this in pieces */
 	for (; nb > 0; nb -= c) {
+#ifdef __LITTLE_ENDIAN__
+		c = 1;
+#endif
+#ifdef __BIG_ENDIAN__
 		c = max_align(ea);
+#endif
 		if (c > nb)
 			c = max_align(nb);
 		err = write_mem_aligned(val >> (nb - c) * 8, ea, c);
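The store direction mirrors the read: the value is byte-reversed once before the loop, and the existing loop then emits the highest-order byte first, one byte at a time. A self-contained user-space sketch of that pattern (again using a hypothetical demo_byterev_4() stand-in for the kernel helper):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the kernel's byterev_4() helper. */
static uint32_t demo_byterev_4(uint32_t x)
{
	return ((x & 0xff) << 24) | ((x & 0xff00) << 8) |
	       ((x >> 8) & 0xff00) | ((x >> 24) & 0xff);
}

int main(void)
{
	uint8_t buf[8] = { 0 };
	uint8_t *ea = buf + 1;		/* unaligned destination */
	uint32_t val = 0x04030201;	/* operand the emulated store must place in memory */
	int nb = 4, c = 1;

	/* Reverse once up front, as the patched write_mem_unaligned() does. */
	uint32_t v = demo_byterev_4(val);

	/* The loop then emits the highest-order byte first; with c = 1
	 * every access is a plain byte store, matching val >> (nb - c) * 8
	 * in the kernel loop. */
	for (; nb > 0; nb -= c)
		*ea++ = (uint8_t)(v >> (nb - c) * 8);

	/* buf[1..4] now holds 01 02 03 04, i.e. 0x04030201 in LE order. */
	printf("%02x %02x %02x %02x\n", buf[1], buf[2], buf[3], buf[4]);
	return 0;
}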