From patchwork Fri Oct  4 15:43:24 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Wolfgang Denk
X-Patchwork-Id: 280660
X-Patchwork-Delegate: trini@ti.com
From: Wolfgang Denk
To: u-boot@lists.denx.de
Date: Fri,  4 Oct 2013 17:43:24 +0200
Message-Id: <1380901406-19303-3-git-send-email-wd@denx.de>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1380901406-19303-1-git-send-email-wd@denx.de>
References: <1380901406-19303-1-git-send-email-wd@denx.de>
Cc: Tom Rini
Subject: [U-Boot] [PATCH 2/4] Coding Style cleanup: replace leading SPACEs by TABs
List-Id: U-Boot discussion
Sender: u-boot-bounces@lists.denx.de

Signed-off-by: Wolfgang Denk
---
 MAKEALL                                    |    4 +-
 README                                     |   20 +-
 arch/arm/config.mk                         |    2 +-
 arch/arm/cpu/armv7/mx5/lowlevel_init.S     |   44 +-
 arch/arm/dts/exynos5250.dtsi               |    2 +-
 arch/arm/lib/relocate.S                    |    4 +-
 arch/blackfin/cpu/traps.c                  |    2 +-
 .../include/asm/mach-common/bits/lockbox.h |    2 +-
 arch/m68k/lib/traps.c                      |    2 +-
 arch/nios2/cpu/epcs.c                      |    2 +-
 arch/nios2/lib/libgcc.c                    |    2 +-
 arch/openrisc/cpu/cpu.c                    |    4 +-
 arch/powerpc/cpu/mpc83xx/cpu_init.c        |   10 +-
 arch/powerpc/cpu/mpc83xx/pci.c             |    4 +-
 arch/powerpc/cpu/ppc4xx/4xx_pci.c          |    4 +-
 board/Barix/ipam390/README.ipam390         |    2 +-
 board/altera/common/sevenseg.c             |    4 +-
 board/boundary/nitrogen6x/6x_upgrade.txt   |   52 +-
 board/chromebook-x86/dts/alex.dts          |   10 +-
 board/chromebook-x86/dts/link.dts          |   10 +-
 board/cobra5272/bdm/cobra5272_uboot.gdb    |    4 +-
 board/cogent/flash.c                       |    2 +-
 board/cray/L1/bootscript.hush              |    6 +-
 board/esd/common/lcd.h                     |    2 +-
 board/esd/cpci750/ide.c                    |    2 +-
 board/esd/cpci750/pci.c                    |    2 +-
 board/etin/debris/flash.c                  |   12 +-
 board/evb64260/flash.c                     |   14 +-
 board/evb64260/mpsc.c                      |    4 +-
 board/fads/fads.c                          |    4 +-
 board/freescale/b4860qds/tlb.c             |    2 +-
 board/freescale/bsc9132qds/README          |   10 +-
 board/freescale/mpc8313erdb/mpc8313erdb.c  |    4 +-
 board/freescale/mpc8360emds/mpc8360emds.c  |    4 +-
 board/funkwerk/vovpn-gw/vovpn-gw.c         |    6 +-
 board/gen860t/flash.c                      |    2 +-
 board/incaip/incaip.c                      |    4 +-
 board/matrix_vision/mvbc_p/Makefile        |    2 +-
 board/matrix_vision/mvsmr/Makefile         |    2 +-
 board/openrisc/openrisc-generic/or1ksim.cfg |  122 +-
 board/ppmc8260/strataflash.c               |    2 +-
 board/pxa255_idp/pxa_reg_calcs.py          |    6 +-
 board/rbc823/rbc823.c                      |    2 +-
 board/samsung/dts/exynos5250-snow.dts      |    6 +-
 board/svm_sc8xx/svm_sc8xx.c                |    2 +-
 boards.cfg                                 |    2 +-
 common/cmd_bmp.c                           |    6 +-
 common/main.c                              |    2 +-
 config.mk                                  |    2 +-
 doc/DocBook/Makefile                       |    4 +-
 doc/README.ext4                            |   34 +-
 doc/README.kwbimage                        |    6 +-
 doc/README.mxc_hab                         |    2 +-
 doc/README.mxsimage                        |   24 +-
 doc/README.nokia_rx51                      |    2 +-
 doc/README.ramboot-ppc85xx                 |   10 +-
 doc/README.trace                           |   24 +-
 doc/README.ubi                             |    4 +-
 doc/README.zfs                             |   18 +-
 doc/driver-model/UDM-cores.txt             |   24 +-
 doc/driver-model/UDM-design.txt            |   12 +-
 doc/driver-model/UDM-gpio.txt              |    4 +-
 doc/driver-model/UDM-hwmon.txt             |   10 +-
 doc/driver-model/UDM-mmc.txt               |    6 +-
 doc/driver-model/UDM-power.txt             |   12 +-
 doc/driver-model/UDM-rtc.txt               |   26 +-
 doc/driver-model/UDM-spi.txt               |   12 +-
 doc/driver-model/UDM-stdio.txt             |   28 +-
 doc/driver-model/UDM-tpm.txt               |    2 +-
 doc/driver-model/UDM-watchdog.txt          |    6 +-
 drivers/mtd/nand/fsl_elbc_spl.c            |   20 +-
 drivers/mtd/nand/fsl_upm.c                 |    8 +-
 drivers/mtd/nand/nand_base.c               |    2 +-
 drivers/mtd/nand/nand_util.c               |    4 +-
 drivers/mtd/ubi/crc32.c                    |    4 +-
 drivers/net/dm9000x.c                      |    2 +-
 drivers/net/plb2800_eth.c                  |    2 +-
 drivers/rtc/bfin_rtc.c                     |    2 +-
 drivers/rtc/pl031.c                        |    2 +-
 drivers/spi/mpc8xxx_spi.c                  |    4 +-
 drivers/spi/omap3_spi.c                    |   10 +-
 drivers/video/ati_radeon_fb.h              |    2 +-
 fs/ubifs/super.c                           |    2 +-
 include/configs/DU440.h                    |    2 +-
 include/configs/P3G4.h                     |    2 +-
 include/configs/RBC823.h                   |    6 +-
 include/configs/alpr.h                     |    2 +-
 include/configs/devkit8000.h               |    4 +-
 include/configs/p3mx.h                     |    2 +-
 include/configs/p3p440.h                   |    2 +-
 include/configs/pcs440ep.h                 |    2 +-
 include/configs/pdnb3.h                    |    4 +-
 include/configs/uc100.h                    |    2 +-
 include/configs/zeus.h                     |    4 +-
 include/ddr_spd.h                          |   34 +-
 nand_spl/nand_boot_fsl_elbc.c              |    2 +-
 post/lib_powerpc/andi.c                    |    2 +-
 post/lib_powerpc/cpu_asm.h                 |   32 +-
 post/lib_powerpc/rlwimi.c                  |    6 +-
 post/lib_powerpc/rlwinm.c                  |    6 +-
 post/lib_powerpc/rlwnm.c                   |    6 +-
 post/lib_powerpc/srawi.c                   |    6 +-
 post/lib_powerpc/three.c                   |    6 +-
 post/lib_powerpc/threei.c                  |    2 +-
 post/lib_powerpc/threex.c                  |    6 +-
 post/lib_powerpc/twox.c                    |    6 +-
 test/compression.c                         |    4 +-
 test/image/test-fit.py                     |  236 +-
 tools/Makefile                             |    2 +-
 tools/bddb/defs.php                        |    4 +-
 tools/buildman/README                      |  268 +--
 tools/buildman/board.py                    |  236 +-
 tools/buildman/bsettings.py                |   18 +-
 tools/buildman/builder.py                  | 2374 ++++++++++----------
 tools/buildman/buildman.py                 |   16 +-
 tools/buildman/control.py                  |  136 +-
 tools/buildman/test.py                     |  162 +-
 tools/buildman/toolchain.py                |  392 ++--
 tools/img2brec.sh                          |   28 +-
 tools/imls/Makefile                        |    2 +-
 tools/kernel-doc/docproc.c                 |   52 +-
 tools/kernel-doc/kernel-doc                |    2 +-
 tools/patman/checkpatch.py                 |  194 +-
 tools/patman/command.py                    |   98 +-
 tools/patman/commit.py                     |   84 +-
 tools/patman/cros_subprocess.py            |  616 ++---
 tools/patman/get_maintainer.py             |   22 +-
 tools/patman/gitutil.py                    |  316 +--
 tools/patman/patchstream.py                |  608 ++---
 tools/patman/patman.py                     |   80 +-
 tools/patman/project.py                    |    8 +-
 tools/patman/series.py                     |  442 ++--
 tools/patman/settings.py                   |  232 +-
 tools/patman/terminal.py                   |   18 +-
 tools/patman/test.py                       |  150 +-
 tools/reformat.py                          |    2 +-
 tools/scripts/make-asm-offsets             |    2 +-
 tools/ubsha1.c                             |    8 +-
 138 files changed, 3859 insertions(+), 3859 deletions(-)

diff --git a/MAKEALL b/MAKEALL
index c0d04fb..80f4fbe 100755
--- a/MAKEALL
+++ b/MAKEALL
@@ -38,8 +38,8 @@ usage()
   BUILD_NCPUS      number of parallel make jobs (default: auto)
   CROSS_COMPILE    cross-compiler toolchain prefix (default: "")
   CROSS_COMPILE_   cross-compiler toolchain prefix for
-                   architecture "ARCH". Substitute "ARCH" for any
-                   supported architecture (default: "")
+		architecture "ARCH". Substitute "ARCH" for any
+		supported architecture (default: "")
   MAKEALL_LOGDIR   output all logs to here (default: ./LOG/)
   BUILD_DIR        output build directory (default: ./)
   BUILD_NBUILDS    number of parallel targets (default: 1)
diff --git a/README b/README
index b99a444..9734dc7 100644
--- a/README
+++ b/README
@@ -945,10 +945,10 @@ The following options need to be configured:
 - Regular expression support:
 		CONFIG_REGEX

-                If this variable is defined, U-Boot is linked against
-                the SLRE (Super Light Regular Expression) library,
-                which adds regex support to some commands, as for
-                example "env grep" and "setexpr".
+		If this variable is defined, U-Boot is linked against
+		the SLRE (Super Light Regular Expression) library,
+		which adds regex support to some commands, as for
+		example "env grep" and "setexpr".

 - Device tree:
 		CONFIG_OF_CONTROL
@@ -1097,8 +1097,8 @@ The following options need to be configured:
 		devices.
 		CONFIG_SYS_SCSI_SYM53C8XX_CCF to fix clock timing (80Mhz)

-                The environment variable 'scsidevs' is set to the number of
-                SCSI devices found during the last scan.
+		The environment variable 'scsidevs' is set to the number of
+		SCSI devices found during the last scan.
 - NETWORK Support (PCI):
 		CONFIG_E1000
@@ -1988,7 +1988,7 @@ CBFS (Coreboot Filesystem) support
 		offset CONFIG_SYS_FSL_I2C_SPEED for the i2c speed and
 		CONFIG_SYS_FSL_I2C_SLAVE for the slave addr of the first bus.

-              - If your board supports a second fsl i2c bus, define
+		- If your board supports a second fsl i2c bus, define
 		CONFIG_SYS_FSL_I2C2_OFFSET for the register offset
 		CONFIG_SYS_FSL_I2C2_SPEED for the speed and
 		CONFIG_SYS_FSL_I2C2_SLAVE for the slave address of the
@@ -3192,9 +3192,9 @@ FIT uImage format:
 		CONFIG_TPL_PAD_TO
 		Image offset to which the TPL should be padded before
 		appending the TPL payload. By default, this is defined as
-               CONFIG_SPL_MAX_SIZE, or 0 if CONFIG_SPL_MAX_SIZE is undefined.
-               CONFIG_SPL_PAD_TO must be either 0, meaning to append the SPL
-               payload without any padding, or >= CONFIG_SPL_MAX_SIZE.
+		CONFIG_SPL_MAX_SIZE, or 0 if CONFIG_SPL_MAX_SIZE is undefined.
+		CONFIG_SPL_PAD_TO must be either 0, meaning to append the SPL
+		payload without any padding, or >= CONFIG_SPL_MAX_SIZE.
 Modem Support:
 --------------
diff --git a/arch/arm/config.mk b/arch/arm/config.mk
index d0cf43f..bdabcf4 100644
--- a/arch/arm/config.mk
+++ b/arch/arm/config.mk
@@ -17,7 +17,7 @@ endif
 LDFLAGS_FINAL += --gc-sections
 PLATFORM_RELFLAGS += -ffunction-sections -fdata-sections \
-                     -fno-common -ffixed-r9 -msoft-float
+		     -fno-common -ffixed-r9 -msoft-float

 # Support generic board on ARM
 __HAVE_ARCH_GENERIC_BOARD := y
diff --git a/arch/arm/cpu/armv7/mx5/lowlevel_init.S b/arch/arm/cpu/armv7/mx5/lowlevel_init.S
index fc7c767..25fadf6 100644
--- a/arch/arm/cpu/armv7/mx5/lowlevel_init.S
+++ b/arch/arm/cpu/armv7/mx5/lowlevel_init.S
@@ -31,10 +31,10 @@
 	/* reconfigure L2 cache aux control reg */
 	ldr	r0, =0xC0 |		/* tag RAM */ \
-	     0x4 |			/* data RAM */ \
-	     1 << 24 |			/* disable write allocate delay */ \
-	     1 << 23 |			/* disable write allocate combine */ \
-	     1 << 22			/* disable write allocate */
+		0x4 |			/* data RAM */ \
+		1 << 24 |		/* disable write allocate delay */ \
+		1 << 23 |		/* disable write allocate combine */ \
+		1 << 22			/* disable write allocate */

 #if defined(CONFIG_MX51)
 	ldr	r3, [r4, #ROM_SI_REV]
@@ -290,20 +290,20 @@ setup_pll_func:
 	setup_pll PLL1_BASE_ADDR, 800

-        setup_pll PLL3_BASE_ADDR, 400
+	setup_pll PLL3_BASE_ADDR, 400

-        /* Switch peripheral to PLL3 */
-        ldr r0, =CCM_BASE_ADDR
-        ldr r1, =0x00015154
-        str r1, [r0, #CLKCTL_CBCMR]
-        ldr r1, =0x02898945
-        str r1, [r0, #CLKCTL_CBCDR]
-        /* make sure change is effective */
+	/* Switch peripheral to PLL3 */
+	ldr r0, =CCM_BASE_ADDR
+	ldr r1, =0x00015154
+	str r1, [r0, #CLKCTL_CBCMR]
+	ldr r1, =0x02898945
+	str r1, [r0, #CLKCTL_CBCDR]
+	/* make sure change is effective */
 1:	ldr r1, [r0, #CLKCTL_CDHIPR]
-        cmp r1, #0x0
-        bne 1b
+	cmp r1, #0x0
+	bne 1b

-        setup_pll PLL2_BASE_ADDR, 400
+	setup_pll PLL2_BASE_ADDR, 400

 	/* Switch peripheral to PLL2 */
 	ldr r0, =CCM_BASE_ADDR
@@ -324,7 +324,7 @@ setup_pll_func:
 	cmp r1, #0x0
 	bne 1b

-        setup_pll PLL3_BASE_ADDR, 216
+	setup_pll PLL3_BASE_ADDR, 216

 	setup_pll PLL4_BASE_ADDR, 455
@@ -358,13 +358,13 @@ setup_pll_func:
 	str r1, [r0, #CLKCTL_CCGR6]
 	str r1, [r0, #CLKCTL_CCGR7]

-        mov r1, #0x00000
-        str r1, [r0, #CLKCTL_CCDR]
+	mov r1, #0x00000
+	str r1, [r0, #CLKCTL_CCDR]

-        /* for cko - for ARM div by 8 */
-        mov r1, #0x000A0000
-        add r1, r1, #0x00000F0
-        str r1, [r0, #CLKCTL_CCOSR]
+	/* for cko - for ARM div by 8 */
+	mov r1, #0x000A0000
+	add r1, r1, #0x00000F0
+	str r1, [r0, #CLKCTL_CCOSR]
 #endif	/* CONFIG_MX53 */
 .endm
diff --git a/arch/arm/dts/exynos5250.dtsi b/arch/arm/dts/exynos5250.dtsi
index 1c5474f..44cbb5a 100644
--- a/arch/arm/dts/exynos5250.dtsi
+++ b/arch/arm/dts/exynos5250.dtsi
@@ -140,7 +140,7 @@
 		reg = <0x12d40000 0x30>;
 		clock-frequency = <50000000>;
 		interrupts = <0 70 0>;
-        };
+	};

 	spi@131a0000 {
 		#address-cells = <1>;
diff --git a/arch/arm/lib/relocate.S b/arch/arm/lib/relocate.S
index a62a556..8035251 100644
--- a/arch/arm/lib/relocate.S
+++ b/arch/arm/lib/relocate.S
@@ -66,9 +66,9 @@ relocate_done:
 	/* ARMv4- don't know bx lr but the assembler fails to see that */
 #ifdef __ARM_ARCH_4__
-        mov	pc, lr
+	mov	pc, lr
 #else
-        bx	lr
+	bx	lr
 #endif

 ENDPROC(relocate_code)
diff --git a/arch/blackfin/cpu/traps.c b/arch/blackfin/cpu/traps.c
index 20aeab8..10f72f8 100644
--- a/arch/blackfin/cpu/traps.c
+++ b/arch/blackfin/cpu/traps.c
@@ -261,7 +261,7 @@ static void decode_address(char *buf, unsigned long address)
 	if (!address)
 		sprintf(buf, "<0x%p> /* Maybe null pointer? */", paddr);
 	else if (address >= CONFIG_SYS_MONITOR_BASE &&
-	         address < CONFIG_SYS_MONITOR_BASE + CONFIG_SYS_MONITOR_LEN)
+		 address < CONFIG_SYS_MONITOR_BASE + CONFIG_SYS_MONITOR_LEN)
 		sprintf(buf, "<0x%p> /* somewhere in u-boot */", paddr);
 	else
 		sprintf(buf, "<0x%p> /* unknown address */", paddr);
diff --git a/arch/blackfin/include/asm/mach-common/bits/lockbox.h b/arch/blackfin/include/asm/mach-common/bits/lockbox.h
index 77f849e..17d22ab 100644
--- a/arch/blackfin/include/asm/mach-common/bits/lockbox.h
+++ b/arch/blackfin/include/asm/mach-common/bits/lockbox.h
@@ -16,7 +16,7 @@ typedef struct SESR_args {
 	unsigned long ulMessageSize;	/* message length in bytes */
 	unsigned long ulSFEntryPoint;	/* entry point of secure function */
 	unsigned long ulMessagePtr;	/* pointer to the buffer containing
-	                                   the digital signature and message */
+					   the digital signature and message */
 	unsigned long ulReserved1;	/* reserved */
 	unsigned long ulReserved2;	/* reserved */
 } tSESR_args;
diff --git a/arch/m68k/lib/traps.c b/arch/m68k/lib/traps.c
index 55bf0c2..cbd410c 100644
--- a/arch/m68k/lib/traps.c
+++ b/arch/m68k/lib/traps.c
@@ -20,7 +20,7 @@ extern void _int_handler(void);
 static void show_frame(struct pt_regs *fp)
 {
 	printf ("Vector Number: %d  Format: %02x  Fault Status: %01x\n\n", (fp->vector & 0x3fc) >> 2,
-	        fp->format, (fp->vector & 0x3) | ((fp->vector & 0xc00) >> 8));
+		fp->format, (fp->vector & 0x3) | ((fp->vector & 0xc00) >> 8));
 	printf ("PC: %08lx  SR: %08lx  SP: %08lx\n", fp->pc, (long) fp->sr, (long) fp);
 	printf ("D0: %08lx  D1: %08lx  D2: %08lx  D3: %08lx\n", fp->d0, fp->d1, fp->d2, fp->d3);
diff --git a/arch/nios2/cpu/epcs.c b/arch/nios2/cpu/epcs.c
index c83f5ee..9758552 100644
--- a/arch/nios2/cpu/epcs.c
+++ b/arch/nios2/cpu/epcs.c
@@ -475,7 +475,7 @@ void do_epcs_info (struct epcs_devinfo_t *dev, int argc, char * const argv[])
 	printf ("status: 0x%02x (WIP:%d, WEL:%d, PROT:%s)\n", stat,
 		(stat & EPCS_STATUS_WIP) ? 1 : 0,
-	        (stat & EPCS_STATUS_WEL) ? 1 : 0,
+		(stat & EPCS_STATUS_WEL) ? 1 : 0,
 		(stat & dev->prot_mask) ? "on" : "off" );

 	/* Configuration */
diff --git a/arch/nios2/lib/libgcc.c b/arch/nios2/lib/libgcc.c
index 3caee19..cf1b836 100644
--- a/arch/nios2/lib/libgcc.c
+++ b/arch/nios2/lib/libgcc.c
@@ -548,7 +548,7 @@ __mulsi3 (SItype a, SItype b)
   while (cnt)
     {
       if (cnt & 1)
-        {
+	{
 	  res += b;
 	}
       b <<= 1;
diff --git a/arch/openrisc/cpu/cpu.c b/arch/openrisc/cpu/cpu.c
index f238fe4..272656a 100644
--- a/arch/openrisc/cpu/cpu.c
+++ b/arch/openrisc/cpu/cpu.c
@@ -138,8 +138,8 @@ int do_reset(cmd_tbl_t *cmdtp, int flag, int argc, char * const argv[])
 	/* Code the jump to __reset here as the compiler is prone to
 	   emitting a bad jump instruction if the function is in flash */
 	__asm__("l.movhi r1,hi(__reset); \
-	         l.ori r1,r1,lo(__reset); \
-	         l.jr r1");
+		l.ori r1,r1,lo(__reset); \
+		l.jr r1");
 	/* not reached, __reset does not return */
 	return 0;
 }
diff --git a/arch/powerpc/cpu/mpc83xx/cpu_init.c b/arch/powerpc/cpu/mpc83xx/cpu_init.c
index d568f88..0e9ddb8 100644
--- a/arch/powerpc/cpu/mpc83xx/cpu_init.c
+++ b/arch/powerpc/cpu/mpc83xx/cpu_init.c
@@ -425,15 +425,15 @@ static int print_83xx_arb_event(int force)
 	};

 	int etype = (gd->arch.arbiter_event_attributes & AEATR_EVENT)
-	            >> AEATR_EVENT_SHIFT;
+		    >> AEATR_EVENT_SHIFT;
 	int mstr_id = (gd->arch.arbiter_event_attributes & AEATR_MSTR_ID)
-	            >> AEATR_MSTR_ID_SHIFT;
+		      >> AEATR_MSTR_ID_SHIFT;
 	int tbst = (gd->arch.arbiter_event_attributes & AEATR_TBST)
-	            >> AEATR_TBST_SHIFT;
+		   >> AEATR_TBST_SHIFT;
 	int tsize = (gd->arch.arbiter_event_attributes & AEATR_TSIZE)
-	            >> AEATR_TSIZE_SHIFT;
+		    >> AEATR_TSIZE_SHIFT;
 	int ttype = (gd->arch.arbiter_event_attributes & AEATR_TTYPE)
-	            >> AEATR_TTYPE_SHIFT;
+		    >> AEATR_TTYPE_SHIFT;

 	if (!force && !gd->arch.arbiter_event_address)
 		return 0;
diff --git a/arch/powerpc/cpu/mpc83xx/pci.c b/arch/powerpc/cpu/mpc83xx/pci.c
index e073c90..30606fb 100644
--- a/arch/powerpc/cpu/mpc83xx/pci.c
+++ b/arch/powerpc/cpu/mpc83xx/pci.c
@@ -67,7 +67,7 @@ static void pci_init_bus(int bus, struct pci_region *reg)
 	pci_ctrl->pibar1 = 0;
 	pci_ctrl->piebar1 = 0;
 	pci_ctrl->piwar1 = PIWAR_EN | PIWAR_PF | PIWAR_RTT_SNOOP |
-	                   PIWAR_WTT_SNOOP | (__ilog2(gd->ram_size - 1));
+		PIWAR_WTT_SNOOP | (__ilog2(gd->ram_size - 1));

 	i = hose->region_count++;
 	hose->regions[i].bus_start = 0;
@@ -79,7 +79,7 @@ static void pci_init_bus(int bus, struct pci_region *reg)
 	hose->last_busno = 0xff;

 	pci_setup_indirect(hose, CONFIG_SYS_IMMR + 0x8300 + bus * 0x80,
-	                   CONFIG_SYS_IMMR + 0x8304 + bus * 0x80);
+			   CONFIG_SYS_IMMR + 0x8304 + bus * 0x80);

 	pci_register_hose(hose);
diff --git a/arch/powerpc/cpu/ppc4xx/4xx_pci.c b/arch/powerpc/cpu/ppc4xx/4xx_pci.c
index 08781a1..33dc725 100644
--- a/arch/powerpc/cpu/ppc4xx/4xx_pci.c
+++ b/arch/powerpc/cpu/ppc4xx/4xx_pci.c
@@ -143,14 +143,14 @@ void pci_405gp_init(struct pci_controller *hose)
 	ptmla_str = getenv("ptm1la");
 	ptmms_str = getenv("ptm1ms");
 	if(NULL != ptmla_str && NULL != ptmms_str ) {
-	        ptmla[0] = simple_strtoul (ptmla_str, NULL, 16);
+		ptmla[0] = simple_strtoul (ptmla_str, NULL, 16);
 		ptmms[0] = simple_strtoul (ptmms_str, NULL, 16);
 	}

 	ptmla_str = getenv("ptm2la");
 	ptmms_str = getenv("ptm2ms");
 	if(NULL != ptmla_str && NULL != ptmms_str ) {
-	        ptmla[1] = simple_strtoul (ptmla_str, NULL, 16);
+		ptmla[1] = simple_strtoul (ptmla_str, NULL, 16);
 		ptmms[1] = simple_strtoul (ptmms_str, NULL, 16);
 	}
 #endif
diff --git a/board/Barix/ipam390/README.ipam390 b/board/Barix/ipam390/README.ipam390
index 2d155a3..5c45fca 100644
--- a/board/Barix/ipam390/README.ipam390
+++ b/board/Barix/ipam390/README.ipam390
@@ -50,7 +50,7 @@ TFTP from server 192.168.1.1; our IP address is 192.168.20.71
 Filename '/tftpboot/ipam390/u-boot.ais'.
 Load address: 0xc0000000
 Loading: ##################################
-         1.5 MiB/s
+	 1.5 MiB/s
 done
 Bytes transferred = 493716 (78894 hex)
diff --git a/board/altera/common/sevenseg.c b/board/altera/common/sevenseg.c
index 7ae7ca2..1f22c85 100644
--- a/board/altera/common/sevenseg.c
+++ b/board/altera/common/sevenseg.c
@@ -87,10 +87,10 @@ static inline void __sevenseg_set (unsigned int value)
 #if (SEVENSEG_ACTIVE == 0)
 	sevenseg_portval = (sevenseg_portval & SEVENDEG_MASK_DP)
-	                   | ((~value) & (~SEVENDEG_MASK_DP));
+			   | ((~value) & (~SEVENDEG_MASK_DP));
 #else
 	sevenseg_portval = (sevenseg_portval & SEVENDEG_MASK_DP)
-	                   | (value);
+			   | (value);
 #endif

 	piop->data = sevenseg_portval;
diff --git a/board/boundary/nitrogen6x/6x_upgrade.txt b/board/boundary/nitrogen6x/6x_upgrade.txt
index 0d8e8e5..1f9a889 100644
--- a/board/boundary/nitrogen6x/6x_upgrade.txt
+++ b/board/boundary/nitrogen6x/6x_upgrade.txt
@@ -6,39 +6,39 @@ if ${fs}load ${dtype} ${disk}:1 12000000 u-boot.imx || ${fs}load ${dtype} ${disk
 if sf probe || sf probe || \
 	sf probe 1 27000000 || sf probe 1 27000000 ; then
 	echo "probed SPI ROM" ;
-       if sf read 0x12400000 $offset $filesize ; then
-          if cmp.b 0x12000000 0x12400000 $filesize ; then
-             echo "------- U-Boot versions match" ;
-          else
-             echo "Need U-Boot upgrade" ;
-             echo "Program in 5 seconds" ;
-             for n in 5 4 3 2 1 ; do
-                echo $n ;
-                sleep 1 ;
-             done
+	if sf read 0x12400000 $offset $filesize ; then
+		if cmp.b 0x12000000 0x12400000 $filesize ; then
+			echo "------- U-Boot versions match" ;
+		else
+			echo "Need U-Boot upgrade" ;
+			echo "Program in 5 seconds" ;
+			for n in 5 4 3 2 1 ; do
+				echo $n ;
+				sleep 1 ;
+			done
 			echo "erasing" ;
-            sf erase 0 0x50000 ;
+			sf erase 0 0x50000 ;
 # two steps to prevent bricking
 			echo "programming" ;
-            sf write 0x12000000 $offset $filesize ;
+			sf write 0x12000000 $offset $filesize ;
 			echo "verifying" ;
-            if sf read 0x12400000 $offset $filesize ; then
-               if cmp.b 0x12000000 0x12400000 $filesize ; then
-                  while echo "---- U-Boot upgraded. reset" ; do
+			if sf read 0x12400000 $offset $filesize ; then
+				if cmp.b 0x12000000 0x12400000 $filesize ; then
+					while echo "---- U-Boot upgraded. reset" ; do
 						sleep 120
 					done
-               else
-                  echo "Read verification error" ;
-               fi
-            else
-               echo "Error re-reading EEPROM" ;
-            fi
-         fi
-      else
-         echo "Error reading boot loader from EEPROM" ;
-      fi
+				else
+					echo "Read verification error" ;
+				fi
+			else
+				echo "Error re-reading EEPROM" ;
+			fi
+		fi
+	else
+		echo "Error reading boot loader from EEPROM" ;
+	fi
 else
-       echo "Error initializing EEPROM" ;
+	echo "Error initializing EEPROM" ;
 fi ;
 else
 	echo "No U-Boot image found on SD card" ;
diff --git a/board/chromebook-x86/dts/alex.dts b/board/chromebook-x86/dts/alex.dts
index cb6a9e4..2f13544 100644
--- a/board/chromebook-x86/dts/alex.dts
+++ b/board/chromebook-x86/dts/alex.dts
@@ -3,8 +3,8 @@
 /include/ "coreboot.dtsi"

 / {
-        #address-cells = <1>;
-        #size-cells = <1>;
+	#address-cells = <1>;
+	#size-cells = <1>;
 	model = "Google Alex";
 	compatible = "google,alex", "intel,atom-pineview";

@@ -12,13 +12,13 @@
 		silent_console = <0>;
 	};

-        gpio: gpio {};
+	gpio: gpio {};

 	serial {
 		reg = <0x3f8 8>;
 		clock-frequency = <115200>;
 	};

-        chosen { };
-        memory { device_type = "memory"; reg = <0 0>; };
+	chosen { };
+	memory { device_type = "memory"; reg = <0 0>; };
 };
diff --git a/board/chromebook-x86/dts/link.dts b/board/chromebook-x86/dts/link.dts
index c95ee8a..4a37dac 100644
--- a/board/chromebook-x86/dts/link.dts
+++ b/board/chromebook-x86/dts/link.dts
@@ -3,8 +3,8 @@
 /include/ "coreboot.dtsi"

 / {
-        #address-cells = <1>;
-        #size-cells = <1>;
+	#address-cells = <1>;
+	#size-cells = <1>;
 	model = "Google Link";
 	compatible = "google,link", "intel,celeron-ivybridge";

@@ -12,15 +12,15 @@
 		silent_console = <0>;
 	};

-        gpio: gpio {};
+	gpio: gpio {};

 	serial {
 		reg = <0x3f8 8>;
 		clock-frequency = <115200>;
 	};

-        chosen { };
-        memory { device_type = "memory"; reg = <0 0>; };
+	chosen { };
+	memory { device_type = "memory"; reg = <0 0>; };

 	spi {
 		#address-cells = <1>;
diff --git a/board/cobra5272/bdm/cobra5272_uboot.gdb b/board/cobra5272/bdm/cobra5272_uboot.gdb
index abecb40..61e778e 100644
--- a/board/cobra5272/bdm/cobra5272_uboot.gdb
+++ b/board/cobra5272/bdm/cobra5272_uboot.gdb
@@ -1,11 +1,11 @@
 #
 # GDB Init script for the Coldfire 5272 processor.
 #
-# The main purpose of this script is to configure the 
+# The main purpose of this script is to configure the
 # DRAM controller so code can be loaded.
 #
 # This file was changed to suite the senTec COBRA5272 board.
-# 
+#

 define addresses
diff --git a/board/cogent/flash.c b/board/cogent/flash.c
index 207380f..d4ae4d0 100644
--- a/board/cogent/flash.c
+++ b/board/cogent/flash.c
@@ -487,7 +487,7 @@ flash_erase(flash_info_t *info, int s_first, int s_last)
 	if (haderr > 0) {
 		printf (" failed\n");
-	        rcode = 1;
+		rcode = 1;
 	}
 	else
 		printf (" done\n");
diff --git a/board/cray/L1/bootscript.hush b/board/cray/L1/bootscript.hush
index c9e3c91..f2f78ad 100644
--- a/board/cray/L1/bootscript.hush
+++ b/board/cray/L1/bootscript.hush
@@ -47,7 +47,7 @@ else
 	echo no kernel to boot from $flash_krl, need tftp
 fi

-# Have a rootfs in flash? 
+# Have a rootfs in flash?
 echo test for SQUASHfs at $flash_rfs

 if imi $flash_rfs
@@ -69,7 +69,7 @@ fi
 # TFTP down a kernel
 if printenv bootfile
-then 
+then
 	tftp $tftp_addr $bootfile
 	setenv kernel $tftp_addr
 	echo I will boot the TFTP kernel
@@ -90,7 +90,7 @@ if printenv rootpath
 then
 	echo rootpath is $rootpath
 	if printenv initrd
-	then 
+	then
 		echo initrd is also specified, so use $initrd
 		tftp $tftp2_addr $initrd
 		setenv bootargs root=/dev/ram0 rw cwsroot=$serverip:$rootpath $bootargs
diff --git a/board/esd/common/lcd.h b/board/esd/common/lcd.h
index 96e4b99..5b14bf9 100644
--- a/board/esd/common/lcd.h
+++ b/board/esd/common/lcd.h
@@ -9,7 +9,7 @@
  * Neutralize little endians.
  */
 #define SWAP_LONG(data) ((unsigned long)			\
-        (((unsigned long)(data) >> 24) |			\
+	(((unsigned long)(data) >> 24) |			\
 	((unsigned long)(data) << 24) |				\
 	(((unsigned long)(data) >> 8) & 0x0000ff00 ) |		\
 	(((unsigned long)(data) << 8) & 0x00ff0000 )))
diff --git a/board/esd/cpci750/ide.c b/board/esd/cpci750/ide.c
index 5f30685..f555c08 100644
--- a/board/esd/cpci750/ide.c
+++ b/board/esd/cpci750/ide.c
@@ -43,7 +43,7 @@ int ide_preinit (void)
 	if (devbusfn != -1) {
 		cpci_hd_type = 1;
 	} else {
-	        devbusfn = pci_find_device (0x1095, 0x3114, 0);
+		devbusfn = pci_find_device (0x1095, 0x3114, 0);
 		if (devbusfn != -1) {
 			cpci_hd_type = 2;
 		}
diff --git a/board/esd/cpci750/pci.c b/board/esd/cpci750/pci.c
index c9b3ac2..59f170a 100644
--- a/board/esd/cpci750/pci.c
+++ b/board/esd/cpci750/pci.c
@@ -746,7 +746,7 @@ static int gt_read_config_dword (struct pci_controller *hose,
 	int bus = PCI_BUS (dev);

 	if ((bus == local_buses[0]) || (bus == local_buses[1])) {
-	        *value = pciReadConfigReg ((PCI_HOST) hose->cfg_addr,
+		*value = pciReadConfigReg ((PCI_HOST) hose->cfg_addr,
 			offset | (PCI_FUNC(dev) << 8), PCI_DEV (dev));
 	} else {
diff --git a/board/etin/debris/flash.c b/board/etin/debris/flash.c
index 9d22aa2..2657958 100644
--- a/board/etin/debris/flash.c
+++ b/board/etin/debris/flash.c
@@ -323,7 +323,7 @@ int flash_erase (flash_info_t *flash, int s_first, int s_last)
 	if (prot)
 		printf ("- Warning: %d protected sectors will not be erased!\n",
-		        prot);
+			prot);
 	else
 		printf ("\n");
@@ -365,7 +365,7 @@ static const struct jedec_flash_info jedec_table[] = {
 		DevSize: SIZE_1MiB,
 		NumEraseRegions: 4,
 		regions: {ERASEINFO(0x10000,15),
-		          ERASEINFO(0x08000,1),
+			  ERASEINFO(0x08000,1),
 			  ERASEINFO(0x02000,2),
 			  ERASEINFO(0x04000,1)
 		}
@@ -376,7 +376,7 @@ static const struct jedec_flash_info jedec_table[] = {
 		DevSize: SIZE_2MiB,
 		NumEraseRegions: 4,
 		regions: {ERASEINFO(0x10000,31),
-		          ERASEINFO(0x08000,1),
+			  ERASEINFO(0x08000,1),
 			  ERASEINFO(0x02000,2),
 			  ERASEINFO(0x04000,1)
 		}
@@ -387,7 +387,7 @@ static const struct jedec_flash_info jedec_table[] = {
 		DevSize: SIZE_2MiB,
 		NumEraseRegions: 4,
 		regions: {ERASEINFO(0x04000,1),
-		          ERASEINFO(0x02000,2),
+			  ERASEINFO(0x02000,2),
 			  ERASEINFO(0x08000,1),
 			  ERASEINFO(0x10000,31)
 		}
@@ -398,7 +398,7 @@ static const struct jedec_flash_info jedec_table[] = {
 		DevSize: SIZE_4MiB,
 		NumEraseRegions: 2,
 		regions: {ERASEINFO(0x10000,63),
-		          ERASEINFO(0x02000,8)
+			  ERASEINFO(0x02000,8)
 		}
 	},
 	{
@@ -408,7 +408,7 @@ static const struct jedec_flash_info jedec_table[] = {
 		DevSize: SIZE_4MiB,
 		NumEraseRegions: 2,
 		regions: {ERASEINFO(0x02000,8),
-		          ERASEINFO(0x10000,63)
+			  ERASEINFO(0x10000,63)
 		}
 	}
 };
diff --git a/board/evb64260/flash.c b/board/evb64260/flash.c
index 88c43ff..f3b0074 100644
--- a/board/evb64260/flash.c
+++ b/board/evb64260/flash.c
@@ -60,7 +60,7 @@ flash_init (void)
 #define CONFIG_SYS_BOOT_FLASH_WIDTH	1
 #endif
 	size_b0 = flash_get_size(CONFIG_SYS_BOOT_FLASH_WIDTH, (vu_long *)base,
-	                         &flash_info[0]);
+				 &flash_info[0]);

 #ifndef CONFIG_P3G4
 	printf("[");
@@ -97,17 +97,17 @@ flash_init (void)
 #if CONFIG_SYS_MONITOR_BASE >= CONFIG_SYS_FLASH_BASE
 	/* monitor protection ON by default */
 	flash_protect(FLAG_PROTECT_SET,
-	              CONFIG_SYS_MONITOR_BASE,
-	              CONFIG_SYS_MONITOR_BASE + monitor_flash_len - 1,
-	              flash_get_info(CONFIG_SYS_MONITOR_BASE));
+		      CONFIG_SYS_MONITOR_BASE,
+		      CONFIG_SYS_MONITOR_BASE + monitor_flash_len - 1,
+		      flash_get_info(CONFIG_SYS_MONITOR_BASE));
 #endif

 #ifdef CONFIG_ENV_IS_IN_FLASH
 	/* ENV protection ON by default */
 	flash_protect(FLAG_PROTECT_SET,
-	              CONFIG_ENV_ADDR,
-	              CONFIG_ENV_ADDR + CONFIG_ENV_SIZE - 1,
-	              flash_get_info(CONFIG_ENV_ADDR));
+		      CONFIG_ENV_ADDR,
+		      CONFIG_ENV_ADDR + CONFIG_ENV_SIZE - 1,
+		      flash_get_info(CONFIG_ENV_ADDR));
 #endif

 	flash_size = size_b0 + size_b1;
diff --git a/board/evb64260/mpsc.c b/board/evb64260/mpsc.c
index 1a2ad20..c9da57c 100644
--- a/board/evb64260/mpsc.c
+++ b/board/evb64260/mpsc.c
@@ -785,7 +785,7 @@ galmpsc_shutdown(int mpsc)
 	GT_REG_WRITE(GALSDMA_0_COM_REG + CHANNEL * GALSDMA_REG_DIFF, 0);
 	GT_REG_WRITE(GALSDMA_0_COM_REG + CHANNEL * GALSDMA_REG_DIFF,
-	             SDMA_TX_ABORT | SDMA_RX_ABORT);
+		     SDMA_TX_ABORT | SDMA_RX_ABORT);

 	/* shut down the MPSC */
 	GT_REG_WRITE(GALMPSC_MCONF_LOW, 0);
@@ -797,7 +797,7 @@ galmpsc_shutdown(int mpsc)
 	/* shut down the sdma engines. */
 	/* reset config to default */
 	GT_REG_WRITE(GALSDMA_0_CONF_REG + CHANNEL * GALSDMA_REG_DIFF,
-	             0x000000fc);
+		     0x000000fc);

 	udelay(100);
diff --git a/board/fads/fads.c b/board/fads/fads.c
index 3fe318f..89dd9ef 100644
--- a/board/fads/fads.c
+++ b/board/fads/fads.c
@@ -434,7 +434,7 @@ static int _initsdram(uint base, uint noMbytes)
 	 */
 	memctl->memc_mcr = 0x80808111;	/* run umpb cs4 1 count 1, addr 0x11 ??? (50MHz) */
-	                                /* run umpb cs4 1 count 1, addr 0x11 precharge+MRS (100MHz) */
+					/* run umpb cs4 1 count 1, addr 0x11 precharge+MRS (100MHz) */
 	udelay(200);

 	/* Run 8 refresh cycles */
@@ -567,7 +567,7 @@ static int initsdram(uint base, uint *noMbytes)
 	if(!_initsdram(base, m))
 	{
-	        *noMbytes += m;
+		*noMbytes += m;
 		return 0;
 	}
 	else
diff --git a/board/freescale/b4860qds/tlb.c b/board/freescale/b4860qds/tlb.c
index f71aca4..00798a1 100644
--- a/board/freescale/b4860qds/tlb.c
+++ b/board/freescale/b4860qds/tlb.c
@@ -68,7 +68,7 @@ struct fsl_e_tlb_entry tlb_table[] = {
 		      0, 3, BOOKE_PAGESZ_256M, 1),

 	SET_TLB_ENTRY(1, CONFIG_SYS_PCIE1_MEM_VIRT + 0x10000000,
-	              CONFIG_SYS_PCIE1_MEM_PHYS + 0x10000000,
+		      CONFIG_SYS_PCIE1_MEM_PHYS + 0x10000000,
 		      MAS3_SX|MAS3_SW|MAS3_SR, MAS2_I|MAS2_G,
 		      0, 4, BOOKE_PAGESZ_256M, 1),
diff --git a/board/freescale/bsc9132qds/README b/board/freescale/bsc9132qds/README
index 4a3dbfe..f8377c9 100644
--- a/board/freescale/bsc9132qds/README
+++ b/board/freescale/bsc9132qds/README
@@ -23,14 +23,14 @@ Overview
 	  ECC), up to 1333 MHz data rate
 	- Dedicated security engine featuring trusted boot
 	- Two DMA controllers
-	   - OCNDMA with four bidirectional channels
-	   - SysDMA with sixteen bidirectional channels
+		- OCNDMA with four bidirectional channels
+		- SysDMA with sixteen bidirectional channels
 	- Interfaces
-	   - Four-lane SerDes PHY
+		- Four-lane SerDes PHY
 	- PCI Express controller complies with the PEX Specification-Rev 2.0
-	   - Two Common Public Radio Interface (CPRI) controller lanes
+		- Two Common Public Radio Interface (CPRI) controller lanes
 	- High-speed USB 2.0 host and device controller with ULPI interface
-	   - Enhanced secure digital (SD/MMC) host controller (eSDHC)
+		- Enhanced secure digital (SD/MMC) host controller (eSDHC)
 	- Antenna interface controller (AIC), supporting four industry
 	  standard JESD207/four custom ADI RF interfaces
 	- ADI lanes support both full duplex FDD support & half duplex TDD
diff --git a/board/freescale/mpc8313erdb/mpc8313erdb.c b/board/freescale/mpc8313erdb/mpc8313erdb.c
index 591a4c6..69e98a5 100644
--- a/board/freescale/mpc8313erdb/mpc8313erdb.c
+++ b/board/freescale/mpc8313erdb/mpc8313erdb.c
@@ -129,12 +129,12 @@ void board_init_f(ulong bootflag)
 {
 	board_early_init_f();
 	NS16550_init((NS16550_t)(CONFIG_SYS_IMMR + 0x4500),
-	             CONFIG_SYS_NS16550_CLK / 16 / CONFIG_BAUDRATE);
+		     CONFIG_SYS_NS16550_CLK / 16 / CONFIG_BAUDRATE);
 	puts("NAND boot... ");
 	init_timebase();
 	initdram(0);
 	relocate_code(CONFIG_SYS_NAND_U_BOOT_RELOC_SP, (gd_t *)gd,
-	              CONFIG_SYS_NAND_U_BOOT_RELOC);
+		      CONFIG_SYS_NAND_U_BOOT_RELOC);
 }

 void board_init_r(gd_t *gd, ulong dest_addr)
diff --git a/board/freescale/mpc8360emds/mpc8360emds.c b/board/freescale/mpc8360emds/mpc8360emds.c
index 39a86df..ac96163 100644
--- a/board/freescale/mpc8360emds/mpc8360emds.c
+++ b/board/freescale/mpc8360emds/mpc8360emds.c
@@ -427,7 +427,7 @@ void ft_board_setup(void *blob, bd_t *bd)
 	if (prop) {
 		path = fdt_path_offset(blob, prop);
 		prop = fdt_getprop(blob, path,
-		                   "phy-connection-type", 0);
+				   "phy-connection-type", 0);
 		if (prop && (strcmp(prop, "rgmii-id") == 0))
 			fdt_fixup_phy_connection(blob, path,
 				PHY_INTERFACE_MODE_RGMII_RXID);
@@ -439,7 +439,7 @@ void ft_board_setup(void *blob, bd_t *bd)
 	if (prop) {
 		path = fdt_path_offset(blob, prop);
 		prop = fdt_getprop(blob, path,
-		                   "phy-connection-type", 0);
+				   "phy-connection-type", 0);
 		if (prop && (strcmp(prop, "rgmii-id") == 0))
 			fdt_fixup_phy_connection(blob, path,
 				PHY_INTERFACE_MODE_RGMII_RXID);
diff --git a/board/funkwerk/vovpn-gw/vovpn-gw.c b/board/funkwerk/vovpn-gw/vovpn-gw.c
index d33563b..c2aad6e 100644
--- a/board/funkwerk/vovpn-gw/vovpn-gw.c
+++ b/board/funkwerk/vovpn-gw/vovpn-gw.c
@@ -270,9 +270,9 @@ int misc_init_r (void)
 	for (i = 0; i < 64; i++) {
 		c = *dummy;
 		printf( "UPMA[%02d]: 0x%08lx,0x%08lx: 0x%08lx\n",i,
-		        memctl->memc_mamr,
-		        memctl->memc_mar,
-		        memctl->memc_mdr );
+			memctl->memc_mamr,
+			memctl->memc_mar,
+			memctl->memc_mdr );
 	}
 	memctl->memc_mamr = 0x00044440;
 #endif
diff --git a/board/gen860t/flash.c b/board/gen860t/flash.c
index 8433d5d..ca1ed3d 100644
--- a/board/gen860t/flash.c
+++ b/board/gen860t/flash.c
@@ -182,7 +182,7 @@ flash_get_offsets (ulong base, flash_info_t *info)
 	default:
 		printf ("Don't know sector offsets for FLASH"
-		        " type 0x%lx\n", info->flash_id);
+			" type 0x%lx\n", info->flash_id);
 		return;
 	}
 }
diff --git a/board/incaip/incaip.c b/board/incaip/incaip.c
index 911fb63..217b8af 100644
--- a/board/incaip/incaip.c +++ b/board/incaip/incaip.c @@ -58,9 +58,9 @@ phys_size_t initdram(int board_type) for (rows = 0xB; rows <= 0xD; rows++) { *INCA_IP_SDRAM_MC_CFGPB0 = (0x14 << 8) | - (rows << 4) | cols; + (rows << 4) | cols; size = get_ram_size((long *)CONFIG_SYS_SDRAM_BASE, - max_sdram_size()); + max_sdram_size()); if (size > max_size) { diff --git a/board/matrix_vision/mvbc_p/Makefile b/board/matrix_vision/mvbc_p/Makefile index 84add28..61474aa 100644 --- a/board/matrix_vision/mvbc_p/Makefile +++ b/board/matrix_vision/mvbc_p/Makefile @@ -19,7 +19,7 @@ OBJS := $(addprefix $(obj),$(COBJS)) SOBJS := $(addprefix $(obj),$(SOBJS)) $(LIB): $(obj).depend $(OBJS) - $(call cmd_link_o_target, $(OBJS)) + $(call cmd_link_o_target, $(OBJS)) ######################################################################### diff --git a/board/matrix_vision/mvsmr/Makefile b/board/matrix_vision/mvsmr/Makefile index 0e53e8d..ef768fe 100644 --- a/board/matrix_vision/mvsmr/Makefile +++ b/board/matrix_vision/mvsmr/Makefile @@ -19,7 +19,7 @@ OBJS := $(addprefix $(obj),$(COBJS)) SOBJS := $(addprefix $(obj),$(SOBJS)) $(LIB): $(obj).depend $(OBJS) - $(call cmd_link_o_target, $(OBJS)) + $(call cmd_link_o_target, $(OBJS)) @mkimage -T script -C none -n mvSMR_Script -d bootscript $(obj)bootscript.img ######################################################################### diff --git a/board/openrisc/openrisc-generic/or1ksim.cfg b/board/openrisc/openrisc-generic/or1ksim.cfg index 9926093..2bd8642 100644 --- a/board/openrisc/openrisc-generic/or1ksim.cfg +++ b/board/openrisc/openrisc-generic/or1ksim.cfg @@ -20,8 +20,8 @@ the simulator. not found too, it reverts to the built-in default configuration. NOTE: Users should not rely on the built-in configuration, since the - default configuration may differ between version. - Rather create a configuration file that sets all critical values. + default configuration may differ between version. 
+ Rather create a configuration file that sets all critical values. This file may contain (standard C) comments only - no // support. @@ -306,7 +306,7 @@ end debug = 0-9 0 : no debug messages 1-9: debug message level. - higher numbers produce more messages + higher numbers produce more messages profile = 0/1 '0': don't generate profiling file 'sim.profile' @@ -375,11 +375,11 @@ end Core Verification. enabled = 0/1 - '0': disbable VAPI server - '1': enable/start VAPI server + '0': disbable VAPI server + '1': enable/start VAPI server server_port = - TCP/IP port to start VAPI server on + TCP/IP port to start VAPI server on log_enabled = 0/1 '0': disable VAPI requests logging @@ -565,56 +565,56 @@ end This section configures the UARTs enabled = <0|1> - Enable/disable the peripheral. By default if it is enabled. + Enable/disable the peripheral. By default if it is enabled. baseaddr = - address of first UART register for this device + address of first UART register for this device channel = : - The channel parameter indicates the source of received UART characters - and the sink for transmitted UART characters. + The channel parameter indicates the source of received UART characters + and the sink for transmitted UART characters. - The can be either "file", "xterm", "tcp", "fd", or "tty" - (without quotes). + The can be either "file", "xterm", "tcp", "fd", or "tty" + (without quotes). 
- A) To send/receive characters from a pair of files, use a file - channel: + A) To send/receive characters from a pair of files, use a file + channel: - channel=file:, + channel=file:, B) To create an interactive terminal window, use an xterm channel: - channel=xterm:[]* + channel=xterm:[]* C) To create a bidirectional tcp socket which one could, for example, - access via telnet, use a tcp channel: + access via telnet, use a tcp channel: - channel=tcp: + channel=tcp: D) To cause the UART to read/write from existing numeric file - descriptors, use an fd channel: + descriptors, use an fd channel: - channel=fd:, + channel=fd:, - E) To connect the UART to a physical serial port, create a tty - channel: + E) To connect the UART to a physical serial port, create a tty + channel: channel=tty:device=/dev/ttyS0,baud=9600 irq = - irq number for this device + irq number for this device 16550 = 0/1 - '0': this device is a UART16450 - '1': this device is a UART16550 + '0': this device is a UART16450 + '1': this device is a UART16550 jitter = - in msecs... time to block, -1 to disable it + in msecs... time to block, -1 to disable it vapi_id = - VAPI id of this instance + VAPI id of this instance */ section uart @@ -634,16 +634,16 @@ end This section configures the DMAs enabled = <0|1> - Enable/disable the peripheral. By default if it is enabled. + Enable/disable the peripheral. By default if it is enabled. baseaddr = - address of first DMA register for this device + address of first DMA register for this device irq = - irq number for this device + irq number for this device vapi_id = - VAPI id of this instance + VAPI id of this instance */ section dma @@ -658,37 +658,37 @@ end This section configures the ETHERNETs enabled = <0|1> - Enable/disable the peripheral. By default if it is enabled. + Enable/disable the peripheral. By default if it is enabled. 
baseaddr = - address of first ethernet register for this device + address of first ethernet register for this device dma = - which controller is this ethernet "connected" to + which controller is this ethernet "connected" to irq = - ethernet mac IRQ level + ethernet mac IRQ level rtx_type = - use 0 - file interface, 1 - socket interface + use 0 - file interface, 1 - socket interface rx_channel = - DMA channel used for RX + DMA channel used for RX tx_channel = - DMA channel used for TX + DMA channel used for TX rxfile = "" - filename, where to read data from + filename, where to read data from txfile = "" - filename, where to write data to + filename, where to write data to sockif = "" - interface name of ethernet socket + interface name of ethernet socket vapi_id = - VAPI id of this instance + VAPI id of this instance */ section ethernet @@ -711,16 +711,16 @@ end This section configures the GPIOs enabled = <0|1> - Enable/disable the peripheral. By default if it is enabled. + Enable/disable the peripheral. By default if it is enabled. baseaddr = - address of first GPIO register for this device + address of first GPIO register for this device irq = - irq number for this device + irq number for this device base_vapi_id = - first VAPI id of this instance + first VAPI id of this instance GPIO uses 8 consecutive VAPI IDs */ @@ -736,19 +736,19 @@ end This section configures the VGA/LCD controller enabled = <0|1> - Enable/disable the peripheral. By default if it is enabled. + Enable/disable the peripheral. By default if it is enabled. baseaddr = - address of first VGA register + address of first VGA register irq = - irq number for this device + irq number for this device refresh_rate = - number of cycles between screen dumps + number of cycles between screen dumps filename = "" - template name for generated names (e.g. "primary" produces "primary0023.bmp") + template name for generated names (e.g. 
"primary" produces "primary0023.bmp") */ section vga @@ -825,39 +825,39 @@ end This section configures the ATA/ATAPI host controller baseaddr = - address of first ATA register + address of first ATA register enabled = <0|1> - Enable/disable the peripheral. By default if it is enabled. + Enable/disable the peripheral. By default if it is enabled. irq = - irq number for this device + irq number for this device debug = - debug level for ata models. + debug level for ata models. 0: no debug messages 1: verbose messages 3: normal messages (more messages than verbose) - 5: debug messages (normal debug messages) + 5: debug messages (normal debug messages) 7: flow control messages (debug statemachine flows) 9: low priority message (display everything the code does) dev_type0/1 = - ata device 0 type - 0: NO_CONNeCT: none (not connected) + ata device 0 type + 0: NO_CONNeCT: none (not connected) 1: FILE : simulated harddisk 2: LOCAL : local system harddisk dev_file0/1 = "" - filename for simulated ATA device + filename for simulated ATA device valid only if dev_type0 == 1 dev_size0/1 = - size of simulated hard-disk (in MBytes) + size of simulated hard-disk (in MBytes) valid only if dev_type0 == 1 dev_packet0/1 = - 0: simulated ATA device does NOT implement PACKET command feature set + 0: simulated ATA device does NOT implement PACKET command feature set 1: simulated ATA device does implement PACKET command feature set FIXME: irq number diff --git a/board/ppmc8260/strataflash.c b/board/ppmc8260/strataflash.c index 03d36ee..ea3c42e 100644 --- a/board/ppmc8260/strataflash.c +++ b/board/ppmc8260/strataflash.c @@ -407,7 +407,7 @@ static int flash_full_status_check(flash_info_t * info, ulong sector, ulong tout printf("Command Sequence Error.\n"); } else if(flash_isset(info, sector, 0, FLASH_STATUS_ECLBS)){ printf("Block Erase Error.\n"); - retcode = ERR_NOT_ERASED; + retcode = ERR_NOT_ERASED; } else if (flash_isset(info, sector, 0, FLASH_STATUS_PSLBS)) { printf("Locking Error\n"); 
} diff --git a/board/pxa255_idp/pxa_reg_calcs.py b/board/pxa255_idp/pxa_reg_calcs.py index 786edc6..4a721d1 100644 --- a/board/pxa255_idp/pxa_reg_calcs.py +++ b/board/pxa255_idp/pxa_reg_calcs.py @@ -21,7 +21,7 @@ class gpio: self.clr = clr self.alt = alt self.desc = desc - + # the following is a dictionary of all GPIOs in the system # the key is the GPIO number @@ -280,8 +280,8 @@ for reg in registers: # print define to past right into U-Boot source code -print -print +print +print for reg in registers: print '#define %s 0x%x' % (uboot_reg_names[reg], pxa_regs[reg]) diff --git a/board/rbc823/rbc823.c b/board/rbc823/rbc823.c index f276e5e..5881111 100644 --- a/board/rbc823/rbc823.c +++ b/board/rbc823/rbc823.c @@ -78,7 +78,7 @@ const uint static_table[] = 0x0FFFFC04, 0x0FF3FC04, 0x0FF3CC04, 0x0FF3CC04, 0x0FF3EC04, 0x0FF3CC00, 0x0FF7FC04, 0x3FFFFC04, 0xFFFFFC04, 0xFFFFFC05, /* last */ - _NOT_USED_, _NOT_USED_, + _NOT_USED_, _NOT_USED_, _NOT_USED_, _NOT_USED_, _NOT_USED_, _NOT_USED_, _NOT_USED_, _NOT_USED_, _NOT_USED_, _NOT_USED_, _NOT_USED_, _NOT_USED_, _NOT_USED_, _NOT_USED_, diff --git a/board/samsung/dts/exynos5250-snow.dts b/board/samsung/dts/exynos5250-snow.dts index 9b7e57e..12cd67e 100644 --- a/board/samsung/dts/exynos5250-snow.dts +++ b/board/samsung/dts/exynos5250-snow.dts @@ -71,9 +71,9 @@ codec-enable-gpio = <&gpio 0xb7 0>; }; - sound@12d60000 { - status = "disabled"; - }; + sound@12d60000 { + status = "disabled"; + }; i2c@12cd0000 { soundcodec@22 { diff --git a/board/svm_sc8xx/svm_sc8xx.c b/board/svm_sc8xx/svm_sc8xx.c index a508595..5db4850 100644 --- a/board/svm_sc8xx/svm_sc8xx.c +++ b/board/svm_sc8xx/svm_sc8xx.c @@ -139,6 +139,6 @@ phys_size_t initdram (int board_type) #if defined(CONFIG_CMD_DOC) void doc_init (void) { - doc_probe (CONFIG_SYS_DOC_BASE); + doc_probe (CONFIG_SYS_DOC_BASE); } #endif diff --git a/boards.cfg b/boards.cfg index 24df88b..c90dddd 100644 --- a/boards.cfg +++ b/boards.cfg @@ -196,7 +196,7 @@ Active arm arm926ejs mb86r0x syteco 
jadecpu Active arm arm926ejs mx25 freescale mx25pdk mx25pdk mx25pdk:IMX_CONFIG=board/freescale/mx25pdk/imximage.cfg Fabio Estevam Active arm arm926ejs mx25 karo tx25 tx25 - John Rigby Active arm arm926ejs mx25 syteco zmx25 zmx25 - Matthias Weisser -Active arm arm926ejs mx27 armadeus apf27 apf27 - Philippe Reynes :Eric Jarrige +Active arm arm926ejs mx27 armadeus apf27 apf27 - Philippe Reynes :Eric Jarrige Active arm arm926ejs mx27 logicpd imx27lite imx27lite - Wolfgang Denk Active arm arm926ejs mx27 logicpd imx27lite magnesium - Heiko Schocher Active arm arm926ejs mxs bluegiga apx4devkit apx4devkit apx4devkit Lauri Hintsala diff --git a/common/cmd_bmp.c b/common/cmd_bmp.c index c093fef..cc904c2 100644 --- a/common/cmd_bmp.c +++ b/common/cmd_bmp.c @@ -121,9 +121,9 @@ static int do_bmp_display(cmd_tbl_t * cmdtp, int flag, int argc, char * const ar break; case 4: addr = simple_strtoul(argv[1], NULL, 16); - x = simple_strtoul(argv[2], NULL, 10); - y = simple_strtoul(argv[3], NULL, 10); - break; + x = simple_strtoul(argv[2], NULL, 10); + y = simple_strtoul(argv[3], NULL, 10); + break; default: return CMD_RET_USAGE; } diff --git a/common/main.c b/common/main.c index ae37fee..6f475f0 100644 --- a/common/main.c +++ b/common/main.c @@ -365,7 +365,7 @@ static void process_boot_delay(void) #ifdef CONFIG_BOOTCOUNT_LIMIT if (bootlimit && (bootcount > bootlimit)) { printf ("Warning: Bootlimit (%u) exceeded. Using altbootcmd.\n", - (unsigned)bootlimit); + (unsigned)bootlimit); s = getenv ("altbootcmd"); } else diff --git a/config.mk b/config.mk index 48913f6..91a8f24 100644 --- a/config.mk +++ b/config.mk @@ -320,7 +320,7 @@ endif # Linus' kernel sanity checking tool CHECKFLAGS := -D__linux__ -Dlinux -D__STDC__ -Dunix -D__unix__ \ - -Wbitwise -Wno-return-void -D__CHECK_ENDIAN__ $(CF) + -Wbitwise -Wno-return-void -D__CHECK_ENDIAN__ $(CF) # Location of a usable BFD library, where we define "usable" as # "built for ${HOST}, supports ${TARGET}". 
Sensible values are diff --git a/doc/DocBook/Makefile b/doc/DocBook/Makefile index 521e8bc..29b79d7 100644 --- a/doc/DocBook/Makefile +++ b/doc/DocBook/Makefile @@ -134,7 +134,7 @@ build_main_index = rm -rf $(main_idx); \ quiet_cmd_db2html = HTML $@ cmd_db2html = xmlto html $(XMLTOFLAGS) -o $(patsubst %.html,%,$@) $< && \ echo ' \ - $(patsubst %.html,%,$(notdir $@))
' > $@ + $(patsubst %.html,%,$(notdir $@))
' > $@ %.html: %.xml @(which xmlto > /dev/null 2>&1) || \ @@ -143,7 +143,7 @@ quiet_cmd_db2html = HTML $@ @rm -rf $@ $(patsubst %.html,%,$@) $(call cmd_db2html) @if [ ! -z "$(PNG-$(basename $(notdir $@)))" ]; then \ - cp $(PNG-$(basename $(notdir $@))) $(patsubst %.html,%,$@); fi + cp $(PNG-$(basename $(notdir $@))) $(patsubst %.html,%,$@); fi quiet_cmd_db2man = MAN $@ cmd_db2man = if grep -q refentry $<; then xmlto man $(XMLTOFLAGS) -o $(obj)/man $< ; gzip -f $(obj)/man/*.9; fi diff --git a/doc/README.ext4 b/doc/README.ext4 index b7d0ad3..9a2de50 100644 --- a/doc/README.ext4 +++ b/doc/README.ext4 @@ -28,30 +28,30 @@ Steps to test: 1. After applying the patch, ext4 specific commands can be seen in the boot loader prompt using - UBOOT #help + UBOOT #help - ext4load- load binary file from a Ext4 file system - ext4ls - list files in a directory (default /) - ext4write- create a file in ext4 formatted partition + ext4load- load binary file from a Ext4 file system + ext4ls - list files in a directory (default /) + ext4write- create a file in ext4 formatted partition 2. To list the files in ext4 formatted partition, execute - ext4ls [directory] - For example: - UBOOT #ext4ls mmc 0:5 /usr/lib + ext4ls [directory] + For example: + UBOOT #ext4ls mmc 0:5 /usr/lib 3. To read and load a file from an ext4 formatted partition to RAM, execute - ext4load [addr] [filename] [bytes] - For example: - UBOOT #ext4load mmc 2:2 0x30007fc0 uImage + ext4load [addr] [filename] [bytes] + For example: + UBOOT #ext4load mmc 2:2 0x30007fc0 uImage 4. To write a file to a ext4 formatted partition. - a) First load a file to RAM at a particular address for example 0x30007fc0. 
- Now execute ext4write command - ext4write [filename] [Address] [sizebytes] - For example: - UBOOT #ext4write mmc 2:2 /boot/uImage 0x30007fc0 6183120 - (here 6183120 is the size of the file to be written) - Note: Absolute path is required for the file to be written + a) First load a file to RAM at a particular address for example 0x30007fc0. + Now execute ext4write command + ext4write [filename] [Address] [sizebytes] + For example: + UBOOT #ext4write mmc 2:2 /boot/uImage 0x30007fc0 6183120 + (here 6183120 is the size of the file to be written) + Note: Absolute path is required for the file to be written References : -- ext4 implementation in Linux Kernel diff --git a/doc/README.kwbimage b/doc/README.kwbimage index e91c387..8ed708c 100644 --- a/doc/README.kwbimage +++ b/doc/README.kwbimage @@ -40,9 +40,9 @@ Board specific configuration file specifications: ------------------------------------------------ 1. This file must present in the $(BOARDDIR). The default name is kwbimage.cfg. The name can be set as part of the full path - to the file using CONFIG_SYS_KWD_CONFIG (probably in - include/configs/.h). The path should look like: - $(SRCTREE)/$(CONFIG_BOARDDIR)/.cfg + to the file using CONFIG_SYS_KWD_CONFIG (probably in + include/configs/.h). The path should look like: + $(SRCTREE)/$(CONFIG_BOARDDIR)/.cfg 2. This file can have empty lines and lines starting with "#" as first character to put comments 3. 
This file can have configuration command lines as mentioned below, diff --git a/doc/README.mxc_hab b/doc/README.mxc_hab index 97f8b7d..43e64a2 100644 --- a/doc/README.mxc_hab +++ b/doc/README.mxc_hab @@ -20,7 +20,7 @@ Data Size: 327680 Bytes = 320.00 kB = 0.31 MB Load Address: 177ff420 Entry Point: 17800000 HAB Blocks: 177ff400 00000000 0004dc00 - ^^^^^^^^ ^^^^^^^^ ^^^^^^^^ + ^^^^^^^^ ^^^^^^^^ ^^^^^^^^ | | | | | -------- (1) | | diff --git a/doc/README.mxsimage b/doc/README.mxsimage index 88a2caf..0d31cba 100644 --- a/doc/README.mxsimage +++ b/doc/README.mxsimage @@ -10,7 +10,7 @@ The mxsimage tool is targeted to be a simple replacement for the elftosb2 . To generate an image, write an image configuration file and run: mkimage -A arm -O u-boot -T mxsimage -n \ - + The output bootstream file is usually using the .sb file extension. Note that the example configuration files for producing bootable BootStream with @@ -54,33 +54,33 @@ These semantics and rules will be outlined now. LOAD u32_address string_filename - Instructs the BootROM to load file pointed by "string_filename" onto - address "u32_address". + address "u32_address". LOAD IVT u32_address u32_IVT_entry_point - Crafts and loads IVT onto address "u32_address" with the entry point - of u32_IVT_entry_point. + of u32_IVT_entry_point. - i.MX28-specific instruction! LOAD DCD u32_address u32_DCD_block_ID - Loads the DCD block with ID "u32_DCD_block_ID" onto address - "u32_address" and executes the contents of this DCD block + "u32_address" and executes the contents of this DCD block - i.MX28-specific instruction! FILL u32_address u32_pattern u32_length - Starts to write memory from addres "u32_address" with a pattern - specified by "u32_pattern". Writes exactly "u32_length" bytes of the + specified by "u32_pattern". Writes exactly "u32_length" bytes of the pattern. JUMP [HAB] u32_address [u32_r0_arg] - Jumps onto memory address specified by "u32_address" by setting this - address in PT. 
The BootROM will pass the "u32_r0_arg" value in ARM + address in PT. The BootROM will pass the "u32_r0_arg" value in ARM register "r0" to the executed code if this option is specified. Otherwise, ARM register "r0" will default to value 0x00000000. The optional "HAB" flag is i.MX28-specific flag turning on the HAB boot. CALL [HAB] u32_address [u32_r0_arg] - See JUMP instruction above, as the operation is exactly the same with - one difference. The CALL instruction does allow returning into the + one difference. The CALL instruction does allow returning into the BootROM from the executed code. U-Boot makes use of this in it's SPL code. @@ -88,10 +88,10 @@ These semantics and rules will be outlined now. - Restart the CPU and start booting from device specified by the "string_mode" argument. The "string_mode" differs for each CPU and can be: - i.MX23, string_mode = USB/I2C/SPI1_FLASH/SPI2_FLASH/NAND_BCH - JTAG/SPI3_EEPROM/SD_SSP0/SD_SSP1 - i.MX28, string_mode = USB/I2C/SPI2_FLASH/SPI3_FLASH/NAND_BCH - JTAG/SPI2_EEPROM/SD_SSP0/SD_SSP1 + i.MX23, string_mode = USB/I2C/SPI1_FLASH/SPI2_FLASH/NAND_BCH + JTAG/SPI3_EEPROM/SD_SSP0/SD_SSP1 + i.MX28, string_mode = USB/I2C/SPI2_FLASH/SPI3_FLASH/NAND_BCH + JTAG/SPI2_EEPROM/SD_SSP0/SD_SSP1 - An optional "DCD" blocks can be added at the begining of the configuration file. Note that the DCD is only supported on i.MX28. @@ -99,7 +99,7 @@ These semantics and rules will be outlined now. configuration file. - The DCD block has the following semantics: - DCD u32_DCD_block_ID + DCD u32_DCD_block_ID - u32_DCD_block_ID :: The ID number of the DCD block, must match the ID number used by "LOAD DCD" instruction. 
diff --git a/doc/README.nokia_rx51 b/doc/README.nokia_rx51 index a8fdfcd..586ed7c 100644 --- a/doc/README.nokia_rx51 +++ b/doc/README.nokia_rx51 @@ -62,7 +62,7 @@ Available additional commands/variables: * run trymmcscriptboot - Try to load and boot script ${mmcscriptfile} * run trymmckernboot - Try to load and boot kernel image ${mmckernfile} * run trymmckerninitrdboot - Try to load and boot kernel image ${mmckernfile} - with initrd image ${mmcinitrdfile} + with initrd image ${mmcinitrdfile} Additional variables for loading files from mmc: diff --git a/doc/README.ramboot-ppc85xx b/doc/README.ramboot-ppc85xx index 8ed45fb..5cc546a 100644 --- a/doc/README.ramboot-ppc85xx +++ b/doc/README.ramboot-ppc85xx @@ -23,11 +23,11 @@ methods could be handy. the board.And then execute the bootloader from DDR. Some usecases where this may be used: - While developing some new feature of u-boot, for example USB driver or - SPI driver. - Suppose the board already has a working bootloader on it. And you would - prefer to keep it intact, at the same time want to test your bootloader. - In this case you can get your test bootloader binary into DDR via tftp - for example. Then execute the test bootloader. + SPI driver. + Suppose the board already has a working bootloader on it. And you would + prefer to keep it intact, at the same time want to test your bootloader. + In this case you can get your test bootloader binary into DDR via tftp + for example. Then execute the test bootloader. - Suppose a platform already has a propreitery bootloader which does not support for example AMP boot. In this case also RAM boot loader can be utilized. 
diff --git a/doc/README.trace b/doc/README.trace index 8106290..f0c9699 100644 --- a/doc/README.trace +++ b/doc/README.trace @@ -63,20 +63,20 @@ In: serial Out: serial Err: serial =>trace stats - 671,406 function sites - 69,712 function calls - 0 untracked function calls - 73,373 traced function calls - 16 maximum observed call depth - 15 call depth limit - 66,491 calls not traced due to depth + 671,406 function sites + 69,712 function calls + 0 untracked function calls + 73,373 traced function calls + 16 maximum observed call depth + 15 call depth limit + 66,491 calls not traced due to depth =>trace stats - 671,406 function sites + 671,406 function sites 1,279,450 function calls - 0 untracked function calls - 950,490 traced function calls (333217 dropped due to overflow) - 16 maximum observed call depth - 15 call depth limit + 0 untracked function calls + 950,490 traced function calls (333217 dropped due to overflow) + 16 maximum observed call depth + 15 call depth limit 1,275,767 calls not traced due to depth =>trace calls 0 e00000 Call list dumped to 00000000, size 0xae0a40 diff --git a/doc/README.ubi b/doc/README.ubi index 3cf4ef2..007de44 100644 --- a/doc/README.ubi +++ b/doc/README.ubi @@ -185,8 +185,8 @@ ubifsls [directory] For example: => ubifsls - 17442 Thu Jan 01 02:57:38 1970 imx28-evk.dtb - 2998146 Thu Jan 01 02:57:43 1970 zImage + 17442 Thu Jan 01 02:57:38 1970 imx28-evk.dtb + 2998146 Thu Jan 01 02:57:43 1970 zImage And the ubifsload command allows you to load a file from a UBI diff --git a/doc/README.zfs b/doc/README.zfs index f5998f2..7f237c4 100644 --- a/doc/README.zfs +++ b/doc/README.zfs @@ -7,20 +7,20 @@ Steps to test: 1. After applying the patch, zfs specific commands can be seen in the boot loader prompt using - UBOOT #help + UBOOT #help - zfsload- load binary file from a ZFS file system - zfsls - list files in a directory (default /) + zfsload- load binary file from a ZFS file system + zfsls - list files in a directory (default /) 2. 
To list the files in zfs pool, device or partition, execute - zfsls [POOL/@/dir/file] - For example: - UBOOT #zfsls mmc 0:5 /rpool/@/usr/bin/ + zfsls [POOL/@/dir/file] + For example: + UBOOT #zfsls mmc 0:5 /rpool/@/usr/bin/ 3. To read and load a file from an ZFS formatted partition to RAM, execute - zfsload [addr] [filename] [bytes] - For example: - UBOOT #zfsload mmc 2:2 0x30007fc0 /rpool/@/boot/uImage + zfsload [addr] [filename] [bytes] + For example: + UBOOT #zfsload mmc 2:2 0x30007fc0 /rpool/@/boot/uImage References : -- ZFS GRUB sources from Solaris GRUB-0.97 diff --git a/doc/driver-model/UDM-cores.txt b/doc/driver-model/UDM-cores.txt index 4e13188..6032333 100644 --- a/doc/driver-model/UDM-cores.txt +++ b/doc/driver-model/UDM-cores.txt @@ -94,12 +94,12 @@ Pavel Herrmann driver_activate(instance *inst); This call will recursively activate all devices necessary for using the specified device. the code could be simplified as: - { - if (is_activated(inst)) - return; - driver_activate(inst->bus); - get_driver(inst)->probe(inst); - } + { + if (is_activated(inst)) + return; + driver_activate(inst->bus); + get_driver(inst)->probe(inst); + } The case with multiple parents will need to be handled here as well. get_driver is an accessor to available drivers, which will get struct @@ -107,12 +107,12 @@ Pavel Herrmann i2c_write(instance *inst, ...); An actual call to some method of the driver. 
This code will look like: - { - driver_activate(inst); - struct instance *core = get_core_instance(CORE_I2C); - device_ops = get_ops(inst); - device_ops->write(...); - } + { + driver_activate(inst); + struct instance *core = get_core_instance(CORE_I2C); + device_ops = get_ops(inst); + device_ops->write(...); + } get_ops will not be an exported function, it will be internal and specific to the core, as it needs to know how are the ops stored, and what type diff --git a/doc/driver-model/UDM-design.txt b/doc/driver-model/UDM-design.txt index 185f477..9f03bba 100644 --- a/doc/driver-model/UDM-design.txt +++ b/doc/driver-model/UDM-design.txt @@ -87,7 +87,7 @@ III) The drivers of the cores. FIXME: Should *cores[] be really struct driver, pointing to drivers that - represent the cores? Shouldn't it be core instance pointer? + represent the cores? Shouldn't it be core instance pointer? 2) Instantiation of a driver ---------------------------- @@ -101,7 +101,7 @@ III) The drivers functions. FIXME: We need some functions that will return list of busses of certain type - registered with the system so the user can find proper instance even if + registered with the system so the user can find proper instance even if he has no bus pointer (this will come handy if the user isn't registering the driver from board init function, but somewhere else). @@ -183,12 +183,12 @@ III) The drivers int driver_bind(struct instance *in) { ... - core_bind(&core_i2c_static_instance, in, i2c_bus_funcs); - ... + core_bind(&core_i2c_static_instance, in, i2c_bus_funcs); + ... } FIXME: What if we need to run-time determine, depending on some hardware - register, what kind of i2c_bus_funcs to pass? + register, what kind of i2c_bus_funcs to pass? This makes the i2c core aware of a new bus. The i2c_bus_funcs is a constant structure of functions any i2c bus driver must provide to work. This will @@ -196,7 +196,7 @@ III) The drivers the pointer to the instance of a core this driver provides function to. 
FIXME: Maybe replace "core-i2c" with CORE_I2C global pointer to an instance of - the core? + the core? 4) The instantiation of a core driver ------------------------------------- diff --git a/doc/driver-model/UDM-gpio.txt b/doc/driver-model/UDM-gpio.txt index 8ff0a96..87554dd 100644 --- a/doc/driver-model/UDM-gpio.txt +++ b/doc/driver-model/UDM-gpio.txt @@ -56,11 +56,11 @@ II) Approach struct gpio_driver_ops { int (*gpio_request)(struct instance *i, unsigned gpio, - const char *label); + const char *label); int (*gpio_free)(struct instance *i, unsigned gpio); int (*gpio_direction_input)(struct instance *i, unsigned gpio); int (*gpio_direction_output)(struct instance *i, unsigned gpio, - int value); + int value); int (*gpio_get_value)(struct instance *i, unsigned gpio); void (*gpio_set_value)(struct instance *i, unsigned gpio, int value); } diff --git a/doc/driver-model/UDM-hwmon.txt b/doc/driver-model/UDM-hwmon.txt index cc5d529..9048cc0 100644 --- a/doc/driver-model/UDM-hwmon.txt +++ b/doc/driver-model/UDM-hwmon.txt @@ -36,15 +36,15 @@ II) Approach In the UDM each hwmon driver would register itself by a function int hwmon_device_register(struct instance *i, - struct hwmon_device_ops *o); + struct hwmon_device_ops *o); The structure being defined as follows: struct hwmon_device_ops { - int (*read)(struct instance *i, int sensor, int reg); - int (*write)(struct instance *i, int sensor, int reg, - int val); - int (*get_temp)(struct instance *i, int sensor); + int (*read)(struct instance *i, int sensor, int reg); + int (*write)(struct instance *i, int sensor, int reg, + int val); + int (*get_temp)(struct instance *i, int sensor); }; diff --git a/doc/driver-model/UDM-mmc.txt b/doc/driver-model/UDM-mmc.txt index bed4306..1f07d87 100644 --- a/doc/driver-model/UDM-mmc.txt +++ b/doc/driver-model/UDM-mmc.txt @@ -107,7 +107,7 @@ struct mmc { /* DRIVER: Function used to submit command to the card */ int (*send_cmd)(struct mmc *mmc, - struct mmc_cmd *cmd, struct mmc_data 
*data); + struct mmc_cmd *cmd, struct mmc_data *data); /* DRIVER: Function used to configure the host */ void (*set_ios)(struct mmc *mmc); @@ -139,7 +139,7 @@ provided by the MMC driver: struct mmc_driver_ops { /* Function used to submit command to the card */ int (*send_cmd)(struct mmc *mmc, - struct mmc_cmd *cmd, struct mmc_data *data); + struct mmc_cmd *cmd, struct mmc_data *data); /* DRIVER: Function used to configure the host */ void (*set_ios)(struct mmc *mmc); /* Function used to initialize the host */ @@ -206,7 +206,7 @@ struct mmc_card_props { The probe() function will then register the MMC driver by calling: mmc_device_register(struct instance *i, struct mmc_driver_ops *o, - struct mmc_driver_params *p); + struct mmc_driver_params *p); The struct mmc_driver_params will have to be dynamic in some cases, but the driver shouldn't modify it's contents elsewhere than in probe() call. diff --git a/doc/driver-model/UDM-power.txt b/doc/driver-model/UDM-power.txt index 9ac1a5f..015c773 100644 --- a/doc/driver-model/UDM-power.txt +++ b/doc/driver-model/UDM-power.txt @@ -57,20 +57,20 @@ III) Analysis of in-tree drivers All methods of this file are moved to another location. void ftpmu010_32768osc_enable(void): Move to boards hacks void ftpmu010_mfpsr_select_dev(unsigned int dev): Move to board file - arch/nds32/lib/board.c + arch/nds32/lib/board.c void ftpmu010_mfpsr_diselect_dev(unsigned int dev): Dead code void ftpmu010_dlldis_disable(void): Dead code void ftpmu010_sdram_clk_disable(unsigned int cr0): Move to board file - arch/nds32/lib/board.c + arch/nds32/lib/board.c void ftpmu010_sdramhtc_set(unsigned int val): Move to board file - arch/nds32/lib/board.c + arch/nds32/lib/board.c 2) twl4030.c ------------ All methods of this file are moved to another location. 
void twl4030_power_reset_init(void): Move to board hacks void twl4030_pmrecv_vsel_cfg(u8 vsel_reg, u8 vsel_val, u8 dev_grp, - u8 dev_grp_sel): Move to board hacks + u8 dev_grp_sel): Move to board hacks void twl4030_power_init(void): Move to board hacks void twl4030_power_mmc_init(void): Move to board hacks @@ -83,6 +83,6 @@ III) Analysis of in-tree drivers int twl6030_get_battery_voltage(void): Convert to new API void twl6030_init_battery_charging(void): Convert to new API void twl6030_power_mmc_init(): Move to board file - drivers/mmc/omap_hsmmc.c + drivers/mmc/omap_hsmmc.c void twl6030_usb_device_settings(): Move to board file - drivers/usb/musb/omap3.c + drivers/usb/musb/omap3.c diff --git a/doc/driver-model/UDM-rtc.txt b/doc/driver-model/UDM-rtc.txt index 6aaeb86..8391f38 100644 --- a/doc/driver-model/UDM-rtc.txt +++ b/doc/driver-model/UDM-rtc.txt @@ -12,15 +12,15 @@ U-Boot currently implements one common API for RTC devices. The interface is defined in include/rtc.h and comprises of functions and structures: struct rtc_time { - int tm_sec; - int tm_min; - int tm_hour; - int tm_mday; - int tm_mon; - int tm_year; - int tm_wday; - int tm_yday; - int tm_isdst; + int tm_sec; + int tm_min; + int tm_hour; + int tm_mday; + int tm_mon; + int tm_year; + int tm_wday; + int tm_yday; + int tm_isdst; }; int rtc_get (struct rtc_time *); @@ -42,14 +42,14 @@ II) Approach In the UDM each rtc driver would register itself by a function int rtc_device_register(struct instance *i, - struct rtc_device_ops *o); + struct rtc_device_ops *o); The structure being defined as follows: struct rtc_device_ops { - int (*get_time)(struct instance *i, struct rtc_time *t); - int (*set_time)(struct instance *i, struct rtc_time *t); - int (*reset)(struct instance *i); + int (*get_time)(struct instance *i, struct rtc_time *t); + int (*set_time)(struct instance *i, struct rtc_time *t); + int (*reset)(struct instance *i); }; diff --git a/doc/driver-model/UDM-spi.txt b/doc/driver-model/UDM-spi.txt 
index 7442a32..6e6acc8 100644 --- a/doc/driver-model/UDM-spi.txt +++ b/doc/driver-model/UDM-spi.txt @@ -15,12 +15,12 @@ I) Overview void spi_init(void); struct spi_slave *spi_setup_slave(unsigned int bus, unsigned int cs, - unsigned int max_hz, unsigned int mode); + unsigned int max_hz, unsigned int mode); void spi_free_slave(struct spi_slave *slave); int spi_claim_bus(struct spi_slave *slave); void spi_release_bus(struct spi_slave *slave); int spi_xfer(struct spi_slave *slave, unsigned int bitlen, - const void *dout, void *din, unsigned long flags); + const void *dout, void *din, unsigned long flags); int spi_cs_is_valid(unsigned int bus, unsigned int cs); void spi_cs_activate(struct spi_slave *slave); void spi_cs_deactivate(struct spi_slave *slave); @@ -69,13 +69,13 @@ II) Approach struct ops { int (*spi_request_bus)(struct instance *i, unsigned int bus, - unsigned int cs, unsigned int max_hz, - unsigned int mode); + unsigned int cs, unsigned int max_hz, + unsigned int mode); void (*spi_release_bus)(struct instance *i); int (*spi_xfer) (struct instance *i, unsigned int bitlen, - const void *dout, void *din, unsigned long flags); + const void *dout, void *din, unsigned long flags); int (*spi_cs_is_valid)(struct instance *i, unsigned int bus, - unsigned int cs); + unsigned int cs); void (*spi_cs_activate)(struct instance *i); void (*spi_cs_deactivate)(struct instance *i); void (*spi_set_speed)(struct instance *i, uint hz); diff --git a/doc/driver-model/UDM-stdio.txt b/doc/driver-model/UDM-stdio.txt index a6c484f..c0b1c90 100644 --- a/doc/driver-model/UDM-stdio.txt +++ b/doc/driver-model/UDM-stdio.txt @@ -17,29 +17,29 @@ Each device that wants to register with STDIO subsystem has to define struct stdio_dev, defined in include/stdio_dev.h and containing the following fields: struct stdio_dev { - int flags; /* Device flags: input/output/system */ - int ext; /* Supported extensions */ - char name[16]; /* Device name */ + int flags; /* Device flags: input/output/system 
*/ + int ext; /* Supported extensions */ + char name[16]; /* Device name */ /* GENERAL functions */ - int (*start) (void); /* To start the device */ - int (*stop) (void); /* To stop the device */ + int (*start) (void); /* To start the device */ + int (*stop) (void); /* To stop the device */ /* OUTPUT functions */ - void (*putc) (const char c); /* To put a char */ - void (*puts) (const char *s); /* To put a string (accelerator) */ + void (*putc) (const char c); /* To put a char */ + void (*puts) (const char *s); /* To put a string (accelerator) */ /* INPUT functions */ - int (*tstc) (void); /* To test if a char is ready... */ - int (*getc) (void); /* To get that char */ + int (*tstc) (void); /* To test if a char is ready... */ + int (*getc) (void); /* To get that char */ /* Other functions */ - void *priv; /* Private extensions */ - struct list_head list; + void *priv; /* Private extensions */ + struct list_head list; }; Currently used flags are DEV_FLAGS_INPUT, DEV_FLAGS_OUTPUT and DEV_FLAGS_SYSTEM, @@ -139,13 +139,13 @@ II) Approach purpose. The following flags will be defined: STDIO_FLG_STDIN ..... This device will be used as an input device. All input - from all devices with this flag set will be received + from all devices with this flag set will be received and passed to the upper layers. STDIO_FLG_STDOUT .... This device will be used as an output device. All - output sent to stdout will be routed to all devices + output sent to stdout will be routed to all devices with this flag set. STDIO_FLG_STDERR .... This device will be used as an standard error output - device. All output sent to stderr will be routed to + device. All output sent to stderr will be routed to all devices with this flag set. 
The "list" member of this structure allows to have a linked list of all diff --git a/doc/driver-model/UDM-tpm.txt b/doc/driver-model/UDM-tpm.txt index 91a953a..0beff4a 100644 --- a/doc/driver-model/UDM-tpm.txt +++ b/doc/driver-model/UDM-tpm.txt @@ -14,7 +14,7 @@ controlling it is very much based on this. The API is very simple: int tis_open(void); int tis_close(void); int tis_sendrecv(const u8 *sendbuf, size_t send_size, - u8 *recvbuf, size_t *recv_len); + u8 *recvbuf, size_t *recv_len); The command operating the TPM chip only provides operations to send and receive bytes from the chip. diff --git a/doc/driver-model/UDM-watchdog.txt b/doc/driver-model/UDM-watchdog.txt index 7db3286..7948e59 100644 --- a/doc/driver-model/UDM-watchdog.txt +++ b/doc/driver-model/UDM-watchdog.txt @@ -41,13 +41,13 @@ II) Approach In the UDM each watchdog driver would register itself by a function int watchdog_device_register(struct instance *i, - const struct watchdog_device_ops *o); + const struct watchdog_device_ops *o); The structure being defined as follows: struct watchdog_device_ops { - int (*disable)(struct instance *i); - void (*reset)(struct instance *i); + int (*disable)(struct instance *i); + void (*reset)(struct instance *i); }; The watchdog_init() function will be dissolved into probe() function. 
diff --git a/drivers/mtd/nand/fsl_elbc_spl.c b/drivers/mtd/nand/fsl_elbc_spl.c index a7476b4..2952135 100644 --- a/drivers/mtd/nand/fsl_elbc_spl.c +++ b/drivers/mtd/nand/fsl_elbc_spl.c @@ -59,20 +59,20 @@ static int nand_load_image(uint32_t offs, unsigned int uboot_size, void *vdst) if (large) { fmr |= FMR_ECCM; out_be32(®s->fcr, (NAND_CMD_READ0 << FCR_CMD0_SHIFT) | - (NAND_CMD_READSTART << FCR_CMD1_SHIFT)); + (NAND_CMD_READSTART << FCR_CMD1_SHIFT)); out_be32(®s->fir, - (FIR_OP_CW0 << FIR_OP0_SHIFT) | - (FIR_OP_CA << FIR_OP1_SHIFT) | - (FIR_OP_PA << FIR_OP2_SHIFT) | - (FIR_OP_CW1 << FIR_OP3_SHIFT) | - (FIR_OP_RBW << FIR_OP4_SHIFT)); + (FIR_OP_CW0 << FIR_OP0_SHIFT) | + (FIR_OP_CA << FIR_OP1_SHIFT) | + (FIR_OP_PA << FIR_OP2_SHIFT) | + (FIR_OP_CW1 << FIR_OP3_SHIFT) | + (FIR_OP_RBW << FIR_OP4_SHIFT)); } else { out_be32(®s->fcr, NAND_CMD_READ0 << FCR_CMD0_SHIFT); out_be32(®s->fir, - (FIR_OP_CW0 << FIR_OP0_SHIFT) | - (FIR_OP_CA << FIR_OP1_SHIFT) | - (FIR_OP_PA << FIR_OP2_SHIFT) | - (FIR_OP_RBW << FIR_OP3_SHIFT)); + (FIR_OP_CW0 << FIR_OP0_SHIFT) | + (FIR_OP_CA << FIR_OP1_SHIFT) | + (FIR_OP_PA << FIR_OP2_SHIFT) | + (FIR_OP_RBW << FIR_OP3_SHIFT)); } out_be32(®s->fbcr, 0); diff --git a/drivers/mtd/nand/fsl_upm.c b/drivers/mtd/nand/fsl_upm.c index a0fe1e0..3ae0044 100644 --- a/drivers/mtd/nand/fsl_upm.c +++ b/drivers/mtd/nand/fsl_upm.c @@ -112,10 +112,10 @@ static void fun_cmd_ctrl(struct mtd_info *mtd, int cmd, unsigned int ctrl) fsl_upm_run_pattern(&fun->upm, fun->width, io_addr, mar); /* - * Some boards/chips needs this. At least the MPC8360E-RDK - * needs it. Probably weird chip, because I don't see any - * need for this on MPC8555E + Samsung K9F1G08U0A. Usually - * here are 0-2 unexpected busy states per block read. + * Some boards/chips needs this. At least the MPC8360E-RDK + * needs it. Probably weird chip, because I don't see any + * need for this on MPC8555E + Samsung K9F1G08U0A. Usually + * here are 0-2 unexpected busy states per block read. 
*/ if (fun->wait_flags & FSL_UPM_WAIT_RUN_PATTERN) fun_wait(fun); diff --git a/drivers/mtd/nand/nand_base.c b/drivers/mtd/nand/nand_base.c index 9e05cef..3e3959f 100644 --- a/drivers/mtd/nand/nand_base.c +++ b/drivers/mtd/nand/nand_base.c @@ -3272,7 +3272,7 @@ int nand_scan_tail(struct mtd_info *mtd) case NAND_ECC_NONE: pr_warn("NAND_ECC_NONE selected by board driver. " - "This is not recommended !!\n"); + "This is not recommended !!\n"); chip->ecc.read_page = nand_read_page_raw; chip->ecc.write_page = nand_write_page_raw; chip->ecc.read_oob = nand_read_oob_std; diff --git a/drivers/mtd/nand/nand_util.c b/drivers/mtd/nand/nand_util.c index d149a6d..5246bbf 100644 --- a/drivers/mtd/nand/nand_util.c +++ b/drivers/mtd/nand/nand_util.c @@ -142,8 +142,8 @@ int nand_erase_opts(nand_info_t *meminfo, const nand_erase_options_t *opts) ops.mode = MTD_OPS_AUTO_OOB; result = mtd_write_oob(meminfo, - erase.addr, - &ops); + erase.addr, + &ops); if (result != 0) { printf("\n%s: MTD writeoob failure: %d\n", mtd_device, result); diff --git a/drivers/mtd/ubi/crc32.c b/drivers/mtd/ubi/crc32.c index ab439b3..f1bebf5 100644 --- a/drivers/mtd/ubi/crc32.c +++ b/drivers/mtd/ubi/crc32.c @@ -102,7 +102,7 @@ u32 crc32_le(u32 crc, unsigned char const *p, size_t len) if((len >= 4)){ /* load data 32 bits wide, xor data 32 bits wide. */ size_t save_len = len & 3; - len = len >> 2; + len = len >> 2; --b; /* use pre increment below(*++b) for speed */ do { crc ^= *++b; @@ -200,7 +200,7 @@ u32 __attribute_pure__ crc32_be(u32 crc, unsigned char const *p, size_t len) if(likely(len >= 4)){ /* load data 32 bits wide, xor data 32 bits wide. 
*/ size_t save_len = len & 3; - len = len >> 2; + len = len >> 2; --b; /* use pre increment below(*++b) for speed */ do { crc ^= *++b; diff --git a/drivers/net/dm9000x.c b/drivers/net/dm9000x.c index ffb8ea0..f7170e0 100644 --- a/drivers/net/dm9000x.c +++ b/drivers/net/dm9000x.c @@ -17,7 +17,7 @@ V0.11 06/20/2001 REG_0A bit3=1, default enable BP with DA match R17 = (R17 & 0xfff0) | NF v1.00 modify by simon 2001.9.5 - change for kernel 2.4.x + change for kernel 2.4.x v1.1 11/09/2001 fix force mode bug diff --git a/drivers/net/plb2800_eth.c b/drivers/net/plb2800_eth.c index 308bf8f..f869514 100644 --- a/drivers/net/plb2800_eth.c +++ b/drivers/net/plb2800_eth.c @@ -260,7 +260,7 @@ static int plb2800_eth_recv(struct eth_device *dev) printf("Received %d bytes\n", length); #endif NetReceive((void*)(NetRxPackets[rx_new]), - length - 4); + length - 4); } else { diff --git a/drivers/rtc/bfin_rtc.c b/drivers/rtc/bfin_rtc.c index 5de6953..21a2189 100644 --- a/drivers/rtc/bfin_rtc.c +++ b/drivers/rtc/bfin_rtc.c @@ -68,7 +68,7 @@ int rtc_set(struct rtc_time *tmp) /* Calculate number of seconds this incoming time represents */ remain = mktime(tmp->tm_year, tmp->tm_mon, tmp->tm_mday, - tmp->tm_hour, tmp->tm_min, tmp->tm_sec); + tmp->tm_hour, tmp->tm_min, tmp->tm_sec); /* Figure out how many days since epoch */ days = remain / NUM_SECS_IN_DAY; diff --git a/drivers/rtc/pl031.c b/drivers/rtc/pl031.c index 11cc719..c4d1259 100644 --- a/drivers/rtc/pl031.c +++ b/drivers/rtc/pl031.c @@ -73,7 +73,7 @@ int rtc_set(struct rtc_time *tmp) /* Calculate number of seconds this incoming time represents */ tim = mktime(tmp->tm_year, tmp->tm_mon, tmp->tm_mday, - tmp->tm_hour, tmp->tm_min, tmp->tm_sec); + tmp->tm_hour, tmp->tm_min, tmp->tm_sec); RTC_WRITE_REG(RTC_LR, tim); diff --git a/drivers/spi/mpc8xxx_spi.c b/drivers/spi/mpc8xxx_spi.c index 348361a..0d59c36 100644 --- a/drivers/spi/mpc8xxx_spi.c +++ b/drivers/spi/mpc8xxx_spi.c @@ -110,10 +110,10 @@ int spi_xfer(struct spi_slave *slave, 
unsigned int bitlen, const void *dout, if (bitlen <= 16) { if (bitlen <= 4) spi->mode = (spi->mode & 0xff0fffff) | - (3 << 20); + (3 << 20); else spi->mode = (spi->mode & 0xff0fffff) | - ((bitlen - 1) << 20); + ((bitlen - 1) << 20); } else { spi->mode = (spi->mode & 0xff0fffff); /* Set up the next iteration if sending > 32 bits */ diff --git a/drivers/spi/omap3_spi.c b/drivers/spi/omap3_spi.c index 3b38c34..e80be8e 100644 --- a/drivers/spi/omap3_spi.c +++ b/drivers/spi/omap3_spi.c @@ -50,7 +50,7 @@ static void omap3_spi_write_chconf(struct omap3_spi_slave *ds, int val) static void omap3_spi_set_enable(struct omap3_spi_slave *ds, int enable) { writel(enable, &ds->regs->channel[ds->slave.cs].chctrl); - /* Flash post writes to make immediate effect */ + /* Flash post writes to make immediate effect */ readl(&ds->regs->channel[ds->slave.cs].chctrl); } @@ -253,9 +253,9 @@ int omap3_spi_write(struct spi_slave *slave, unsigned int len, const u8 *txp, writel(txp[i], &ds->regs->channel[ds->slave.cs].tx); } - /* wait to finish of transfer */ - while (!(readl(&ds->regs->channel[ds->slave.cs].chstat) & - OMAP3_MCSPI_CHSTAT_EOT)); + /* wait to finish of transfer */ + while (!(readl(&ds->regs->channel[ds->slave.cs].chstat) & + OMAP3_MCSPI_CHSTAT_EOT)); /* Disable the channel otherwise the next immediate RX will get affected */ omap3_spi_set_enable(ds,OMAP3_MCSPI_CHCTRL_DIS); @@ -359,7 +359,7 @@ int omap3_spi_txrx(struct spi_slave *slave, rxp[i] = readl(&ds->regs->channel[ds->slave.cs].rx); } /* Disable the channel */ - omap3_spi_set_enable(ds,OMAP3_MCSPI_CHCTRL_DIS); + omap3_spi_set_enable(ds,OMAP3_MCSPI_CHCTRL_DIS); /*if transfer must be terminated disable the channel*/ if (flags & SPI_XFER_END) { diff --git a/drivers/video/ati_radeon_fb.h b/drivers/video/ati_radeon_fb.h index 0659045..9dd638b 100644 --- a/drivers/video/ati_radeon_fb.h +++ b/drivers/video/ati_radeon_fb.h @@ -92,7 +92,7 @@ static inline void radeon_engine_flush (struct radeonfb_info *rinfo) /* initiate flush */ 
OUTREGP(RB2D_DSTCACHE_CTLSTAT, RB2D_DC_FLUSH_ALL, - ~RB2D_DC_FLUSH_ALL); + ~RB2D_DC_FLUSH_ALL); for (i=0; i < 2000000; i++) { if (!(INREG(RB2D_DSTCACHE_CTLSTAT) & RB2D_DC_BUSY)) diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c index 9acf243..67f115f 100644 --- a/fs/ubifs/super.c +++ b/fs/ubifs/super.c @@ -771,7 +771,7 @@ static int mount_ubifs(struct ubifs_info *c) dbg_msg("node sizes: ref %zu, cmt. start %zu, orph %zu", UBIFS_REF_NODE_SZ, UBIFS_CS_NODE_SZ, UBIFS_ORPH_NODE_SZ); dbg_msg("max. node sizes: data %zu, inode %zu dentry %zu", - UBIFS_MAX_DATA_NODE_SZ, UBIFS_MAX_INO_NODE_SZ, + UBIFS_MAX_DATA_NODE_SZ, UBIFS_MAX_INO_NODE_SZ, UBIFS_MAX_DENT_NODE_SZ); dbg_msg("dead watermark: %d", c->dead_wm); dbg_msg("dark watermark: %d", c->dark_wm); diff --git a/include/configs/DU440.h b/include/configs/DU440.h index 0827113..5ffa6e4 100644 --- a/include/configs/DU440.h +++ b/include/configs/DU440.h @@ -223,7 +223,7 @@ "flash_self=run ramargs addip addtty optargs;" \ "bootm ${kernel_addr} ${ramdisk_addr}\0" \ "net_nfs=tftp 200000 ${img};run nfsargs addip addtty optargs;" \ - "bootm\0" \ + "bootm\0" \ "rootpath=/tftpboot/du440/target_root_du440\0" \ "img=/tftpboot/du440/uImage\0" \ "kernel_addr=FFC00000\0" \ diff --git a/include/configs/P3G4.h b/include/configs/P3G4.h index 6328ba9..354e9d2 100644 --- a/include/configs/P3G4.h +++ b/include/configs/P3G4.h @@ -85,7 +85,7 @@ "flash_self=run ramargs addip addtty;" \ "bootm ${kernel_addr} ${ramdisk_addr}\0" \ "net_nfs=tftp 200000 ${bootfile};run nfsargs addip addtty;" \ - "bootm\0" \ + "bootm\0" \ "rootpath=/opt/eldk/ppc_74xx\0" \ "bootfile=/tftpboot/p3g4/uImage\0" \ "kernel_addr=ff000000\0" \ diff --git a/include/configs/RBC823.h b/include/configs/RBC823.h index 3970dc3..7b10d28 100644 --- a/include/configs/RBC823.h +++ b/include/configs/RBC823.h @@ -262,8 +262,8 @@ */ #define SCCR_MASK SCCR_EBDF11 #define CONFIG_SYS_SCCR (SCCR_RTDIV | SCCR_RTSEL | SCCR_CRQEN | \ - SCCR_PRQEN | SCCR_EBDF00 | \ - SCCR_COM01 | SCCR_DFSYNC00 | 
SCCR_DFBRG00 | \ + SCCR_PRQEN | SCCR_EBDF00 | \ + SCCR_COM01 | SCCR_DFSYNC00 | SCCR_DFBRG00 | \ SCCR_DFNL000 | SCCR_DFNH000 | SCCR_DFLCD001 | \ SCCR_DFALCD00) @@ -344,7 +344,7 @@ #define CONFIG_SYS_OR1_PRELIM (CONFIG_SYS_PRELIM_OR_AM | CONFIG_SYS_OR_TIMING_MSYS) #define CONFIG_SYS_BR1_PRELIM ((FLASH_BASE1_PRELIM & BR_BA_MSK) | BR_MS_UPMB | \ - BR_PS_8 | BR_V) + BR_PS_8 | BR_V) /* * BR4 and OR4 (SDRAM) diff --git a/include/configs/alpr.h b/include/configs/alpr.h index 86b874c..f0a8962 100644 --- a/include/configs/alpr.h +++ b/include/configs/alpr.h @@ -148,7 +148,7 @@ "flash_self=run ramargs addip addtty;" \ "bootm ${kernel_addr} ${ramdisk_addr}\0" \ "net_nfs=tftp 200000 ${bootfile};run nfsargs addip addtty;" \ - "bootm\0" \ + "bootm\0" \ "net_nfs_fdt=tftp 200000 ${bootfile};" \ "tftp ${fdt_addr} ${fdt_file};" \ "run nfsargs addip addtty;" \ diff --git a/include/configs/devkit8000.h b/include/configs/devkit8000.h index baa2635..c1e996e 100644 --- a/include/configs/devkit8000.h +++ b/include/configs/devkit8000.h @@ -278,8 +278,8 @@ #define CONFIG_SYS_INIT_RAM_ADDR 0x4020f800 #define CONFIG_SYS_INIT_RAM_SIZE 0x800 #define CONFIG_SYS_INIT_SP_ADDR (CONFIG_SYS_INIT_RAM_ADDR + \ - CONFIG_SYS_INIT_RAM_SIZE - \ - GENERATED_GBL_DATA_SIZE) + CONFIG_SYS_INIT_RAM_SIZE - \ + GENERATED_GBL_DATA_SIZE) /* SRAM config */ #define CONFIG_SYS_SRAM_START 0x40200000 diff --git a/include/configs/p3mx.h b/include/configs/p3mx.h index d94e9c6..8157f47 100644 --- a/include/configs/p3mx.h +++ b/include/configs/p3mx.h @@ -194,7 +194,7 @@ "flash_self=run ramargs addip addtty;" \ "bootm ${kernel_addr} ${ramdisk_addr}\0" \ "net_nfs=tftp 200000 ${bootfile};run nfsargs addip addtty;" \ - "bootm\0" \ + "bootm\0" \ "rootpath=/opt/eldk/ppc_6xx\0" \ "u-boot=p3mx/u-boot/u-boot.bin\0" \ "load=tftp 100000 ${u-boot}\0" \ diff --git a/include/configs/p3p440.h b/include/configs/p3p440.h index 5e5adbc..f6cb813 100644 --- a/include/configs/p3p440.h +++ b/include/configs/p3p440.h @@ -129,7 +129,7 @@ 
"flash_self=run ramargs addip addtty;" \ "bootm ${kernel_addr} ${ramdisk_addr}\0" \ "net_nfs=tftp 200000 ${bootfile};run nfsargs addip addtty;" \ - "bootm\0" \ + "bootm\0" \ "rootpath=/opt/eldk/ppc_4xx\0" \ "bootfile=/tftpboot/p3p440/uImage\0" \ "kernel_addr=ff800000\0" \ diff --git a/include/configs/pcs440ep.h b/include/configs/pcs440ep.h index e397615..7cf22ba 100644 --- a/include/configs/pcs440ep.h +++ b/include/configs/pcs440ep.h @@ -157,7 +157,7 @@ "flash_self=run ramargs addip addtty;" \ "bootm ${kernel_addr} ${ramdisk_addr}\0" \ "net_nfs=tftp 200000 ${bootfile};run nfsargs addip addtty;" \ - "bootm\0" \ + "bootm\0" \ "rootpath=/opt/eldk/ppc_4xx\0" \ "bootfile=/tftpboot/pcs440ep/uImage\0" \ "kernel_addr=FFF00000\0" \ diff --git a/include/configs/pdnb3.h b/include/configs/pdnb3.h index c664010..d3e9017 100644 --- a/include/configs/pdnb3.h +++ b/include/configs/pdnb3.h @@ -131,7 +131,7 @@ "flash_self=run ramargs addip addtty;" \ "bootm ${kernel_addr} ${ramdisk_addr}\0" \ "net_nfs=tftp 200000 ${bootfile};run nfsargs addip addtty;" \ - "bootm\0" \ + "bootm\0" \ "rootpath=/opt/buildroot\0" \ "bootfile=/tftpboot/netbox/uImage\0" \ "kernel_addr=50080000\0" \ @@ -280,7 +280,7 @@ #define I2C_TRISTATE GPIO_OUTPUT_DISABLE(CONFIG_SYS_GPIO_I2C_SDA) #define I2C_READ ((*IXP425_GPIO_GPINR & PB_SDA) != 0) #define I2C_SDA(bit) if (bit) GPIO_OUTPUT_SET(CONFIG_SYS_GPIO_I2C_SDA); \ - else GPIO_OUTPUT_CLEAR(CONFIG_SYS_GPIO_I2C_SDA) + else GPIO_OUTPUT_CLEAR(CONFIG_SYS_GPIO_I2C_SDA) #define I2C_SCL(bit) if (bit) GPIO_OUTPUT_SET(CONFIG_SYS_GPIO_I2C_SCL); \ else GPIO_OUTPUT_CLEAR(CONFIG_SYS_GPIO_I2C_SCL) #define I2C_DELAY udelay(3) /* 1/4 I2C clock duration */ diff --git a/include/configs/uc100.h b/include/configs/uc100.h index 8bf7353..faae7ff 100644 --- a/include/configs/uc100.h +++ b/include/configs/uc100.h @@ -65,7 +65,7 @@ "flash_self=run ramargs addip addtty;" \ "bootm ${kernel_addr} ${ramdisk_addr}\0" \ "net_nfs=tftp 200000 ${bootfile};run nfsargs addip addtty;" \ - "bootm\0" \ 
+ "bootm\0" \ "rootpath=/opt/eldk/ppc_8xx\0" \ "bootfile=/tftpboot/uc100/uImage\0" \ "kernel_addr=40000000\0" \ diff --git a/include/configs/zeus.h b/include/configs/zeus.h index 8d0db5c..237fcb1 100644 --- a/include/configs/zeus.h +++ b/include/configs/zeus.h @@ -322,7 +322,7 @@ " ramdisk_size=${ramdisk_size}\0" \ "addip=setenv bootargs ${bootargs} " \ "ip=${ipaddr}:${serverip}:${gatewayip}:${netmask}" \ - ":${hostname}:${netdev}:off panic=1\0" \ + ":${hostname}:${netdev}:off panic=1\0" \ "addtty=setenv bootargs ${bootargs} console=ttyS0," \ "${baudrate}\0" \ "net_nfs=tftp ${kernel_mem_addr} ${file_kernel};" \ @@ -352,7 +352,7 @@ "file_fs=/zeus/rootfs_ba.img\0" \ "tftp_fs=tftp 100000 ${file_fs}\0" \ "update_fs=protect off ff300000 ff87ffff;era ff300000 ff87ffff;"\ - "cp.b 100000 ff300000 580000\0" \ + "cp.b 100000 ff300000 580000\0" \ "upd_fs=run tftp_fs;run update_fs\0" \ "bootcmd=chkreset;run ramargs addip addtty addmisc;" \ "bootm ${kernel_fl_addr} ${ramdisk_fl_addr}\0" \ diff --git a/include/ddr_spd.h b/include/ddr_spd.h index 9e74d87..f5809e5 100644 --- a/include/ddr_spd.h +++ b/include/ddr_spd.h @@ -39,7 +39,7 @@ typedef struct ddr1_spd_eeprom_s { unsigned char dev_attr; /* 22 SDRAM Device Attributes */ unsigned char clk_cycle2; /* 23 Min SDRAM Cycle time @ CL=X-0.5 */ unsigned char clk_access2; /* 24 SDRAM Access from - Clk @ CL=X-0.5 (tAC) */ + Clk @ CL=X-0.5 (tAC) */ unsigned char clk_cycle3; /* 25 Min SDRAM Cycle time @ CL=X-1 */ unsigned char clk_access3; /* 26 Max Access from Clk @ CL=X-1 (tAC) */ unsigned char trp; /* 27 Min Row Precharge Time (tRP)*/ @@ -112,9 +112,9 @@ typedef struct ddr2_spd_eeprom_s { unsigned char ca_setup; /* 32 Addr+Cmd Setup Time Before Clk (tIS) */ unsigned char ca_hold; /* 33 Addr+Cmd Hold Time After Clk (tIH) */ unsigned char data_setup; /* 34 Data Input Setup Time - Before Strobe (tDS) */ + Before Strobe (tDS) */ unsigned char data_hold; /* 35 Data Input Hold Time - After Strobe (tDH) */ + After Strobe (tDH) */ unsigned 
char twr; /* 36 Write Recovery time tWR */ unsigned char twtr; /* 37 Int write to read delay tWTR */ unsigned char trtp; /* 38 Int read to precharge delay tRTP */ @@ -128,40 +128,40 @@ typedef struct ddr2_spd_eeprom_s { unsigned char pll_relock; /* 46 PLL Relock time */ unsigned char Tcasemax; /* 47 Tcasemax */ unsigned char psiTAdram; /* 48 Thermal Resistance of DRAM Package from - Top (Case) to Ambient (Psi T-A DRAM) */ + Top (Case) to Ambient (Psi T-A DRAM) */ unsigned char dt0_mode; /* 49 DRAM Case Temperature Rise from Ambient - due to Activate-Precharge/Mode Bits + due to Activate-Precharge/Mode Bits (DT0/Mode Bits) */ unsigned char dt2n_dt2q; /* 50 DRAM Case Temperature Rise from Ambient - due to Precharge/Quiet Standby + due to Precharge/Quiet Standby (DT2N/DT2Q) */ unsigned char dt2p; /* 51 DRAM Case Temperature Rise from Ambient - due to Precharge Power-Down (DT2P) */ + due to Precharge Power-Down (DT2P) */ unsigned char dt3n; /* 52 DRAM Case Temperature Rise from Ambient - due to Active Standby (DT3N) */ + due to Active Standby (DT3N) */ unsigned char dt3pfast; /* 53 DRAM Case Temperature Rise from Ambient - due to Active Power-Down with + due to Active Power-Down with Fast PDN Exit (DT3Pfast) */ unsigned char dt3pslow; /* 54 DRAM Case Temperature Rise from Ambient - due to Active Power-Down with Slow + due to Active Power-Down with Slow PDN Exit (DT3Pslow) */ unsigned char dt4r_dt4r4w; /* 55 DRAM Case Temperature Rise from Ambient - due to Page Open Burst Read/DT4R4W + due to Page Open Burst Read/DT4R4W Mode Bit (DT4R/DT4R4W Mode Bit) */ unsigned char dt5b; /* 56 DRAM Case Temperature Rise from Ambient - due to Burst Refresh (DT5B) */ + due to Burst Refresh (DT5B) */ unsigned char dt7; /* 57 DRAM Case Temperature Rise from Ambient - due to Bank Interleave Reads with + due to Bank Interleave Reads with Auto-Precharge (DT7) */ unsigned char psiTApll; /* 58 Thermal Resistance of PLL Package form - Top (Case) to Ambient (Psi T-A PLL) */ + Top (Case) to 
Ambient (Psi T-A PLL) */ unsigned char psiTAreg; /* 59 Thermal Reisitance of Register Package - from Top (Case) to Ambient + from Top (Case) to Ambient (Psi T-A Register) */ unsigned char dtpllactive; /* 60 PLL Case Temperature Rise from Ambient - due to PLL Active (DT PLL Active) */ + due to PLL Active (DT PLL Active) */ unsigned char dtregact; /* 61 Register Case Temperature Rise from - Ambient due to Register Active/Mode Bit + Ambient due to Register Active/Mode Bit (DT Register Active/Mode Bit) */ unsigned char spd_rev; /* 62 SPD Data Revision Code */ unsigned char cksum; /* 63 Checksum for bytes 0-62 */ diff --git a/nand_spl/nand_boot_fsl_elbc.c b/nand_spl/nand_boot_fsl_elbc.c index 03e25f3..1afa1a2 100644 --- a/nand_spl/nand_boot_fsl_elbc.c +++ b/nand_spl/nand_boot_fsl_elbc.c @@ -126,7 +126,7 @@ void nand_boot(void) * Load U-Boot image from NAND into RAM */ nand_load(CONFIG_SYS_NAND_U_BOOT_OFFS, CONFIG_SYS_NAND_U_BOOT_SIZE, - (uchar *)CONFIG_SYS_NAND_U_BOOT_DST); + (uchar *)CONFIG_SYS_NAND_U_BOOT_DST); /* * Jump to U-Boot image diff --git a/post/lib_powerpc/andi.c b/post/lib_powerpc/andi.c index 878b2ca..8a4b89b 100644 --- a/post/lib_powerpc/andi.c +++ b/post/lib_powerpc/andi.c @@ -89,7 +89,7 @@ int cpu_post_test_andi (void) if (ret != 0) { - post_log ("Error at andi test %d !\n", i); + post_log ("Error at andi test %d !\n", i); } } } diff --git a/post/lib_powerpc/cpu_asm.h b/post/lib_powerpc/cpu_asm.h index 66adad8..b5c5889 100644 --- a/post/lib_powerpc/cpu_asm.h +++ b/post/lib_powerpc/cpu_asm.h @@ -112,64 +112,64 @@ #define ASM_0(opcode) (opcode) #define ASM_1(opcode, rd) ((opcode) + \ - ((rd) << 21)) + ((rd) << 21)) #define ASM_1C(opcode, cr) ((opcode) + \ - ((cr) << 23)) + ((cr) << 23)) #define ASM_11(opcode, rd, rs) ((opcode) + \ - ((rd) << 21) + \ + ((rd) << 21) + \ ((rs) << 16)) #define ASM_11C(opcode, cd, cs) ((opcode) + \ - ((cd) << 23) + \ + ((cd) << 23) + \ ((cs) << 18)) #define ASM_11X(opcode, rd, rs) ((opcode) + \ - ((rs) << 21) + \ + ((rs) << 
21) + \ ((rd) << 16)) #define ASM_11I(opcode, rd, rs, simm) ((opcode) + \ - ((rd) << 21) + \ + ((rd) << 21) + \ ((rs) << 16) + \ ((simm) & 0xffff)) #define ASM_11IF(opcode, rd, rs, simm) ((opcode) + \ - ((rd) << 21) + \ + ((rd) << 21) + \ ((rs) << 16) + \ ((simm) << 11)) #define ASM_11S(opcode, rd, rs, sh) ((opcode) + \ - ((rs) << 21) + \ + ((rs) << 21) + \ ((rd) << 16) + \ ((sh) << 11)) #define ASM_11IX(opcode, rd, rs, imm) ((opcode) + \ - ((rs) << 21) + \ + ((rs) << 21) + \ ((rd) << 16) + \ ((imm) & 0xffff)) #define ASM_12(opcode, rd, rs1, rs2) ((opcode) + \ - ((rd) << 21) + \ + ((rd) << 21) + \ ((rs1) << 16) + \ ((rs2) << 11)) #define ASM_12F(opcode, fd, fs1, fs2) ((opcode) + \ - ((fd) << 21) + \ + ((fd) << 21) + \ ((fs1) << 16) + \ ((fs2) << 11)) #define ASM_12X(opcode, rd, rs1, rs2) ((opcode) + \ - ((rs1) << 21) + \ + ((rs1) << 21) + \ ((rd) << 16) + \ ((rs2) << 11)) #define ASM_2C(opcode, cr, rs1, rs2) ((opcode) + \ - ((cr) << 23) + \ + ((cr) << 23) + \ ((rs1) << 16) + \ ((rs2) << 11)) #define ASM_1IC(opcode, cr, rs, imm) ((opcode) + \ - ((cr) << 23) + \ + ((cr) << 23) + \ ((rs) << 16) + \ ((imm) & 0xffff)) #define ASM_122(opcode, rd, rs1, rs2, imm1, imm2) \ ((opcode) + \ - ((rs1) << 21) + \ + ((rs1) << 21) + \ ((rd) << 16) + \ ((rs2) << 11) + \ ((imm1) << 6) + \ ((imm2) << 1)) #define ASM_113(opcode, rd, rs, imm1, imm2, imm3) \ ((opcode) + \ - ((rs) << 21) + \ + ((rs) << 21) + \ ((rd) << 16) + \ ((imm1) << 11) + \ ((imm2) << 6) + \ diff --git a/post/lib_powerpc/rlwimi.c b/post/lib_powerpc/rlwimi.c index eccf71d..6bd53d0 100644 --- a/post/lib_powerpc/rlwimi.c +++ b/post/lib_powerpc/rlwimi.c @@ -114,7 +114,7 @@ int cpu_post_test_rlwimi (void) if (ret != 0) { - post_log ("Error at rlwimi test %d !\n", i); + post_log ("Error at rlwimi test %d !\n", i); } } @@ -127,8 +127,8 @@ int cpu_post_test_rlwimi (void) if (ret != 0) { - post_log ("Error at rlwimi test %d !\n", i); - } + post_log ("Error at rlwimi test %d !\n", i); + } } } } diff --git 
a/post/lib_powerpc/rlwinm.c b/post/lib_powerpc/rlwinm.c index 5eaf30b..35a1a40 100644 --- a/post/lib_powerpc/rlwinm.c +++ b/post/lib_powerpc/rlwinm.c @@ -107,7 +107,7 @@ int cpu_post_test_rlwinm (void) if (ret != 0) { - post_log ("Error at rlwinm test %d !\n", i); + post_log ("Error at rlwinm test %d !\n", i); } } @@ -120,8 +120,8 @@ int cpu_post_test_rlwinm (void) if (ret != 0) { - post_log ("Error at rlwinm test %d !\n", i); - } + post_log ("Error at rlwinm test %d !\n", i); + } } } } diff --git a/post/lib_powerpc/rlwnm.c b/post/lib_powerpc/rlwnm.c index 83ee70b..6e82893 100644 --- a/post/lib_powerpc/rlwnm.c +++ b/post/lib_powerpc/rlwnm.c @@ -117,7 +117,7 @@ int cpu_post_test_rlwnm (void) if (ret != 0) { - post_log ("Error at rlwnm test %d !\n", i); + post_log ("Error at rlwnm test %d !\n", i); } } @@ -130,8 +130,8 @@ int cpu_post_test_rlwnm (void) if (ret != 0) { - post_log ("Error at rlwnm test %d !\n", i); - } + post_log ("Error at rlwnm test %d !\n", i); + } } } } diff --git a/post/lib_powerpc/srawi.c b/post/lib_powerpc/srawi.c index b276014..3723e33 100644 --- a/post/lib_powerpc/srawi.c +++ b/post/lib_powerpc/srawi.c @@ -108,7 +108,7 @@ int cpu_post_test_srawi (void) if (ret != 0) { - post_log ("Error at srawi test %d !\n", i); + post_log ("Error at srawi test %d !\n", i); } } @@ -121,8 +121,8 @@ int cpu_post_test_srawi (void) if (ret != 0) { - post_log ("Error at srawi test %d !\n", i); - } + post_log ("Error at srawi test %d !\n", i); + } } } } diff --git a/post/lib_powerpc/three.c b/post/lib_powerpc/three.c index f7317dd..c3e7f68 100644 --- a/post/lib_powerpc/three.c +++ b/post/lib_powerpc/three.c @@ -211,7 +211,7 @@ int cpu_post_test_three (void) if (ret != 0) { - post_log ("Error at three test %d !\n", i); + post_log ("Error at three test %d !\n", i); } } @@ -224,8 +224,8 @@ int cpu_post_test_three (void) if (ret != 0) { - post_log ("Error at three test %d !\n", i); - } + post_log ("Error at three test %d !\n", i); + } } } } diff --git 
a/post/lib_powerpc/threei.c b/post/lib_powerpc/threei.c index f6d9052..1d40e1a 100644 --- a/post/lib_powerpc/threei.c +++ b/post/lib_powerpc/threei.c @@ -103,7 +103,7 @@ int cpu_post_test_threei (void) if (ret != 0) { - post_log ("Error at threei test %d !\n", i); + post_log ("Error at threei test %d !\n", i); } } } diff --git a/post/lib_powerpc/threex.c b/post/lib_powerpc/threex.c index 906fefd..ce1edf8 100644 --- a/post/lib_powerpc/threex.c +++ b/post/lib_powerpc/threex.c @@ -181,7 +181,7 @@ int cpu_post_test_threex (void) if (ret != 0) { - post_log ("Error at threex test %d !\n", i); + post_log ("Error at threex test %d !\n", i); } } @@ -194,8 +194,8 @@ int cpu_post_test_threex (void) if (ret != 0) { - post_log ("Error at threex test %d !\n", i); - } + post_log ("Error at threex test %d !\n", i); + } } } } diff --git a/post/lib_powerpc/twox.c b/post/lib_powerpc/twox.c index 3a9b136..9549dbb 100644 --- a/post/lib_powerpc/twox.c +++ b/post/lib_powerpc/twox.c @@ -128,7 +128,7 @@ int cpu_post_test_twox (void) if (ret != 0) { - post_log ("Error at twox test %d !\n", i); + post_log ("Error at twox test %d !\n", i); } } @@ -141,8 +141,8 @@ int cpu_post_test_twox (void) if (ret != 0) { - post_log ("Error at twox test %d !\n", i); - } + post_log ("Error at twox test %d !\n", i); + } } } } diff --git a/test/compression.c b/test/compression.c index 8834d5e..139ea01 100644 --- a/test/compression.c +++ b/test/compression.c @@ -127,8 +127,8 @@ static int uncompress_using_gzip(void *in, unsigned long in_size, } static int compress_using_bzip2(void *in, unsigned long in_size, - void *out, unsigned long out_max, - unsigned long *out_size) + void *out, unsigned long out_max, + unsigned long *out_size) { /* There is no bzip2 compression in u-boot, so fake it. 
*/ assert(in_size == strlen(plain)); diff --git a/test/image/test-fit.py b/test/image/test-fit.py index 7394df7..33eb91d 100755 --- a/test/image/test-fit.py +++ b/test/image/test-fit.py @@ -32,49 +32,49 @@ base_its = ''' /dts-v1/; / { - description = "Chrome OS kernel image with one or more FDT blobs"; - #address-cells = <1>; - - images { - kernel@1 { - data = /incbin/("%(kernel)s"); - type = "kernel"; - arch = "sandbox"; - os = "linux"; - compression = "none"; - load = <0x40000>; - entry = <0x8>; - }; - fdt@1 { - description = "snow"; - data = /incbin/("u-boot.dtb"); - type = "flat_dt"; - arch = "sandbox"; - %(fdt_load)s - compression = "none"; - signature@1 { - algo = "sha1,rsa2048"; - key-name-hint = "dev"; - }; - }; - ramdisk@1 { - description = "snow"; - data = /incbin/("%(ramdisk)s"); - type = "ramdisk"; - arch = "sandbox"; - os = "linux"; - %(ramdisk_load)s - compression = "none"; - }; - }; - configurations { - default = "conf@1"; - conf@1 { - kernel = "kernel@1"; - fdt = "fdt@1"; - %(ramdisk_config)s - }; - }; + description = "Chrome OS kernel image with one or more FDT blobs"; + #address-cells = <1>; + + images { + kernel@1 { + data = /incbin/("%(kernel)s"); + type = "kernel"; + arch = "sandbox"; + os = "linux"; + compression = "none"; + load = <0x40000>; + entry = <0x8>; + }; + fdt@1 { + description = "snow"; + data = /incbin/("u-boot.dtb"); + type = "flat_dt"; + arch = "sandbox"; + %(fdt_load)s + compression = "none"; + signature@1 { + algo = "sha1,rsa2048"; + key-name-hint = "dev"; + }; + }; + ramdisk@1 { + description = "snow"; + data = /incbin/("%(ramdisk)s"); + type = "ramdisk"; + arch = "sandbox"; + os = "linux"; + %(ramdisk_load)s + compression = "none"; + }; + }; + configurations { + default = "conf@1"; + conf@1 { + kernel = "kernel@1"; + fdt = "fdt@1"; + %(ramdisk_config)s + }; + }; }; ''' @@ -83,8 +83,8 @@ base_fdt = ''' /dts-v1/; / { - model = "Sandbox Verified Boot Test"; - compatible = "sandbox"; + model = "Sandbox Verified Boot Test"; + 
compatible = "sandbox"; }; ''' @@ -107,9 +107,9 @@ def make_fname(leaf): """Make a temporary filename Args: - leaf: Leaf name of file to create (within temporary directory) + leaf: Leaf name of file to create (within temporary directory) Return: - Temporary filename + Temporary filename """ global base_dir @@ -119,9 +119,9 @@ def filesize(fname): """Get the size of a file Args: - fname: Filename to check + fname: Filename to check Return: - Size of file in bytes + Size of file in bytes """ return os.stat(fname).st_size @@ -129,23 +129,23 @@ def read_file(fname): """Read the contents of a file Args: - fname: Filename to read + fname: Filename to read Returns: - Contents of file as a string + Contents of file as a string """ with open(fname, 'r') as fd: - return fd.read() + return fd.read() def make_dtb(): """Make a sample .dts file and compile it to a .dtb Returns: - Filename of .dtb file created + Filename of .dtb file created """ src = make_fname('u-boot.dts') dtb = make_fname('u-boot.dtb') with open(src, 'w') as fd: - print >>fd, base_fdt + print >>fd, base_fdt command.Output('dtc', src, '-O', 'dtb', '-o', dtb) return dtb @@ -153,13 +153,13 @@ def make_its(params): """Make a sample .its file with parameters embedded Args: - params: Dictionary containing parameters to embed in the %() strings + params: Dictionary containing parameters to embed in the %() strings Returns: - Filename of .its file created + Filename of .its file created """ its = make_fname('test.its') with open(its, 'w') as fd: - print >>fd, base_its % params + print >>fd, base_its % params return its def make_fit(mkimage, params): @@ -169,44 +169,44 @@ def make_fit(mkimage, params): turn this into a .fit image. 
Args: - mkimage: Filename of 'mkimage' utility - params: Dictionary containing parameters to embed in the %() strings + mkimage: Filename of 'mkimage' utility + params: Dictionary containing parameters to embed in the %() strings Return: - Filename of .fit file created + Filename of .fit file created """ fit = make_fname('test.fit') its = make_its(params) command.Output(mkimage, '-f', its, fit) with open(make_fname('u-boot.dts'), 'w') as fd: - print >>fd, base_fdt + print >>fd, base_fdt return fit def make_kernel(): """Make a sample kernel with test data Returns: - Filename of kernel created + Filename of kernel created """ fname = make_fname('test-kernel.bin') data = '' for i in range(100): - data += 'this kernel %d is unlikely to boot\n' % i + data += 'this kernel %d is unlikely to boot\n' % i with open(fname, 'w') as fd: - print >>fd, data + print >>fd, data return fname def make_ramdisk(): """Make a sample ramdisk with test data Returns: - Filename of ramdisk created + Filename of ramdisk created """ fname = make_fname('test-ramdisk.bin') data = '' for i in range(100): - data += 'ramdisk %d was seldom used in the middle ages\n' % i + data += 'ramdisk %d was seldom used in the middle ages\n' % i with open(fname, 'w') as fd: - print >>fd, data + print >>fd, data return fname def find_matching(text, match): @@ -223,12 +223,12 @@ def find_matching(text, match): to use regex and return groups. 
Args: - text: Text to check (each line separated by \n) - match: String to search for + text: Text to check (each line separated by \n) + match: String to search for Return: - String containing unmatched portion of line + String containing unmatched portion of line Exceptions: - ValueError: If match is not found + ValueError: If match is not found >>> find_matching('first line:10\\nsecond_line:20', 'first line:') '10' @@ -240,9 +240,9 @@ def find_matching(text, match): '20' """ for line in text.splitlines(): - pos = line.find(match) - if pos != -1: - return line[:pos] + line[pos + len(match):] + pos = line.find(match) + if pos != -1: + return line[:pos] + line[pos + len(match):] print "Expected '%s' but not found in output:" print text @@ -252,7 +252,7 @@ def set_test(name): """Set the name of the current test and print a message Args: - name: Name of test + name: Name of test """ global test_name @@ -263,7 +263,7 @@ def fail(msg, stdout): """Raise an error with a helpful failure message Args: - msg: Message to display + msg: Message to display """ print stdout raise ValueError("Test '%s' failed: %s" % (test_name, msg)) @@ -276,13 +276,13 @@ def run_fit_test(mkimage, u_boot): - signature algorithms - invalid sig/contents should be detected - compression - checking that errors are detected like: - - image overwriting - - missing images - - invalid configurations - - incorrect os/arch/type fields - - empty data - - images too large/small - - invalid FDT (e.g. putting a random binary in instead) + - image overwriting + - missing images + - invalid configurations + - incorrect os/arch/type fields + - empty data + - images too large/small + - invalid FDT (e.g. 
putting a random binary in instead) - default configuration selection - bootm command line parameters should have desired effect - run code coverage to make sure we are testing all the code @@ -299,24 +299,24 @@ def run_fit_test(mkimage, u_boot): # Set up basic parameters with default values params = { - 'fit_addr' : 0x1000, - - 'kernel' : kernel, - 'kernel_out' : kernel_out, - 'kernel_addr' : 0x40000, - 'kernel_size' : filesize(kernel), - - 'fdt_out' : fdt_out, - 'fdt_addr' : 0x80000, - 'fdt_size' : filesize(control_dtb), - 'fdt_load' : '', - - 'ramdisk' : ramdisk, - 'ramdisk_out' : ramdisk_out, - 'ramdisk_addr' : 0xc0000, - 'ramdisk_size' : filesize(ramdisk), - 'ramdisk_load' : '', - 'ramdisk_config' : '', + 'fit_addr' : 0x1000, + + 'kernel' : kernel, + 'kernel_out' : kernel_out, + 'kernel_addr' : 0x40000, + 'kernel_size' : filesize(kernel), + + 'fdt_out' : fdt_out, + 'fdt_addr' : 0x80000, + 'fdt_size' : filesize(control_dtb), + 'fdt_load' : '', + + 'ramdisk' : ramdisk, + 'ramdisk_out' : ramdisk_out, + 'ramdisk_addr' : 0xc0000, + 'ramdisk_size' : filesize(ramdisk), + 'ramdisk_load' : '', + 'ramdisk_config' : '', } # Make a basic FIT and a script to load it @@ -329,11 +329,11 @@ def run_fit_test(mkimage, u_boot): set_test('Kernel load') stdout = command.Output(u_boot, '-d', control_dtb, '-c', cmd) if read_file(kernel) != read_file(kernel_out): - fail('Kernel not loaded', stdout) + fail('Kernel not loaded', stdout) if read_file(control_dtb) == read_file(fdt_out): - fail('FDT loaded but should be ignored', stdout) + fail('FDT loaded but should be ignored', stdout) if read_file(ramdisk) == read_file(ramdisk_out): - fail('Ramdisk loaded but should not be', stdout) + fail('Ramdisk loaded but should not be', stdout) # Find out the offset in the FIT where U-Boot has found the FDT line = find_matching(stdout, 'Booting using the fdt blob at ') @@ -344,8 +344,8 @@ def run_fit_test(mkimage, u_boot): # Now find where it actually is in the FIT (skip the first word) 
real_fit_offset = data.find(fdt_magic, 4) if fit_offset != real_fit_offset: - fail('U-Boot loaded FDT from offset %#x, FDT is actually at %#x' % - (fit_offset, real_fit_offset), stdout) + fail('U-Boot loaded FDT from offset %#x, FDT is actually at %#x' % + (fit_offset, real_fit_offset), stdout) # Now a kernel and an FDT set_test('Kernel + FDT load') @@ -353,11 +353,11 @@ def run_fit_test(mkimage, u_boot): fit = make_fit(mkimage, params) stdout = command.Output(u_boot, '-d', control_dtb, '-c', cmd) if read_file(kernel) != read_file(kernel_out): - fail('Kernel not loaded', stdout) + fail('Kernel not loaded', stdout) if read_file(control_dtb) != read_file(fdt_out): - fail('FDT not loaded', stdout) + fail('FDT not loaded', stdout) if read_file(ramdisk) == read_file(ramdisk_out): - fail('Ramdisk loaded but should not be', stdout) + fail('Ramdisk loaded but should not be', stdout) # Try a ramdisk set_test('Kernel + FDT + Ramdisk load') @@ -366,7 +366,7 @@ def run_fit_test(mkimage, u_boot): fit = make_fit(mkimage, params) stdout = command.Output(u_boot, '-d', control_dtb, '-c', cmd) if read_file(ramdisk) != read_file(ramdisk_out): - fail('Ramdisk not loaded', stdout) + fail('Ramdisk not loaded', stdout) def run_tests(): """Parse options, run the FIT tests and print the result""" @@ -376,12 +376,12 @@ def run_tests(): base_dir = tempfile.mkdtemp() parser = OptionParser() parser.add_option('-u', '--u-boot', - default=os.path.join(base_path, 'u-boot'), - help='Select U-Boot sandbox binary') + default=os.path.join(base_path, 'u-boot'), + help='Select U-Boot sandbox binary') parser.add_option('-k', '--keep', action='store_true', - help="Don't delete temporary directory even when tests pass") + help="Don't delete temporary directory even when tests pass") parser.add_option('-t', '--selftest', action='store_true', - help='Run internal self tests') + help='Run internal self tests') (options, args) = parser.parse_args() # Find the path to U-Boot, and assume mkimage is in its 
tools/mkimage dir @@ -390,8 +390,8 @@ def run_tests(): # There are a few doctests - handle these here if options.selftest: - doctest.testmod() - return + doctest.testmod() + return title = 'FIT Tests' print title, '\n', '=' * len(title) @@ -403,8 +403,8 @@ def run_tests(): # Remove the tempoerary directory unless we are asked to keep it if options.keep: - print "Output files are in '%s'" % base_dir + print "Output files are in '%s'" % base_dir else: - shutil.rmtree(base_dir) + shutil.rmtree(base_dir) run_tests() diff --git a/tools/Makefile b/tools/Makefile index c36cde2..ca76f94 100644 --- a/tools/Makefile +++ b/tools/Makefile @@ -162,7 +162,7 @@ HOSTCPPFLAGS = -include $(SRCTREE)/include/libfdt_env.h \ -idirafter $(SRCTREE)/include \ -idirafter $(OBJTREE)/include2 \ -idirafter $(OBJTREE)/include \ - -I $(SRCTREE)/lib/libfdt \ + -I $(SRCTREE)/lib/libfdt \ -I $(SRCTREE)/tools \ -DCONFIG_SYS_TEXT_BASE=$(CONFIG_SYS_TEXT_BASE) \ -DUSE_HOSTCC \ diff --git a/tools/bddb/defs.php b/tools/bddb/defs.php index 39605ab..0b50602 100644 --- a/tools/bddb/defs.php +++ b/tools/bddb/defs.php @@ -60,7 +60,7 @@ // CPU types $cputyp_vals = array('','MPC8260(HIP3)','MPC8260A(HIP4)','MPC8280(HIP7)','MPC8560'); - // CPU/BUS/CPM clock speeds + // CPU/BUS/CPM clock speeds $clk_vals = array('','33MHZ','66MHZ','100MHZ','133MHZ','166MHZ','200MHZ','233MHZ','266MHZ','300MHZ','333MHZ','366MHZ','400MHZ','433MHZ','466MHZ','500MHZ','533MHZ','566MHZ','600MHZ','633MHZ','666MHZ','700MHZ','733MHZ','766MHZ','800MHZ','833MHZ','866MHZ','900MHZ','933MHZ','966MHZ','1000MHZ','1033MHZ','1066MHZ','1100MHZ','1133MHZ','1166MHZ','1200MHZ','1233MHZ','1266MHZ','1300MHZ','1333MHZ'); // sdram sizes (nbits array is for eeprom config file) @@ -178,7 +178,7 @@ function enum_to_index($name, $vals) { $index = array_search($GLOBALS[$name], $vals); if ($vals[0] != '') - $index++; + $index++; return $index; } diff --git a/tools/buildman/README b/tools/buildman/README index f63f278..b6b771c 100644 --- a/tools/buildman/README 
+++ b/tools/buildman/README @@ -149,50 +149,50 @@ Scanning for tool chains - looking in '/.' - looking in '/bin' - looking in '/usr/bin' - - found '/usr/bin/gcc' + - found '/usr/bin/gcc' Tool chain test: OK - - found '/usr/bin/c89-gcc' + - found '/usr/bin/c89-gcc' Tool chain test: OK - - found '/usr/bin/c99-gcc' + - found '/usr/bin/c99-gcc' Tool chain test: OK - - found '/usr/bin/x86_64-linux-gnu-gcc' + - found '/usr/bin/x86_64-linux-gnu-gcc' Tool chain test: OK - scanning path '/toolchains/powerpc-linux' - looking in '/toolchains/powerpc-linux/.' - looking in '/toolchains/powerpc-linux/bin' - - found '/toolchains/powerpc-linux/bin/powerpc-linux-gcc' + - found '/toolchains/powerpc-linux/bin/powerpc-linux-gcc' Tool chain test: OK - looking in '/toolchains/powerpc-linux/usr/bin' - scanning path '/toolchains/nds32le-linux-glibc-v1f' - looking in '/toolchains/nds32le-linux-glibc-v1f/.' - looking in '/toolchains/nds32le-linux-glibc-v1f/bin' - - found '/toolchains/nds32le-linux-glibc-v1f/bin/nds32le-linux-gcc' + - found '/toolchains/nds32le-linux-glibc-v1f/bin/nds32le-linux-gcc' Tool chain test: OK - looking in '/toolchains/nds32le-linux-glibc-v1f/usr/bin' - scanning path '/toolchains/nios2' - looking in '/toolchains/nios2/.' - looking in '/toolchains/nios2/bin' - - found '/toolchains/nios2/bin/nios2-linux-gcc' + - found '/toolchains/nios2/bin/nios2-linux-gcc' Tool chain test: OK - - found '/toolchains/nios2/bin/nios2-linux-uclibc-gcc' + - found '/toolchains/nios2/bin/nios2-linux-uclibc-gcc' Tool chain test: OK - looking in '/toolchains/nios2/usr/bin' - - found '/toolchains/nios2/usr/bin/nios2-linux-gcc' + - found '/toolchains/nios2/usr/bin/nios2-linux-gcc' Tool chain test: OK - - found '/toolchains/nios2/usr/bin/nios2-linux-uclibc-gcc' + - found '/toolchains/nios2/usr/bin/nios2-linux-uclibc-gcc' Tool chain test: OK - scanning path '/toolchains/microblaze-unknown-linux-gnu' - looking in '/toolchains/microblaze-unknown-linux-gnu/.' 
- looking in '/toolchains/microblaze-unknown-linux-gnu/bin' - - found '/toolchains/microblaze-unknown-linux-gnu/bin/microblaze-unknown-linux-gnu-gcc' + - found '/toolchains/microblaze-unknown-linux-gnu/bin/microblaze-unknown-linux-gnu-gcc' Tool chain test: OK - - found '/toolchains/microblaze-unknown-linux-gnu/bin/mb-linux-gcc' + - found '/toolchains/microblaze-unknown-linux-gnu/bin/mb-linux-gcc' Tool chain test: OK - looking in '/toolchains/microblaze-unknown-linux-gnu/usr/bin' - scanning path '/toolchains/mips-linux' - looking in '/toolchains/mips-linux/.' - looking in '/toolchains/mips-linux/bin' - - found '/toolchains/mips-linux/bin/mips-linux-gcc' + - found '/toolchains/mips-linux/bin/mips-linux-gcc' Tool chain test: OK - looking in '/toolchains/mips-linux/usr/bin' - scanning path '/toolchains/old' @@ -202,25 +202,25 @@ Tool chain test: OK - scanning path '/toolchains/i386-linux' - looking in '/toolchains/i386-linux/.' - looking in '/toolchains/i386-linux/bin' - - found '/toolchains/i386-linux/bin/i386-linux-gcc' + - found '/toolchains/i386-linux/bin/i386-linux-gcc' Tool chain test: OK - looking in '/toolchains/i386-linux/usr/bin' - scanning path '/toolchains/bfin-uclinux' - looking in '/toolchains/bfin-uclinux/.' - looking in '/toolchains/bfin-uclinux/bin' - - found '/toolchains/bfin-uclinux/bin/bfin-uclinux-gcc' + - found '/toolchains/bfin-uclinux/bin/bfin-uclinux-gcc' Tool chain test: OK - looking in '/toolchains/bfin-uclinux/usr/bin' - scanning path '/toolchains/sparc-elf' - looking in '/toolchains/sparc-elf/.' - looking in '/toolchains/sparc-elf/bin' - - found '/toolchains/sparc-elf/bin/sparc-elf-gcc' + - found '/toolchains/sparc-elf/bin/sparc-elf-gcc' Tool chain test: OK - looking in '/toolchains/sparc-elf/usr/bin' - scanning path '/toolchains/arm-2010q1' - looking in '/toolchains/arm-2010q1/.' 
- looking in '/toolchains/arm-2010q1/bin' - - found '/toolchains/arm-2010q1/bin/arm-none-linux-gnueabi-gcc' + - found '/toolchains/arm-2010q1/bin/arm-none-linux-gnueabi-gcc' Tool chain test: OK - looking in '/toolchains/arm-2010q1/usr/bin' - scanning path '/toolchains/from' @@ -230,19 +230,19 @@ Tool chain test: OK - scanning path '/toolchains/sh4-gentoo-linux-gnu' - looking in '/toolchains/sh4-gentoo-linux-gnu/.' - looking in '/toolchains/sh4-gentoo-linux-gnu/bin' - - found '/toolchains/sh4-gentoo-linux-gnu/bin/sh4-gentoo-linux-gnu-gcc' + - found '/toolchains/sh4-gentoo-linux-gnu/bin/sh4-gentoo-linux-gnu-gcc' Tool chain test: OK - looking in '/toolchains/sh4-gentoo-linux-gnu/usr/bin' - scanning path '/toolchains/avr32-linux' - looking in '/toolchains/avr32-linux/.' - looking in '/toolchains/avr32-linux/bin' - - found '/toolchains/avr32-linux/bin/avr32-gcc' + - found '/toolchains/avr32-linux/bin/avr32-gcc' Tool chain test: OK - looking in '/toolchains/avr32-linux/usr/bin' - scanning path '/toolchains/m68k-linux' - looking in '/toolchains/m68k-linux/.' - looking in '/toolchains/m68k-linux/bin' - - found '/toolchains/m68k-linux/bin/m68k-linux-gcc' + - found '/toolchains/m68k-linux/bin/m68k-linux-gcc' Tool chain test: OK - looking in '/toolchains/m68k-linux/usr/bin' List of available toolchains (17): @@ -417,12 +417,12 @@ The full build output in this case is available in: ../lcd9b/12_of_18_gd92aff7_lcd--Add-support-for/lubbock/ done: Indicates the build was done, and holds the return code from make. - This is 0 for a good build, typically 2 for a failure. + This is 0 for a good build, typically 2 for a failure. err: Output from stderr, if any. Errors and warnings appear here. log: Output from stdout. Normally there isn't any since buildman runs - in silent mode for now. + in silent mode for now. toolchain: Shows information about the toolchain used for the build. @@ -492,118 +492,118 @@ $ ./tools/buildman/buildman -b us-mem4 -sSdB ... 
19: Roll crc32 into hash infrastructure arm: (for 10/10 boards) all -143.4 bss +1.2 data -4.8 rodata -48.2 text -91.6 - paz00 : all +23 bss -4 rodata -29 text +56 - u-boot: add: 1/0, grow: 3/-2 bytes: 168/-104 (64) - function old new delta - hash_command 80 160 +80 - crc32_wd_buf - 56 +56 - ext4fs_read_file 540 568 +28 - insert_var_value_sub 688 692 +4 - run_list_real 1996 1992 -4 - do_mem_crc 168 68 -100 - trimslice : all -9 bss +16 rodata -29 text +4 - u-boot: add: 1/0, grow: 1/-3 bytes: 136/-124 (12) - function old new delta - hash_command 80 160 +80 - crc32_wd_buf - 56 +56 - ext4fs_iterate_dir 672 668 -4 - ext4fs_read_file 568 548 -20 - do_mem_crc 168 68 -100 - whistler : all -9 bss +16 rodata -29 text +4 - u-boot: add: 1/0, grow: 1/-3 bytes: 136/-124 (12) - function old new delta - hash_command 80 160 +80 - crc32_wd_buf - 56 +56 - ext4fs_iterate_dir 672 668 -4 - ext4fs_read_file 568 548 -20 - do_mem_crc 168 68 -100 - seaboard : all -9 bss -28 rodata -29 text +48 - u-boot: add: 1/0, grow: 3/-2 bytes: 160/-104 (56) - function old new delta - hash_command 80 160 +80 - crc32_wd_buf - 56 +56 - ext4fs_read_file 548 568 +20 - run_list_real 1996 2000 +4 - do_nandboot 760 756 -4 - do_mem_crc 168 68 -100 - colibri_t20_iris: all -9 rodata -29 text +20 - u-boot: add: 1/0, grow: 2/-3 bytes: 140/-112 (28) - function old new delta - hash_command 80 160 +80 - crc32_wd_buf - 56 +56 - read_abs_bbt 204 208 +4 - do_nandboot 760 756 -4 - ext4fs_read_file 576 568 -8 - do_mem_crc 168 68 -100 - ventana : all -37 bss -12 rodata -29 text +4 - u-boot: add: 1/0, grow: 1/-3 bytes: 136/-124 (12) - function old new delta - hash_command 80 160 +80 - crc32_wd_buf - 56 +56 - ext4fs_iterate_dir 672 668 -4 - ext4fs_read_file 568 548 -20 - do_mem_crc 168 68 -100 - harmony : all -37 bss -16 rodata -29 text +8 - u-boot: add: 1/0, grow: 2/-3 bytes: 140/-124 (16) - function old new delta - hash_command 80 160 +80 - crc32_wd_buf - 56 +56 - nand_write_oob_syndrome 428 432 +4 - ext4fs_iterate_dir 672 
668 -4 - ext4fs_read_file 568 548 -20 - do_mem_crc 168 68 -100 - medcom-wide : all -417 bss +28 data -16 rodata -93 text -336 - u-boot: add: 1/-1, grow: 1/-2 bytes: 88/-376 (-288) - function old new delta - crc32_wd_buf - 56 +56 - do_fat_read_at 2872 2904 +32 - hash_algo 16 - -16 - do_mem_crc 168 68 -100 - hash_command 420 160 -260 - tec : all -449 bss -4 data -16 rodata -93 text -336 - u-boot: add: 1/-1, grow: 1/-2 bytes: 88/-376 (-288) - function old new delta - crc32_wd_buf - 56 +56 - do_fat_read_at 2872 2904 +32 - hash_algo 16 - -16 - do_mem_crc 168 68 -100 - hash_command 420 160 -260 - plutux : all -481 bss +16 data -16 rodata -93 text -388 - u-boot: add: 1/-1, grow: 1/-3 bytes: 68/-408 (-340) - function old new delta - crc32_wd_buf - 56 +56 - do_load_serial_bin 1688 1700 +12 - hash_algo 16 - -16 - do_fat_read_at 2904 2872 -32 - do_mem_crc 168 68 -100 - hash_command 420 160 -260 + paz00 : all +23 bss -4 rodata -29 text +56 + u-boot: add: 1/0, grow: 3/-2 bytes: 168/-104 (64) + function old new delta + hash_command 80 160 +80 + crc32_wd_buf - 56 +56 + ext4fs_read_file 540 568 +28 + insert_var_value_sub 688 692 +4 + run_list_real 1996 1992 -4 + do_mem_crc 168 68 -100 + trimslice : all -9 bss +16 rodata -29 text +4 + u-boot: add: 1/0, grow: 1/-3 bytes: 136/-124 (12) + function old new delta + hash_command 80 160 +80 + crc32_wd_buf - 56 +56 + ext4fs_iterate_dir 672 668 -4 + ext4fs_read_file 568 548 -20 + do_mem_crc 168 68 -100 + whistler : all -9 bss +16 rodata -29 text +4 + u-boot: add: 1/0, grow: 1/-3 bytes: 136/-124 (12) + function old new delta + hash_command 80 160 +80 + crc32_wd_buf - 56 +56 + ext4fs_iterate_dir 672 668 -4 + ext4fs_read_file 568 548 -20 + do_mem_crc 168 68 -100 + seaboard : all -9 bss -28 rodata -29 text +48 + u-boot: add: 1/0, grow: 3/-2 bytes: 160/-104 (56) + function old new delta + hash_command 80 160 +80 + crc32_wd_buf - 56 +56 + ext4fs_read_file 548 568 +20 + run_list_real 1996 2000 +4 + do_nandboot 760 756 -4 + do_mem_crc 168 68 -100 + 
colibri_t20_iris: all -9 rodata -29 text +20 + u-boot: add: 1/0, grow: 2/-3 bytes: 140/-112 (28) + function old new delta + hash_command 80 160 +80 + crc32_wd_buf - 56 +56 + read_abs_bbt 204 208 +4 + do_nandboot 760 756 -4 + ext4fs_read_file 576 568 -8 + do_mem_crc 168 68 -100 + ventana : all -37 bss -12 rodata -29 text +4 + u-boot: add: 1/0, grow: 1/-3 bytes: 136/-124 (12) + function old new delta + hash_command 80 160 +80 + crc32_wd_buf - 56 +56 + ext4fs_iterate_dir 672 668 -4 + ext4fs_read_file 568 548 -20 + do_mem_crc 168 68 -100 + harmony : all -37 bss -16 rodata -29 text +8 + u-boot: add: 1/0, grow: 2/-3 bytes: 140/-124 (16) + function old new delta + hash_command 80 160 +80 + crc32_wd_buf - 56 +56 + nand_write_oob_syndrome 428 432 +4 + ext4fs_iterate_dir 672 668 -4 + ext4fs_read_file 568 548 -20 + do_mem_crc 168 68 -100 + medcom-wide : all -417 bss +28 data -16 rodata -93 text -336 + u-boot: add: 1/-1, grow: 1/-2 bytes: 88/-376 (-288) + function old new delta + crc32_wd_buf - 56 +56 + do_fat_read_at 2872 2904 +32 + hash_algo 16 - -16 + do_mem_crc 168 68 -100 + hash_command 420 160 -260 + tec : all -449 bss -4 data -16 rodata -93 text -336 + u-boot: add: 1/-1, grow: 1/-2 bytes: 88/-376 (-288) + function old new delta + crc32_wd_buf - 56 +56 + do_fat_read_at 2872 2904 +32 + hash_algo 16 - -16 + do_mem_crc 168 68 -100 + hash_command 420 160 -260 + plutux : all -481 bss +16 data -16 rodata -93 text -388 + u-boot: add: 1/-1, grow: 1/-3 bytes: 68/-408 (-340) + function old new delta + crc32_wd_buf - 56 +56 + do_load_serial_bin 1688 1700 +12 + hash_algo 16 - -16 + do_fat_read_at 2904 2872 -32 + do_mem_crc 168 68 -100 + hash_command 420 160 -260 powerpc: (for 5/5 boards) all +37.4 data -3.2 rodata -41.8 text +82.4 - MPC8610HPCD : all +55 rodata -29 text +84 - u-boot: add: 1/0, grow: 0/-1 bytes: 176/-96 (80) - function old new delta - hash_command - 176 +176 - do_mem_crc 184 88 -96 - MPC8641HPCN : all +55 rodata -29 text +84 - u-boot: add: 1/0, grow: 0/-1 bytes: 
176/-96 (80) - function old new delta - hash_command - 176 +176 - do_mem_crc 184 88 -96 - MPC8641HPCN_36BIT: all +55 rodata -29 text +84 - u-boot: add: 1/0, grow: 0/-1 bytes: 176/-96 (80) - function old new delta - hash_command - 176 +176 - do_mem_crc 184 88 -96 - sbc8641d : all +55 rodata -29 text +84 - u-boot: add: 1/0, grow: 0/-1 bytes: 176/-96 (80) - function old new delta - hash_command - 176 +176 - do_mem_crc 184 88 -96 - xpedite517x : all -33 data -16 rodata -93 text +76 - u-boot: add: 1/-1, grow: 0/-1 bytes: 176/-112 (64) - function old new delta - hash_command - 176 +176 - hash_algo 16 - -16 - do_mem_crc 184 88 -96 + MPC8610HPCD : all +55 rodata -29 text +84 + u-boot: add: 1/0, grow: 0/-1 bytes: 176/-96 (80) + function old new delta + hash_command - 176 +176 + do_mem_crc 184 88 -96 + MPC8641HPCN : all +55 rodata -29 text +84 + u-boot: add: 1/0, grow: 0/-1 bytes: 176/-96 (80) + function old new delta + hash_command - 176 +176 + do_mem_crc 184 88 -96 + MPC8641HPCN_36BIT: all +55 rodata -29 text +84 + u-boot: add: 1/0, grow: 0/-1 bytes: 176/-96 (80) + function old new delta + hash_command - 176 +176 + do_mem_crc 184 88 -96 + sbc8641d : all +55 rodata -29 text +84 + u-boot: add: 1/0, grow: 0/-1 bytes: 176/-96 (80) + function old new delta + hash_command - 176 +176 + do_mem_crc 184 88 -96 + xpedite517x : all -33 data -16 rodata -93 text +76 + u-boot: add: 1/-1, grow: 0/-1 bytes: 176/-112 (64) + function old new delta + hash_command - 176 +176 + hash_algo 16 - -16 + do_mem_crc 184 88 -96 ... @@ -617,7 +617,7 @@ is the sizes for each function. This information starts with: add - number of functions added / removed grow - number of functions which grew / shrunk bytes - number of bytes of code added to / removed from all functions, - plus the total byte change in brackets + plus the total byte change in brackets The change seems to be that hash_command() has increased by more than the do_mem_crc() function has decreased. 
The function sizes typically add up to diff --git a/tools/buildman/board.py b/tools/buildman/board.py index 1d3db20..cc2a545 100644 --- a/tools/buildman/board.py +++ b/tools/buildman/board.py @@ -6,149 +6,149 @@ class Board: """A particular board that we can build""" def __init__(self, status, arch, cpu, soc, vendor, board_name, target, options): - """Create a new board type. - - Args: - status: define whether the board is 'Active' or 'Orphaned' - arch: Architecture name (e.g. arm) - cpu: Cpu name (e.g. arm1136) - soc: Name of SOC, or '' if none (e.g. mx31) - vendor: Name of vendor (e.g. armltd) - board_name: Name of board (e.g. integrator) - target: Target name (use make _config to configure) - options: board-specific options (e.g. integratorcp:CM1136) - """ - self.target = target - self.arch = arch - self.cpu = cpu - self.board_name = board_name - self.vendor = vendor - self.soc = soc - self.props = [self.target, self.arch, self.cpu, self.board_name, - self.vendor, self.soc] - self.options = options - self.build_it = False + """Create a new board type. + + Args: + status: define whether the board is 'Active' or 'Orphaned' + arch: Architecture name (e.g. arm) + cpu: Cpu name (e.g. arm1136) + soc: Name of SOC, or '' if none (e.g. mx31) + vendor: Name of vendor (e.g. armltd) + board_name: Name of board (e.g. integrator) + target: Target name (use make _config to configure) + options: board-specific options (e.g. 
integratorcp:CM1136) + """ + self.target = target + self.arch = arch + self.cpu = cpu + self.board_name = board_name + self.vendor = vendor + self.soc = soc + self.props = [self.target, self.arch, self.cpu, self.board_name, + self.vendor, self.soc] + self.options = options + self.build_it = False class Boards: """Manage a list of boards.""" def __init__(self): - # Use a simple list here, sinc OrderedDict requires Python 2.7 - self._boards = [] + # Use a simple list here, sinc OrderedDict requires Python 2.7 + self._boards = [] def AddBoard(self, board): - """Add a new board to the list. + """Add a new board to the list. - The board's target member must not already exist in the board list. + The board's target member must not already exist in the board list. - Args: - board: board to add - """ - self._boards.append(board) + Args: + board: board to add + """ + self._boards.append(board) def ReadBoards(self, fname): - """Read a list of boards from a board file. - - Create a board object for each and add it to our _boards list. - - Args: - fname: Filename of boards.cfg file - """ - with open(fname, 'r') as fd: - for line in fd: - if line[0] == '#': - continue - fields = line.split() - if not fields: - continue - for upto in range(len(fields)): - if fields[upto] == '-': - fields[upto] = '' - while len(fields) < 8: - fields.append('') - if len(fields) > 8: - fields = fields[:8] - - board = Board(*fields) - self.AddBoard(board) + """Read a list of boards from a board file. + + Create a board object for each and add it to our _boards list. 
+ + Args: + fname: Filename of boards.cfg file + """ + with open(fname, 'r') as fd: + for line in fd: + if line[0] == '#': + continue + fields = line.split() + if not fields: + continue + for upto in range(len(fields)): + if fields[upto] == '-': + fields[upto] = '' + while len(fields) < 8: + fields.append('') + if len(fields) > 8: + fields = fields[:8] + + board = Board(*fields) + self.AddBoard(board) def GetList(self): - """Return a list of available boards. + """Return a list of available boards. - Returns: - List of Board objects - """ - return self._boards + Returns: + List of Board objects + """ + return self._boards def GetDict(self): - """Build a dictionary containing all the boards. - - Returns: - Dictionary: - key is board.target - value is board - """ - board_dict = {} - for board in self._boards: - board_dict[board.target] = board - return board_dict + """Build a dictionary containing all the boards. + + Returns: + Dictionary: + key is board.target + value is board + """ + board_dict = {} + for board in self._boards: + board_dict[board.target] = board + return board_dict def GetSelectedDict(self): - """Return a dictionary containing the selected boards + """Return a dictionary containing the selected boards - Returns: - List of Board objects that are marked selected - """ - board_dict = {} - for board in self._boards: - if board.build_it: - board_dict[board.target] = board - return board_dict + Returns: + List of Board objects that are marked selected + """ + board_dict = {} + for board in self._boards: + if board.build_it: + board_dict[board.target] = board + return board_dict def GetSelected(self): - """Return a list of selected boards + """Return a list of selected boards - Returns: - List of Board objects that are marked selected - """ - return [board for board in self._boards if board.build_it] + Returns: + List of Board objects that are marked selected + """ + return [board for board in self._boards if board.build_it] def GetSelectedNames(self): - 
"""Return a list of selected boards + """Return a list of selected boards - Returns: - List of board names that are marked selected - """ - return [board.target for board in self._boards if board.build_it] + Returns: + List of board names that are marked selected + """ + return [board.target for board in self._boards if board.build_it] def SelectBoards(self, args): - """Mark boards selected based on args - - Args: - List of strings specifying boards to include, either named, or - by their target, architecture, cpu, vendor or soc. If empty, all - boards are selected. - - Returns: - Dictionary which holds the number of boards which were selected - due to each argument, arranged by argument. - """ - result = {} - for arg in args: - result[arg] = 0 - result['all'] = 0 - - for board in self._boards: - if args: - for arg in args: - if arg in board.props: - if not board.build_it: - board.build_it = True - result[arg] += 1 - result['all'] += 1 - else: - board.build_it = True - result['all'] += 1 - - return result + """Mark boards selected based on args + + Args: + List of strings specifying boards to include, either named, or + by their target, architecture, cpu, vendor or soc. If empty, all + boards are selected. + + Returns: + Dictionary which holds the number of boards which were selected + due to each argument, arranged by argument. 
+ """ + result = {} + for arg in args: + result[arg] = 0 + result['all'] = 0 + + for board in self._boards: + if args: + for arg in args: + if arg in board.props: + if not board.build_it: + board.build_it = True + result[arg] += 1 + result['all'] += 1 + else: + board.build_it = True + result['all'] += 1 + + return result diff --git a/tools/buildman/bsettings.py b/tools/buildman/bsettings.py index 9164798..3e001f6 100644 --- a/tools/buildman/bsettings.py +++ b/tools/buildman/bsettings.py @@ -11,7 +11,7 @@ def Setup(fname=''): """Set up the buildman settings module by reading config files Args: - config_fname: Config filename to read ('' for default) + config_fname: Config filename to read ('' for default) """ global settings global config_fname @@ -19,23 +19,23 @@ def Setup(fname=''): settings = ConfigParser.SafeConfigParser() config_fname = fname if config_fname == '': - config_fname = '%s/.buildman' % os.getenv('HOME') + config_fname = '%s/.buildman' % os.getenv('HOME') if config_fname: - settings.read(config_fname) + settings.read(config_fname) def GetItems(section): """Get the items from a section of the config. 
Args: - section: name of section to retrieve + section: name of section to retrieve Returns: - List of (name, value) tuples for the section + List of (name, value) tuples for the section """ try: - return settings.items(section) + return settings.items(section) except ConfigParser.NoSectionError as e: - print e - return [] + print e + return [] except: - raise + raise diff --git a/tools/buildman/builder.py b/tools/buildman/builder.py index 4a2d753..dac9497 100644 --- a/tools/buildman/builder.py +++ b/tools/buildman/builder.py @@ -71,21 +71,21 @@ like this: us-net/ base directory 01_of_02_g4ed4ebc_net--Add-tftp-speed-/ - sandbox/ - u-boot.bin - seaboard/ - u-boot.bin + sandbox/ + u-boot.bin + seaboard/ + u-boot.bin 02_of_02_g4ed4ebc_net--Check-tftp-comp/ - sandbox/ - u-boot.bin - seaboard/ - u-boot.bin + sandbox/ + u-boot.bin + seaboard/ + u-boot.bin .bm-work/ - 00/ working directory for thread 0 (contains source checkout) - build/ build output - 01/ working directory for thread 1 - build/ build output - ... + 00/ working directory for thread 0 (contains source checkout) + build/ build output + 01/ working directory for thread 1 + build/ build output + ... u-boot/ source directory .git/ repository """ @@ -101,26 +101,26 @@ def Mkdir(dirname): """Make a directory if it doesn't already exist. Args: - dirname: Directory to create + dirname: Directory to create """ try: - os.mkdir(dirname) + os.mkdir(dirname) except OSError as err: - if err.errno == errno.EEXIST: - pass - else: - raise + if err.errno == errno.EEXIST: + pass + else: + raise class BuilderJob: """Holds information about a job to be performed by a thread Members: - board: Board object to build - commits: List of commit options to build. + board: Board object to build + commits: List of commit options to build. 
""" def __init__(self): - self.board = None - self.commits = [] + self.board = None + self.commits = [] class ResultThread(threading.Thread): @@ -130,23 +130,23 @@ class ResultThread(threading.Thread): result thread, and this helps to serialise the build output. """ def __init__(self, builder): - """Set up a new result thread + """Set up a new result thread - Args: - builder: Builder which will be sent each result - """ - threading.Thread.__init__(self) - self.builder = builder + Args: + builder: Builder which will be sent each result + """ + threading.Thread.__init__(self) + self.builder = builder def run(self): - """Called to start up the result thread. + """Called to start up the result thread. - We collect the next result job and pass it on to the build. - """ - while True: - result = self.builder.out_queue.get() - self.builder.ProcessResult(result) - self.builder.out_queue.task_done() + We collect the next result job and pass it on to the build. + """ + while True: + result = self.builder.out_queue.get() + self.builder.ProcessResult(result) + self.builder.out_queue.task_done() class BuilderThread(threading.Thread): @@ -156,1275 +156,1275 @@ class BuilderThread(threading.Thread): and then pass the results on to the output queue. Members: - builder: The builder which contains information we might need - thread_num: Our thread number (0-n-1), used to decide on a - temporary directory + builder: The builder which contains information we might need + thread_num: Our thread number (0-n-1), used to decide on a + temporary directory """ def __init__(self, builder, thread_num): - """Set up a new builder thread""" - threading.Thread.__init__(self) - self.builder = builder - self.thread_num = thread_num + """Set up a new builder thread""" + threading.Thread.__init__(self) + self.builder = builder + self.thread_num = thread_num def Make(self, commit, brd, stage, cwd, *args, **kwargs): - """Run 'make' on a particular commit and board. 
- - The source code will already be checked out, so the 'commit' - argument is only for information. - - Args: - commit: Commit object that is being built - brd: Board object that is being built - stage: Stage of the build. Valid stages are: - distclean - can be called to clean source - config - called to configure for a board - build - the main make invocation - it does the build - args: A list of arguments to pass to 'make' - kwargs: A list of keyword arguments to pass to command.RunPipe() - - Returns: - CommandResult object - """ - return self.builder.do_make(commit, brd, stage, cwd, *args, - **kwargs) + """Run 'make' on a particular commit and board. + + The source code will already be checked out, so the 'commit' + argument is only for information. + + Args: + commit: Commit object that is being built + brd: Board object that is being built + stage: Stage of the build. Valid stages are: + distclean - can be called to clean source + config - called to configure for a board + build - the main make invocation - it does the build + args: A list of arguments to pass to 'make' + kwargs: A list of keyword arguments to pass to command.RunPipe() + + Returns: + CommandResult object + """ + return self.builder.do_make(commit, brd, stage, cwd, *args, + **kwargs) def RunCommit(self, commit_upto, brd, work_dir, do_config, force_build): - """Build a particular commit. - - If the build is already done, and we are not forcing a build, we skip - the build and just return the previously-saved results. 
- - Args: - commit_upto: Commit number to build (0...n-1) - brd: Board object to build - work_dir: Directory to which the source will be checked out - do_config: True to run a make _config on the source - force_build: Force a build even if one was previously done - - Returns: - tuple containing: - - CommandResult object containing the results of the build - - boolean indicating whether 'make config' is still needed - """ - # Create a default result - it will be overwritte by the call to - # self.Make() below, in the event that we do a build. - result = command.CommandResult() - result.return_code = 0 - out_dir = os.path.join(work_dir, 'build') - - # Check if the job was already completed last time - done_file = self.builder.GetDoneFile(commit_upto, brd.target) - result.already_done = os.path.exists(done_file) - if result.already_done and not force_build: - # Get the return code from that build and use it - with open(done_file, 'r') as fd: - result.return_code = int(fd.readline()) - err_file = self.builder.GetErrFile(commit_upto, brd.target) - if os.path.exists(err_file) and os.stat(err_file).st_size: - result.stderr = 'bad' - else: - # We are going to have to build it. First, get a toolchain - if not self.toolchain: - try: - self.toolchain = self.builder.toolchains.Select(brd.arch) - except ValueError as err: - result.return_code = 10 - result.stdout = '' - result.stderr = str(err) - # TODO(sjg@chromium.org): This gets swallowed, but needs - # to be reported. 
- - if self.toolchain: - # Checkout the right commit - if commit_upto is not None: - commit = self.builder.commits[commit_upto] - if self.builder.checkout: - git_dir = os.path.join(work_dir, '.git') - gitutil.Checkout(commit.hash, git_dir, work_dir, - force=True) - else: - commit = self.builder.commit # Ick, fix this for BuildCommits() - - # Set up the environment and command line - env = self.toolchain.MakeEnvironment() - Mkdir(out_dir) - args = ['O=build', '-s'] - if self.builder.num_jobs is not None: - args.extend(['-j', str(self.builder.num_jobs)]) - config_args = ['%s_config' % brd.target] - config_out = '' - args.extend(self.builder.toolchains.GetMakeArguments(brd)) - - # If we need to reconfigure, do that now - if do_config: - result = self.Make(commit, brd, 'distclean', work_dir, - 'distclean', *args, env=env) - result = self.Make(commit, brd, 'config', work_dir, - *(args + config_args), env=env) - config_out = result.combined - do_config = False # No need to configure next time - if result.return_code == 0: - result = self.Make(commit, brd, 'build', work_dir, *args, - env=env) - result.stdout = config_out + result.stdout - else: - result.return_code = 1 - result.stderr = 'No tool chain for %s\n' % brd.arch - result.already_done = False - - result.toolchain = self.toolchain - result.brd = brd - result.commit_upto = commit_upto - result.out_dir = out_dir - return result, do_config + """Build a particular commit. + + If the build is already done, and we are not forcing a build, we skip + the build and just return the previously-saved results. 
+ + Args: + commit_upto: Commit number to build (0...n-1) + brd: Board object to build + work_dir: Directory to which the source will be checked out + do_config: True to run a make _config on the source + force_build: Force a build even if one was previously done + + Returns: + tuple containing: + - CommandResult object containing the results of the build + - boolean indicating whether 'make config' is still needed + """ + # Create a default result - it will be overwritte by the call to + # self.Make() below, in the event that we do a build. + result = command.CommandResult() + result.return_code = 0 + out_dir = os.path.join(work_dir, 'build') + + # Check if the job was already completed last time + done_file = self.builder.GetDoneFile(commit_upto, brd.target) + result.already_done = os.path.exists(done_file) + if result.already_done and not force_build: + # Get the return code from that build and use it + with open(done_file, 'r') as fd: + result.return_code = int(fd.readline()) + err_file = self.builder.GetErrFile(commit_upto, brd.target) + if os.path.exists(err_file) and os.stat(err_file).st_size: + result.stderr = 'bad' + else: + # We are going to have to build it. First, get a toolchain + if not self.toolchain: + try: + self.toolchain = self.builder.toolchains.Select(brd.arch) + except ValueError as err: + result.return_code = 10 + result.stdout = '' + result.stderr = str(err) + # TODO(sjg@chromium.org): This gets swallowed, but needs + # to be reported. 
+ + if self.toolchain: + # Checkout the right commit + if commit_upto is not None: + commit = self.builder.commits[commit_upto] + if self.builder.checkout: + git_dir = os.path.join(work_dir, '.git') + gitutil.Checkout(commit.hash, git_dir, work_dir, + force=True) + else: + commit = self.builder.commit # Ick, fix this for BuildCommits() + + # Set up the environment and command line + env = self.toolchain.MakeEnvironment() + Mkdir(out_dir) + args = ['O=build', '-s'] + if self.builder.num_jobs is not None: + args.extend(['-j', str(self.builder.num_jobs)]) + config_args = ['%s_config' % brd.target] + config_out = '' + args.extend(self.builder.toolchains.GetMakeArguments(brd)) + + # If we need to reconfigure, do that now + if do_config: + result = self.Make(commit, brd, 'distclean', work_dir, + 'distclean', *args, env=env) + result = self.Make(commit, brd, 'config', work_dir, + *(args + config_args), env=env) + config_out = result.combined + do_config = False # No need to configure next time + if result.return_code == 0: + result = self.Make(commit, brd, 'build', work_dir, *args, + env=env) + result.stdout = config_out + result.stdout + else: + result.return_code = 1 + result.stderr = 'No tool chain for %s\n' % brd.arch + result.already_done = False + + result.toolchain = self.toolchain + result.brd = brd + result.commit_upto = commit_upto + result.out_dir = out_dir + return result, do_config def _WriteResult(self, result, keep_outputs): - """Write a built result to the output directory. - - Args: - result: CommandResult object containing result to write - keep_outputs: True to store the output binaries, False - to delete them - """ - # Fatal error - if result.return_code < 0: - return - - # Aborted? 
- if result.stderr and 'No child processes' in result.stderr: - return - - if result.already_done: - return - - # Write the output and stderr - output_dir = self.builder._GetOutputDir(result.commit_upto) - Mkdir(output_dir) - build_dir = self.builder.GetBuildDir(result.commit_upto, - result.brd.target) - Mkdir(build_dir) - - outfile = os.path.join(build_dir, 'log') - with open(outfile, 'w') as fd: - if result.stdout: - fd.write(result.stdout) - - errfile = self.builder.GetErrFile(result.commit_upto, - result.brd.target) - if result.stderr: - with open(errfile, 'w') as fd: - fd.write(result.stderr) - elif os.path.exists(errfile): - os.remove(errfile) - - if result.toolchain: - # Write the build result and toolchain information. - done_file = self.builder.GetDoneFile(result.commit_upto, - result.brd.target) - with open(done_file, 'w') as fd: - fd.write('%s' % result.return_code) - with open(os.path.join(build_dir, 'toolchain'), 'w') as fd: - print >>fd, 'gcc', result.toolchain.gcc - print >>fd, 'path', result.toolchain.path - print >>fd, 'cross', result.toolchain.cross - print >>fd, 'arch', result.toolchain.arch - fd.write('%s' % result.return_code) - - with open(os.path.join(build_dir, 'toolchain'), 'w') as fd: - print >>fd, 'gcc', result.toolchain.gcc - print >>fd, 'path', result.toolchain.path - - # Write out the image and function size information and an objdump - env = result.toolchain.MakeEnvironment() - lines = [] - for fname in ['u-boot', 'spl/u-boot-spl']: - cmd = ['%snm' % self.toolchain.cross, '--size-sort', fname] - nm_result = command.RunPipe([cmd], capture=True, - capture_stderr=True, cwd=result.out_dir, - raise_on_error=False, env=env) - if nm_result.stdout: - nm = self.builder.GetFuncSizesFile(result.commit_upto, - result.brd.target, fname) - with open(nm, 'w') as fd: - print >>fd, nm_result.stdout, - - cmd = ['%sobjdump' % self.toolchain.cross, '-h', fname] - dump_result = command.RunPipe([cmd], capture=True, - capture_stderr=True, 
cwd=result.out_dir, - raise_on_error=False, env=env) - rodata_size = '' - if dump_result.stdout: - objdump = self.builder.GetObjdumpFile(result.commit_upto, - result.brd.target, fname) - with open(objdump, 'w') as fd: - print >>fd, dump_result.stdout, - for line in dump_result.stdout.splitlines(): - fields = line.split() - if len(fields) > 5 and fields[1] == '.rodata': - rodata_size = fields[2] - - cmd = ['%ssize' % self.toolchain.cross, fname] - size_result = command.RunPipe([cmd], capture=True, - capture_stderr=True, cwd=result.out_dir, - raise_on_error=False, env=env) - if size_result.stdout: - lines.append(size_result.stdout.splitlines()[1] + ' ' + - rodata_size) - - # Write out the image sizes file. This is similar to the output - # of binutil's 'size' utility, but it omits the header line and - # adds an additional hex value at the end of each line for the - # rodata size - if len(lines): - sizes = self.builder.GetSizesFile(result.commit_upto, - result.brd.target) - with open(sizes, 'w') as fd: - print >>fd, '\n'.join(lines) - - # Now write the actual build output - if keep_outputs: - patterns = ['u-boot', '*.bin', 'u-boot.dtb', '*.map', - 'include/autoconf.mk', 'spl/u-boot-spl', - 'spl/u-boot-spl.bin'] - for pattern in patterns: - file_list = glob.glob(os.path.join(result.out_dir, pattern)) - for fname in file_list: - shutil.copy(fname, build_dir) + """Write a built result to the output directory. + + Args: + result: CommandResult object containing result to write + keep_outputs: True to store the output binaries, False + to delete them + """ + # Fatal error + if result.return_code < 0: + return + + # Aborted? 
+ if result.stderr and 'No child processes' in result.stderr: + return + + if result.already_done: + return + + # Write the output and stderr + output_dir = self.builder._GetOutputDir(result.commit_upto) + Mkdir(output_dir) + build_dir = self.builder.GetBuildDir(result.commit_upto, + result.brd.target) + Mkdir(build_dir) + + outfile = os.path.join(build_dir, 'log') + with open(outfile, 'w') as fd: + if result.stdout: + fd.write(result.stdout) + + errfile = self.builder.GetErrFile(result.commit_upto, + result.brd.target) + if result.stderr: + with open(errfile, 'w') as fd: + fd.write(result.stderr) + elif os.path.exists(errfile): + os.remove(errfile) + + if result.toolchain: + # Write the build result and toolchain information. + done_file = self.builder.GetDoneFile(result.commit_upto, + result.brd.target) + with open(done_file, 'w') as fd: + fd.write('%s' % result.return_code) + with open(os.path.join(build_dir, 'toolchain'), 'w') as fd: + print >>fd, 'gcc', result.toolchain.gcc + print >>fd, 'path', result.toolchain.path + print >>fd, 'cross', result.toolchain.cross + print >>fd, 'arch', result.toolchain.arch + fd.write('%s' % result.return_code) + + with open(os.path.join(build_dir, 'toolchain'), 'w') as fd: + print >>fd, 'gcc', result.toolchain.gcc + print >>fd, 'path', result.toolchain.path + + # Write out the image and function size information and an objdump + env = result.toolchain.MakeEnvironment() + lines = [] + for fname in ['u-boot', 'spl/u-boot-spl']: + cmd = ['%snm' % self.toolchain.cross, '--size-sort', fname] + nm_result = command.RunPipe([cmd], capture=True, + capture_stderr=True, cwd=result.out_dir, + raise_on_error=False, env=env) + if nm_result.stdout: + nm = self.builder.GetFuncSizesFile(result.commit_upto, + result.brd.target, fname) + with open(nm, 'w') as fd: + print >>fd, nm_result.stdout, + + cmd = ['%sobjdump' % self.toolchain.cross, '-h', fname] + dump_result = command.RunPipe([cmd], capture=True, + capture_stderr=True, 
cwd=result.out_dir, + raise_on_error=False, env=env) + rodata_size = '' + if dump_result.stdout: + objdump = self.builder.GetObjdumpFile(result.commit_upto, + result.brd.target, fname) + with open(objdump, 'w') as fd: + print >>fd, dump_result.stdout, + for line in dump_result.stdout.splitlines(): + fields = line.split() + if len(fields) > 5 and fields[1] == '.rodata': + rodata_size = fields[2] + + cmd = ['%ssize' % self.toolchain.cross, fname] + size_result = command.RunPipe([cmd], capture=True, + capture_stderr=True, cwd=result.out_dir, + raise_on_error=False, env=env) + if size_result.stdout: + lines.append(size_result.stdout.splitlines()[1] + ' ' + + rodata_size) + + # Write out the image sizes file. This is similar to the output + # of binutil's 'size' utility, but it omits the header line and + # adds an additional hex value at the end of each line for the + # rodata size + if len(lines): + sizes = self.builder.GetSizesFile(result.commit_upto, + result.brd.target) + with open(sizes, 'w') as fd: + print >>fd, '\n'.join(lines) + + # Now write the actual build output + if keep_outputs: + patterns = ['u-boot', '*.bin', 'u-boot.dtb', '*.map', + 'include/autoconf.mk', 'spl/u-boot-spl', + 'spl/u-boot-spl.bin'] + for pattern in patterns: + file_list = glob.glob(os.path.join(result.out_dir, pattern)) + for fname in file_list: + shutil.copy(fname, build_dir) def RunJob(self, job): - """Run a single job - - A job consists of a building a list of commits for a particular board. 
- - Args: - job: Job to build - """ - brd = job.board - work_dir = self.builder.GetThreadDir(self.thread_num) - self.toolchain = None - if job.commits: - # Run 'make board_config' on the first commit - do_config = True - commit_upto = 0 - force_build = False - for commit_upto in range(0, len(job.commits), job.step): - result, request_config = self.RunCommit(commit_upto, brd, - work_dir, do_config, - force_build or self.builder.force_build) - failed = result.return_code or result.stderr - if failed and not do_config: - # If our incremental build failed, try building again - # with a reconfig. - if self.builder.force_config_on_failure: - result, request_config = self.RunCommit(commit_upto, - brd, work_dir, True, True) - do_config = request_config - - # If we built that commit, then config is done. But if we got - # an warning, reconfig next time to force it to build the same - # files that created warnings this time. Otherwise an - # incremental build may not build the same file, and we will - # think that the warning has gone away. - # We could avoid this by using -Werror everywhere... - # For errors, the problem doesn't happen, since presumably - # the build stopped and didn't generate output, so will retry - # that file next time. So we could detect warnings and deal - # with them specially here. For now, we just reconfigure if - # anything goes work. - # Of course this is substantially slower if there are build - # errors/warnings (e.g. 2-3x slower even if only 10% of builds - # have problems). - if (failed and not result.already_done and not do_config and - self.builder.force_config_on_failure): - # If this build failed, try the next one with a - # reconfigure. - # Sometimes if the board_config.h file changes it can mess - # with dependencies, and we get: - # make: *** No rule to make target `include/autoconf.mk', - # needed by `depend'. 
- do_config = True - force_build = True - else: - force_build = False - if self.builder.force_config_on_failure: - if failed: - do_config = True - result.commit_upto = commit_upto - if result.return_code < 0: - raise ValueError('Interrupt') - - # We have the build results, so output the result - self._WriteResult(result, job.keep_outputs) - self.builder.out_queue.put(result) - else: - # Just build the currently checked-out build - result = self.RunCommit(None, True) - result.commit_upto = self.builder.upto - self.builder.out_queue.put(result) + """Run a single job + + A job consists of a building a list of commits for a particular board. + + Args: + job: Job to build + """ + brd = job.board + work_dir = self.builder.GetThreadDir(self.thread_num) + self.toolchain = None + if job.commits: + # Run 'make board_config' on the first commit + do_config = True + commit_upto = 0 + force_build = False + for commit_upto in range(0, len(job.commits), job.step): + result, request_config = self.RunCommit(commit_upto, brd, + work_dir, do_config, + force_build or self.builder.force_build) + failed = result.return_code or result.stderr + if failed and not do_config: + # If our incremental build failed, try building again + # with a reconfig. + if self.builder.force_config_on_failure: + result, request_config = self.RunCommit(commit_upto, + brd, work_dir, True, True) + do_config = request_config + + # If we built that commit, then config is done. But if we got + # an warning, reconfig next time to force it to build the same + # files that created warnings this time. Otherwise an + # incremental build may not build the same file, and we will + # think that the warning has gone away. + # We could avoid this by using -Werror everywhere... + # For errors, the problem doesn't happen, since presumably + # the build stopped and didn't generate output, so will retry + # that file next time. So we could detect warnings and deal + # with them specially here. 
For now, we just reconfigure if + # anything goes work. + # Of course this is substantially slower if there are build + # errors/warnings (e.g. 2-3x slower even if only 10% of builds + # have problems). + if (failed and not result.already_done and not do_config and + self.builder.force_config_on_failure): + # If this build failed, try the next one with a + # reconfigure. + # Sometimes if the board_config.h file changes it can mess + # with dependencies, and we get: + # make: *** No rule to make target `include/autoconf.mk', + # needed by `depend'. + do_config = True + force_build = True + else: + force_build = False + if self.builder.force_config_on_failure: + if failed: + do_config = True + result.commit_upto = commit_upto + if result.return_code < 0: + raise ValueError('Interrupt') + + # We have the build results, so output the result + self._WriteResult(result, job.keep_outputs) + self.builder.out_queue.put(result) + else: + # Just build the currently checked-out build + result = self.RunCommit(None, True) + result.commit_upto = self.builder.upto + self.builder.out_queue.put(result) def run(self): - """Our thread's run function - - This thread picks a job from the queue, runs it, and then goes to the - next job. - """ - alive = True - while True: - job = self.builder.queue.get() - try: - if self.builder.active and alive: - self.RunJob(job) - except Exception as err: - alive = False - print err - self.builder.queue.task_done() + """Our thread's run function + + This thread picks a job from the queue, runs it, and then goes to the + next job. + """ + alive = True + while True: + job = self.builder.queue.get() + try: + if self.builder.active and alive: + self.RunJob(job) + except Exception as err: + alive = False + print err + self.builder.queue.task_done() class Builder: """Class for building U-Boot for a particular commit. 
Public members: (many should ->private) - active: True if the builder is active and has not been stopped - already_done: Number of builds already completed - base_dir: Base directory to use for builder - checkout: True to check out source, False to skip that step. - This is used for testing. - col: terminal.Color() object - count: Number of commits to build - do_make: Method to call to invoke Make - fail: Number of builds that failed due to error - force_build: Force building even if a build already exists - force_config_on_failure: If a commit fails for a board, disable - incremental building for the next commit we build for that - board, so that we will see all warnings/errors again. - git_dir: Git directory containing source repository - last_line_len: Length of the last line we printed (used for erasing - it with new progress information) - num_jobs: Number of jobs to run at once (passed to make as -j) - num_threads: Number of builder threads to run - out_queue: Queue of results to process - re_make_err: Compiled regular expression for ignore_lines - queue: Queue of jobs to run - threads: List of active threads - toolchains: Toolchains object to use for building - upto: Current commit number we are building (0.count-1) - warned: Number of builds that produced at least one warning + active: True if the builder is active and has not been stopped + already_done: Number of builds already completed + base_dir: Base directory to use for builder + checkout: True to check out source, False to skip that step. + This is used for testing. + col: terminal.Color() object + count: Number of commits to build + do_make: Method to call to invoke Make + fail: Number of builds that failed due to error + force_build: Force building even if a build already exists + force_config_on_failure: If a commit fails for a board, disable + incremental building for the next commit we build for that + board, so that we will see all warnings/errors again. 
+ git_dir: Git directory containing source repository + last_line_len: Length of the last line we printed (used for erasing + it with new progress information) + num_jobs: Number of jobs to run at once (passed to make as -j) + num_threads: Number of builder threads to run + out_queue: Queue of results to process + re_make_err: Compiled regular expression for ignore_lines + queue: Queue of jobs to run + threads: List of active threads + toolchains: Toolchains object to use for building + upto: Current commit number we are building (0.count-1) + warned: Number of builds that produced at least one warning Private members: - _base_board_dict: Last-summarised Dict of boards - _base_err_lines: Last-summarised list of errors - _build_period_us: Time taken for a single build (float object). - _complete_delay: Expected delay until completion (timedelta) - _next_delay_update: Next time we plan to display a progress update - (datatime) - _show_unknown: Show unknown boards (those not built) in summary - _timestamps: List of timestamps for the completion of the last - last _timestamp_count builds. Each is a datetime object. - _timestamp_count: Number of timestamps to keep in our list. - _working_dir: Base working directory containing all threads + _base_board_dict: Last-summarised Dict of boards + _base_err_lines: Last-summarised list of errors + _build_period_us: Time taken for a single build (float object). + _complete_delay: Expected delay until completion (timedelta) + _next_delay_update: Next time we plan to display a progress update + (datatime) + _show_unknown: Show unknown boards (those not built) in summary + _timestamps: List of timestamps for the completion of the last + last _timestamp_count builds. Each is a datetime object. + _timestamp_count: Number of timestamps to keep in our list. 
+ _working_dir: Base working directory containing all threads """ class Outcome: - """Records a build outcome for a single make invocation - - Public Members: - rc: Outcome value (OUTCOME_...) - err_lines: List of error lines or [] if none - sizes: Dictionary of image size information, keyed by filename - - Each value is itself a dictionary containing - values for 'text', 'data' and 'bss', being the integer - size in bytes of each section. - func_sizes: Dictionary keyed by filename - e.g. 'u-boot'. Each - value is itself a dictionary: - key: function name - value: Size of function in bytes - """ - def __init__(self, rc, err_lines, sizes, func_sizes): - self.rc = rc - self.err_lines = err_lines - self.sizes = sizes - self.func_sizes = func_sizes + """Records a build outcome for a single make invocation + + Public Members: + rc: Outcome value (OUTCOME_...) + err_lines: List of error lines or [] if none + sizes: Dictionary of image size information, keyed by filename + - Each value is itself a dictionary containing + values for 'text', 'data' and 'bss', being the integer + size in bytes of each section. + func_sizes: Dictionary keyed by filename - e.g. 'u-boot'. Each + value is itself a dictionary: + key: function name + value: Size of function in bytes + """ + def __init__(self, rc, err_lines, sizes, func_sizes): + self.rc = rc + self.err_lines = err_lines + self.sizes = sizes + self.func_sizes = func_sizes def __init__(self, toolchains, base_dir, git_dir, num_threads, num_jobs, - checkout=True, show_unknown=True, step=1): - """Create a new Builder object - - Args: - toolchains: Toolchains object to use for building - base_dir: Base directory to use for builder - git_dir: Git directory containing source repository - num_threads: Number of builder threads to run - num_jobs: Number of jobs to run at once (passed to make as -j) - checkout: True to check out source, False to skip that step. - This is used for testing. 
- show_unknown: Show unknown boards (those not built) in summary - step: 1 to process every commit, n to process every nth commit - """ - self.toolchains = toolchains - self.base_dir = base_dir - self._working_dir = os.path.join(base_dir, '.bm-work') - self.threads = [] - self.active = True - self.do_make = self.Make - self.checkout = checkout - self.num_threads = num_threads - self.num_jobs = num_jobs - self.already_done = 0 - self.force_build = False - self.git_dir = git_dir - self._show_unknown = show_unknown - self._timestamp_count = 10 - self._build_period_us = None - self._complete_delay = None - self._next_delay_update = datetime.now() - self.force_config_on_failure = True - self._step = step - - self.col = terminal.Color() - - self.queue = Queue.Queue() - self.out_queue = Queue.Queue() - for i in range(self.num_threads): - t = BuilderThread(self, i) - t.setDaemon(True) - t.start() - self.threads.append(t) - - self.last_line_len = 0 - t = ResultThread(self) - t.setDaemon(True) - t.start() - self.threads.append(t) - - ignore_lines = ['(make.*Waiting for unfinished)', '(Segmentation fault)'] - self.re_make_err = re.compile('|'.join(ignore_lines)) + checkout=True, show_unknown=True, step=1): + """Create a new Builder object + + Args: + toolchains: Toolchains object to use for building + base_dir: Base directory to use for builder + git_dir: Git directory containing source repository + num_threads: Number of builder threads to run + num_jobs: Number of jobs to run at once (passed to make as -j) + checkout: True to check out source, False to skip that step. + This is used for testing. 
+ show_unknown: Show unknown boards (those not built) in summary + step: 1 to process every commit, n to process every nth commit + """ + self.toolchains = toolchains + self.base_dir = base_dir + self._working_dir = os.path.join(base_dir, '.bm-work') + self.threads = [] + self.active = True + self.do_make = self.Make + self.checkout = checkout + self.num_threads = num_threads + self.num_jobs = num_jobs + self.already_done = 0 + self.force_build = False + self.git_dir = git_dir + self._show_unknown = show_unknown + self._timestamp_count = 10 + self._build_period_us = None + self._complete_delay = None + self._next_delay_update = datetime.now() + self.force_config_on_failure = True + self._step = step + + self.col = terminal.Color() + + self.queue = Queue.Queue() + self.out_queue = Queue.Queue() + for i in range(self.num_threads): + t = BuilderThread(self, i) + t.setDaemon(True) + t.start() + self.threads.append(t) + + self.last_line_len = 0 + t = ResultThread(self) + t.setDaemon(True) + t.start() + self.threads.append(t) + + ignore_lines = ['(make.*Waiting for unfinished)', '(Segmentation fault)'] + self.re_make_err = re.compile('|'.join(ignore_lines)) def __del__(self): - """Get rid of all threads created by the builder""" - for t in self.threads: - del t + """Get rid of all threads created by the builder""" + for t in self.threads: + del t def _AddTimestamp(self): - """Add a new timestamp to the list and record the build period. - - The build period is the length of time taken to perform a single - build (one board, one commit). - """ - now = datetime.now() - self._timestamps.append(now) - count = len(self._timestamps) - delta = self._timestamps[-1] - self._timestamps[0] - seconds = delta.total_seconds() - - # If we have enough data, estimate build period (time taken for a - # single build) and therefore completion time. 
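[Aside for reviewers: the constructor above spins up `num_threads` daemon `BuilderThread` workers fed from a shared `Queue`. The same pattern can be sketched standalone; everything here is illustrative (`start_workers`, `handler` are not names from the patch) and uses the Python 3 `queue` spelling of the `Queue` module.]

```python
# Minimal sketch of the worker-pool pattern used by Builder.__init__,
# using only the standard library. "queue" is the Python 3 name of the
# "Queue" module that the patch imports.
import queue
import threading

def start_workers(num_threads, handler):
    """Start daemon worker threads that consume jobs from a shared queue."""
    jobs = queue.Queue()

    def worker():
        while True:
            job = jobs.get()
            if job is None:         # sentinel: shut this worker down
                jobs.task_done()
                break
            handler(job)
            jobs.task_done()

    threads = []
    for _ in range(num_threads):
        t = threading.Thread(target=worker)
        t.daemon = True             # modern equivalent of t.setDaemon(True)
        t.start()
        threads.append(t)
    return jobs, threads
```

Callers push work with `jobs.put(item)` and block on `jobs.join()` until every queued job has been marked done, which is how the builder waits for its threads.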
- if count > 1 and self._next_delay_update < now: - self._next_delay_update = now + timedelta(seconds=2) - if seconds > 0: - self._build_period = float(seconds) / count - todo = self.count - self.upto - self._complete_delay = timedelta(microseconds= - self._build_period * todo * 1000000) - # Round it - self._complete_delay -= timedelta( - microseconds=self._complete_delay.microseconds) - - if seconds > 60: - self._timestamps.popleft() - count -= 1 + """Add a new timestamp to the list and record the build period. + + The build period is the length of time taken to perform a single + build (one board, one commit). + """ + now = datetime.now() + self._timestamps.append(now) + count = len(self._timestamps) + delta = self._timestamps[-1] - self._timestamps[0] + seconds = delta.total_seconds() + + # If we have enough data, estimate build period (time taken for a + # single build) and therefore completion time. + if count > 1 and self._next_delay_update < now: + self._next_delay_update = now + timedelta(seconds=2) + if seconds > 0: + self._build_period = float(seconds) / count + todo = self.count - self.upto + self._complete_delay = timedelta(microseconds= + self._build_period * todo * 1000000) + # Round it + self._complete_delay -= timedelta( + microseconds=self._complete_delay.microseconds) + + if seconds > 60: + self._timestamps.popleft() + count -= 1 def ClearLine(self, length): - """Clear any characters on the current line + """Clear any characters on the current line - Make way for a new line of length 'length', by outputting enough - spaces to clear out the old line. Then remember the new length for - next time. + Make way for a new line of length 'length', by outputting enough + spaces to clear out the old line. Then remember the new length for + next time. 
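[Aside: `_AddTimestamp()` above keeps a rolling window of per-build timestamps and derives a completion estimate from them. A standalone sketch follows; the function and parameter names (`estimate_eta`, `done`, `total`) are illustrative, not from the patch.]

```python
# Standalone sketch of the rolling-window ETA estimate in _AddTimestamp().
from collections import deque
from datetime import datetime, timedelta

def estimate_eta(timestamps, done, total):
    """Estimate remaining build time.

    timestamps: deque of datetime objects, one per completed build
    done: number of builds completed so far
    total: total number of builds
    """
    count = len(timestamps)
    if count < 2:
        return None                       # not enough data yet
    seconds = (timestamps[-1] - timestamps[0]).total_seconds()
    if seconds <= 0:
        return None
    build_period = float(seconds) / count  # seconds per single build
    todo = total - done
    delay = timedelta(microseconds=build_period * todo * 1000000)
    # Round down to whole seconds, as the patch does
    return delay - timedelta(microseconds=delay.microseconds)
```

The window is also trimmed (entries older than 60 seconds are dropped in the patch), so the period tracks recent build speed rather than the whole run.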
- Args: - length: Length of new line, in characters - """ - if length < self.last_line_len: - print ' ' * (self.last_line_len - length), - print '\r', - self.last_line_len = length - sys.stdout.flush() + Args: + length: Length of new line, in characters + """ + if length < self.last_line_len: + print ' ' * (self.last_line_len - length), + print '\r', + self.last_line_len = length + sys.stdout.flush() def SelectCommit(self, commit, checkout=True): - """Checkout the selected commit for this build - """ - self.commit = commit - if checkout and self.checkout: - gitutil.Checkout(commit.hash) + """Checkout the selected commit for this build + """ + self.commit = commit + if checkout and self.checkout: + gitutil.Checkout(commit.hash) def Make(self, commit, brd, stage, cwd, *args, **kwargs): - """Run make - - Args: - commit: Commit object that is being built - brd: Board object that is being built - stage: Stage that we are at (distclean, config, build) - cwd: Directory where make should be run - args: Arguments to pass to make - kwargs: Arguments to pass to command.RunPipe() - """ - cmd = ['make'] + list(args) - result = command.RunPipe([cmd], capture=True, capture_stderr=True, - cwd=cwd, raise_on_error=False, **kwargs) - return result + """Run make + + Args: + commit: Commit object that is being built + brd: Board object that is being built + stage: Stage that we are at (distclean, config, build) + cwd: Directory where make should be run + args: Arguments to pass to make + kwargs: Arguments to pass to command.RunPipe() + """ + cmd = ['make'] + list(args) + result = command.RunPipe([cmd], capture=True, capture_stderr=True, + cwd=cwd, raise_on_error=False, **kwargs) + return result def ProcessResult(self, result): - """Process the result of a build, showing progress information - - Args: - result: A CommandResult object - """ - col = terminal.Color() - if result: - target = result.brd.target - - if result.return_code < 0: - self.active = False - command.StopAll() - return 
- - self.upto += 1 - if result.return_code != 0: - self.fail += 1 - elif result.stderr: - self.warned += 1 - if result.already_done: - self.already_done += 1 - else: - target = '(starting)' - - # Display separate counts for ok, warned and fail - ok = self.upto - self.warned - self.fail - line = '\r' + self.col.Color(self.col.GREEN, '%5d' % ok) - line += self.col.Color(self.col.YELLOW, '%5d' % self.warned) - line += self.col.Color(self.col.RED, '%5d' % self.fail) - - name = ' /%-5d ' % self.count - - # Add our current completion time estimate - self._AddTimestamp() - if self._complete_delay: - name += '%s : ' % self._complete_delay - # When building all boards for a commit, we can print a commit - # progress message. - if result and result.commit_upto is None: - name += 'commit %2d/%-3d' % (self.commit_upto + 1, - self.commit_count) - - name += target - print line + name, - length = 13 + len(name) - self.ClearLine(length) + """Process the result of a build, showing progress information + + Args: + result: A CommandResult object + """ + col = terminal.Color() + if result: + target = result.brd.target + + if result.return_code < 0: + self.active = False + command.StopAll() + return + + self.upto += 1 + if result.return_code != 0: + self.fail += 1 + elif result.stderr: + self.warned += 1 + if result.already_done: + self.already_done += 1 + else: + target = '(starting)' + + # Display separate counts for ok, warned and fail + ok = self.upto - self.warned - self.fail + line = '\r' + self.col.Color(self.col.GREEN, '%5d' % ok) + line += self.col.Color(self.col.YELLOW, '%5d' % self.warned) + line += self.col.Color(self.col.RED, '%5d' % self.fail) + + name = ' /%-5d ' % self.count + + # Add our current completion time estimate + self._AddTimestamp() + if self._complete_delay: + name += '%s : ' % self._complete_delay + # When building all boards for a commit, we can print a commit + # progress message. 
+ if result and result.commit_upto is None: + name += 'commit %2d/%-3d' % (self.commit_upto + 1, + self.commit_count) + + name += target + print line + name, + length = 13 + len(name) + self.ClearLine(length) def _GetOutputDir(self, commit_upto): - """Get the name of the output directory for a commit number + """Get the name of the output directory for a commit number - The output directory is typically ...//. + The output directory is typically ...//. - Args: - commit_upto: Commit number to use (0..self.count-1) - """ - commit = self.commits[commit_upto] - subject = commit.subject.translate(trans_valid_chars) - commit_dir = ('%02d_of_%02d_g%s_%s' % (commit_upto + 1, - self.commit_count, commit.hash, subject[:20])) - output_dir = os.path.join(self.base_dir, commit_dir) - return output_dir + Args: + commit_upto: Commit number to use (0..self.count-1) + """ + commit = self.commits[commit_upto] + subject = commit.subject.translate(trans_valid_chars) + commit_dir = ('%02d_of_%02d_g%s_%s' % (commit_upto + 1, + self.commit_count, commit.hash, subject[:20])) + output_dir = os.path.join(self.base_dir, commit_dir) + return output_dir def GetBuildDir(self, commit_upto, target): - """Get the name of the build directory for a commit number + """Get the name of the build directory for a commit number - The build directory is typically ...///. + The build directory is typically ...///. 
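[Aside: `_GetOutputDir()` below derives a per-commit directory name from the commit's position, hash and subject. A sketch of that naming scheme follows; the regex sanitizer here merely stands in for the patch's `trans_valid_chars` translation table, whose exact character set is not shown in this hunk.]

```python
# Illustrative reconstruction of _GetOutputDir()'s naming scheme.
# sanitize step is an assumption standing in for trans_valid_chars.
import os
import re

def commit_output_dir(base_dir, seq, count, commit_hash, subject):
    """Build a per-commit directory name like 01_of_03_g1234abc_Fix_foo."""
    subject = re.sub(r'[^a-zA-Z0-9]', '_', subject)   # assumed sanitizer
    commit_dir = '%02d_of_%02d_g%s_%s' % (seq + 1, count,
                                          commit_hash, subject[:20])
    return os.path.join(base_dir, commit_dir)
```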
- Args: - commit_upto: Commit number to use (0..self.count-1) - target: Target name - """ - output_dir = self._GetOutputDir(commit_upto) - return os.path.join(output_dir, target) + Args: + commit_upto: Commit number to use (0..self.count-1) + target: Target name + """ + output_dir = self._GetOutputDir(commit_upto) + return os.path.join(output_dir, target) def GetDoneFile(self, commit_upto, target): - """Get the name of the done file for a commit number + """Get the name of the done file for a commit number - Args: - commit_upto: Commit number to use (0..self.count-1) - target: Target name - """ - return os.path.join(self.GetBuildDir(commit_upto, target), 'done') + Args: + commit_upto: Commit number to use (0..self.count-1) + target: Target name + """ + return os.path.join(self.GetBuildDir(commit_upto, target), 'done') def GetSizesFile(self, commit_upto, target): - """Get the name of the sizes file for a commit number + """Get the name of the sizes file for a commit number - Args: - commit_upto: Commit number to use (0..self.count-1) - target: Target name - """ - return os.path.join(self.GetBuildDir(commit_upto, target), 'sizes') + Args: + commit_upto: Commit number to use (0..self.count-1) + target: Target name + """ + return os.path.join(self.GetBuildDir(commit_upto, target), 'sizes') def GetFuncSizesFile(self, commit_upto, target, elf_fname): - """Get the name of the funcsizes file for a commit number and ELF file + """Get the name of the funcsizes file for a commit number and ELF file - Args: - commit_upto: Commit number to use (0..self.count-1) - target: Target name - elf_fname: Filename of elf image - """ - return os.path.join(self.GetBuildDir(commit_upto, target), - '%s.sizes' % elf_fname.replace('/', '-')) + Args: + commit_upto: Commit number to use (0..self.count-1) + target: Target name + elf_fname: Filename of elf image + """ + return os.path.join(self.GetBuildDir(commit_upto, target), + '%s.sizes' % elf_fname.replace('/', '-')) def GetObjdumpFile(self, 
commit_upto, target, elf_fname): - """Get the name of the objdump file for a commit number and ELF file + """Get the name of the objdump file for a commit number and ELF file - Args: - commit_upto: Commit number to use (0..self.count-1) - target: Target name - elf_fname: Filename of elf image - """ - return os.path.join(self.GetBuildDir(commit_upto, target), - '%s.objdump' % elf_fname.replace('/', '-')) + Args: + commit_upto: Commit number to use (0..self.count-1) + target: Target name + elf_fname: Filename of elf image + """ + return os.path.join(self.GetBuildDir(commit_upto, target), + '%s.objdump' % elf_fname.replace('/', '-')) def GetErrFile(self, commit_upto, target): - """Get the name of the err file for a commit number + """Get the name of the err file for a commit number - Args: - commit_upto: Commit number to use (0..self.count-1) - target: Target name - """ - output_dir = self.GetBuildDir(commit_upto, target) - return os.path.join(output_dir, 'err') + Args: + commit_upto: Commit number to use (0..self.count-1) + target: Target name + """ + output_dir = self.GetBuildDir(commit_upto, target) + return os.path.join(output_dir, 'err') def FilterErrors(self, lines): - """Filter out errors in which we have no interest + """Filter out errors in which we have no interest - We should probably use map(). + We should probably use map(). 
- Args: - lines: List of error lines, each a string - Returns: - New list with only interesting lines included - """ - out_lines = [] - for line in lines: - if not self.re_make_err.search(line): - out_lines.append(line) - return out_lines + Args: + lines: List of error lines, each a string + Returns: + New list with only interesting lines included + """ + out_lines = [] + for line in lines: + if not self.re_make_err.search(line): + out_lines.append(line) + return out_lines def ReadFuncSizes(self, fname, fd): - """Read function sizes from the output of 'nm' - - Args: - fd: File containing data to read - fname: Filename we are reading from (just for errors) - - Returns: - Dictionary containing size of each function in bytes, indexed by - function name. - """ - sym = {} - for line in fd.readlines(): - try: - size, type, name = line[:-1].split() - except: - print "Invalid line in file '%s': '%s'" % (fname, line[:-1]) - continue - if type in 'tTdDbB': - # function names begin with '.' on 64-bit powerpc - if '.' in name[1:]: - name = 'static.' + name.split('.')[0] - sym[name] = sym.get(name, 0) + int(size, 16) - return sym + """Read function sizes from the output of 'nm' + + Args: + fd: File containing data to read + fname: Filename we are reading from (just for errors) + + Returns: + Dictionary containing size of each function in bytes, indexed by + function name. + """ + sym = {} + for line in fd.readlines(): + try: + size, type, name = line[:-1].split() + except: + print "Invalid line in file '%s': '%s'" % (fname, line[:-1]) + continue + if type in 'tTdDbB': + # function names begin with '.' on 64-bit powerpc + if '.' in name[1:]: + name = 'static.' + name.split('.')[0] + sym[name] = sym.get(name, 0) + int(size, 16) + return sym def GetBuildOutcome(self, commit_upto, target, read_func_sizes): - """Work out the outcome of a build. 
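[Aside: `ReadFuncSizes()` above consumes `nm` output as three whitespace-separated fields per line (hex size, symbol type, name). A self-contained sketch of the same parsing, including the 64-bit powerpc dot-prefix folding:]

```python
# Sketch of ReadFuncSizes()'s parsing of nm output lines of the form
# '<size-hex> <type> <name>'. Only text/data/bss symbols are counted.
def read_func_sizes(lines):
    sym = {}
    for line in lines:
        try:
            size, kind, name = line.split()
        except ValueError:
            continue            # skip malformed lines rather than abort
        if kind in 'tTdDbB':
            # Function names begin with '.' on 64-bit powerpc; the patch
            # folds 'name.1234'-style clones into one 'static.name' bucket
            if '.' in name[1:]:
                name = 'static.' + name.split('.')[0]
            sym[name] = sym.get(name, 0) + int(size, 16)
    return sym
```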
- - Args: - commit_upto: Commit number to check (0..n-1) - target: Target board to check - read_func_sizes: True to read function size information - - Returns: - Outcome object - """ - done_file = self.GetDoneFile(commit_upto, target) - sizes_file = self.GetSizesFile(commit_upto, target) - sizes = {} - func_sizes = {} - if os.path.exists(done_file): - with open(done_file, 'r') as fd: - return_code = int(fd.readline()) - err_lines = [] - err_file = self.GetErrFile(commit_upto, target) - if os.path.exists(err_file): - with open(err_file, 'r') as fd: - err_lines = self.FilterErrors(fd.readlines()) - - # Decide whether the build was ok, failed or created warnings - if return_code: - rc = OUTCOME_ERROR - elif len(err_lines): - rc = OUTCOME_WARNING - else: - rc = OUTCOME_OK - - # Convert size information to our simple format - if os.path.exists(sizes_file): - with open(sizes_file, 'r') as fd: - for line in fd.readlines(): - values = line.split() - rodata = 0 - if len(values) > 6: - rodata = int(values[6], 16) - size_dict = { - 'all' : int(values[0]) + int(values[1]) + - int(values[2]), - 'text' : int(values[0]) - rodata, - 'data' : int(values[1]), - 'bss' : int(values[2]), - 'rodata' : rodata, - } - sizes[values[5]] = size_dict - - if read_func_sizes: - pattern = self.GetFuncSizesFile(commit_upto, target, '*') - for fname in glob.glob(pattern): - with open(fname, 'r') as fd: - dict_name = os.path.basename(fname).replace('.sizes', - '') - func_sizes[dict_name] = self.ReadFuncSizes(fname, fd) - - return Builder.Outcome(rc, err_lines, sizes, func_sizes) - - return Builder.Outcome(OUTCOME_UNKNOWN, [], {}, {}) + """Work out the outcome of a build. 
+ + Args: + commit_upto: Commit number to check (0..n-1) + target: Target board to check + read_func_sizes: True to read function size information + + Returns: + Outcome object + """ + done_file = self.GetDoneFile(commit_upto, target) + sizes_file = self.GetSizesFile(commit_upto, target) + sizes = {} + func_sizes = {} + if os.path.exists(done_file): + with open(done_file, 'r') as fd: + return_code = int(fd.readline()) + err_lines = [] + err_file = self.GetErrFile(commit_upto, target) + if os.path.exists(err_file): + with open(err_file, 'r') as fd: + err_lines = self.FilterErrors(fd.readlines()) + + # Decide whether the build was ok, failed or created warnings + if return_code: + rc = OUTCOME_ERROR + elif len(err_lines): + rc = OUTCOME_WARNING + else: + rc = OUTCOME_OK + + # Convert size information to our simple format + if os.path.exists(sizes_file): + with open(sizes_file, 'r') as fd: + for line in fd.readlines(): + values = line.split() + rodata = 0 + if len(values) > 6: + rodata = int(values[6], 16) + size_dict = { + 'all' : int(values[0]) + int(values[1]) + + int(values[2]), + 'text' : int(values[0]) - rodata, + 'data' : int(values[1]), + 'bss' : int(values[2]), + 'rodata' : rodata, + } + sizes[values[5]] = size_dict + + if read_func_sizes: + pattern = self.GetFuncSizesFile(commit_upto, target, '*') + for fname in glob.glob(pattern): + with open(fname, 'r') as fd: + dict_name = os.path.basename(fname).replace('.sizes', + '') + func_sizes[dict_name] = self.ReadFuncSizes(fname, fd) + + return Builder.Outcome(rc, err_lines, sizes, func_sizes) + + return Builder.Outcome(OUTCOME_UNKNOWN, [], {}, {}) def GetResultSummary(self, boards_selected, commit_upto, read_func_sizes): - """Calculate a summary of the results of building a commit. 
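[Aside: the sizes-file parsing in `GetBuildOutcome()` above indexes columns in the layout printed by the `size` utility (`text data bss dec hex filename`), with an optional seventh column that the patch treats as rodata in hex — that last column is an inference from the indexing, not documented in this hunk. One line can be converted like so:]

```python
# Sketch of converting one sizes-file line into GetBuildOutcome()'s dict.
def parse_size_line(line):
    values = line.split()
    rodata = int(values[6], 16) if len(values) > 6 else 0
    return values[5], {
        'all': int(values[0]) + int(values[1]) + int(values[2]),
        'text': int(values[0]) - rodata,   # rodata reported separately
        'data': int(values[1]),
        'bss': int(values[2]),
        'rodata': rodata,
    }
```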
- - Args: - board_selected: Dict containing boards to summarise - commit_upto: Commit number to summarize (0..self.count-1) - read_func_sizes: True to read function size information - - Returns: - Tuple: - Dict containing boards which passed building this commit. - keyed by board.target - List containing a summary of error/warning lines - """ - board_dict = {} - err_lines_summary = [] - - for board in boards_selected.itervalues(): - outcome = self.GetBuildOutcome(commit_upto, board.target, - read_func_sizes) - board_dict[board.target] = outcome - for err in outcome.err_lines: - if err and not err.rstrip() in err_lines_summary: - err_lines_summary.append(err.rstrip()) - return board_dict, err_lines_summary + """Calculate a summary of the results of building a commit. + + Args: + board_selected: Dict containing boards to summarise + commit_upto: Commit number to summarize (0..self.count-1) + read_func_sizes: True to read function size information + + Returns: + Tuple: + Dict containing boards which passed building this commit. + keyed by board.target + List containing a summary of error/warning lines + """ + board_dict = {} + err_lines_summary = [] + + for board in boards_selected.itervalues(): + outcome = self.GetBuildOutcome(commit_upto, board.target, + read_func_sizes) + board_dict[board.target] = outcome + for err in outcome.err_lines: + if err and not err.rstrip() in err_lines_summary: + err_lines_summary.append(err.rstrip()) + return board_dict, err_lines_summary def AddOutcome(self, board_dict, arch_list, changes, char, color): - """Add an output to our list of outcomes for each architecture - - This simple function adds failing boards (changes) to the - relevant architecture string, so we can print the results out - sorted by architecture. - - Args: - board_dict: Dict containing all boards - arch_list: Dict keyed by arch name. Value is a string containing - a list of board names which failed for that arch. 
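[Aside: `GetResultSummary()` above collects one de-duplicated list of error lines across all boards for a commit. The dedup step in isolation:]

```python
# Sketch of the error-line de-duplication in GetResultSummary().
def summarise_errors(per_board_errors):
    """per_board_errors: iterable of error-line lists, one per board."""
    summary = []
    for err_lines in per_board_errors:
        for err in err_lines:
            if err and err.rstrip() not in summary:
                summary.append(err.rstrip())
    return summary
```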
- changes: List of boards to add to arch_list - color: terminal.Colour object - """ - done_arch = {} - for target in changes: - if target in board_dict: - arch = board_dict[target].arch - else: - arch = 'unknown' - str = self.col.Color(color, ' ' + target) - if not arch in done_arch: - str = self.col.Color(color, char) + ' ' + str - done_arch[arch] = True - if not arch in arch_list: - arch_list[arch] = str - else: - arch_list[arch] += str + """Add an output to our list of outcomes for each architecture + + This simple function adds failing boards (changes) to the + relevant architecture string, so we can print the results out + sorted by architecture. + + Args: + board_dict: Dict containing all boards + arch_list: Dict keyed by arch name. Value is a string containing + a list of board names which failed for that arch. + changes: List of boards to add to arch_list + color: terminal.Colour object + """ + done_arch = {} + for target in changes: + if target in board_dict: + arch = board_dict[target].arch + else: + arch = 'unknown' + str = self.col.Color(color, ' ' + target) + if not arch in done_arch: + str = self.col.Color(color, char) + ' ' + str + done_arch[arch] = True + if not arch in arch_list: + arch_list[arch] = str + else: + arch_list[arch] += str def ColourNum(self, num): - color = self.col.RED if num > 0 else self.col.GREEN - if num == 0: - return '0' - return self.col.Color(color, str(num)) + color = self.col.RED if num > 0 else self.col.GREEN + if num == 0: + return '0' + return self.col.Color(color, str(num)) def ResetResultSummary(self, board_selected): - """Reset the results summary ready for use. + """Reset the results summary ready for use. - Set up the base board list to be all those selected, and set the - error lines to empty. + Set up the base board list to be all those selected, and set the + error lines to empty. - Following this, calls to PrintResultSummary() will use this - information to work out what has changed. 
+ Following this, calls to PrintResultSummary() will use this + information to work out what has changed. - Args: - board_selected: Dict containing boards to summarise, keyed by - board.target - """ - self._base_board_dict = {} - for board in board_selected: - self._base_board_dict[board] = Builder.Outcome(0, [], [], {}) - self._base_err_lines = [] + Args: + board_selected: Dict containing boards to summarise, keyed by + board.target + """ + self._base_board_dict = {} + for board in board_selected: + self._base_board_dict[board] = Builder.Outcome(0, [], [], {}) + self._base_err_lines = [] def PrintFuncSizeDetail(self, fname, old, new): - grow, shrink, add, remove, up, down = 0, 0, 0, 0, 0, 0 - delta, common = [], {} - - for a in old: - if a in new: - common[a] = 1 - - for name in old: - if name not in common: - remove += 1 - down += old[name] - delta.append([-old[name], name]) - - for name in new: - if name not in common: - add += 1 - up += new[name] - delta.append([new[name], name]) - - for name in common: - diff = new.get(name, 0) - old.get(name, 0) - if diff > 0: - grow, up = grow + 1, up + diff - elif diff < 0: - shrink, down = shrink + 1, down - diff - delta.append([diff, name]) - - delta.sort() - delta.reverse() - - args = [add, -remove, grow, -shrink, up, -down, up - down] - if max(args) == 0: - return - args = [self.ColourNum(x) for x in args] - indent = ' ' * 15 - print ('%s%s: add: %s/%s, grow: %s/%s bytes: %s/%s (%s)' % - tuple([indent, self.col.Color(self.col.YELLOW, fname)] + args)) - print '%s %-38s %7s %7s %+7s' % (indent, 'function', 'old', 'new', - 'delta') - for diff, name in delta: - if diff: - color = self.col.RED if diff > 0 else self.col.GREEN - msg = '%s %-38s %7s %7s %+7d' % (indent, name, - old.get(name, '-'), new.get(name,'-'), diff) - print self.col.Color(color, msg) + grow, shrink, add, remove, up, down = 0, 0, 0, 0, 0, 0 + delta, common = [], {} + + for a in old: + if a in new: + common[a] = 1 + + for name in old: + if name not in 
common: + remove += 1 + down += old[name] + delta.append([-old[name], name]) + + for name in new: + if name not in common: + add += 1 + up += new[name] + delta.append([new[name], name]) + + for name in common: + diff = new.get(name, 0) - old.get(name, 0) + if diff > 0: + grow, up = grow + 1, up + diff + elif diff < 0: + shrink, down = shrink + 1, down - diff + delta.append([diff, name]) + + delta.sort() + delta.reverse() + + args = [add, -remove, grow, -shrink, up, -down, up - down] + if max(args) == 0: + return + args = [self.ColourNum(x) for x in args] + indent = ' ' * 15 + print ('%s%s: add: %s/%s, grow: %s/%s bytes: %s/%s (%s)' % + tuple([indent, self.col.Color(self.col.YELLOW, fname)] + args)) + print '%s %-38s %7s %7s %+7s' % (indent, 'function', 'old', 'new', + 'delta') + for diff, name in delta: + if diff: + color = self.col.RED if diff > 0 else self.col.GREEN + msg = '%s %-38s %7s %7s %+7d' % (indent, name, + old.get(name, '-'), new.get(name,'-'), diff) + print self.col.Color(color, msg) def PrintSizeDetail(self, target_list, show_bloat): - """Show details size information for each board - - Args: - target_list: List of targets, each a dict containing: - 'target': Target name - 'total_diff': Total difference in bytes across all areas - : Difference for that part - show_bloat: Show detail for each function - """ - targets_by_diff = sorted(target_list, reverse=True, - key=lambda x: x['_total_diff']) - for result in targets_by_diff: - printed_target = False - for name in sorted(result): - diff = result[name] - if name.startswith('_'): - continue - if diff != 0: - color = self.col.RED if diff > 0 else self.col.GREEN - msg = ' %s %+d' % (name, diff) - if not printed_target: - print '%10s %-15s:' % ('', result['_target']), - printed_target = True - print self.col.Color(color, msg), - if printed_target: - print - if show_bloat: - target = result['_target'] - outcome = result['_outcome'] - base_outcome = self._base_board_dict[target] - for fname in 
outcome.func_sizes: - self.PrintFuncSizeDetail(fname, - base_outcome.func_sizes[fname], - outcome.func_sizes[fname]) + """Show details size information for each board + + Args: + target_list: List of targets, each a dict containing: + 'target': Target name + 'total_diff': Total difference in bytes across all areas + : Difference for that part + show_bloat: Show detail for each function + """ + targets_by_diff = sorted(target_list, reverse=True, + key=lambda x: x['_total_diff']) + for result in targets_by_diff: + printed_target = False + for name in sorted(result): + diff = result[name] + if name.startswith('_'): + continue + if diff != 0: + color = self.col.RED if diff > 0 else self.col.GREEN + msg = ' %s %+d' % (name, diff) + if not printed_target: + print '%10s %-15s:' % ('', result['_target']), + printed_target = True + print self.col.Color(color, msg), + if printed_target: + print + if show_bloat: + target = result['_target'] + outcome = result['_outcome'] + base_outcome = self._base_board_dict[target] + for fname in outcome.func_sizes: + self.PrintFuncSizeDetail(fname, + base_outcome.func_sizes[fname], + outcome.func_sizes[fname]) def PrintSizeSummary(self, board_selected, board_dict, show_detail, - show_bloat): - """Print a summary of image sizes broken down by section. - - The summary takes the form of one line per architecture. The - line contains deltas for each of the sections (+ means the section - got bigger, - means smaller). The nunmbers are the average number - of bytes that a board in this section increased by. - - For example: - powerpc: (622 boards) text -0.0 - arm: (285 boards) text -0.0 - nds32: (3 boards) text -8.0 - - Args: - board_selected: Dict containing boards to summarise, keyed by - board.target - board_dict: Dict containing boards for which we built this - commit, keyed by board.target. The value is an Outcome object. 
- show_detail: Show detail for each board - show_bloat: Show detail for each function - """ - arch_list = {} - arch_count = {} - - # Calculate changes in size for different image parts - # The previous sizes are in Board.sizes, for each board - for target in board_dict: - if target not in board_selected: - continue - base_sizes = self._base_board_dict[target].sizes - outcome = board_dict[target] - sizes = outcome.sizes - - # Loop through the list of images, creating a dict of size - # changes for each image/part. We end up with something like - # {'target' : 'snapper9g45, 'data' : 5, 'u-boot-spl:text' : -4} - # which means that U-Boot data increased by 5 bytes and SPL - # text decreased by 4. - err = {'_target' : target} - for image in sizes: - if image in base_sizes: - base_image = base_sizes[image] - # Loop through the text, data, bss parts - for part in sorted(sizes[image]): - diff = sizes[image][part] - base_image[part] - col = None - if diff: - if image == 'u-boot': - name = part - else: - name = image + ':' + part - err[name] = diff - arch = board_selected[target].arch - if not arch in arch_count: - arch_count[arch] = 1 - else: - arch_count[arch] += 1 - if not sizes: - pass # Only add to our list when we have some stats - elif not arch in arch_list: - arch_list[arch] = [err] - else: - arch_list[arch].append(err) - - # We now have a list of image size changes sorted by arch - # Print out a summary of these - for arch, target_list in arch_list.iteritems(): - # Get total difference for each type - totals = {} - for result in target_list: - total = 0 - for name, diff in result.iteritems(): - if name.startswith('_'): - continue - total += diff - if name in totals: - totals[name] += diff - else: - totals[name] = diff - result['_total_diff'] = total - result['_outcome'] = board_dict[result['_target']] - - count = len(target_list) - printed_arch = False - for name in sorted(totals): - diff = totals[name] - if diff: - # Display the average difference in this name for 
this - # architecture - avg_diff = float(diff) / count - color = self.col.RED if avg_diff > 0 else self.col.GREEN - msg = ' %s %+1.1f' % (name, avg_diff) - if not printed_arch: - print '%10s: (for %d/%d boards)' % (arch, count, - arch_count[arch]), - printed_arch = True - print self.col.Color(color, msg), - - if printed_arch: - print - if show_detail: - self.PrintSizeDetail(target_list, show_bloat) + show_bloat): + """Print a summary of image sizes broken down by section. + + The summary takes the form of one line per architecture. The + line contains deltas for each of the sections (+ means the section + got bigger, - means smaller). The nunmbers are the average number + of bytes that a board in this section increased by. + + For example: + powerpc: (622 boards) text -0.0 + arm: (285 boards) text -0.0 + nds32: (3 boards) text -8.0 + + Args: + board_selected: Dict containing boards to summarise, keyed by + board.target + board_dict: Dict containing boards for which we built this + commit, keyed by board.target. The value is an Outcome object. + show_detail: Show detail for each board + show_bloat: Show detail for each function + """ + arch_list = {} + arch_count = {} + + # Calculate changes in size for different image parts + # The previous sizes are in Board.sizes, for each board + for target in board_dict: + if target not in board_selected: + continue + base_sizes = self._base_board_dict[target].sizes + outcome = board_dict[target] + sizes = outcome.sizes + + # Loop through the list of images, creating a dict of size + # changes for each image/part. We end up with something like + # {'target' : 'snapper9g45, 'data' : 5, 'u-boot-spl:text' : -4} + # which means that U-Boot data increased by 5 bytes and SPL + # text decreased by 4. 
+ err = {'_target' : target} + for image in sizes: + if image in base_sizes: + base_image = base_sizes[image] + # Loop through the text, data, bss parts + for part in sorted(sizes[image]): + diff = sizes[image][part] - base_image[part] + col = None + if diff: + if image == 'u-boot': + name = part + else: + name = image + ':' + part + err[name] = diff + arch = board_selected[target].arch + if not arch in arch_count: + arch_count[arch] = 1 + else: + arch_count[arch] += 1 + if not sizes: + pass # Only add to our list when we have some stats + elif not arch in arch_list: + arch_list[arch] = [err] + else: + arch_list[arch].append(err) + + # We now have a list of image size changes sorted by arch + # Print out a summary of these + for arch, target_list in arch_list.iteritems(): + # Get total difference for each type + totals = {} + for result in target_list: + total = 0 + for name, diff in result.iteritems(): + if name.startswith('_'): + continue + total += diff + if name in totals: + totals[name] += diff + else: + totals[name] = diff + result['_total_diff'] = total + result['_outcome'] = board_dict[result['_target']] + + count = len(target_list) + printed_arch = False + for name in sorted(totals): + diff = totals[name] + if diff: + # Display the average difference in this name for this + # architecture + avg_diff = float(diff) / count + color = self.col.RED if avg_diff > 0 else self.col.GREEN + msg = ' %s %+1.1f' % (name, avg_diff) + if not printed_arch: + print '%10s: (for %d/%d boards)' % (arch, count, + arch_count[arch]), + printed_arch = True + print self.col.Color(color, msg), + + if printed_arch: + print + if show_detail: + self.PrintSizeDetail(target_list, show_bloat) def PrintResultSummary(self, board_selected, board_dict, err_lines, - show_sizes, show_detail, show_bloat): - """Compare results with the base results and display delta. - - Only boards mentioned in board_selected will be considered. 
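[Aside: the summary loop in `PrintSizeSummary()` above totals each section's byte delta over all boards of one architecture and prints the mean. The averaging step, sketched standalone (the `_`-prefixed keys are bookkeeping entries, as in the patch):]

```python
# Standalone sketch of the per-architecture averaging in PrintSizeSummary().
def average_deltas(target_list):
    """target_list: dicts like {'_target': 'board', 'text': -8, 'data': 4}.

    Returns {section: mean_delta} ignoring '_'-prefixed bookkeeping keys.
    """
    totals = {}
    for result in target_list:
        for name, diff in result.items():
            if name.startswith('_'):
                continue
            totals[name] = totals.get(name, 0) + diff
    count = len(target_list)
    return {name: float(diff) / count for name, diff in totals.items()}
```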
This - function is intended to be called repeatedly with the results of - each commit. It therefore shows a 'diff' between what it saw in - the last call and what it sees now. - - Args: - board_selected: Dict containing boards to summarise, keyed by - board.target - board_dict: Dict containing boards for which we built this - commit, keyed by board.target. The value is an Outcome object. - err_lines: A list of errors for this commit, or [] if there is - none, or we don't want to print errors - show_sizes: Show image size deltas - show_detail: Show detail for each board - show_bloat: Show detail for each function - """ - better = [] # List of boards fixed since last commit - worse = [] # List of new broken boards since last commit - new = [] # List of boards that didn't exist last time - unknown = [] # List of boards that were not built - - for target in board_dict: - if target not in board_selected: - continue - - # If the board was built last time, add its outcome to a list - if target in self._base_board_dict: - base_outcome = self._base_board_dict[target].rc - outcome = board_dict[target] - if outcome.rc == OUTCOME_UNKNOWN: - unknown.append(target) - elif outcome.rc < base_outcome: - better.append(target) - elif outcome.rc > base_outcome: - worse.append(target) - else: - new.append(target) - - # Get a list of errors that have appeared, and disappeared - better_err = [] - worse_err = [] - for line in err_lines: - if line not in self._base_err_lines: - worse_err.append('+' + line) - for line in self._base_err_lines: - if line not in err_lines: - better_err.append('-' + line) - - # Display results by arch - if better or worse or unknown or new or worse_err or better_err: - arch_list = {} - self.AddOutcome(board_selected, arch_list, better, '', - self.col.GREEN) - self.AddOutcome(board_selected, arch_list, worse, '+', - self.col.RED) - self.AddOutcome(board_selected, arch_list, new, '*', self.col.BLUE) - if self._show_unknown: - self.AddOutcome(board_selected, 
arch_list, unknown, '?', - self.col.MAGENTA) - for arch, target_list in arch_list.iteritems(): - print '%10s: %s' % (arch, target_list) - if better_err: - print self.col.Color(self.col.GREEN, '\n'.join(better_err)) - if worse_err: - print self.col.Color(self.col.RED, '\n'.join(worse_err)) - - if show_sizes: - self.PrintSizeSummary(board_selected, board_dict, show_detail, - show_bloat) - - # Save our updated information for the next call to this function - self._base_board_dict = board_dict - self._base_err_lines = err_lines - - # Get a list of boards that did not get built, if needed - not_built = [] - for board in board_selected: - if not board in board_dict: - not_built.append(board) - if not_built: - print "Boards not built (%d): %s" % (len(not_built), - ', '.join(not_built)) + show_sizes, show_detail, show_bloat): + """Compare results with the base results and display delta. + + Only boards mentioned in board_selected will be considered. This + function is intended to be called repeatedly with the results of + each commit. It therefore shows a 'diff' between what it saw in + the last call and what it sees now. + + Args: + board_selected: Dict containing boards to summarise, keyed by + board.target + board_dict: Dict containing boards for which we built this + commit, keyed by board.target. The value is an Outcome object. 
+ err_lines: A list of errors for this commit, or [] if there is + none, or we don't want to print errors + show_sizes: Show image size deltas + show_detail: Show detail for each board + show_bloat: Show detail for each function + """ + better = [] # List of boards fixed since last commit + worse = [] # List of new broken boards since last commit + new = [] # List of boards that didn't exist last time + unknown = [] # List of boards that were not built + + for target in board_dict: + if target not in board_selected: + continue + + # If the board was built last time, add its outcome to a list + if target in self._base_board_dict: + base_outcome = self._base_board_dict[target].rc + outcome = board_dict[target] + if outcome.rc == OUTCOME_UNKNOWN: + unknown.append(target) + elif outcome.rc < base_outcome: + better.append(target) + elif outcome.rc > base_outcome: + worse.append(target) + else: + new.append(target) + + # Get a list of errors that have appeared, and disappeared + better_err = [] + worse_err = [] + for line in err_lines: + if line not in self._base_err_lines: + worse_err.append('+' + line) + for line in self._base_err_lines: + if line not in err_lines: + better_err.append('-' + line) + + # Display results by arch + if better or worse or unknown or new or worse_err or better_err: + arch_list = {} + self.AddOutcome(board_selected, arch_list, better, '', + self.col.GREEN) + self.AddOutcome(board_selected, arch_list, worse, '+', + self.col.RED) + self.AddOutcome(board_selected, arch_list, new, '*', self.col.BLUE) + if self._show_unknown: + self.AddOutcome(board_selected, arch_list, unknown, '?', + self.col.MAGENTA) + for arch, target_list in arch_list.iteritems(): + print '%10s: %s' % (arch, target_list) + if better_err: + print self.col.Color(self.col.GREEN, '\n'.join(better_err)) + if worse_err: + print self.col.Color(self.col.RED, '\n'.join(worse_err)) + + if show_sizes: + self.PrintSizeSummary(board_selected, board_dict, show_detail, + show_bloat) + + # 
Save our updated information for the next call to this function + self._base_board_dict = board_dict + self._base_err_lines = err_lines + + # Get a list of boards that did not get built, if needed + not_built = [] + for board in board_selected: + if not board in board_dict: + not_built.append(board) + if not_built: + print "Boards not built (%d): %s" % (len(not_built), + ', '.join(not_built)) def ShowSummary(self, commits, board_selected, show_errors, show_sizes, - show_detail, show_bloat): - """Show a build summary for U-Boot for a given board list. - - Reset the result summary, then repeatedly call GetResultSummary on - each commit's results, then display the differences we see. - - Args: - commits: Commit objects to summarise - board_selected: Dict containing boards to summarise - show_errors: Show errors that occurred - show_sizes: Show size deltas - show_detail: Show detail for each board - show_bloat: Show detail for each function - """ - self.commit_count = len(commits) - self.commits = commits - self.ResetResultSummary(board_selected) - - for commit_upto in range(0, self.commit_count, self._step): - board_dict, err_lines = self.GetResultSummary(board_selected, - commit_upto, read_func_sizes=show_bloat) - msg = '%02d: %s' % (commit_upto + 1, commits[commit_upto].subject) - print self.col.Color(self.col.BLUE, msg) - self.PrintResultSummary(board_selected, board_dict, - err_lines if show_errors else [], show_sizes, show_detail, - show_bloat) + show_detail, show_bloat): + """Show a build summary for U-Boot for a given board list. + + Reset the result summary, then repeatedly call GetResultSummary on + each commit's results, then display the differences we see. 
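The better/worse/new/unknown bookkeeping in PrintResultSummary reduces to a comparison of return codes; a small Python 3 sketch (the function name and the OUTCOME_UNKNOWN value are assumptions for illustration -- the real constant is defined elsewhere in builder.py):

```python
OUTCOME_UNKNOWN = 3  # assumed value; the real constant lives elsewhere in builder.py

def classify_outcome(base_rc, new_rc):
    """Classify one board against the previous commit's result.

    base_rc is None when the board was not built last time; otherwise a
    lower return code is better, as in PrintResultSummary above.
    """
    if base_rc is None:
        return 'new'          # board didn't exist last time
    if new_rc == OUTCOME_UNKNOWN:
        return 'unknown'      # board was not built
    if new_rc < base_rc:
        return 'better'       # fixed since last commit
    if new_rc > base_rc:
        return 'worse'        # newly broken
    return 'unchanged'
```

The 'unchanged' branch is this sketch's addition; the original only collects the four lists it prints.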
+ + Args: + commits: Commit objects to summarise + board_selected: Dict containing boards to summarise + show_errors: Show errors that occurred + show_sizes: Show size deltas + show_detail: Show detail for each board + show_bloat: Show detail for each function + """ + self.commit_count = len(commits) + self.commits = commits + self.ResetResultSummary(board_selected) + + for commit_upto in range(0, self.commit_count, self._step): + board_dict, err_lines = self.GetResultSummary(board_selected, + commit_upto, read_func_sizes=show_bloat) + msg = '%02d: %s' % (commit_upto + 1, commits[commit_upto].subject) + print self.col.Color(self.col.BLUE, msg) + self.PrintResultSummary(board_selected, board_dict, + err_lines if show_errors else [], show_sizes, show_detail, + show_bloat) def SetupBuild(self, board_selected, commits): - """Set up ready to start a build. - - Args: - board_selected: Selected boards to build - commits: Selected commits to build - """ - # First work out how many commits we will build - count = (len(commits) + self._step - 1) / self._step - self.count = len(board_selected) * count - self.upto = self.warned = self.fail = 0 - self._timestamps = collections.deque() + """Set up ready to start a build. 
+ + Args: + board_selected: Selected boards to build + commits: Selected commits to build + """ + # First work out how many commits we will build + count = (len(commits) + self._step - 1) / self._step + self.count = len(board_selected) * count + self.upto = self.warned = self.fail = 0 + self._timestamps = collections.deque() def BuildBoardsForCommit(self, board_selected, keep_outputs): - """Build all boards for a single commit""" - self.SetupBuild(board_selected) - self.count = len(board_selected) - for brd in board_selected.itervalues(): - job = BuilderJob() - job.board = brd - job.commits = None - job.keep_outputs = keep_outputs - self.queue.put(brd) - - self.queue.join() - self.out_queue.join() - print - self.ClearLine(0) + """Build all boards for a single commit""" + self.SetupBuild(board_selected) + self.count = len(board_selected) + for brd in board_selected.itervalues(): + job = BuilderJob() + job.board = brd + job.commits = None + job.keep_outputs = keep_outputs + self.queue.put(brd) + + self.queue.join() + self.out_queue.join() + print + self.ClearLine(0) def BuildCommits(self, commits, board_selected, show_errors, keep_outputs): - """Build all boards for all commits (non-incremental)""" - self.commit_count = len(commits) + """Build all boards for all commits (non-incremental)""" + self.commit_count = len(commits) - self.ResetResultSummary(board_selected) - for self.commit_upto in range(self.commit_count): - self.SelectCommit(commits[self.commit_upto]) - self.SelectOutputDir() - Mkdir(self.output_dir) + self.ResetResultSummary(board_selected) + for self.commit_upto in range(self.commit_count): + self.SelectCommit(commits[self.commit_upto]) + self.SelectOutputDir() + Mkdir(self.output_dir) - self.BuildBoardsForCommit(board_selected, keep_outputs) - board_dict, err_lines = self.GetResultSummary() - self.PrintResultSummary(board_selected, board_dict, - err_lines if show_errors else []) + self.BuildBoardsForCommit(board_selected, keep_outputs) + board_dict, 
err_lines = self.GetResultSummary() + self.PrintResultSummary(board_selected, board_dict, + err_lines if show_errors else []) - if self.already_done: - print '%d builds already done' % self.already_done + if self.already_done: + print '%d builds already done' % self.already_done def GetThreadDir(self, thread_num): - """Get the directory path to the working dir for a thread. + """Get the directory path to the working dir for a thread. - Args: - thread_num: Number of thread to check. - """ - return os.path.join(self._working_dir, '%02d' % thread_num) + Args: + thread_num: Number of thread to check. + """ + return os.path.join(self._working_dir, '%02d' % thread_num) def _PrepareThread(self, thread_num): - """Prepare the working directory for a thread. - - This clones or fetches the repo into the thread's work directory. - - Args: - thread_num: Thread number (0, 1, ...) - """ - thread_dir = self.GetThreadDir(thread_num) - Mkdir(thread_dir) - git_dir = os.path.join(thread_dir, '.git') - - # Clone the repo if it doesn't already exist - # TODO(sjg@chromium): Perhaps some git hackery to symlink instead, so - # we have a private index but use the origin repo's contents? - if self.git_dir: - src_dir = os.path.abspath(self.git_dir) - if os.path.exists(git_dir): - gitutil.Fetch(git_dir, thread_dir) - else: - print 'Cloning repo for thread %d' % thread_num - gitutil.Clone(src_dir, thread_dir) + """Prepare the working directory for a thread. + + This clones or fetches the repo into the thread's work directory. + + Args: + thread_num: Thread number (0, 1, ...) + """ + thread_dir = self.GetThreadDir(thread_num) + Mkdir(thread_dir) + git_dir = os.path.join(thread_dir, '.git') + + # Clone the repo if it doesn't already exist + # TODO(sjg@chromium): Perhaps some git hackery to symlink instead, so + # we have a private index but use the origin repo's contents? 
+ if self.git_dir: + src_dir = os.path.abspath(self.git_dir) + if os.path.exists(git_dir): + gitutil.Fetch(git_dir, thread_dir) + else: + print 'Cloning repo for thread %d' % thread_num + gitutil.Clone(src_dir, thread_dir) def _PrepareWorkingSpace(self, max_threads): - """Prepare the working directory for use. + """Prepare the working directory for use. - Set up the git repo for each thread. + Set up the git repo for each thread. - Args: - max_threads: Maximum number of threads we expect to need. - """ - Mkdir(self._working_dir) - for thread in range(max_threads): - self._PrepareThread(thread) + Args: + max_threads: Maximum number of threads we expect to need. + """ + Mkdir(self._working_dir) + for thread in range(max_threads): + self._PrepareThread(thread) def _PrepareOutputSpace(self): - """Get the output directories ready to receive files. + """Get the output directories ready to receive files. - We delete any output directories which look like ones we need to - create. Having left over directories is confusing when the user wants - to check the output manually. - """ - dir_list = [] - for commit_upto in range(self.commit_count): - dir_list.append(self._GetOutputDir(commit_upto)) + We delete any output directories which look like ones we need to + create. Having left over directories is confusing when the user wants + to check the output manually. 
+ """ + dir_list = [] + for commit_upto in range(self.commit_count): + dir_list.append(self._GetOutputDir(commit_upto)) - for dirname in glob.glob(os.path.join(self.base_dir, '*')): - if dirname not in dir_list: - shutil.rmtree(dirname) + for dirname in glob.glob(os.path.join(self.base_dir, '*')): + if dirname not in dir_list: + shutil.rmtree(dirname) def BuildBoards(self, commits, board_selected, show_errors, keep_outputs): - """Build all commits for a list of boards - - Args: - commits: List of commits to be build, each a Commit object - boards_selected: Dict of selected boards, key is target name, - value is Board object - show_errors: True to show summarised error/warning info - keep_outputs: True to save build output files - """ - self.commit_count = len(commits) - self.commits = commits - - self.ResetResultSummary(board_selected) - Mkdir(self.base_dir) - self._PrepareWorkingSpace(min(self.num_threads, len(board_selected))) - self._PrepareOutputSpace() - self.SetupBuild(board_selected, commits) - self.ProcessResult(None) - - # Create jobs to build all commits for each board - for brd in board_selected.itervalues(): - job = BuilderJob() - job.board = brd - job.commits = commits - job.keep_outputs = keep_outputs - job.step = self._step - self.queue.put(job) - - # Wait until all jobs are started - self.queue.join() - - # Wait until we have processed all output - self.out_queue.join() - print - self.ClearLine(0) + """Build all commits for a list of boards + + Args: + commits: List of commits to be build, each a Commit object + boards_selected: Dict of selected boards, key is target name, + value is Board object + show_errors: True to show summarised error/warning info + keep_outputs: True to save build output files + """ + self.commit_count = len(commits) + self.commits = commits + + self.ResetResultSummary(board_selected) + Mkdir(self.base_dir) + self._PrepareWorkingSpace(min(self.num_threads, len(board_selected))) + self._PrepareOutputSpace() + 
self.SetupBuild(board_selected, commits) + self.ProcessResult(None) + + # Create jobs to build all commits for each board + for brd in board_selected.itervalues(): + job = BuilderJob() + job.board = brd + job.commits = commits + job.keep_outputs = keep_outputs + job.step = self._step + self.queue.put(job) + + # Wait until all jobs are started + self.queue.join() + + # Wait until we have processed all output + self.out_queue.join() + print + self.ClearLine(0) diff --git a/tools/buildman/buildman.py b/tools/buildman/buildman.py index 43895b8..de6dd85 100755 --- a/tools/buildman/buildman.py +++ b/tools/buildman/buildman.py @@ -36,15 +36,15 @@ def RunTests(): result = unittest.TestResult() for module in ['toolchain']: - suite = doctest.DocTestSuite(module) - suite.run(result) + suite = doctest.DocTestSuite(module) + suite.run(result) # TODO: Surely we can just 'print' result? print result for test, err in result.errors: - print err + print err for test, err in result.failures: - print err + print err sys.argv = [sys.argv[0]] suite = unittest.TestLoader().loadTestsFromTestCase(test.TestBuild) @@ -54,9 +54,9 @@ def RunTests(): # TODO: Surely we can just 'print' result? 
print result for test, err in result.errors: - print err + print err for test, err in result.failures: - print err + print err parser = OptionParser() @@ -96,7 +96,7 @@ parser.add_option('-S', '--show-sizes', action='store_true', parser.add_option('--step', type='int', default=1, help='Only build every n commits (0=just first and last)') parser.add_option('-t', '--test', action='store_true', dest='test', - default=False, help='run tests') + default=False, help='run tests') parser.add_option('-T', '--threads', type='int', default=None, help='Number of builder threads to use') parser.add_option('-u', '--show_unknown', action='store_true', @@ -114,7 +114,7 @@ if options.test: elif options.full_help: pager = os.getenv('PAGER') if not pager: - pager = 'more' + pager = 'more' fname = os.path.join(os.path.dirname(sys.argv[0]), 'README') command.Run(pager, fname) diff --git a/tools/buildman/control.py b/tools/buildman/control.py index 8e6a08f..d338568 100644 --- a/tools/buildman/control.py +++ b/tools/buildman/control.py @@ -23,54 +23,54 @@ def GetActionSummary(is_summary, count, selected, options): """Return a string summarising the intended action. Returns: - Summary string. + Summary string. """ count = (count + options.step - 1) / options.step str = '%s %d commit%s for %d boards' % ( - 'Summary of' if is_summary else 'Building', count, GetPlural(count), - len(selected)) + 'Summary of' if is_summary else 'Building', count, GetPlural(count), + len(selected)) str += ' (%d thread%s, %d job%s per thread)' % (options.threads, - GetPlural(options.threads), options.jobs, GetPlural(options.jobs)) + GetPlural(options.threads), options.jobs, GetPlural(options.jobs)) return str def ShowActions(series, why_selected, boards_selected, builder, options): """Display a list of actions that we would take, if not a dry run. 
Args: - series: Series object - why_selected: Dictionary where each key is a buildman argument - provided by the user, and the value is the boards brought - in by that argument. For example, 'arm' might bring in - 400 boards, so in this case the key would be 'arm' and - the value would be a list of board names. - boards_selected: Dict of selected boards, key is target name, - value is Board object - builder: The builder that will be used to build the commits - options: Command line options object + series: Series object + why_selected: Dictionary where each key is a buildman argument + provided by the user, and the value is the boards brought + in by that argument. For example, 'arm' might bring in + 400 boards, so in this case the key would be 'arm' and + the value would be a list of board names. + boards_selected: Dict of selected boards, key is target name, + value is Board object + builder: The builder that will be used to build the commits + options: Command line options object """ col = terminal.Color() print 'Dry run, so not doing much. 
But I would do this:' print print GetActionSummary(False, len(series.commits), boards_selected, - options) + options) print 'Build directory: %s' % builder.base_dir for upto in range(0, len(series.commits), options.step): - commit = series.commits[upto] - print ' ', col.Color(col.YELLOW, commit.hash, bright=False), - print commit.subject + commit = series.commits[upto] + print ' ', col.Color(col.YELLOW, commit.hash, bright=False), + print commit.subject print for arg in why_selected: - if arg != 'all': - print arg, ': %d boards' % why_selected[arg] + if arg != 'all': + print arg, ': %d boards' % why_selected[arg] print ('Total boards to build for each commit: %d\n' % - why_selected['all']) + why_selected['all']) def DoBuildman(options, args): """The main control code for buildman Args: - options: Command line options object - args: Command line arguments (list of strings) + options: Command line options object + args: Command line arguments (list of strings) """ gitutil.Setup() @@ -80,9 +80,9 @@ def DoBuildman(options, args): toolchains = toolchain.Toolchains() toolchains.Scan(options.list_tool_chains) if options.list_tool_chains: - toolchains.List() - print - return + toolchains.List() + print + return # Work out how many commits to build. We want to build everything on the # branch. 
We also build the upstream commit as a control so we can see @@ -90,22 +90,22 @@ def DoBuildman(options, args): col = terminal.Color() count = options.count if count == -1: - if not options.branch: - str = 'Please use -b to specify a branch to build' - print col.Color(col.RED, str) - sys.exit(1) - count = gitutil.CountCommitsInBranch(options.git_dir, options.branch) - if count is None: - str = "Branch '%s' not found or has no upstream" % options.branch - print col.Color(col.RED, str) - sys.exit(1) - count += 1 # Build upstream commit also + if not options.branch: + str = 'Please use -b to specify a branch to build' + print col.Color(col.RED, str) + sys.exit(1) + count = gitutil.CountCommitsInBranch(options.git_dir, options.branch) + if count is None: + str = "Branch '%s' not found or has no upstream" % options.branch + print col.Color(col.RED, str) + sys.exit(1) + count += 1 # Build upstream commit also if not count: - str = ("No commits found to process in branch '%s': " - "set branch's upstream or use -c flag" % options.branch) - print col.Color(col.RED, str) - sys.exit(1) + str = ("No commits found to process in branch '%s': " + "set branch's upstream or use -c flag" % options.branch) + print col.Color(col.RED, str) + sys.exit(1) # Work out what subset of the boards we are building boards = board.Boards() @@ -113,8 +113,8 @@ def DoBuildman(options, args): why_selected = boards.SelectBoards(args) selected = boards.GetSelected() if not len(selected): - print col.Color(col.RED, 'No matching boards found') - sys.exit(1) + print col.Color(col.RED, 'No matching boards found') + sys.exit(1) # Read the metadata from the commits. First look at the upstream commit, # then the ones in the branch. 
We would like to do something like @@ -124,51 +124,51 @@ def DoBuildman(options, args): range_expr = gitutil.GetRangeInBranch(options.git_dir, options.branch) upstream_commit = gitutil.GetUpstream(options.git_dir, options.branch) series = patchstream.GetMetaDataForList(upstream_commit, options.git_dir, - 1) + 1) # Conflicting tags are not a problem for buildman, since it does not use # them. For example, Series-version is not useful for buildman. On the # other hand conflicting tags will cause an error. So allow later tags # to overwrite earlier ones. series.allow_overwrite = True series = patchstream.GetMetaDataForList(range_expr, options.git_dir, None, - series) + series) # By default we have one thread per CPU. But if there are not enough jobs # we can have fewer threads and use a high '-j' value for make. if not options.threads: - options.threads = min(multiprocessing.cpu_count(), len(selected)) + options.threads = min(multiprocessing.cpu_count(), len(selected)) if not options.jobs: - options.jobs = max(1, (multiprocessing.cpu_count() + - len(selected) - 1) / len(selected)) + options.jobs = max(1, (multiprocessing.cpu_count() + + len(selected) - 1) / len(selected)) if not options.step: - options.step = len(series.commits) - 1 + options.step = len(series.commits) - 1 # Create a new builder with the selected options output_dir = os.path.join('..', options.branch) builder = Builder(toolchains, output_dir, options.git_dir, - options.threads, options.jobs, checkout=True, - show_unknown=options.show_unknown, step=options.step) + options.threads, options.jobs, checkout=True, + show_unknown=options.show_unknown, step=options.step) builder.force_config_on_failure = not options.quick # For a dry run, just show our actions as a sanity check if options.dry_run: - ShowActions(series, why_selected, selected, builder, options) + ShowActions(series, why_selected, selected, builder, options) else: - builder.force_build = options.force_build - - # Work out which boards to build 
- board_selected = boards.GetSelectedDict() - - print GetActionSummary(options.summary, count, board_selected, options) - - if options.summary: - # We can't show function sizes without board details at present - if options.show_bloat: - options.show_detail = True - builder.ShowSummary(series.commits, board_selected, - options.show_errors, options.show_sizes, - options.show_detail, options.show_bloat) - else: - builder.BuildBoards(series.commits, board_selected, - options.show_errors, options.keep_outputs) + builder.force_build = options.force_build + + # Work out which boards to build + board_selected = boards.GetSelectedDict() + + print GetActionSummary(options.summary, count, board_selected, options) + + if options.summary: + # We can't show function sizes without board details at present + if options.show_bloat: + options.show_detail = True + builder.ShowSummary(series.commits, board_selected, + options.show_errors, options.show_sizes, + options.show_detail, options.show_bloat) + else: + builder.BuildBoards(series.commits, board_selected, + options.show_errors, options.keep_outputs) diff --git a/tools/buildman/test.py b/tools/buildman/test.py index 068784a..4e13bbb 100644 --- a/tools/buildman/test.py +++ b/tools/buildman/test.py @@ -77,93 +77,93 @@ class TestBuild(unittest.TestCase): TODO: Write tests for the rest of the functionality """ def setUp(self): - # Set up commits to build - self.commits = [] - sequence = 0 - for commit_info in commits: - comm = commit.Commit(commit_info[0]) - comm.subject = commit_info[1] - comm.return_code = commit_info[2] - comm.error_list = commit_info[3] - comm.sequence = sequence - sequence += 1 - self.commits.append(comm) - - # Set up boards to build - self.boards = board.Boards() - for brd in boards: - self.boards.AddBoard(board.Board(*brd)) - self.boards.SelectBoards([]) - - # Set up the toolchains - bsettings.Setup() - self.toolchains = toolchain.Toolchains() - self.toolchains.Add('arm-linux-gcc', test=False) - 
self.toolchains.Add('sparc-linux-gcc', test=False) - self.toolchains.Add('powerpc-linux-gcc', test=False) - self.toolchains.Add('gcc', test=False) + # Set up commits to build + self.commits = [] + sequence = 0 + for commit_info in commits: + comm = commit.Commit(commit_info[0]) + comm.subject = commit_info[1] + comm.return_code = commit_info[2] + comm.error_list = commit_info[3] + comm.sequence = sequence + sequence += 1 + self.commits.append(comm) + + # Set up boards to build + self.boards = board.Boards() + for brd in boards: + self.boards.AddBoard(board.Board(*brd)) + self.boards.SelectBoards([]) + + # Set up the toolchains + bsettings.Setup() + self.toolchains = toolchain.Toolchains() + self.toolchains.Add('arm-linux-gcc', test=False) + self.toolchains.Add('sparc-linux-gcc', test=False) + self.toolchains.Add('powerpc-linux-gcc', test=False) + self.toolchains.Add('gcc', test=False) def Make(self, commit, brd, stage, *args, **kwargs): - result = command.CommandResult() - boardnum = int(brd.target[-1]) - result.return_code = 0 - result.stderr = '' - result.stdout = ('This is the test output for board %s, commit %s' % - (brd.target, commit.hash)) - if boardnum >= 1 and boardnum >= commit.sequence: - result.return_code = commit.return_code - result.stderr = ''.join(commit.error_list) - if stage == 'build': - target_dir = None - for arg in args: - if arg.startswith('O='): - target_dir = arg[2:] - - if not os.path.isdir(target_dir): - os.mkdir(target_dir) - #time.sleep(.2 + boardnum * .2) - - result.combined = result.stdout + result.stderr - return result + result = command.CommandResult() + boardnum = int(brd.target[-1]) + result.return_code = 0 + result.stderr = '' + result.stdout = ('This is the test output for board %s, commit %s' % + (brd.target, commit.hash)) + if boardnum >= 1 and boardnum >= commit.sequence: + result.return_code = commit.return_code + result.stderr = ''.join(commit.error_list) + if stage == 'build': + target_dir = None + for arg in args: + if 
arg.startswith('O='): + target_dir = arg[2:] + + if not os.path.isdir(target_dir): + os.mkdir(target_dir) + #time.sleep(.2 + boardnum * .2) + + result.combined = result.stdout + result.stderr + return result def testBasic(self): - """Test basic builder operation""" - output_dir = tempfile.mkdtemp() - if not os.path.isdir(output_dir): - os.mkdir(output_dir) - build = builder.Builder(self.toolchains, output_dir, None, 1, 2, - checkout=False, show_unknown=False) - build.do_make = self.Make - board_selected = self.boards.GetSelectedDict() - - #build.BuildCommits(self.commits, board_selected, False) - build.BuildBoards(self.commits, board_selected, False, False) - build.ShowSummary(self.commits, board_selected, True, False, - False, False) + """Test basic builder operation""" + output_dir = tempfile.mkdtemp() + if not os.path.isdir(output_dir): + os.mkdir(output_dir) + build = builder.Builder(self.toolchains, output_dir, None, 1, 2, + checkout=False, show_unknown=False) + build.do_make = self.Make + board_selected = self.boards.GetSelectedDict() + + #build.BuildCommits(self.commits, board_selected, False) + build.BuildBoards(self.commits, board_selected, False, False) + build.ShowSummary(self.commits, board_selected, True, False, + False, False) def _testGit(self): - """Test basic builder operation by building a branch""" - base_dir = tempfile.mkdtemp() - if not os.path.isdir(base_dir): - os.mkdir(base_dir) - options = Options() - options.git = os.getcwd() - options.summary = False - options.jobs = None - options.dry_run = False - #options.git = os.path.join(base_dir, 'repo') - options.branch = 'test-buildman' - options.force_build = False - options.list_tool_chains = False - options.count = -1 - options.git_dir = None - options.threads = None - options.show_unknown = False - options.quick = False - options.show_errors = False - options.keep_outputs = False - args = ['tegra20'] - control.DoBuildman(options, args) + """Test basic builder operation by building a branch""" 
+ base_dir = tempfile.mkdtemp() + if not os.path.isdir(base_dir): + os.mkdir(base_dir) + options = Options() + options.git = os.getcwd() + options.summary = False + options.jobs = None + options.dry_run = False + #options.git = os.path.join(base_dir, 'repo') + options.branch = 'test-buildman' + options.force_build = False + options.list_tool_chains = False + options.count = -1 + options.git_dir = None + options.threads = None + options.show_unknown = False + options.quick = False + options.show_errors = False + options.keep_outputs = False + args = ['tegra20'] + control.DoBuildman(options, args) if __name__ == "__main__": unittest.main() diff --git a/tools/buildman/toolchain.py b/tools/buildman/toolchain.py index a292338..c3ca0ad 100644 --- a/tools/buildman/toolchain.py +++ b/tools/buildman/toolchain.py @@ -14,75 +14,75 @@ class Toolchain: """A single toolchain Public members: - gcc: Full path to C compiler - path: Directory path containing C compiler - cross: Cross compile string, e.g. 'arm-linux-' - arch: Architecture of toolchain as determined from the first - component of the filename. E.g. arm-linux-gcc becomes arm + gcc: Full path to C compiler + path: Directory path containing C compiler + cross: Cross compile string, e.g. 'arm-linux-' + arch: Architecture of toolchain as determined from the first + component of the filename. E.g. arm-linux-gcc becomes arm """ def __init__(self, fname, test, verbose=False): - """Create a new toolchain object. 
- - Args: - fname: Filename of the gcc component - test: True to run the toolchain to test it - """ - self.gcc = fname - self.path = os.path.dirname(fname) - self.cross = os.path.basename(fname)[:-3] - pos = self.cross.find('-') - self.arch = self.cross[:pos] if pos != -1 else 'sandbox' - - env = self.MakeEnvironment() - - # As a basic sanity check, run the C compiler with --version - cmd = [fname, '--version'] - if test: - result = command.RunPipe([cmd], capture=True, env=env) - self.ok = result.return_code == 0 - if verbose: - print 'Tool chain test: ', - if self.ok: - print 'OK' - else: - print 'BAD' - print 'Command: ', cmd - print result.stdout - print result.stderr - else: - self.ok = True - self.priority = self.GetPriority(fname) + """Create a new toolchain object. + + Args: + fname: Filename of the gcc component + test: True to run the toolchain to test it + """ + self.gcc = fname + self.path = os.path.dirname(fname) + self.cross = os.path.basename(fname)[:-3] + pos = self.cross.find('-') + self.arch = self.cross[:pos] if pos != -1 else 'sandbox' + + env = self.MakeEnvironment() + + # As a basic sanity check, run the C compiler with --version + cmd = [fname, '--version'] + if test: + result = command.RunPipe([cmd], capture=True, env=env) + self.ok = result.return_code == 0 + if verbose: + print 'Tool chain test: ', + if self.ok: + print 'OK' + else: + print 'BAD' + print 'Command: ', cmd + print result.stdout + print result.stderr + else: + self.ok = True + self.priority = self.GetPriority(fname) def GetPriority(self, fname): - """Return the priority of the toolchain. - - Toolchains are ranked according to their suitability by their - filename prefix. - - Args: - fname: Filename of toolchain - Returns: - Priority of toolchain, 0=highest, 20=lowest. 
- """ - priority_list = ['-elf', '-unknown-linux-gnu', '-linux', '-elf', - '-none-linux-gnueabi', '-uclinux', '-none-eabi', - '-gentoo-linux-gnu', '-linux-gnueabi', '-le-linux', '-uclinux'] - for prio in range(len(priority_list)): - if priority_list[prio] in fname: - return prio - return prio + """Return the priority of the toolchain. + + Toolchains are ranked according to their suitability by their + filename prefix. + + Args: + fname: Filename of toolchain + Returns: + Priority of toolchain, 0=highest, 20=lowest. + """ + priority_list = ['-elf', '-unknown-linux-gnu', '-linux', '-elf', + '-none-linux-gnueabi', '-uclinux', '-none-eabi', + '-gentoo-linux-gnu', '-linux-gnueabi', '-le-linux', '-uclinux'] + for prio in range(len(priority_list)): + if priority_list[prio] in fname: + return prio + return prio def MakeEnvironment(self): - """Returns an environment for using the toolchain. + """Returns an environment for using the toolchain. - Thie takes the current environment, adds CROSS_COMPILE and - augments PATH so that the toolchain will operate correctly. - """ - env = dict(os.environ) - env['CROSS_COMPILE'] = self.cross - env['PATH'] += (':' + self.path) - return env + Thie takes the current environment, adds CROSS_COMPILE and + augments PATH so that the toolchain will operate correctly. 
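The MakeEnvironment() method shown in this hunk clones the current environment and points it at the toolchain by setting CROSS_COMPILE and extending PATH. A minimal standalone sketch of the same idea, in Python 3 syntax (the function name and the example cross prefix/path are stand-ins, not part of the patch):

```python
import os

def make_environment(cross, path):
    """Copy os.environ and set it up for a cross toolchain,
    as Toolchain.MakeEnvironment() does: add CROSS_COMPILE
    and append the toolchain directory to PATH."""
    env = dict(os.environ)
    env['CROSS_COMPILE'] = cross
    env['PATH'] = env.get('PATH', '') + ':' + path
    return env

env = make_environment('arm-linux-', '/opt/arm/bin')
print(env['CROSS_COMPILE'])  # → arm-linux-
```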
+ """ + env = dict(os.environ) + env['CROSS_COMPILE'] = self.cross + env['PATH'] += (':' + self.path) + return env class Toolchains: @@ -91,156 +91,156 @@ class Toolchains: We select one toolchain for each architecture type Public members: - toolchains: Dict of Toolchain objects, keyed by architecture name - paths: List of paths to check for toolchains (may contain wildcards) + toolchains: Dict of Toolchain objects, keyed by architecture name + paths: List of paths to check for toolchains (may contain wildcards) """ def __init__(self): - self.toolchains = {} - self.paths = [] - toolchains = bsettings.GetItems('toolchain') - if not toolchains: - print ("Warning: No tool chains - please add a [toolchain] section" - " to your buildman config file %s. See README for details" % - config_fname) - - for name, value in toolchains: - if '*' in value: - self.paths += glob.glob(value) - else: - self.paths.append(value) - self._make_flags = dict(bsettings.GetItems('make-flags')) + self.toolchains = {} + self.paths = [] + toolchains = bsettings.GetItems('toolchain') + if not toolchains: + print ("Warning: No tool chains - please add a [toolchain] section" + " to your buildman config file %s. See README for details" % + config_fname) + + for name, value in toolchains: + if '*' in value: + self.paths += glob.glob(value) + else: + self.paths.append(value) + self._make_flags = dict(bsettings.GetItems('make-flags')) def Add(self, fname, test=True, verbose=False): - """Add a toolchain to our list - - We select the given toolchain as our preferred one for its - architecture if it is a higher priority than the others. 
- - Args: - fname: Filename of toolchain's gcc driver - test: True to run the toolchain to test it - """ - toolchain = Toolchain(fname, test, verbose) - add_it = toolchain.ok - if toolchain.arch in self.toolchains: - add_it = (toolchain.priority < - self.toolchains[toolchain.arch].priority) - if add_it: - self.toolchains[toolchain.arch] = toolchain + """Add a toolchain to our list + + We select the given toolchain as our preferred one for its + architecture if it is a higher priority than the others. + + Args: + fname: Filename of toolchain's gcc driver + test: True to run the toolchain to test it + """ + toolchain = Toolchain(fname, test, verbose) + add_it = toolchain.ok + if toolchain.arch in self.toolchains: + add_it = (toolchain.priority < + self.toolchains[toolchain.arch].priority) + if add_it: + self.toolchains[toolchain.arch] = toolchain def Scan(self, verbose): - """Scan for available toolchains and select the best for each arch. - - We look for all the toolchains we can file, figure out the - architecture for each, and whether it works. Then we select the - highest priority toolchain for each arch. - - Args: - verbose: True to print out progress information - """ - if verbose: print 'Scanning for tool chains' - for path in self.paths: - if verbose: print " - scanning path '%s'" % path - for subdir in ['.', 'bin', 'usr/bin']: - dirname = os.path.join(path, subdir) - if verbose: print " - looking in '%s'" % dirname - for fname in glob.glob(dirname + '/*gcc'): - if verbose: print " - found '%s'" % fname - self.Add(fname, True, verbose) + """Scan for available toolchains and select the best for each arch. + + We look for all the toolchains we can file, figure out the + architecture for each, and whether it works. Then we select the + highest priority toolchain for each arch. 
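The selection rule in Add() above keeps, for each architecture, the toolchain with the lowest priority number, where GetPriority() ranks a filename by which substring it contains. A self-contained sketch of that selection (the priority list is abridged, the out-of-range fallback is made explicit, and the filenames are invented for illustration):

```python
# Earlier entries are preferred; index 0 is the highest priority,
# mirroring the filename-substring ranking in GetPriority().
PRIORITY_LIST = ['-elf', '-unknown-linux-gnu', '-linux',
                 '-none-linux-gnueabi', '-uclinux', '-none-eabi']

def get_priority(fname):
    for prio, pattern in enumerate(PRIORITY_LIST):
        if pattern in fname:
            return prio
    return len(PRIORITY_LIST)  # no known suffix: lowest priority

# Keep the best (lowest-numbered) toolchain per architecture,
# as Toolchains.Add() does.
best = {}  # arch -> (priority, filename)
for fname in ['arm-linux-gcc', 'arm-none-eabi-gcc', 'mips-uclinux-gcc']:
    arch = fname.split('-')[0]
    prio = get_priority(fname)
    if arch not in best or prio < best[arch][0]:
        best[arch] = (prio, fname)

print(best['arm'][1])  # → arm-linux-gcc
```

Here 'arm-linux-gcc' (priority 2) wins over 'arm-none-eabi-gcc' (priority 5) for the arm architecture.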
+ + Args: + verbose: True to print out progress information + """ + if verbose: print 'Scanning for tool chains' + for path in self.paths: + if verbose: print " - scanning path '%s'" % path + for subdir in ['.', 'bin', 'usr/bin']: + dirname = os.path.join(path, subdir) + if verbose: print " - looking in '%s'" % dirname + for fname in glob.glob(dirname + '/*gcc'): + if verbose: print " - found '%s'" % fname + self.Add(fname, True, verbose) def List(self): - """List out the selected toolchains for each architecture""" - print 'List of available toolchains (%d):' % len(self.toolchains) - if len(self.toolchains): - for key, value in sorted(self.toolchains.iteritems()): - print '%-10s: %s' % (key, value.gcc) - else: - print 'None' + """List out the selected toolchains for each architecture""" + print 'List of available toolchains (%d):' % len(self.toolchains) + if len(self.toolchains): + for key, value in sorted(self.toolchains.iteritems()): + print '%-10s: %s' % (key, value.gcc) + else: + print 'None' def Select(self, arch): - """Returns the toolchain for a given architecture + """Returns the toolchain for a given architecture - Args: - args: Name of architecture (e.g. 'arm', 'ppc_8xx') + Args: + args: Name of architecture (e.g. 'arm', 'ppc_8xx') - returns: - toolchain object, or None if none found - """ - for name, value in bsettings.GetItems('toolchain-alias'): - if arch == name: - arch = value + returns: + toolchain object, or None if none found + """ + for name, value in bsettings.GetItems('toolchain-alias'): + if arch == name: + arch = value - if not arch in self.toolchains: - raise ValueError, ("No tool chain found for arch '%s'" % arch) - return self.toolchains[arch] + if not arch in self.toolchains: + raise ValueError, ("No tool chain found for arch '%s'" % arch) + return self.toolchains[arch] def ResolveReferences(self, var_dict, args): - """Resolve variable references in a string - - This converts ${blah} within the string to the value of blah. 
- This function works recursively. - - Args: - var_dict: Dictionary containing variables and their values - args: String containing make arguments - Returns: - Resolved string - - >>> bsettings.Setup() - >>> tcs = Toolchains() - >>> tcs.Add('fred', False) - >>> var_dict = {'oblique' : 'OBLIQUE', 'first' : 'fi${second}rst', \ - 'second' : '2nd'} - >>> tcs.ResolveReferences(var_dict, 'this=${oblique}_set') - 'this=OBLIQUE_set' - >>> tcs.ResolveReferences(var_dict, 'this=${oblique}_set${first}nd') - 'this=OBLIQUE_setfi2ndrstnd' - """ - re_var = re.compile('(\$\{[a-z0-9A-Z]{1,}\})') - - while True: - m = re_var.search(args) - if not m: - break - lookup = m.group(0)[2:-1] - value = var_dict.get(lookup, '') - args = args[:m.start(0)] + value + args[m.end(0):] - return args + """Resolve variable references in a string + + This converts ${blah} within the string to the value of blah. + This function works recursively. + + Args: + var_dict: Dictionary containing variables and their values + args: String containing make arguments + Returns: + Resolved string + + >>> bsettings.Setup() + >>> tcs = Toolchains() + >>> tcs.Add('fred', False) + >>> var_dict = {'oblique' : 'OBLIQUE', 'first' : 'fi${second}rst', \ + 'second' : '2nd'} + >>> tcs.ResolveReferences(var_dict, 'this=${oblique}_set') + 'this=OBLIQUE_set' + >>> tcs.ResolveReferences(var_dict, 'this=${oblique}_set${first}nd') + 'this=OBLIQUE_setfi2ndrstnd' + """ + re_var = re.compile('(\$\{[a-z0-9A-Z]{1,}\})') + + while True: + m = re_var.search(args) + if not m: + break + lookup = m.group(0)[2:-1] + value = var_dict.get(lookup, '') + args = args[:m.start(0)] + value + args[m.end(0):] + return args def GetMakeArguments(self, board): - """Returns 'make' arguments for a given board - - The flags are in a section called 'make-flags'. Flags are named - after the target they represent, for example snapper9260=TESTING=1 - will pass TESTING=1 to make when building the snapper9260 board. 
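The ${var} substitution documented above is iterative rather than truly recursive: each replacement is spliced into the string and the scan restarts from the beginning, which is exactly what produces the interleaved 'fi2ndrst' result in the doctest. A standalone Python 3 sketch of the same loop:

```python
import re

def resolve_references(var_dict, args):
    """Expand ${name} references, re-scanning the whole string
    after each splice, as Toolchains.ResolveReferences() does."""
    re_var = re.compile(r'\$\{[a-zA-Z0-9]+\}')
    while True:
        m = re_var.search(args)
        if not m:
            return args
        value = var_dict.get(m.group(0)[2:-1], '')
        args = args[:m.start()] + value + args[m.end():]

var_dict = {'oblique': 'OBLIQUE', 'first': 'fi${second}rst', 'second': '2nd'}
print(resolve_references(var_dict, 'this=${oblique}_set'))
# → this=OBLIQUE_set
print(resolve_references(var_dict, 'this=${oblique}_set${first}nd'))
# → this=OBLIQUE_setfi2ndrstnd
```

The second result matches the doctest in the patch: ${first} expands to 'fi${second}rst', and the restarted scan then expands the embedded ${second} in place.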
- - References to other boards can be added in the string also. For - example: - - [make-flags] - at91-boards=ENABLE_AT91_TEST=1 - snapper9260=${at91-boards} BUILD_TAG=442 - snapper9g45=${at91-boards} BUILD_TAG=443 - - This will return 'ENABLE_AT91_TEST=1 BUILD_TAG=442' for snapper9260 - and 'ENABLE_AT91_TEST=1 BUILD_TAG=443' for snapper9g45. - - A special 'target' variable is set to the board target. - - Args: - board: Board object for the board to check. - Returns: - 'make' flags for that board, or '' if none - """ - self._make_flags['target'] = board.target - arg_str = self.ResolveReferences(self._make_flags, - self._make_flags.get(board.target, '')) - args = arg_str.split(' ') - i = 0 - while i < len(args): - if not args[i]: - del args[i] - else: - i += 1 - return args + """Returns 'make' arguments for a given board + + The flags are in a section called 'make-flags'. Flags are named + after the target they represent, for example snapper9260=TESTING=1 + will pass TESTING=1 to make when building the snapper9260 board. + + References to other boards can be added in the string also. For + example: + + [make-flags] + at91-boards=ENABLE_AT91_TEST=1 + snapper9260=${at91-boards} BUILD_TAG=442 + snapper9g45=${at91-boards} BUILD_TAG=443 + + This will return 'ENABLE_AT91_TEST=1 BUILD_TAG=442' for snapper9260 + and 'ENABLE_AT91_TEST=1 BUILD_TAG=443' for snapper9g45. + + A special 'target' variable is set to the board target. + + Args: + board: Board object for the board to check. 
+ Returns: + 'make' flags for that board, or '' if none + """ + self._make_flags['target'] = board.target + arg_str = self.ResolveReferences(self._make_flags, + self._make_flags.get(board.target, '')) + args = arg_str.split(' ') + i = 0 + while i < len(args): + if not args[i]: + del args[i] + else: + i += 1 + return args diff --git a/tools/img2brec.sh b/tools/img2brec.sh index a0601e1..0fcdba2 100755 --- a/tools/img2brec.sh +++ b/tools/img2brec.sh @@ -3,11 +3,11 @@ # This script converts binary files (u-boot.bin) into so called # bootstrap records that are accepted by Motorola's MC9328MX1/L # (a.k.a. DragaonBall i.MX) in "Bootstrap Mode" -# +# # The code for the SynchFlash programming routines is taken from # Bootloader\Bin\SyncFlash\programBoot_b.txt contained in -# Motorolas LINUX_BSP_0_3_8.tar.gz -# +# Motorolas LINUX_BSP_0_3_8.tar.gz +# # The script could easily extended for AMD flash routines. # # 2004-06-23 - steven.scholz@imc-berlin.de @@ -15,34 +15,34 @@ ################################################################################# # From the posting to the U-Boot-Users mailing list, 23 Jun 2004: # =============================================================== -# I just hacked a simple script that converts u-boot.bin into a text file -# containg processor init code, SynchFlash programming code and U-Boot data in +# I just hacked a simple script that converts u-boot.bin into a text file +# containg processor init code, SynchFlash programming code and U-Boot data in # form of so called b-records. -# -# This can be used to programm U-Boot into (Synch)Flash using the Bootstrap +# +# This can be used to programm U-Boot into (Synch)Flash using the Bootstrap # Mode of the MC9328MX1/L -# +# # 0AFE1F3410202E2E2E000000002073756363656564/ # 0AFE1F44102E0A0000206661696C656420210A0000/ # 0AFE100000 # ... 
# MX1ADS Sync-flash Programming Utility v0.5 2002/08/21 -# +# # Source address (stored in 0x0AFE0000): 0x0A000000 # Target address (stored in 0x0AFE0004): 0x0C000000 # Size (stored in 0x0AFE0008): 0x0001A320 -# +# # Press any key to start programming ... # Erasing ... # Blank checking ... # Programming ... # Verifying flash ... succeed. -# +# # Programming finished. -# +# # So no need for a BDI2000 anymore... ;-) -# -# This is working on my MX1ADS eval board. Hope this could be useful for +# +# This is working on my MX1ADS eval board. Hope this could be useful for # someone. ################################################################################# diff --git a/tools/imls/Makefile b/tools/imls/Makefile index 1be1edb..e371983 100644 --- a/tools/imls/Makefile +++ b/tools/imls/Makefile @@ -39,7 +39,7 @@ LIBFDT_OBJS := $(addprefix $(obj),$(LIBFDT_OBJ_FILES-y)) HOSTCPPFLAGS = -idirafter $(SRCTREE)/include \ -idirafter $(OBJTREE)/include2 \ -idirafter $(OBJTREE)/include \ - -I $(SRCTREE)/lib/libfdt \ + -I $(SRCTREE)/lib/libfdt \ -I $(SRCTREE)/tools \ -DUSE_HOSTCC -D__KERNEL_STRICT_NAMES diff --git a/tools/kernel-doc/docproc.c b/tools/kernel-doc/docproc.c index d4fc42e..a9b49c5 100644 --- a/tools/kernel-doc/docproc.c +++ b/tools/kernel-doc/docproc.c @@ -153,7 +153,7 @@ int symfilecnt = 0; static void add_new_symbol(struct symfile *sym, char * symname) { sym->symbollist = - realloc(sym->symbollist, (sym->symbolcnt + 1) * sizeof(char *)); + realloc(sym->symbollist, (sym->symbolcnt + 1) * sizeof(char *)); sym->symbollist[sym->symbolcnt++].name = strdup(symname); } @@ -214,7 +214,7 @@ static void find_export_symbols(char * filename) char *p; char *e; if (((p = strstr(line, "EXPORT_SYMBOL_GPL")) != NULL) || - ((p = strstr(line, "EXPORT_SYMBOL")) != NULL)) { + ((p = strstr(line, "EXPORT_SYMBOL")) != NULL)) { /* Skip EXPORT_SYMBOL{_GPL} */ while (isalnum(*p) || *p == '_') p++; @@ -290,24 +290,24 @@ static void extfunc(char * filename) { docfunctions(filename, FUNCTION); } 
static void singfunc(char * filename, char * line) { char *vec[200]; /* Enough for specific functions */ - int i, idx = 0; - int startofsym = 1; + int i, idx = 0; + int startofsym = 1; vec[idx++] = KERNELDOC; vec[idx++] = DOCBOOK; - /* Split line up in individual parameters preceded by FUNCTION */ - for (i=0; line[i]; i++) { - if (isspace(line[i])) { - line[i] = '\0'; - startofsym = 1; - continue; - } - if (startofsym) { - startofsym = 0; - vec[idx++] = FUNCTION; - vec[idx++] = &line[i]; - } - } + /* Split line up in individual parameters preceded by FUNCTION */ + for (i=0; line[i]; i++) { + if (isspace(line[i])) { + line[i] = '\0'; + startofsym = 1; + continue; + } + if (startofsym) { + startofsym = 0; + vec[idx++] = FUNCTION; + vec[idx++] = &line[i]; + } + } for (i = 0; i < idx; i++) { if (strcmp(vec[i], FUNCTION)) continue; @@ -456,14 +456,14 @@ static void parse_file(FILE *infile) break; case 'D': while (*s && !isspace(*s)) s++; - *s = '\0'; - symbolsonly(line+2); - break; + *s = '\0'; + symbolsonly(line+2); + break; case 'F': /* filename */ while (*s && !isspace(*s)) s++; *s++ = '\0'; - /* function names */ + /* function names */ while (isspace(*s)) s++; singlefunctions(line +2, s); @@ -511,11 +511,11 @@ int main(int argc, char *argv[]) } /* Open file, exit on error */ infile = fopen(argv[2], "r"); - if (infile == NULL) { - fprintf(stderr, "docproc: "); - perror(argv[2]); - exit(2); - } + if (infile == NULL) { + fprintf(stderr, "docproc: "); + perror(argv[2]); + exit(2); + } if (strcmp("doc", argv[1]) == 0) { /* Need to do this in two passes. 
diff --git a/tools/kernel-doc/kernel-doc b/tools/kernel-doc/kernel-doc index 6347418..cbbf34c 100755 --- a/tools/kernel-doc/kernel-doc +++ b/tools/kernel-doc/kernel-doc @@ -432,7 +432,7 @@ sub dump_doc_section { my $contents = join "\n", @_; if ($no_doc_sections) { - return; + return; } if (($function_only == 0) || diff --git a/tools/patman/checkpatch.py b/tools/patman/checkpatch.py index 0d4e935..264f87c 100644 --- a/tools/patman/checkpatch.py +++ b/tools/patman/checkpatch.py @@ -14,49 +14,49 @@ import terminal def FindCheckPatch(): top_level = gitutil.GetTopLevel() try_list = [ - os.getcwd(), - os.path.join(os.getcwd(), '..', '..'), - os.path.join(top_level, 'tools'), - os.path.join(top_level, 'scripts'), - '%s/bin' % os.getenv('HOME'), - ] + os.getcwd(), + os.path.join(os.getcwd(), '..', '..'), + os.path.join(top_level, 'tools'), + os.path.join(top_level, 'scripts'), + '%s/bin' % os.getenv('HOME'), + ] # Look in current dir for path in try_list: - fname = os.path.join(path, 'checkpatch.pl') - if os.path.isfile(fname): - return fname + fname = os.path.join(path, 'checkpatch.pl') + if os.path.isfile(fname): + return fname # Look upwwards for a Chrome OS tree while not os.path.ismount(path): - fname = os.path.join(path, 'src', 'third_party', 'kernel', 'files', - 'scripts', 'checkpatch.pl') - if os.path.isfile(fname): - return fname - path = os.path.dirname(path) + fname = os.path.join(path, 'src', 'third_party', 'kernel', 'files', + 'scripts', 'checkpatch.pl') + if os.path.isfile(fname): + return fname + path = os.path.dirname(path) print >> sys.stderr, ('Cannot find checkpatch.pl - please put it in your ' + - '~/bin directory or use --no-check') + '~/bin directory or use --no-check') sys.exit(1) def CheckPatch(fname, verbose=False): """Run checkpatch.pl on a file. 
Returns: - namedtuple containing: - ok: False=failure, True=ok - problems: List of problems, each a dict: - 'type'; error or warning - 'msg': text message - 'file' : filename - 'line': line number - errors: Number of errors - warnings: Number of warnings - checks: Number of checks - lines: Number of lines - stdout: Full output of checkpatch + namedtuple containing: + ok: False=failure, True=ok + problems: List of problems, each a dict: + 'type'; error or warning + 'msg': text message + 'file' : filename + 'line': line number + errors: Number of errors + warnings: Number of warnings + checks: Number of checks + lines: Number of lines + stdout: Full output of checkpatch """ fields = ['ok', 'problems', 'errors', 'warnings', 'checks', 'lines', - 'stdout'] + 'stdout'] result = collections.namedtuple('CheckPatchResult', fields) result.ok = False result.errors, result.warning, result.checks = 0, 0, 0 @@ -73,7 +73,7 @@ def CheckPatch(fname, verbose=False): # total: 0 errors, 2 warnings, 7 checks, 473 lines checked re_stats = re.compile('total: (\\d+) errors, (\d+) warnings, (\d+)') re_stats_full = re.compile('total: (\\d+) errors, (\d+) warnings, (\d+)' - ' checks, (\d+)') + ' checks, (\d+)') re_ok = re.compile('.*has no obvious style problems') re_bad = re.compile('.*has style problems, please review') re_error = re.compile('ERROR: (.*)') @@ -82,44 +82,44 @@ def CheckPatch(fname, verbose=False): re_file = re.compile('#\d+: FILE: ([^:]*):(\d+):') for line in result.stdout.splitlines(): - if verbose: - print line + if verbose: + print line - # A blank line indicates the end of a message - if not line and item: - result.problems.append(item) - item = {} - match = re_stats_full.match(line) - if not match: - match = re_stats.match(line) - if match: - result.errors = int(match.group(1)) - result.warnings = int(match.group(2)) - if len(match.groups()) == 4: - result.checks = int(match.group(3)) - result.lines = int(match.group(4)) - else: - result.lines = int(match.group(3)) - 
elif re_ok.match(line): - result.ok = True - elif re_bad.match(line): - result.ok = False - err_match = re_error.match(line) - warn_match = re_warning.match(line) - file_match = re_file.match(line) - check_match = re_check.match(line) - if err_match: - item['msg'] = err_match.group(1) - item['type'] = 'error' - elif warn_match: - item['msg'] = warn_match.group(1) - item['type'] = 'warning' - elif check_match: - item['msg'] = check_match.group(1) - item['type'] = 'check' - elif file_match: - item['file'] = file_match.group(1) - item['line'] = int(file_match.group(2)) + # A blank line indicates the end of a message + if not line and item: + result.problems.append(item) + item = {} + match = re_stats_full.match(line) + if not match: + match = re_stats.match(line) + if match: + result.errors = int(match.group(1)) + result.warnings = int(match.group(2)) + if len(match.groups()) == 4: + result.checks = int(match.group(3)) + result.lines = int(match.group(4)) + else: + result.lines = int(match.group(3)) + elif re_ok.match(line): + result.ok = True + elif re_bad.match(line): + result.ok = False + err_match = re_error.match(line) + warn_match = re_warning.match(line) + file_match = re_file.match(line) + check_match = re_check.match(line) + if err_match: + item['msg'] = err_match.group(1) + item['type'] = 'error' + elif warn_match: + item['msg'] = warn_match.group(1) + item['type'] = 'warning' + elif check_match: + item['msg'] = check_match.group(1) + item['type'] = 'check' + elif file_match: + item['file'] = file_match.group(1) + item['line'] = int(file_match.group(2)) return result @@ -127,17 +127,17 @@ def GetWarningMsg(col, msg_type, fname, line, msg): '''Create a message for a given file/line Args: - msg_type: Message type ('error' or 'warning') - fname: Filename which reports the problem - line: Line number where it was noticed - msg: Message to report + msg_type: Message type ('error' or 'warning') + fname: Filename which reports the problem + line: Line number where 
it was noticed + msg: Message to report ''' if msg_type == 'warning': - msg_type = col.Color(col.YELLOW, msg_type) + msg_type = col.Color(col.YELLOW, msg_type) elif msg_type == 'error': - msg_type = col.Color(col.RED, msg_type) + msg_type = col.Color(col.RED, msg_type) elif msg_type == 'check': - msg_type = col.Color(col.MAGENTA, msg_type) + msg_type = col.Color(col.MAGENTA, msg_type) return '%s: %s,%d: %s' % (msg_type, fname, line, msg) def CheckPatches(verbose, args): @@ -146,29 +146,29 @@ def CheckPatches(verbose, args): col = terminal.Color() for fname in args: - result = CheckPatch(fname, verbose) - if not result.ok: - error_count += result.errors - warning_count += result.warnings - check_count += result.checks - print '%d errors, %d warnings, %d checks for %s:' % (result.errors, - result.warnings, result.checks, col.Color(col.BLUE, fname)) - if (len(result.problems) != result.errors + result.warnings + - result.checks): - print "Internal error: some problems lost" - for item in result.problems: - print GetWarningMsg(col, item.get('type', ''), - item.get('file', ''), - item.get('line', 0), item.get('msg', 'message')) - print - #print stdout + result = CheckPatch(fname, verbose) + if not result.ok: + error_count += result.errors + warning_count += result.warnings + check_count += result.checks + print '%d errors, %d warnings, %d checks for %s:' % (result.errors, + result.warnings, result.checks, col.Color(col.BLUE, fname)) + if (len(result.problems) != result.errors + result.warnings + + result.checks): + print "Internal error: some problems lost" + for item in result.problems: + print GetWarningMsg(col, item.get('type', ''), + item.get('file', ''), + item.get('line', 0), item.get('msg', 'message')) + print + #print stdout if error_count or warning_count or check_count: - str = 'checkpatch.pl found %d error(s), %d warning(s), %d checks(s)' - color = col.GREEN - if warning_count: - color = col.YELLOW - if error_count: - color = col.RED - print col.Color(color, 
str % (error_count, warning_count, check_count)) - return False + str = 'checkpatch.pl found %d error(s), %d warning(s), %d checks(s)' + color = col.GREEN + if warning_count: + color = col.YELLOW + if error_count: + color = col.RED + print col.Color(color, str % (error_count, warning_count, check_count)) + return False return True diff --git a/tools/patman/command.py b/tools/patman/command.py index 449d3d0..e332e74 100644 --- a/tools/patman/command.py +++ b/tools/patman/command.py @@ -12,74 +12,74 @@ class CommandResult: """A class which captures the result of executing a command. Members: - stdout: stdout obtained from command, as a string - stderr: stderr obtained from command, as a string - return_code: Return code from command - exception: Exception received, or None if all ok + stdout: stdout obtained from command, as a string + stderr: stderr obtained from command, as a string + return_code: Return code from command + exception: Exception received, or None if all ok """ def __init__(self): - self.stdout = None - self.stderr = None - self.return_code = None - self.exception = None + self.stdout = None + self.stderr = None + self.return_code = None + self.exception = None def RunPipe(pipe_list, infile=None, outfile=None, - capture=False, capture_stderr=False, oneline=False, - raise_on_error=True, cwd=None, **kwargs): + capture=False, capture_stderr=False, oneline=False, + raise_on_error=True, cwd=None, **kwargs): """ Perform a command pipeline, with optional input/output filenames. Args: - pipe_list: List of command lines to execute. Each command line is - piped into the next, and is itself a list of strings. For - example [ ['ls', '.git'] ['wc'] ] will pipe the output of - 'ls .git' into 'wc'. 
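RunPipe() in command.py chains each command's stdout into the next command's stdin, as the [['ls', '.git'], ['wc']] example in its docstring describes. A cut-down sketch of that chaining using plain subprocess (no pty handling, no capture options; the helper name is ours):

```python
import subprocess

def run_pipe(pipe_list):
    """Run a list of argv lists as a shell-style pipeline and
    return the final command's stdout, stripped."""
    last = None
    for cmd in pipe_list:
        stdin = last.stdout if last else None
        last = subprocess.Popen(cmd, stdin=stdin, stdout=subprocess.PIPE)
        if stdin:
            stdin.close()  # let the upstream process see SIGPIPE
    out, _ = last.communicate()
    return out.decode().strip()

print(run_pipe([['echo', 'one two three'], ['wc', '-w']]))  # → 3
```

Closing the intermediate stdout handles in the parent is what allows an early-exiting downstream command to terminate the upstream one, which is also why the real RunPipe() threads file handles through kwargs rather than buffering everything.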
- infile: File to provide stdin to the pipeline - outfile: File to store stdout - capture: True to capture output - capture_stderr: True to capture stderr - oneline: True to strip newline chars from output - kwargs: Additional keyword arguments to cros_subprocess.Popen() + pipe_list: List of command lines to execute. Each command line is + piped into the next, and is itself a list of strings. For + example [ ['ls', '.git'] ['wc'] ] will pipe the output of + 'ls .git' into 'wc'. + infile: File to provide stdin to the pipeline + outfile: File to store stdout + capture: True to capture output + capture_stderr: True to capture stderr + oneline: True to strip newline chars from output + kwargs: Additional keyword arguments to cros_subprocess.Popen() Returns: - CommandResult object + CommandResult object """ result = CommandResult() last_pipe = None pipeline = list(pipe_list) user_pipestr = '|'.join([' '.join(pipe) for pipe in pipe_list]) while pipeline: - cmd = pipeline.pop(0) - if last_pipe is not None: - kwargs['stdin'] = last_pipe.stdout - elif infile: - kwargs['stdin'] = open(infile, 'rb') - if pipeline or capture: - kwargs['stdout'] = cros_subprocess.PIPE - elif outfile: - kwargs['stdout'] = open(outfile, 'wb') - if capture_stderr: - kwargs['stderr'] = cros_subprocess.PIPE + cmd = pipeline.pop(0) + if last_pipe is not None: + kwargs['stdin'] = last_pipe.stdout + elif infile: + kwargs['stdin'] = open(infile, 'rb') + if pipeline or capture: + kwargs['stdout'] = cros_subprocess.PIPE + elif outfile: + kwargs['stdout'] = open(outfile, 'wb') + if capture_stderr: + kwargs['stderr'] = cros_subprocess.PIPE - try: - last_pipe = cros_subprocess.Popen(cmd, cwd=cwd, **kwargs) - except Exception, err: - result.exception = err - if raise_on_error: - raise Exception("Error running '%s': %s" % (user_pipestr, str)) - result.return_code = 255 - return result + try: + last_pipe = cros_subprocess.Popen(cmd, cwd=cwd, **kwargs) + except Exception, err: + result.exception = err + if 
raise_on_error: + raise Exception("Error running '%s': %s" % (user_pipestr, str)) + result.return_code = 255 + return result if capture: - result.stdout, result.stderr, result.combined = ( - last_pipe.CommunicateFilter(None)) - if result.stdout and oneline: - result.output = result.stdout.rstrip('\r\n') - result.return_code = last_pipe.wait() + result.stdout, result.stderr, result.combined = ( + last_pipe.CommunicateFilter(None)) + if result.stdout and oneline: + result.output = result.stdout.rstrip('\r\n') + result.return_code = last_pipe.wait() else: - result.return_code = os.waitpid(last_pipe.pid, 0)[1] + result.return_code = os.waitpid(last_pipe.pid, 0)[1] if raise_on_error and result.return_code: - raise Exception("Error running '%s'" % user_pipestr) + raise Exception("Error running '%s'" % user_pipestr) return result def Output(*cmd): @@ -88,8 +88,8 @@ def Output(*cmd): def OutputOneLine(*cmd, **kwargs): raise_on_error = kwargs.pop('raise_on_error', True) return (RunPipe([cmd], capture=True, oneline=True, - raise_on_error=raise_on_error, - **kwargs).stdout.strip()) + raise_on_error=raise_on_error, + **kwargs).stdout.strip()) def Run(*cmd, **kwargs): return RunPipe([cmd], **kwargs).stdout diff --git a/tools/patman/commit.py b/tools/patman/commit.py index 900cfb3..aeea3e8 100644 --- a/tools/patman/commit.py +++ b/tools/patman/commit.py @@ -12,61 +12,61 @@ class Commit: """Holds information about a single commit/patch in the series. Args: - hash: Commit hash (as a string) + hash: Commit hash (as a string) Variables: - hash: Commit hash - subject: Subject line - tags: List of maintainer tag strings - changes: Dict containing a list of changes (single line strings). - The dict is indexed by change version (an integer) - cc_list: List of people to aliases/emails to cc on this commit + hash: Commit hash + subject: Subject line + tags: List of maintainer tag strings + changes: Dict containing a list of changes (single line strings). 
+ The dict is indexed by change version (an integer) + cc_list: List of people to aliases/emails to cc on this commit """ def __init__(self, hash): - self.hash = hash - self.subject = None - self.tags = [] - self.changes = {} - self.cc_list = [] + self.hash = hash + self.subject = None + self.tags = [] + self.changes = {} + self.cc_list = [] def AddChange(self, version, info): - """Add a new change line to the change list for a version. + """Add a new change line to the change list for a version. - Args: - version: Patch set version (integer: 1, 2, 3) - info: Description of change in this version - """ - if not self.changes.get(version): - self.changes[version] = [] - self.changes[version].append(info) + Args: + version: Patch set version (integer: 1, 2, 3) + info: Description of change in this version + """ + if not self.changes.get(version): + self.changes[version] = [] + self.changes[version].append(info) def CheckTags(self): - """Create a list of subject tags in the commit + """Create a list of subject tags in the commit - Subject tags look like this: + Subject tags look like this: - propounder: fort: Change the widget to propound correctly + propounder: fort: Change the widget to propound correctly - Here the tags are propounder and fort. Multiple tags are supported. - The list is updated in self.tag. + Here the tags are propounder and fort. Multiple tags are supported. + The list is updated in self.tag. - Returns: - None if ok, else the name of a tag with no email alias - """ - str = self.subject - m = True - while m: - m = re_subject_tag.match(str) - if m: - tag = m.group(1) - self.tags.append(tag) - str = m.group(2) - return None + Returns: + None if ok, else the name of a tag with no email alias + """ + str = self.subject + m = True + while m: + m = re_subject_tag.match(str) + if m: + tag = m.group(1) + self.tags.append(tag) + str = m.group(2) + return None def AddCc(self, cc_list): - """Add a list of people to Cc when we send this patch. 
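The Commit class in this hunk keeps its change log as a dict indexed by patch-set version, each entry holding a list of one-line change strings. A minimal Python 3 sketch of that bookkeeping (method name lower-cased here; the hash and change strings are invented):

```python
class Commit:
    """Minimal model of the change-list bookkeeping described above."""
    def __init__(self, hash):
        self.hash = hash
        self.changes = {}  # version (int) -> list of change strings

    def add_change(self, version, info):
        # Create the per-version list on first use, then append,
        # as Commit.AddChange() does.
        self.changes.setdefault(version, []).append(info)

c = Commit('abc123')
c.add_change(2, 'Fixed widget alignment')
c.add_change(2, 'Dropped debug output')
print(c.changes[2])
```

patman later renders these per-version lists as the "Changes in v2:" style blocks under the patch changelog separator.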
+ """Add a list of people to Cc when we send this patch. - Args: - cc_list: List of aliases or email addresses - """ - self.cc_list += cc_list + Args: + cc_list: List of aliases or email addresses + """ + self.cc_list += cc_list diff --git a/tools/patman/cros_subprocess.py b/tools/patman/cros_subprocess.py index 0fc4a06..6ce136d 100644 --- a/tools/patman/cros_subprocess.py +++ b/tools/patman/cros_subprocess.py @@ -39,206 +39,206 @@ class Popen(subprocess.Popen): The class is similar to subprocess.Popen, the equivalent is something like: - Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) But this class has many fewer features, and two enhancement: 1. Rather than getting the output data only at the end, this class sends it - to a provided operation as it arrives. + to a provided operation as it arrives. 2. We use pseudo terminals so that the child will hopefully flush its output - to us as soon as it is produced, rather than waiting for the end of a - line. + to us as soon as it is produced, rather than waiting for the end of a + line. Use CommunicateFilter() to handle output from the subprocess. """ def __init__(self, args, stdin=None, stdout=PIPE_PTY, stderr=PIPE_PTY, - shell=False, cwd=None, env=None, **kwargs): - """Cut-down constructor - - Args: - args: Program and arguments for subprocess to execute. - stdin: See subprocess.Popen() - stdout: See subprocess.Popen(), except that we support the sentinel - value of cros_subprocess.PIPE_PTY. - stderr: See subprocess.Popen(), except that we support the sentinel - value of cros_subprocess.PIPE_PTY. - shell: See subprocess.Popen() - cwd: Working directory to change to for subprocess, or None if none. - env: Environment to use for this subprocess, or None to inherit parent. - kwargs: No other arguments are supported at the moment. Passing other - arguments will cause a ValueError to be raised. 
- """ - stdout_pty = None - stderr_pty = None - - if stdout == PIPE_PTY: - stdout_pty = pty.openpty() - stdout = os.fdopen(stdout_pty[1]) - if stderr == PIPE_PTY: - stderr_pty = pty.openpty() - stderr = os.fdopen(stderr_pty[1]) - - super(Popen, self).__init__(args, stdin=stdin, - stdout=stdout, stderr=stderr, shell=shell, cwd=cwd, env=env, - **kwargs) - - # If we're on a PTY, we passed the slave half of the PTY to the subprocess. - # We want to use the master half on our end from now on. Setting this here - # does make some assumptions about the implementation of subprocess, but - # those assumptions are pretty minor. - - # Note that if stderr is STDOUT, then self.stderr will be set to None by - # this constructor. - if stdout_pty is not None: - self.stdout = os.fdopen(stdout_pty[0]) - if stderr_pty is not None: - self.stderr = os.fdopen(stderr_pty[0]) - - # Insist that unit tests exist for other arguments we don't support. - if kwargs: - raise ValueError("Unit tests do not test extra args - please add tests") + shell=False, cwd=None, env=None, **kwargs): + """Cut-down constructor + + Args: + args: Program and arguments for subprocess to execute. + stdin: See subprocess.Popen() + stdout: See subprocess.Popen(), except that we support the sentinel + value of cros_subprocess.PIPE_PTY. + stderr: See subprocess.Popen(), except that we support the sentinel + value of cros_subprocess.PIPE_PTY. + shell: See subprocess.Popen() + cwd: Working directory to change to for subprocess, or None if none. + env: Environment to use for this subprocess, or None to inherit parent. + kwargs: No other arguments are supported at the moment. Passing other + arguments will cause a ValueError to be raised. 
+ """ + stdout_pty = None + stderr_pty = None + + if stdout == PIPE_PTY: + stdout_pty = pty.openpty() + stdout = os.fdopen(stdout_pty[1]) + if stderr == PIPE_PTY: + stderr_pty = pty.openpty() + stderr = os.fdopen(stderr_pty[1]) + + super(Popen, self).__init__(args, stdin=stdin, + stdout=stdout, stderr=stderr, shell=shell, cwd=cwd, env=env, + **kwargs) + + # If we're on a PTY, we passed the slave half of the PTY to the subprocess. + # We want to use the master half on our end from now on. Setting this here + # does make some assumptions about the implementation of subprocess, but + # those assumptions are pretty minor. + + # Note that if stderr is STDOUT, then self.stderr will be set to None by + # this constructor. + if stdout_pty is not None: + self.stdout = os.fdopen(stdout_pty[0]) + if stderr_pty is not None: + self.stderr = os.fdopen(stderr_pty[0]) + + # Insist that unit tests exist for other arguments we don't support. + if kwargs: + raise ValueError("Unit tests do not test extra args - please add tests") def CommunicateFilter(self, output): - """Interact with process: Read data from stdout and stderr. - - This method runs until end-of-file is reached, then waits for the - subprocess to terminate. - - The output function is sent all output from the subprocess and must be - defined like this: - - def Output([self,] stream, data) - Args: - stream: the stream the output was received on, which will be - sys.stdout or sys.stderr. - data: a string containing the data - - Note: The data read is buffered in memory, so do not use this - method if the data size is large or unlimited. - - Args: - output: Function to call with each fragment of output. - - Returns: - A tuple (stdout, stderr, combined) which is the data received on - stdout, stderr and the combined data (interleaved stdout and stderr). - - Note that the interleaved output will only be sensible if you have - set both stdout and stderr to PIPE or PIPE_PTY. 
Even then it depends on - the timing of the output in the subprocess. If a subprocess flips - between stdout and stderr quickly in succession, by the time we come to - read the output from each we may see several lines in each, and will read - all the stdout lines, then all the stderr lines. So the interleaving - may not be correct. In this case you might want to pass - stderr=cros_subprocess.STDOUT to the constructor. - - This feature is still useful for subprocesses where stderr is - rarely used and indicates an error. - - Note also that if you set stderr to STDOUT, then stderr will be empty - and the combined output will just be the same as stdout. - """ - - read_set = [] - write_set = [] - stdout = None # Return - stderr = None # Return - - if self.stdin: - # Flush stdio buffer. This might block, if the user has - # been writing to .stdin in an uncontrolled fashion. - self.stdin.flush() - if input: - write_set.append(self.stdin) - else: - self.stdin.close() - if self.stdout: - read_set.append(self.stdout) - stdout = [] - if self.stderr and self.stderr != self.stdout: - read_set.append(self.stderr) - stderr = [] - combined = [] - - input_offset = 0 - while read_set or write_set: - try: - rlist, wlist, _ = select.select(read_set, write_set, [], 0.2) - except select.error, e: - if e.args[0] == errno.EINTR: - continue - raise - - if not stay_alive: - self.terminate() - - if self.stdin in wlist: - # When select has indicated that the file is writable, - # we can write up to PIPE_BUF bytes without risk - # blocking. 
POSIX defines PIPE_BUF >= 512 - chunk = input[input_offset : input_offset + 512] - bytes_written = os.write(self.stdin.fileno(), chunk) - input_offset += bytes_written - if input_offset >= len(input): - self.stdin.close() - write_set.remove(self.stdin) - - if self.stdout in rlist: - data = "" - # We will get an error on read if the pty is closed - try: - data = os.read(self.stdout.fileno(), 1024) - except OSError: - pass - if data == "": - self.stdout.close() - read_set.remove(self.stdout) - else: - stdout.append(data) - combined.append(data) - if output: - output(sys.stdout, data) - if self.stderr in rlist: - data = "" - # We will get an error on read if the pty is closed - try: - data = os.read(self.stderr.fileno(), 1024) - except OSError: - pass - if data == "": - self.stderr.close() - read_set.remove(self.stderr) - else: - stderr.append(data) - combined.append(data) - if output: - output(sys.stderr, data) - - # All data exchanged. Translate lists into strings. - if stdout is not None: - stdout = ''.join(stdout) - else: - stdout = '' - if stderr is not None: - stderr = ''.join(stderr) - else: - stderr = '' - combined = ''.join(combined) - - # Translate newlines, if requested. We cannot let the file - # object do the translation: It is based on stdio, which is - # impossible to combine with select (unless forcing no - # buffering). - if self.universal_newlines and hasattr(file, 'newlines'): - if stdout: - stdout = self._translate_newlines(stdout) - if stderr: - stderr = self._translate_newlines(stderr) - - self.wait() - return (stdout, stderr, combined) + """Interact with process: Read data from stdout and stderr. + + This method runs until end-of-file is reached, then waits for the + subprocess to terminate. + + The output function is sent all output from the subprocess and must be + defined like this: + + def Output([self,] stream, data) + Args: + stream: the stream the output was received on, which will be + sys.stdout or sys.stderr. 
+ data: a string containing the data + + Note: The data read is buffered in memory, so do not use this + method if the data size is large or unlimited. + + Args: + output: Function to call with each fragment of output. + + Returns: + A tuple (stdout, stderr, combined) which is the data received on + stdout, stderr and the combined data (interleaved stdout and stderr). + + Note that the interleaved output will only be sensible if you have + set both stdout and stderr to PIPE or PIPE_PTY. Even then it depends on + the timing of the output in the subprocess. If a subprocess flips + between stdout and stderr quickly in succession, by the time we come to + read the output from each we may see several lines in each, and will read + all the stdout lines, then all the stderr lines. So the interleaving + may not be correct. In this case you might want to pass + stderr=cros_subprocess.STDOUT to the constructor. + + This feature is still useful for subprocesses where stderr is + rarely used and indicates an error. + + Note also that if you set stderr to STDOUT, then stderr will be empty + and the combined output will just be the same as stdout. + """ + + read_set = [] + write_set = [] + stdout = None # Return + stderr = None # Return + + if self.stdin: + # Flush stdio buffer. This might block, if the user has + # been writing to .stdin in an uncontrolled fashion. 
+ self.stdin.flush() + if input: + write_set.append(self.stdin) + else: + self.stdin.close() + if self.stdout: + read_set.append(self.stdout) + stdout = [] + if self.stderr and self.stderr != self.stdout: + read_set.append(self.stderr) + stderr = [] + combined = [] + + input_offset = 0 + while read_set or write_set: + try: + rlist, wlist, _ = select.select(read_set, write_set, [], 0.2) + except select.error, e: + if e.args[0] == errno.EINTR: + continue + raise + + if not stay_alive: + self.terminate() + + if self.stdin in wlist: + # When select has indicated that the file is writable, + # we can write up to PIPE_BUF bytes without risk + # blocking. POSIX defines PIPE_BUF >= 512 + chunk = input[input_offset : input_offset + 512] + bytes_written = os.write(self.stdin.fileno(), chunk) + input_offset += bytes_written + if input_offset >= len(input): + self.stdin.close() + write_set.remove(self.stdin) + + if self.stdout in rlist: + data = "" + # We will get an error on read if the pty is closed + try: + data = os.read(self.stdout.fileno(), 1024) + except OSError: + pass + if data == "": + self.stdout.close() + read_set.remove(self.stdout) + else: + stdout.append(data) + combined.append(data) + if output: + output(sys.stdout, data) + if self.stderr in rlist: + data = "" + # We will get an error on read if the pty is closed + try: + data = os.read(self.stderr.fileno(), 1024) + except OSError: + pass + if data == "": + self.stderr.close() + read_set.remove(self.stderr) + else: + stderr.append(data) + combined.append(data) + if output: + output(sys.stderr, data) + + # All data exchanged. Translate lists into strings. + if stdout is not None: + stdout = ''.join(stdout) + else: + stdout = '' + if stderr is not None: + stderr = ''.join(stderr) + else: + stderr = '' + combined = ''.join(combined) + + # Translate newlines, if requested. 
We cannot let the file + # object do the translation: It is based on stdio, which is + # impossible to combine with select (unless forcing no + # buffering). + if self.universal_newlines and hasattr(file, 'newlines'): + if stdout: + stdout = self._translate_newlines(stdout) + if stderr: + stderr = self._translate_newlines(stderr) + + self.wait() + return (stdout, stderr, combined) # Just being a unittest.TestCase gives us 14 public methods. Unless we @@ -250,148 +250,148 @@ class TestSubprocess(unittest.TestCase): """Our simple unit test for this module""" class MyOperation: - """Provides a operation that we can pass to Popen""" - def __init__(self, input_to_send=None): - """Constructor to set up the operation and possible input. - - Args: - input_to_send: a text string to send when we first get input. We will - add \r\n to the string. - """ - self.stdout_data = '' - self.stderr_data = '' - self.combined_data = '' - self.stdin_pipe = None - self._input_to_send = input_to_send - if input_to_send: - pipe = os.pipe() - self.stdin_read_pipe = pipe[0] - self._stdin_write_pipe = os.fdopen(pipe[1], 'w') - - def Output(self, stream, data): - """Output handler for Popen. Stores the data for later comparison""" - if stream == sys.stdout: - self.stdout_data += data - if stream == sys.stderr: - self.stderr_data += data - self.combined_data += data - - # Output the input string if we have one. - if self._input_to_send: - self._stdin_write_pipe.write(self._input_to_send + '\r\n') - self._stdin_write_pipe.flush() + """Provides a operation that we can pass to Popen""" + def __init__(self, input_to_send=None): + """Constructor to set up the operation and possible input. + + Args: + input_to_send: a text string to send when we first get input. We will + add \r\n to the string. 
+ """ + self.stdout_data = '' + self.stderr_data = '' + self.combined_data = '' + self.stdin_pipe = None + self._input_to_send = input_to_send + if input_to_send: + pipe = os.pipe() + self.stdin_read_pipe = pipe[0] + self._stdin_write_pipe = os.fdopen(pipe[1], 'w') + + def Output(self, stream, data): + """Output handler for Popen. Stores the data for later comparison""" + if stream == sys.stdout: + self.stdout_data += data + if stream == sys.stderr: + self.stderr_data += data + self.combined_data += data + + # Output the input string if we have one. + if self._input_to_send: + self._stdin_write_pipe.write(self._input_to_send + '\r\n') + self._stdin_write_pipe.flush() def _BasicCheck(self, plist, oper): - """Basic checks that the output looks sane.""" - self.assertEqual(plist[0], oper.stdout_data) - self.assertEqual(plist[1], oper.stderr_data) - self.assertEqual(plist[2], oper.combined_data) + """Basic checks that the output looks sane.""" + self.assertEqual(plist[0], oper.stdout_data) + self.assertEqual(plist[1], oper.stderr_data) + self.assertEqual(plist[2], oper.combined_data) - # The total length of stdout and stderr should equal the combined length - self.assertEqual(len(plist[0]) + len(plist[1]), len(plist[2])) + # The total length of stdout and stderr should equal the combined length + self.assertEqual(len(plist[0]) + len(plist[1]), len(plist[2])) def test_simple(self): - """Simple redirection: Get process list""" - oper = TestSubprocess.MyOperation() - plist = Popen(['ps']).CommunicateFilter(oper.Output) - self._BasicCheck(plist, oper) + """Simple redirection: Get process list""" + oper = TestSubprocess.MyOperation() + plist = Popen(['ps']).CommunicateFilter(oper.Output) + self._BasicCheck(plist, oper) def test_stderr(self): - """Check stdout and stderr""" - oper = TestSubprocess.MyOperation() - cmd = 'echo fred >/dev/stderr && false || echo bad' - plist = Popen([cmd], shell=True).CommunicateFilter(oper.Output) - self._BasicCheck(plist, oper) - 
self.assertEqual(plist [0], 'bad\r\n') - self.assertEqual(plist [1], 'fred\r\n') + """Check stdout and stderr""" + oper = TestSubprocess.MyOperation() + cmd = 'echo fred >/dev/stderr && false || echo bad' + plist = Popen([cmd], shell=True).CommunicateFilter(oper.Output) + self._BasicCheck(plist, oper) + self.assertEqual(plist [0], 'bad\r\n') + self.assertEqual(plist [1], 'fred\r\n') def test_shell(self): - """Check with and without shell works""" - oper = TestSubprocess.MyOperation() - cmd = 'echo test >/dev/stderr' - self.assertRaises(OSError, Popen, [cmd], shell=False) - plist = Popen([cmd], shell=True).CommunicateFilter(oper.Output) - self._BasicCheck(plist, oper) - self.assertEqual(len(plist [0]), 0) - self.assertEqual(plist [1], 'test\r\n') + """Check with and without shell works""" + oper = TestSubprocess.MyOperation() + cmd = 'echo test >/dev/stderr' + self.assertRaises(OSError, Popen, [cmd], shell=False) + plist = Popen([cmd], shell=True).CommunicateFilter(oper.Output) + self._BasicCheck(plist, oper) + self.assertEqual(len(plist [0]), 0) + self.assertEqual(plist [1], 'test\r\n') def test_list_args(self): - """Check with and without shell works using list arguments""" - oper = TestSubprocess.MyOperation() - cmd = ['echo', 'test', '>/dev/stderr'] - plist = Popen(cmd, shell=False).CommunicateFilter(oper.Output) - self._BasicCheck(plist, oper) - self.assertEqual(plist [0], ' '.join(cmd[1:]) + '\r\n') - self.assertEqual(len(plist [1]), 0) - - oper = TestSubprocess.MyOperation() - - # this should be interpreted as 'echo' with the other args dropped - cmd = ['echo', 'test', '>/dev/stderr'] - plist = Popen(cmd, shell=True).CommunicateFilter(oper.Output) - self._BasicCheck(plist, oper) - self.assertEqual(plist [0], '\r\n') + """Check with and without shell works using list arguments""" + oper = TestSubprocess.MyOperation() + cmd = ['echo', 'test', '>/dev/stderr'] + plist = Popen(cmd, shell=False).CommunicateFilter(oper.Output) + self._BasicCheck(plist, oper) + 
self.assertEqual(plist [0], ' '.join(cmd[1:]) + '\r\n') + self.assertEqual(len(plist [1]), 0) + + oper = TestSubprocess.MyOperation() + + # this should be interpreted as 'echo' with the other args dropped + cmd = ['echo', 'test', '>/dev/stderr'] + plist = Popen(cmd, shell=True).CommunicateFilter(oper.Output) + self._BasicCheck(plist, oper) + self.assertEqual(plist [0], '\r\n') def test_cwd(self): - """Check we can change directory""" - for shell in (False, True): - oper = TestSubprocess.MyOperation() - plist = Popen('pwd', shell=shell, cwd='/tmp').CommunicateFilter(oper.Output) - self._BasicCheck(plist, oper) - self.assertEqual(plist [0], '/tmp\r\n') + """Check we can change directory""" + for shell in (False, True): + oper = TestSubprocess.MyOperation() + plist = Popen('pwd', shell=shell, cwd='/tmp').CommunicateFilter(oper.Output) + self._BasicCheck(plist, oper) + self.assertEqual(plist [0], '/tmp\r\n') def test_env(self): - """Check we can change environment""" - for add in (False, True): - oper = TestSubprocess.MyOperation() - env = os.environ - if add: - env ['FRED'] = 'fred' - cmd = 'echo $FRED' - plist = Popen(cmd, shell=True, env=env).CommunicateFilter(oper.Output) - self._BasicCheck(plist, oper) - self.assertEqual(plist [0], add and 'fred\r\n' or '\r\n') + """Check we can change environment""" + for add in (False, True): + oper = TestSubprocess.MyOperation() + env = os.environ + if add: + env ['FRED'] = 'fred' + cmd = 'echo $FRED' + plist = Popen(cmd, shell=True, env=env).CommunicateFilter(oper.Output) + self._BasicCheck(plist, oper) + self.assertEqual(plist [0], add and 'fred\r\n' or '\r\n') def test_extra_args(self): - """Check we can't add extra arguments""" - self.assertRaises(ValueError, Popen, 'true', close_fds=False) + """Check we can't add extra arguments""" + self.assertRaises(ValueError, Popen, 'true', close_fds=False) def test_basic_input(self): - """Check that incremental input works - - We set up a subprocess which will prompt for name. 
When we see this prompt - we send the name as input to the process. It should then print the name - properly to stdout. - """ - oper = TestSubprocess.MyOperation('Flash') - prompt = 'What is your name?: ' - cmd = 'echo -n "%s"; read name; echo Hello $name' % prompt - plist = Popen([cmd], stdin=oper.stdin_read_pipe, - shell=True).CommunicateFilter(oper.Output) - self._BasicCheck(plist, oper) - self.assertEqual(len(plist [1]), 0) - self.assertEqual(plist [0], prompt + 'Hello Flash\r\r\n') + """Check that incremental input works + + We set up a subprocess which will prompt for name. When we see this prompt + we send the name as input to the process. It should then print the name + properly to stdout. + """ + oper = TestSubprocess.MyOperation('Flash') + prompt = 'What is your name?: ' + cmd = 'echo -n "%s"; read name; echo Hello $name' % prompt + plist = Popen([cmd], stdin=oper.stdin_read_pipe, + shell=True).CommunicateFilter(oper.Output) + self._BasicCheck(plist, oper) + self.assertEqual(len(plist [1]), 0) + self.assertEqual(plist [0], prompt + 'Hello Flash\r\r\n') def test_isatty(self): - """Check that ptys appear as terminals to the subprocess""" - oper = TestSubprocess.MyOperation() - cmd = ('if [ -t %d ]; then echo "terminal %d" >&%d; ' - 'else echo "not %d" >&%d; fi;') - both_cmds = '' - for fd in (1, 2): - both_cmds += cmd % (fd, fd, fd, fd, fd) - plist = Popen(both_cmds, shell=True).CommunicateFilter(oper.Output) - self._BasicCheck(plist, oper) - self.assertEqual(plist [0], 'terminal 1\r\n') - self.assertEqual(plist [1], 'terminal 2\r\n') - - # Now try with PIPE and make sure it is not a terminal - oper = TestSubprocess.MyOperation() - plist = Popen(both_cmds, stdout=subprocess.PIPE, stderr=subprocess.PIPE, - shell=True).CommunicateFilter(oper.Output) - self._BasicCheck(plist, oper) - self.assertEqual(plist [0], 'not 1\n') - self.assertEqual(plist [1], 'not 2\n') + """Check that ptys appear as terminals to the subprocess""" + oper = TestSubprocess.MyOperation() 
+ cmd = ('if [ -t %d ]; then echo "terminal %d" >&%d; ' + 'else echo "not %d" >&%d; fi;') + both_cmds = '' + for fd in (1, 2): + both_cmds += cmd % (fd, fd, fd, fd, fd) + plist = Popen(both_cmds, shell=True).CommunicateFilter(oper.Output) + self._BasicCheck(plist, oper) + self.assertEqual(plist [0], 'terminal 1\r\n') + self.assertEqual(plist [1], 'terminal 2\r\n') + + # Now try with PIPE and make sure it is not a terminal + oper = TestSubprocess.MyOperation() + plist = Popen(both_cmds, stdout=subprocess.PIPE, stderr=subprocess.PIPE, + shell=True).CommunicateFilter(oper.Output) + self._BasicCheck(plist, oper) + self.assertEqual(plist [0], 'not 1\n') + self.assertEqual(plist [1], 'not 2\n') if __name__ == '__main__': unittest.main() diff --git a/tools/patman/get_maintainer.py b/tools/patman/get_maintainer.py index 00b4939..49ef294 100644 --- a/tools/patman/get_maintainer.py +++ b/tools/patman/get_maintainer.py @@ -11,16 +11,16 @@ def FindGetMaintainer(): """Look for the get_maintainer.pl script. Returns: - If the script is found we'll return a path to it; else None. + If the script is found we'll return a path to it; else None. """ try_list = [ - os.path.join(gitutil.GetTopLevel(), 'scripts'), - ] + os.path.join(gitutil.GetTopLevel(), 'scripts'), + ] # Look in the list for path in try_list: - fname = os.path.join(path, 'get_maintainer.pl') - if os.path.isfile(fname): - return fname + fname = os.path.join(path, 'get_maintainer.pl') + if os.path.isfile(fname): + return fname return None @@ -32,16 +32,16 @@ def GetMaintainer(fname, verbose=False): then we fail silently. Args: - fname: Path to the patch file to run get_maintainer.pl on. + fname: Path to the patch file to run get_maintainer.pl on. Returns: - A list of email addresses to CC to. + A list of email addresses to CC to. 
""" get_maintainer = FindGetMaintainer() if not get_maintainer: - if verbose: - print "WARNING: Couldn't find get_maintainer.pl" - return [] + if verbose: + print "WARNING: Couldn't find get_maintainer.pl" + return [] stdout = command.Output(get_maintainer, '--norolestats', fname) return stdout.splitlines() diff --git a/tools/patman/gitutil.py b/tools/patman/gitutil.py index 5dcbaa3..ae0b6ca 100644 --- a/tools/patman/gitutil.py +++ b/tools/patman/gitutil.py @@ -21,11 +21,11 @@ def CountCommitsToBranch(): since then. Return: - Number of patches that exist on top of the branch + Number of patches that exist on top of the branch """ pipe = [['git', 'log', '--no-color', '--oneline', '--no-decorate', - '@{upstream}..'], - ['wc', '-l']] + '@{upstream}..'], + ['wc', '-l']] stdout = command.RunPipe(pipe, capture=True, oneline=True).stdout patch_count = int(stdout) return patch_count @@ -34,61 +34,61 @@ def GetUpstream(git_dir, branch): """Returns the name of the upstream for a branch Args: - git_dir: Git directory containing repo - branch: Name of branch + git_dir: Git directory containing repo + branch: Name of branch Returns: - Name of upstream branch (e.g. 'upstream/master') or None if none + Name of upstream branch (e.g. 
'upstream/master') or None if none """ try: - remote = command.OutputOneLine('git', '--git-dir', git_dir, 'config', - 'branch.%s.remote' % branch) - merge = command.OutputOneLine('git', '--git-dir', git_dir, 'config', - 'branch.%s.merge' % branch) + remote = command.OutputOneLine('git', '--git-dir', git_dir, 'config', + 'branch.%s.remote' % branch) + merge = command.OutputOneLine('git', '--git-dir', git_dir, 'config', + 'branch.%s.merge' % branch) except: - return None + return None if remote == '.': - return merge + return merge elif remote and merge: - leaf = merge.split('/')[-1] - return '%s/%s' % (remote, leaf) + leaf = merge.split('/')[-1] + return '%s/%s' % (remote, leaf) else: - raise ValueError, ("Cannot determine upstream branch for branch " - "'%s' remote='%s', merge='%s'" % (branch, remote, merge)) + raise ValueError, ("Cannot determine upstream branch for branch " + "'%s' remote='%s', merge='%s'" % (branch, remote, merge)) def GetRangeInBranch(git_dir, branch, include_upstream=False): """Returns an expression for the commits in the given branch. Args: - git_dir: Directory containing git repo - branch: Name of branch + git_dir: Directory containing git repo + branch: Name of branch Return: - Expression in the form 'upstream..branch' which can be used to - access the commits. If the branch does not exist, returns None. + Expression in the form 'upstream..branch' which can be used to + access the commits. If the branch does not exist, returns None. """ upstream = GetUpstream(git_dir, branch) if not upstream: - return None + return None return '%s%s..%s' % (upstream, '~' if include_upstream else '', branch) def CountCommitsInBranch(git_dir, branch, include_upstream=False): """Returns the number of commits in the given branch. 
Args: - git_dir: Directory containing git repo - branch: Name of branch + git_dir: Directory containing git repo + branch: Name of branch Return: - Number of patches that exist on top of the branch, or None if the - branch does not exist. + Number of patches that exist on top of the branch, or None if the + branch does not exist. """ range_expr = GetRangeInBranch(git_dir, branch, include_upstream) if not range_expr: - return None + return None pipe = [['git', '--git-dir', git_dir, 'log', '--oneline', '--no-decorate', - range_expr], - ['wc', '-l']] + range_expr], + ['wc', '-l']] result = command.RunPipe(pipe, capture=True, oneline=True) patch_count = int(result.stdout) return patch_count @@ -97,12 +97,12 @@ def CountCommits(commit_range): """Returns the number of commits in the given range. Args: - commit_range: Range of commits to count (e.g. 'HEAD..base') + commit_range: Range of commits to count (e.g. 'HEAD..base') Return: - Number of patches that exist on top of the branch + Number of patches that exist on top of the branch """ pipe = [['git', 'log', '--oneline', '--no-decorate', commit_range], - ['wc', '-l']] + ['wc', '-l']] stdout = command.RunPipe(pipe, capture=True, oneline=True).stdout patch_count = int(stdout) return patch_count @@ -111,47 +111,47 @@ def Checkout(commit_hash, git_dir=None, work_tree=None, force=False): """Checkout the selected commit for this build Args: - commit_hash: Commit hash to check out + commit_hash: Commit hash to check out """ pipe = ['git'] if git_dir: - pipe.extend(['--git-dir', git_dir]) + pipe.extend(['--git-dir', git_dir]) if work_tree: - pipe.extend(['--work-tree', work_tree]) + pipe.extend(['--work-tree', work_tree]) pipe.append('checkout') if force: - pipe.append('-f') + pipe.append('-f') pipe.append(commit_hash) result = command.RunPipe([pipe], capture=True, raise_on_error=False) if result.return_code != 0: - raise OSError, 'git checkout (%s): %s' % (pipe, result.stderr) + raise OSError, 'git checkout (%s): %s' % (pipe, 
result.stderr) def Clone(git_dir, output_dir): """Checkout the selected commit for this build Args: - commit_hash: Commit hash to check out + commit_hash: Commit hash to check out """ pipe = ['git', 'clone', git_dir, '.'] result = command.RunPipe([pipe], capture=True, cwd=output_dir) if result.return_code != 0: - raise OSError, 'git clone: %s' % result.stderr + raise OSError, 'git clone: %s' % result.stderr def Fetch(git_dir=None, work_tree=None): """Fetch from the origin repo Args: - commit_hash: Commit hash to check out + commit_hash: Commit hash to check out """ pipe = ['git'] if git_dir: - pipe.extend(['--git-dir', git_dir]) + pipe.extend(['--git-dir', git_dir]) if work_tree: - pipe.extend(['--work-tree', work_tree]) + pipe.extend(['--work-tree', work_tree]) pipe.append('fetch') result = command.RunPipe([pipe], capture=True) if result.return_code != 0: - raise OSError, 'git fetch: %s' % result.stderr + raise OSError, 'git fetch: %s' % result.stderr def CreatePatches(start, count, series): """Create a series of patches from the top of the current branch. @@ -160,20 +160,20 @@ def CreatePatches(start, count, series): git format-patch. Args: - start: Commit to start from: 0=HEAD, 1=next one, etc. - count: number of commits to include + start: Commit to start from: 0=HEAD, 1=next one, etc. 
+ count: number of commits to include Return: - Filename of cover letter - List of filenames of patch files + Filename of cover letter + List of filenames of patch files """ if series.get('version'): - version = '%s ' % series['version'] + version = '%s ' % series['version'] cmd = ['git', 'format-patch', '-M', '--signoff'] if series.get('cover'): - cmd.append('--cover-letter') + cmd.append('--cover-letter') prefix = series.GetPatchPrefix() if prefix: - cmd += ['--subject-prefix=%s' % prefix] + cmd += ['--subject-prefix=%s' % prefix] cmd += ['HEAD~%d..HEAD~%d' % (start + count, start)] stdout = command.RunList(cmd) @@ -191,31 +191,31 @@ def ApplyPatch(verbose, fname): TODO: Convert these to use command, with stderr option Args: - fname: filename of patch file to apply + fname: filename of patch file to apply """ cmd = ['git', 'am', fname] pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE, - stderr=subprocess.PIPE) + stderr=subprocess.PIPE) stdout, stderr = pipe.communicate() re_error = re.compile('^error: patch failed: (.+):(\d+)') for line in stderr.splitlines(): - if verbose: - print line - match = re_error.match(line) - if match: - print GetWarningMsg('warning', match.group(1), int(match.group(2)), - 'Patch failed') + if verbose: + print line + match = re_error.match(line) + if match: + print GetWarningMsg('warning', match.group(1), int(match.group(2)), + 'Patch failed') return pipe.returncode == 0, stdout def ApplyPatches(verbose, args, start_point): """Apply the patches with git am to make sure all is well Args: - verbose: Print out 'git am' output verbatim - args: List of patch files to apply - start_point: Number of commits back from HEAD to start applying. - Normally this is len(args), but it can be larger if a start - offset was given. + verbose: Print out 'git am' output verbatim + args: List of patch files to apply + start_point: Number of commits back from HEAD to start applying. 
+ Normally this is len(args), but it can be larger if a start + offset was given. """ error_count = 0 col = terminal.Color() @@ -225,47 +225,47 @@ def ApplyPatches(verbose, args, start_point): pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE) stdout, stderr = pipe.communicate() if pipe.returncode: - str = 'Could not find current commit name' - print col.Color(col.RED, str) - print stdout - return False + str = 'Could not find current commit name' + print col.Color(col.RED, str) + print stdout + return False old_head = stdout.splitlines()[0] # Checkout the required start point cmd = ['git', 'checkout', 'HEAD~%d' % start_point] pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE, - stderr=subprocess.PIPE) + stderr=subprocess.PIPE) stdout, stderr = pipe.communicate() if pipe.returncode: - str = 'Could not move to commit before patch series' - print col.Color(col.RED, str) - print stdout, stderr - return False + str = 'Could not move to commit before patch series' + print col.Color(col.RED, str) + print stdout, stderr + return False # Apply all the patches for fname in args: - ok, stdout = ApplyPatch(verbose, fname) - if not ok: - print col.Color(col.RED, 'git am returned errors for %s: will ' - 'skip this patch' % fname) - if verbose: - print stdout - error_count += 1 - cmd = ['git', 'am', '--skip'] - pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE) - stdout, stderr = pipe.communicate() - if pipe.returncode != 0: - print col.Color(col.RED, 'Unable to skip patch! Aborting...') - print stdout - break + ok, stdout = ApplyPatch(verbose, fname) + if not ok: + print col.Color(col.RED, 'git am returned errors for %s: will ' + 'skip this patch' % fname) + if verbose: + print stdout + error_count += 1 + cmd = ['git', 'am', '--skip'] + pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE) + stdout, stderr = pipe.communicate() + if pipe.returncode != 0: + print col.Color(col.RED, 'Unable to skip patch! 
Aborting...') + print stdout + break # Return to our previous position cmd = ['git', 'checkout', old_head] pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = pipe.communicate() if pipe.returncode: - print col.Color(col.RED, 'Could not move back to head commit') - print stdout, stderr + print col.Color(col.RED, 'Could not move back to head commit') + print stdout, stderr return error_count == 0 def BuildEmailList(in_list, tag=None, alias=None, raise_on_error=True): @@ -279,14 +279,14 @@ def BuildEmailList(in_list, tag=None, alias=None, raise_on_error=True): command line parameter) then the email address is quoted. Args: - in_list: List of aliases/email addresses - tag: Text to put before each address - alias: Alias dictionary - raise_on_error: True to raise an error when an alias fails to match, - False to just print a message. + in_list: List of aliases/email addresses + tag: Text to put before each address + alias: Alias dictionary + raise_on_error: True to raise an error when an alias fails to match, + False to just print a message. 
Returns: - List of email addresses + List of email addresses >>> alias = {} >>> alias['fred'] = ['f.bloggs@napier.co.nz'] @@ -305,33 +305,33 @@ def BuildEmailList(in_list, tag=None, alias=None, raise_on_error=True): quote = '"' if tag and tag[0] == '-' else '' raw = [] for item in in_list: - raw += LookupEmail(item, alias, raise_on_error=raise_on_error) + raw += LookupEmail(item, alias, raise_on_error=raise_on_error) result = [] for item in raw: - if not item in result: - result.append(item) + if not item in result: + result.append(item) if tag: - return ['%s %s%s%s' % (tag, quote, email, quote) for email in result] + return ['%s %s%s%s' % (tag, quote, email, quote) for email in result] return result def EmailPatches(series, cover_fname, args, dry_run, raise_on_error, cc_fname, - self_only=False, alias=None, in_reply_to=None): + self_only=False, alias=None, in_reply_to=None): """Email a patch series. Args: - series: Series object containing destination info - cover_fname: filename of cover letter - args: list of filenames of patch files - dry_run: Just return the command that would be run - raise_on_error: True to raise an error when an alias fails to match, - False to just print a message. - cc_fname: Filename of Cc file for per-commit Cc - self_only: True to just email to yourself as a test - in_reply_to: If set we'll pass this to git as --in-reply-to. - Should be a message ID that this is in reply to. + series: Series object containing destination info + cover_fname: filename of cover letter + args: list of filenames of patch files + dry_run: Just return the command that would be run + raise_on_error: True to raise an error when an alias fails to match, + False to just print a message. + cc_fname: Filename of Cc file for per-commit Cc + self_only: True to just email to yourself as a test + in_reply_to: If set we'll pass this to git as --in-reply-to. + Should be a message ID that this is in reply to. 
Returns: - Git command that was/would be run + Git command that was/would be run # For the duration of this doctest pretend that we ran patman with ./patman >>> _old_argv0 = sys.argv[0] @@ -348,20 +348,20 @@ def EmailPatches(series, cover_fname, args, dry_run, raise_on_error, cc_fname, >>> series.to = ['fred'] >>> series.cc = ['mary'] >>> EmailPatches(series, 'cover', ['p1', 'p2'], True, True, 'cc-fname', \ - False, alias) + False, alias) 'git send-email --annotate --to "f.bloggs@napier.co.nz" --cc \ "m.poppins@cloud.net" --cc-cmd "./patman --cc-cmd cc-fname" cover p1 p2' >>> EmailPatches(series, None, ['p1'], True, True, 'cc-fname', False, \ - alias) + alias) 'git send-email --annotate --to "f.bloggs@napier.co.nz" --cc \ "m.poppins@cloud.net" --cc-cmd "./patman --cc-cmd cc-fname" p1' >>> series.cc = ['all'] >>> EmailPatches(series, 'cover', ['p1', 'p2'], True, True, 'cc-fname', \ - True, alias) + True, alias) 'git send-email --annotate --to "this-is-me@me.com" --cc-cmd "./patman \ --cc-cmd cc-fname" cover p1 p2' >>> EmailPatches(series, 'cover', ['p1', 'p2'], True, True, 'cc-fname', \ - False, alias) + False, alias) 'git send-email --annotate --to "f.bloggs@napier.co.nz" --cc \ "f.bloggs@napier.co.nz" --cc "j.bloggs@napier.co.nz" --cc \ "m.poppins@cloud.net" --cc-cmd "./patman --cc-cmd cc-fname" cover p1 p2' @@ -371,26 +371,26 @@ def EmailPatches(series, cover_fname, args, dry_run, raise_on_error, cc_fname, """ to = BuildEmailList(series.get('to'), '--to', alias, raise_on_error) if not to: - print ("No recipient, please add something like this to a commit\n" - "Series-to: Fred Bloggs ") - return + print ("No recipient, please add something like this to a commit\n" + "Series-to: Fred Bloggs ") + return cc = BuildEmailList(series.get('cc'), '--cc', alias, raise_on_error) if self_only: - to = BuildEmailList([os.getenv('USER')], '--to', alias, raise_on_error) - cc = [] + to = BuildEmailList([os.getenv('USER')], '--to', alias, raise_on_error) + cc = [] cmd = ['git', 
'send-email', '--annotate'] if in_reply_to: - cmd.append('--in-reply-to="%s"' % in_reply_to) + cmd.append('--in-reply-to="%s"' % in_reply_to) cmd += to cmd += cc cmd += ['--cc-cmd', '"%s --cc-cmd %s"' % (sys.argv[0], cc_fname)] if cover_fname: - cmd.append(cover_fname) + cmd.append(cover_fname) cmd += args str = ' '.join(cmd) if not dry_run: - os.system(str) + os.system(str) return str @@ -400,18 +400,18 @@ def LookupEmail(lookup_name, alias=None, raise_on_error=True, level=0): TODO: Why not just use git's own alias feature? Args: - lookup_name: Alias or email address to look up - alias: Dictionary containing aliases (None to use settings default) - raise_on_error: True to raise an error when an alias fails to match, - False to just print a message. + lookup_name: Alias or email address to look up + alias: Dictionary containing aliases (None to use settings default) + raise_on_error: True to raise an error when an alias fails to match, + False to just print a message. Returns: - tuple: - list containing a list of email addresses + tuple: + list containing a list of email addresses Raises: - OSError if a recursive alias reference was found - ValueError if an alias was not found + OSError if a recursive alias reference was found + ValueError if an alias was not found >>> alias = {} >>> alias['fred'] = ['f.bloggs@napier.co.nz'] @@ -448,36 +448,36 @@ def LookupEmail(lookup_name, alias=None, raise_on_error=True, level=0): ['j.bloggs@napier.co.nz', 'm.poppins@cloud.net'] """ if not alias: - alias = settings.alias + alias = settings.alias lookup_name = lookup_name.strip() if '@' in lookup_name: # Perhaps a real email address - return [lookup_name] + return [lookup_name] lookup_name = lookup_name.lower() col = terminal.Color() out_list = [] if level > 10: - msg = "Recursive email alias at '%s'" % lookup_name - if raise_on_error: - raise OSError, msg - else: - print col.Color(col.RED, msg) - return out_list + msg = "Recursive email alias at '%s'" % lookup_name + if 
raise_on_error: + raise OSError, msg + else: + print col.Color(col.RED, msg) + return out_list if lookup_name: - if not lookup_name in alias: - msg = "Alias '%s' not found" % lookup_name - if raise_on_error: - raise ValueError, msg - else: - print col.Color(col.RED, msg) - return out_list - for item in alias[lookup_name]: - todo = LookupEmail(item, alias, raise_on_error, level + 1) - for new_item in todo: - if not new_item in out_list: - out_list.append(new_item) + if not lookup_name in alias: + msg = "Alias '%s' not found" % lookup_name + if raise_on_error: + raise ValueError, msg + else: + print col.Color(col.RED, msg) + return out_list + for item in alias[lookup_name]: + todo = LookupEmail(item, alias, raise_on_error, level + 1) + for new_item in todo: + if not new_item in out_list: + out_list.append(new_item) #print "No match for alias '%s'" % lookup_name return out_list @@ -486,12 +486,12 @@ def GetTopLevel(): """Return name of top-level directory for this git repo. Returns: - Full path to git top-level directory + Full path to git top-level directory This test makes sure that we are running tests in the right subdir >>> os.path.realpath(os.path.dirname(__file__)) == \ - os.path.join(GetTopLevel(), 'tools', 'patman') + os.path.join(GetTopLevel(), 'tools', 'patman') True """ return command.OutputOneLine('git', 'rev-parse', '--show-toplevel') @@ -500,19 +500,19 @@ def GetAliasFile(): """Gets the name of the git alias file. Returns: - Filename of git alias file, or None if none + Filename of git alias file, or None if none """ fname = command.OutputOneLine('git', 'config', 'sendemail.aliasesfile', - raise_on_error=False) + raise_on_error=False) if fname: - fname = os.path.join(GetTopLevel(), fname.strip()) + fname = os.path.join(GetTopLevel(), fname.strip()) return fname def GetDefaultUserName(): """Gets the user.name from .gitconfig file. 
Returns: - User name found in .gitconfig file, or None if none + User name found in .gitconfig file, or None if none """ uname = command.OutputOneLine('git', 'config', '--global', 'user.name') return uname @@ -521,7 +521,7 @@ def GetDefaultUserEmail(): """Gets the user.email from the global .gitconfig file. Returns: - User's email found in .gitconfig file, or None if none + User's email found in .gitconfig file, or None if none """ uemail = command.OutputOneLine('git', 'config', '--global', 'user.email') return uemail @@ -531,13 +531,13 @@ def Setup(): # Check for a git alias file also alias_fname = GetAliasFile() if alias_fname: - settings.ReadGitAliases(alias_fname) + settings.ReadGitAliases(alias_fname) def GetHead(): """Get the hash of the current HEAD Returns: - Hash of HEAD + Hash of HEAD """ return command.OutputOneLine('git', 'show', '-s', '--pretty=format:%H') diff --git a/tools/patman/patchstream.py b/tools/patman/patchstream.py index c204523..a824a73 100644 --- a/tools/patman/patchstream.py +++ b/tools/patman/patchstream.py @@ -56,304 +56,304 @@ class PatchStream: phases of processing. """ def __init__(self, series, name=None, is_log=False): - self.skip_blank = False # True to skip a single blank line - self.found_test = False # Found a TEST= line - self.lines_after_test = 0 # MNumber of lines found after TEST= - self.warn = [] # List of warnings we have collected - self.linenum = 1 # Output line number we are up to - self.in_section = None # Name of start...END section we are in - self.notes = [] # Series notes - self.section = [] # The current section...END section - self.series = series # Info about the patch series - self.is_log = is_log # True if indent like git log - self.in_change = 0 # Non-zero if we are in a change list - self.blank_count = 0 # Number of blank lines stored up - self.state = STATE_MSG_HEADER # What state are we in? - self.tags = [] # Tags collected, like Tested-by... 
- self.signoff = [] # Contents of signoff line - self.commit = None # Current commit + self.skip_blank = False # True to skip a single blank line + self.found_test = False # Found a TEST= line + self.lines_after_test = 0 # MNumber of lines found after TEST= + self.warn = [] # List of warnings we have collected + self.linenum = 1 # Output line number we are up to + self.in_section = None # Name of start...END section we are in + self.notes = [] # Series notes + self.section = [] # The current section...END section + self.series = series # Info about the patch series + self.is_log = is_log # True if indent like git log + self.in_change = 0 # Non-zero if we are in a change list + self.blank_count = 0 # Number of blank lines stored up + self.state = STATE_MSG_HEADER # What state are we in? + self.tags = [] # Tags collected, like Tested-by... + self.signoff = [] # Contents of signoff line + self.commit = None # Current commit def AddToSeries(self, line, name, value): - """Add a new Series-xxx tag. - - When a Series-xxx tag is detected, we come here to record it, if we - are scanning a 'git log'. - - Args: - line: Source line containing tag (useful for debug/error messages) - name: Tag name (part after 'Series-') - value: Tag value (part after 'Series-xxx: ') - """ - if name == 'notes': - self.in_section = name - self.skip_blank = False - if self.is_log: - self.series.AddTag(self.commit, line, name, value) + """Add a new Series-xxx tag. + + When a Series-xxx tag is detected, we come here to record it, if we + are scanning a 'git log'. 
+ + Args: + line: Source line containing tag (useful for debug/error messages) + name: Tag name (part after 'Series-') + value: Tag value (part after 'Series-xxx: ') + """ + if name == 'notes': + self.in_section = name + self.skip_blank = False + if self.is_log: + self.series.AddTag(self.commit, line, name, value) def CloseCommit(self): - """Save the current commit into our commit list, and reset our state""" - if self.commit and self.is_log: - self.series.AddCommit(self.commit) - self.commit = None + """Save the current commit into our commit list, and reset our state""" + if self.commit and self.is_log: + self.series.AddCommit(self.commit) + self.commit = None def FormatTags(self, tags): - out_list = [] - for tag in sorted(tags): - if tag.startswith('Cc:'): - tag_list = tag[4:].split(',') - out_list += gitutil.BuildEmailList(tag_list, 'Cc:') - else: - out_list.append(tag) - return out_list + out_list = [] + for tag in sorted(tags): + if tag.startswith('Cc:'): + tag_list = tag[4:].split(',') + out_list += gitutil.BuildEmailList(tag_list, 'Cc:') + else: + out_list.append(tag) + return out_list def ProcessLine(self, line): - """Process a single line of a patch file or commit log - - This process a line and returns a list of lines to output. The list - may be empty or may contain multiple output lines. - - This is where all the complicated logic is located. The class's - state is used to move between different states and detect things - properly. - - We can be in one of two modes: - self.is_log == True: This is 'git log' mode, where most output is - indented by 4 characters and we are scanning for tags - - self.is_log == False: This is 'patch' mode, where we already have - all the tags, and are processing patches to remove junk we - don't want, and add things we think are required. - - Args: - line: text line to process - - Returns: - list of output lines, or [] if nothing should be output - """ - # Initially we have no output. 
Prepare the input line string - out = [] - line = line.rstrip('\n') - if self.is_log: - if line[:4] == ' ': - line = line[4:] - - # Handle state transition and skipping blank lines - series_match = re_series.match(line) - commit_match = re_commit.match(line) if self.is_log else None - cover_cc_match = re_cover_cc.match(line) - tag_match = None - if self.state == STATE_PATCH_HEADER: - tag_match = re_tag.match(line) - is_blank = not line.strip() - if is_blank: - if (self.state == STATE_MSG_HEADER - or self.state == STATE_PATCH_SUBJECT): - self.state += 1 - - # We don't have a subject in the text stream of patch files - # It has its own line with a Subject: tag - if not self.is_log and self.state == STATE_PATCH_SUBJECT: - self.state += 1 - elif commit_match: - self.state = STATE_MSG_HEADER - - # If we are in a section, keep collecting lines until we see END - if self.in_section: - if line == 'END': - if self.in_section == 'cover': - self.series.cover = self.section - elif self.in_section == 'notes': - if self.is_log: - self.series.notes += self.section - else: - self.warn.append("Unknown section '%s'" % self.in_section) - self.in_section = None - self.skip_blank = True - self.section = [] - else: - self.section.append(line) - - # Detect the commit subject - elif not is_blank and self.state == STATE_PATCH_SUBJECT: - self.commit.subject = line - - # Detect the tags we want to remove, and skip blank lines - elif re_remove.match(line): - self.skip_blank = True - - # TEST= should be the last thing in the commit, so remove - # everything after it - if line.startswith('TEST='): - self.found_test = True - elif self.skip_blank and is_blank: - self.skip_blank = False - - # Detect the start of a cover letter section - elif re_cover.match(line): - self.in_section = 'cover' - self.skip_blank = False - - elif cover_cc_match: - value = cover_cc_match.group(1) - self.AddToSeries(line, 'cover-cc', value) - - # If we are in a change list, key collected lines until a blank one - elif 
self.in_change: - if is_blank: - # Blank line ends this change list - self.in_change = 0 - elif line == '---' or re_signoff.match(line): - self.in_change = 0 - out = self.ProcessLine(line) - else: - if self.is_log: - self.series.AddChange(self.in_change, self.commit, line) - self.skip_blank = False - - # Detect Series-xxx tags - elif series_match: - name = series_match.group(1) - value = series_match.group(2) - if name == 'changes': - # value is the version number: e.g. 1, or 2 - try: - value = int(value) - except ValueError as str: - raise ValueError("%s: Cannot decode version info '%s'" % - (self.commit.hash, line)) - self.in_change = int(value) - else: - self.AddToSeries(line, name, value) - self.skip_blank = True - - # Detect the start of a new commit - elif commit_match: - self.CloseCommit() - # TODO: We should store the whole hash, and just display a subset - self.commit = commit.Commit(commit_match.group(1)[:8]) - - # Detect tags in the commit message - elif tag_match: - # Remove Tested-by self, since few will take much notice - if (tag_match.group(1) == 'Tested-by' and - tag_match.group(2).find(os.getenv('USER') + '@') != -1): - self.warn.append("Ignoring %s" % line) - elif tag_match.group(1) == 'Cc': - self.commit.AddCc(tag_match.group(2).split(',')) - else: - self.tags.append(line); - - # Well that means this is an ordinary line - else: - pos = 1 - # Look for ugly ASCII characters - for ch in line: - # TODO: Would be nicer to report source filename and line - if ord(ch) > 0x80: - self.warn.append("Line %d/%d ('%s') has funny ascii char" % - (self.linenum, pos, line)) - pos += 1 - - # Look for space before tab - m = re_space_before_tab.match(line) - if m: - self.warn.append('Line %d/%d has space before tab' % - (self.linenum, m.start())) - - # OK, we have a valid non-blank line - out = [line] - self.linenum += 1 - self.skip_blank = False - if self.state == STATE_DIFFS: - pass - - # If this is the start of the diffs section, emit our tags and - # change log 
- elif line == '---': - self.state = STATE_DIFFS - - # Output the tags (signeoff first), then change list - out = [] - log = self.series.MakeChangeLog(self.commit) - out += self.FormatTags(self.tags) - out += [line] + log - elif self.found_test: - if not re_allowed_after_test.match(line): - self.lines_after_test += 1 - - return out + """Process a single line of a patch file or commit log + + This process a line and returns a list of lines to output. The list + may be empty or may contain multiple output lines. + + This is where all the complicated logic is located. The class's + state is used to move between different states and detect things + properly. + + We can be in one of two modes: + self.is_log == True: This is 'git log' mode, where most output is + indented by 4 characters and we are scanning for tags + + self.is_log == False: This is 'patch' mode, where we already have + all the tags, and are processing patches to remove junk we + don't want, and add things we think are required. + + Args: + line: text line to process + + Returns: + list of output lines, or [] if nothing should be output + """ + # Initially we have no output. 
Prepare the input line string + out = [] + line = line.rstrip('\n') + if self.is_log: + if line[:4] == ' ': + line = line[4:] + + # Handle state transition and skipping blank lines + series_match = re_series.match(line) + commit_match = re_commit.match(line) if self.is_log else None + cover_cc_match = re_cover_cc.match(line) + tag_match = None + if self.state == STATE_PATCH_HEADER: + tag_match = re_tag.match(line) + is_blank = not line.strip() + if is_blank: + if (self.state == STATE_MSG_HEADER + or self.state == STATE_PATCH_SUBJECT): + self.state += 1 + + # We don't have a subject in the text stream of patch files + # It has its own line with a Subject: tag + if not self.is_log and self.state == STATE_PATCH_SUBJECT: + self.state += 1 + elif commit_match: + self.state = STATE_MSG_HEADER + + # If we are in a section, keep collecting lines until we see END + if self.in_section: + if line == 'END': + if self.in_section == 'cover': + self.series.cover = self.section + elif self.in_section == 'notes': + if self.is_log: + self.series.notes += self.section + else: + self.warn.append("Unknown section '%s'" % self.in_section) + self.in_section = None + self.skip_blank = True + self.section = [] + else: + self.section.append(line) + + # Detect the commit subject + elif not is_blank and self.state == STATE_PATCH_SUBJECT: + self.commit.subject = line + + # Detect the tags we want to remove, and skip blank lines + elif re_remove.match(line): + self.skip_blank = True + + # TEST= should be the last thing in the commit, so remove + # everything after it + if line.startswith('TEST='): + self.found_test = True + elif self.skip_blank and is_blank: + self.skip_blank = False + + # Detect the start of a cover letter section + elif re_cover.match(line): + self.in_section = 'cover' + self.skip_blank = False + + elif cover_cc_match: + value = cover_cc_match.group(1) + self.AddToSeries(line, 'cover-cc', value) + + # If we are in a change list, key collected lines until a blank one + elif 
self.in_change: + if is_blank: + # Blank line ends this change list + self.in_change = 0 + elif line == '---' or re_signoff.match(line): + self.in_change = 0 + out = self.ProcessLine(line) + else: + if self.is_log: + self.series.AddChange(self.in_change, self.commit, line) + self.skip_blank = False + + # Detect Series-xxx tags + elif series_match: + name = series_match.group(1) + value = series_match.group(2) + if name == 'changes': + # value is the version number: e.g. 1, or 2 + try: + value = int(value) + except ValueError as str: + raise ValueError("%s: Cannot decode version info '%s'" % + (self.commit.hash, line)) + self.in_change = int(value) + else: + self.AddToSeries(line, name, value) + self.skip_blank = True + + # Detect the start of a new commit + elif commit_match: + self.CloseCommit() + # TODO: We should store the whole hash, and just display a subset + self.commit = commit.Commit(commit_match.group(1)[:8]) + + # Detect tags in the commit message + elif tag_match: + # Remove Tested-by self, since few will take much notice + if (tag_match.group(1) == 'Tested-by' and + tag_match.group(2).find(os.getenv('USER') + '@') != -1): + self.warn.append("Ignoring %s" % line) + elif tag_match.group(1) == 'Cc': + self.commit.AddCc(tag_match.group(2).split(',')) + else: + self.tags.append(line); + + # Well that means this is an ordinary line + else: + pos = 1 + # Look for ugly ASCII characters + for ch in line: + # TODO: Would be nicer to report source filename and line + if ord(ch) > 0x80: + self.warn.append("Line %d/%d ('%s') has funny ascii char" % + (self.linenum, pos, line)) + pos += 1 + + # Look for space before tab + m = re_space_before_tab.match(line) + if m: + self.warn.append('Line %d/%d has space before tab' % + (self.linenum, m.start())) + + # OK, we have a valid non-blank line + out = [line] + self.linenum += 1 + self.skip_blank = False + if self.state == STATE_DIFFS: + pass + + # If this is the start of the diffs section, emit our tags and + # change log 
+ elif line == '---': + self.state = STATE_DIFFS + + # Output the tags (signeoff first), then change list + out = [] + log = self.series.MakeChangeLog(self.commit) + out += self.FormatTags(self.tags) + out += [line] + log + elif self.found_test: + if not re_allowed_after_test.match(line): + self.lines_after_test += 1 + + return out def Finalize(self): - """Close out processing of this patch stream""" - self.CloseCommit() - if self.lines_after_test: - self.warn.append('Found %d lines after TEST=' % - self.lines_after_test) + """Close out processing of this patch stream""" + self.CloseCommit() + if self.lines_after_test: + self.warn.append('Found %d lines after TEST=' % + self.lines_after_test) def ProcessStream(self, infd, outfd): - """Copy a stream from infd to outfd, filtering out unwanting things. - - This is used to process patch files one at a time. - - Args: - infd: Input stream file object - outfd: Output stream file object - """ - # Extract the filename from each diff, for nice warnings - fname = None - last_fname = None - re_fname = re.compile('diff --git a/(.*) b/.*') - while True: - line = infd.readline() - if not line: - break - out = self.ProcessLine(line) - - # Try to detect blank lines at EOF - for line in out: - match = re_fname.match(line) - if match: - last_fname = fname - fname = match.group(1) - if line == '+': - self.blank_count += 1 - else: - if self.blank_count and (line == '-- ' or match): - self.warn.append("Found possible blank line(s) at " - "end of file '%s'" % last_fname) - outfd.write('+\n' * self.blank_count) - outfd.write(line + '\n') - self.blank_count = 0 - self.Finalize() + """Copy a stream from infd to outfd, filtering out unwanting things. + + This is used to process patch files one at a time. 
+ + Args: + infd: Input stream file object + outfd: Output stream file object + """ + # Extract the filename from each diff, for nice warnings + fname = None + last_fname = None + re_fname = re.compile('diff --git a/(.*) b/.*') + while True: + line = infd.readline() + if not line: + break + out = self.ProcessLine(line) + + # Try to detect blank lines at EOF + for line in out: + match = re_fname.match(line) + if match: + last_fname = fname + fname = match.group(1) + if line == '+': + self.blank_count += 1 + else: + if self.blank_count and (line == '-- ' or match): + self.warn.append("Found possible blank line(s) at " + "end of file '%s'" % last_fname) + outfd.write('+\n' * self.blank_count) + outfd.write(line + '\n') + self.blank_count = 0 + self.Finalize() def GetMetaDataForList(commit_range, git_dir=None, count=None, - series = Series()): + series = Series()): """Reads out patch series metadata from the commits This does a 'git log' on the relevant commits and pulls out the tags we are interested in. Args: - commit_range: Range of commits to count (e.g. 'HEAD..base') - git_dir: Path to git repositiory (None to use default) - count: Number of commits to list, or None for no limit - series: Series object to add information into. By default a new series - is started. + commit_range: Range of commits to count (e.g. 'HEAD..base') + git_dir: Path to git repositiory (None to use default) + count: Number of commits to list, or None for no limit + series: Series object to add information into. By default a new series + is started. Returns: - A Series object containing information about the commits. + A Series object containing information about the commits. 
""" params = ['git', 'log', '--no-color', '--reverse', '--no-decorate', - commit_range] + commit_range] if count is not None: - params[2:2] = ['-n%d' % count] + params[2:2] = ['-n%d' % count] if git_dir: - params[1:1] = ['--git-dir', git_dir] + params[1:1] = ['--git-dir', git_dir] pipe = [params] stdout = command.RunPipe(pipe, capture=True).stdout ps = PatchStream(series, is_log=True) for line in stdout.splitlines(): - ps.ProcessLine(line) + ps.ProcessLine(line) ps.Finalize() return series @@ -364,8 +364,8 @@ def GetMetaData(start, count): are interested in. Args: - start: Commit to start from: 0=HEAD, 1=next one, etc. - count: Number of commits to list + start: Commit to start from: 0=HEAD, 1=next one, etc. + count: Number of commits to list """ return GetMetaDataForList('HEAD~%d' % start, None, count) @@ -378,11 +378,11 @@ def FixPatch(backup_dir, fname, series, commit): A backup file is put into backup_dir (if not None). Args: - fname: Filename to patch file to process - series: Series information about this patch set - commit: Commit object for this patch file + fname: Filename to patch file to process + series: Series information about this patch set + commit: Commit object for this patch file Return: - A list of errors, or [] if all ok. + A list of errors, or [] if all ok. """ handle, tmpname = tempfile.mkstemp() outfd = os.fdopen(handle, 'w') @@ -395,7 +395,7 @@ def FixPatch(backup_dir, fname, series, commit): # Create a backup file if required if backup_dir: - shutil.copy(fname, os.path.join(backup_dir, os.path.basename(fname))) + shutil.copy(fname, os.path.join(backup_dir, os.path.basename(fname))) shutil.move(tmpname, fname) return ps.warn @@ -405,22 +405,22 @@ def FixPatches(series, fnames): The patch files are processed in place, and overwritten. 
     Args:
-        series: The series object
-        fnames: List of patch files to process
+	series: The series object
+	fnames: List of patch files to process
     """
     # Current workflow creates patches, so we shouldn't need a backup
     backup_dir = None #tempfile.mkdtemp('clean-patch')
     count = 0
     for fname in fnames:
-        commit = series.commits[count]
-        commit.patch = fname
-        result = FixPatch(backup_dir, fname, series, commit)
-        if result:
-            print '%d warnings for %s:' % (len(result), fname)
-            for warn in result:
-                print '\t', warn
-            print
-        count += 1
+	commit = series.commits[count]
+	commit.patch = fname
+	result = FixPatch(backup_dir, fname, series, commit)
+	if result:
+	    print '%d warnings for %s:' % (len(result), fname)
+	    for warn in result:
+		print '\t', warn
+	    print
+	count += 1
     print 'Cleaned %d patches' % count
     return series

@@ -428,9 +428,9 @@ def InsertCoverLetter(fname, series, count):
     """Inserts a cover letter with the required info into patch 0

     Args:
-        fname: Input / output filename of the cover letter file
-        series: Series object
-        count: Number of patches in the series
+	fname: Input / output filename of the cover letter file
+	series: Series object
+	count: Number of patches in the series
     """
     fd = open(fname, 'r')
     lines = fd.readlines()
@@ -440,19 +440,19 @@ def InsertCoverLetter(fname, series, count):
     text = series.cover
     prefix = series.GetPatchPrefix()
     for line in lines:
-        if line.startswith('Subject:'):
-            # TODO: if more than 10 patches this should save 00/xx, not 0/xx
-            line = 'Subject: [%s 0/%d] %s\n' % (prefix, count, text[0])
-
-        # Insert our cover letter
-        elif line.startswith('*** BLURB HERE ***'):
-            # First the blurb test
-            line = '\n'.join(text[1:]) + '\n'
-            if series.get('notes'):
-                line += '\n'.join(series.notes) + '\n'
-
-            # Now the change list
-            out = series.MakeChangeLog(None)
-            line += '\n' + '\n'.join(out)
-        fd.write(line)
+	if line.startswith('Subject:'):
+	    # TODO: if more than 10 patches this should save 00/xx, not 0/xx
+	    line = 'Subject: [%s 0/%d] %s\n' % (prefix, count, text[0])
+
+	# Insert our cover letter
+	elif line.startswith('*** BLURB HERE ***'):
+	    # First the blurb test
+	    line = '\n'.join(text[1:]) + '\n'
+	    if series.get('notes'):
+		line += '\n'.join(series.notes) + '\n'
+
+	    # Now the change list
+	    out = series.MakeChangeLog(None)
+	    line += '\n' + '\n'.join(out)
+	fd.write(line)
     fd.close()
diff --git a/tools/patman/patman.py b/tools/patman/patman.py
index c60aa5a..89b268a 100755
--- a/tools/patman/patman.py
+++ b/tools/patman/patman.py
@@ -26,8 +26,8 @@ import test

 parser = OptionParser()
 parser.add_option('-a', '--no-apply', action='store_false',
-                  dest='apply_patches', default=True,
-                  help="Don't test-apply patches with git am")
+	dest='apply_patches', default=True,
+	help="Don't test-apply patches with git am")
 parser.add_option('-H', '--full-help', action='store_true', dest='full_help',
     default=False, help='Display the README file')
 parser.add_option('-c', '--count', dest='count', type='int',
@@ -38,25 +38,25 @@ parser.add_option('-i', '--ignore-errors', action='store_true',
 parser.add_option('-n', '--dry-run', action='store_true', dest='dry_run',
     default=False, help="Do a dry run (create but don't email patches)")
 parser.add_option('-p', '--project', default=project.DetectProject(),
-                  help="Project name; affects default option values and "
-                  "aliases [default: %default]")
+	help="Project name; affects default option values and "
+	"aliases [default: %default]")
 parser.add_option('-r', '--in-reply-to', type='string', action='store',
-                  help="Message ID that this series is in reply to")
+	help="Message ID that this series is in reply to")
 parser.add_option('-s', '--start', dest='start', type='int',
     default=0, help='Commit to start creating patches from (0 = HEAD)')
 parser.add_option('-t', '--ignore-bad-tags', action='store_true',
-                  default=False, help='Ignore bad tags / aliases')
+	default=False, help='Ignore bad tags / aliases')
 parser.add_option('--test', action='store_true', dest='test',
-                  default=False, help='run tests')
+	default=False, help='run tests')
 parser.add_option('-v', '--verbose', action='store_true', dest='verbose',
     default=False, help='Verbose output of errors and warnings')
 parser.add_option('--cc-cmd', dest='cc_cmd', type='string', action='store',
     default=None, help='Output cc list for patch file (used by git)')
 parser.add_option('--no-check', action='store_false', dest='check_patch',
-                  default=True,
-                  help="Don't check for patch compliance")
+	default=True,
+	help="Don't check for patch compliance")
 parser.add_option('--no-tags', action='store_false', dest='process_tags',
-                  default=True, help="Don't process subject tags as aliaes")
+	default=True, help="Don't process subject tags as aliaes")

 parser.usage = """patman [options]

@@ -80,15 +80,15 @@ if options.test:
     suite.run(result)

     for module in ['gitutil', 'settings']:
-        suite = doctest.DocTestSuite(module)
-        suite.run(result)
+	suite = doctest.DocTestSuite(module)
+	suite.run(result)

     # TODO: Surely we can just 'print' result?
     print result
     for test, err in result.errors:
-        print err
+	print err
     for test, err in result.failures:
-        print err
+	print err

 # Called from git with a patch filename as argument
 # Printout a list of additional CC recipients for this patch
@@ -96,18 +96,18 @@ elif options.cc_cmd:
     fd = open(options.cc_cmd, 'r')
     re_line = re.compile('(\S*) (.*)')
     for line in fd.readlines():
-        match = re_line.match(line)
-        if match and match.group(1) == args[0]:
-            for cc in match.group(2).split(', '):
-                cc = cc.strip()
-                if cc:
-                    print cc
+	match = re_line.match(line)
+	if match and match.group(1) == args[0]:
+	    for cc in match.group(2).split(', '):
+		cc = cc.strip()
+		if cc:
+		    print cc
     fd.close()

 elif options.full_help:
     pager = os.getenv('PAGER')
     if not pager:
-        pager = 'more'
+	pager = 'more'
     fname = os.path.join(os.path.dirname(sys.argv[0]), 'README')
     command.Run(pager, fname)

@@ -116,51 +116,51 @@ else:
     gitutil.Setup()

     if options.count == -1:
-        # Work out how many patches to send if we can
-        options.count = gitutil.CountCommitsToBranch() - options.start
+	# Work out how many patches to send if we can
+	options.count = gitutil.CountCommitsToBranch() - options.start

     col = terminal.Color()
     if not options.count:
-        str = 'No commits found to process - please use -c flag'
-        print col.Color(col.RED, str)
-        sys.exit(1)
+	str = 'No commits found to process - please use -c flag'
+	print col.Color(col.RED, str)
+	sys.exit(1)

     # Read the metadata from the commits
     if options.count:
-        series = patchstream.GetMetaData(options.start, options.count)
-        cover_fname, args = gitutil.CreatePatches(options.start, options.count,
-                series)
+	series = patchstream.GetMetaData(options.start, options.count)
+	cover_fname, args = gitutil.CreatePatches(options.start, options.count,
+		series)

     # Fix up the patch files to our liking, and insert the cover letter
     series = patchstream.FixPatches(series, args)
     if series and cover_fname and series.get('cover'):
-        patchstream.InsertCoverLetter(cover_fname, series, options.count)
+	patchstream.InsertCoverLetter(cover_fname, series, options.count)

     # Do a few checks on the series
     series.DoChecks()

     # Check the patches, and run them through 'git am' just to be sure
     if options.check_patch:
-        ok = checkpatch.CheckPatches(options.verbose, args)
+	ok = checkpatch.CheckPatches(options.verbose, args)
     else:
-        ok = True
+	ok = True

     if options.apply_patches:
-        if not gitutil.ApplyPatches(options.verbose, args,
-                options.count + options.start):
-            ok = False
+	if not gitutil.ApplyPatches(options.verbose, args,
+			options.count + options.start):
+	    ok = False

     cc_file = series.MakeCcFile(options.process_tags, cover_fname,
-                                not options.ignore_bad_tags)
+		not options.ignore_bad_tags)

     # Email the patches out (giving the user time to check / cancel)
     cmd = ''
     if ok or options.ignore_errors:
-        cmd = gitutil.EmailPatches(series, cover_fname, args,
-                options.dry_run, not options.ignore_bad_tags, cc_file,
-                in_reply_to=options.in_reply_to)
+	cmd = gitutil.EmailPatches(series, cover_fname, args,
+		options.dry_run, not options.ignore_bad_tags, cc_file,
+		in_reply_to=options.in_reply_to)

     # For a dry run, just show our actions as a sanity check
     if options.dry_run:
-        series.ShowActions(args, cmd, options.process_tags)
+	series.ShowActions(args, cmd, options.process_tags)

     os.remove(cc_file)
diff --git a/tools/patman/project.py b/tools/patman/project.py
index e05ff11..3e0f120 100644
--- a/tools/patman/project.py
+++ b/tools/patman/project.py
@@ -14,14 +14,14 @@ def DetectProject():
     in the given project.

     Returns:
-        The name of the project, like "linux" or "u-boot". Returns "unknown"
-        if we can't detect the project.
+	The name of the project, like "linux" or "u-boot". Returns "unknown"
+	if we can't detect the project.
     """
     top_level = gitutil.GetTopLevel()
     if os.path.exists(os.path.join(top_level, "include", "u-boot")):
-        return "u-boot"
+	return "u-boot"
     elif os.path.exists(os.path.join(top_level, "kernel")):
-        return "linux"
+	return "linux"
     return "unknown"
diff --git a/tools/patman/series.py b/tools/patman/series.py
index 88c0d87..e5676b4 100644
--- a/tools/patman/series.py
+++ b/tools/patman/series.py
@@ -12,256 +12,256 @@ import terminal

 # Series-xxx tags that we understand
 valid_series = ['to', 'cc', 'version', 'changes', 'prefix', 'notes', 'name',
-                'cover-cc', 'process_log']
+	'cover-cc', 'process_log']

 class Series(dict):
     """Holds information about a patch series, including all tags.

     Vars:
-        cc: List of aliases/emails to Cc all patches to
-        commits: List of Commit objects, one for each patch
-        cover: List of lines in the cover letter
-        notes: List of lines in the notes
-        changes: (dict) List of changes for each version, The key is
-            the integer version number
-        allow_overwrite: Allow tags to overwrite an existing tag
+	cc: List of aliases/emails to Cc all patches to
+	commits: List of Commit objects, one for each patch
+	cover: List of lines in the cover letter
+	notes: List of lines in the notes
+	changes: (dict) List of changes for each version, The key is
+	    the integer version number
+	allow_overwrite: Allow tags to overwrite an existing tag
     """
     def __init__(self):
-        self.cc = []
-        self.to = []
-        self.cover_cc = []
-        self.commits = []
-        self.cover = None
-        self.notes = []
-        self.changes = {}
-        self.allow_overwrite = False
-
-        # Written in MakeCcFile()
-        # key: name of patch file
-        # value: list of email addresses
-        self._generated_cc = {}
+	self.cc = []
+	self.to = []
+	self.cover_cc = []
+	self.commits = []
+	self.cover = None
+	self.notes = []
+	self.changes = {}
+	self.allow_overwrite = False
+
+	# Written in MakeCcFile()
+	# key: name of patch file
+	# value: list of email addresses
+	self._generated_cc = {}

     # These make us more like a dictionary
     def __setattr__(self, name, value):
-        self[name] = value
+	self[name] = value

     def __getattr__(self, name):
-        return self[name]
+	return self[name]

     def AddTag(self, commit, line, name, value):
-        """Add a new Series-xxx tag along with its value.
-
-        Args:
-            line: Source line containing tag (useful for debug/error messages)
-            name: Tag name (part after 'Series-')
-            value: Tag value (part after 'Series-xxx: ')
-        """
-        # If we already have it, then add to our list
-        name = name.replace('-', '_')
-        if name in self and not self.allow_overwrite:
-            values = value.split(',')
-            values = [str.strip() for str in values]
-            if type(self[name]) != type([]):
-                raise ValueError("In %s: line '%s': Cannot add another value "
-                                 "'%s' to series '%s'" %
-                                 (commit.hash, line, values, self[name]))
-            self[name] += values
-
-        # Otherwise just set the value
-        elif name in valid_series:
-            self[name] = value
-        else:
-            raise ValueError("In %s: line '%s': Unknown 'Series-%s': valid "
-                             "options are %s" % (commit.hash, line, name,
-                             ', '.join(valid_series)))
+	"""Add a new Series-xxx tag along with its value.
+
+	Args:
+	    line: Source line containing tag (useful for debug/error messages)
+	    name: Tag name (part after 'Series-')
+	    value: Tag value (part after 'Series-xxx: ')
+	"""
+	# If we already have it, then add to our list
+	name = name.replace('-', '_')
+	if name in self and not self.allow_overwrite:
+	    values = value.split(',')
+	    values = [str.strip() for str in values]
+	    if type(self[name]) != type([]):
+		raise ValueError("In %s: line '%s': Cannot add another value "
+				 "'%s' to series '%s'" %
+				 (commit.hash, line, values, self[name]))
+	    self[name] += values
+
+	# Otherwise just set the value
+	elif name in valid_series:
+	    self[name] = value
+	else:
+	    raise ValueError("In %s: line '%s': Unknown 'Series-%s': valid "
+			     "options are %s" % (commit.hash, line, name,
+			     ', '.join(valid_series)))

     def AddCommit(self, commit):
-        """Add a commit into our list of commits
+	"""Add a commit into our list of commits

-        We create a list of tags in the commit subject also.
+	We create a list of tags in the commit subject also.

-        Args:
-            commit: Commit object to add
-        """
-        commit.CheckTags()
-        self.commits.append(commit)
+	Args:
+	    commit: Commit object to add
+	"""
+	commit.CheckTags()
+	self.commits.append(commit)

     def ShowActions(self, args, cmd, process_tags):
-        """Show what actions we will/would perform
-
-        Args:
-            args: List of patch files we created
-            cmd: The git command we would have run
-            process_tags: Process tags as if they were aliases
-        """
-        col = terminal.Color()
-        print 'Dry run, so not doing much. But I would do this:'
-        print
-        print 'Send a total of %d patch%s with %scover letter.' % (
-            len(args), '' if len(args) == 1 else 'es',
-            self.get('cover') and 'a ' or 'no ')
-
-        # TODO: Colour the patches according to whether they passed checks
-        for upto in range(len(args)):
-            commit = self.commits[upto]
-            print col.Color(col.GREEN, ' %s' % args[upto])
-            cc_list = list(self._generated_cc[commit.patch])
-
-            # Skip items in To list
-            if 'to' in self:
-                try:
-                    map(cc_list.remove, gitutil.BuildEmailList(self.to))
-                except ValueError:
-                    pass
-
-            for email in cc_list:
-                if email == None:
-                    email = col.Color(col.YELLOW, ""
-                                      % tag)
-                if email:
-                    print ' Cc: ',email
-        print
-        for item in gitutil.BuildEmailList(self.get('to', '')):
-            print 'To:\t ', item
-        for item in gitutil.BuildEmailList(self.cc):
-            print 'Cc:\t ', item
-        print 'Version: ', self.get('version')
-        print 'Prefix:\t ', self.get('prefix')
-        if self.cover:
-            print 'Cover: %d lines' % len(self.cover)
-            cover_cc = gitutil.BuildEmailList(self.get('cover_cc', ''))
-            all_ccs = itertools.chain(cover_cc, *self._generated_cc.values())
-            for email in set(all_ccs):
-                print ' Cc: ',email
-        if cmd:
-            print 'Git command: %s' % cmd
+	"""Show what actions we will/would perform
+
+	Args:
+	    args: List of patch files we created
+	    cmd: The git command we would have run
+	    process_tags: Process tags as if they were aliases
+	"""
+	col = terminal.Color()
+	print 'Dry run, so not doing much. But I would do this:'
+	print
+	print 'Send a total of %d patch%s with %scover letter.' % (
+	    len(args), '' if len(args) == 1 else 'es',
+	    self.get('cover') and 'a ' or 'no ')
+
+	# TODO: Colour the patches according to whether they passed checks
+	for upto in range(len(args)):
+	    commit = self.commits[upto]
+	    print col.Color(col.GREEN, ' %s' % args[upto])
+	    cc_list = list(self._generated_cc[commit.patch])
+
+	    # Skip items in To list
+	    if 'to' in self:
+		try:
+		    map(cc_list.remove, gitutil.BuildEmailList(self.to))
+		except ValueError:
+		    pass
+
+	    for email in cc_list:
+		if email == None:
+		    email = col.Color(col.YELLOW, ""
+				      % tag)
+		if email:
+		    print ' Cc: ',email
+	print
+	for item in gitutil.BuildEmailList(self.get('to', '')):
+	    print 'To:\t ', item
+	for item in gitutil.BuildEmailList(self.cc):
+	    print 'Cc:\t ', item
+	print 'Version: ', self.get('version')
+	print 'Prefix:\t ', self.get('prefix')
+	if self.cover:
+	    print 'Cover: %d lines' % len(self.cover)
+	    cover_cc = gitutil.BuildEmailList(self.get('cover_cc', ''))
+	    all_ccs = itertools.chain(cover_cc, *self._generated_cc.values())
+	    for email in set(all_ccs):
+		print ' Cc: ',email
+	if cmd:
+	    print 'Git command: %s' % cmd

     def MakeChangeLog(self, commit):
-        """Create a list of changes for each version.
-
-        Return:
-            The change log as a list of strings, one per line
-
-            Changes in v4:
-            - Jog the dial back closer to the widget
-
-            Changes in v3: None
-            Changes in v2:
-            - Fix the widget
-            - Jog the dial
-
-            etc.
-        """
-        final = []
-        process_it = self.get('process_log', '').split(',')
-        process_it = [item.strip() for item in process_it]
-        need_blank = False
-        for change in sorted(self.changes, reverse=True):
-            out = []
-            for this_commit, text in self.changes[change]:
-                if commit and this_commit != commit:
-                    continue
-                if 'uniq' not in process_it or text not in out:
-                    out.append(text)
-            line = 'Changes in v%d:' % change
-            have_changes = len(out) > 0
-            if 'sort' in process_it:
-                out = sorted(out)
-            if have_changes:
-                out.insert(0, line)
-            else:
-                out = [line + ' None']
-            if need_blank:
-                out.insert(0, '')
-            final += out
-            need_blank = have_changes
-        if self.changes:
-            final.append('')
-        return final
+	"""Create a list of changes for each version.
+
+	Return:
+	    The change log as a list of strings, one per line
+
+	    Changes in v4:
+	    - Jog the dial back closer to the widget
+
+	    Changes in v3: None
+	    Changes in v2:
+	    - Fix the widget
+	    - Jog the dial
+
+	    etc.
+	"""
+	final = []
+	process_it = self.get('process_log', '').split(',')
+	process_it = [item.strip() for item in process_it]
+	need_blank = False
+	for change in sorted(self.changes, reverse=True):
+	    out = []
+	    for this_commit, text in self.changes[change]:
+		if commit and this_commit != commit:
+		    continue
+		if 'uniq' not in process_it or text not in out:
+		    out.append(text)
+	    line = 'Changes in v%d:' % change
+	    have_changes = len(out) > 0
+	    if 'sort' in process_it:
+		out = sorted(out)
+	    if have_changes:
+		out.insert(0, line)
+	    else:
+		out = [line + ' None']
+	    if need_blank:
+		out.insert(0, '')
+	    final += out
+	    need_blank = have_changes
+	if self.changes:
+	    final.append('')
+	return final

     def DoChecks(self):
-        """Check that each version has a change log
-
-        Print an error if something is wrong.
-        """
-        col = terminal.Color()
-        if self.get('version'):
-            changes_copy = dict(self.changes)
-            for version in range(1, int(self.version) + 1):
-                if self.changes.get(version):
-                    del changes_copy[version]
-                else:
-                    if version > 1:
-                        str = 'Change log missing for v%d' % version
-                        print col.Color(col.RED, str)
-            for version in changes_copy:
-                str = 'Change log for unknown version v%d' % version
-                print col.Color(col.RED, str)
-        elif self.changes:
-            str = 'Change log exists, but no version is set'
-            print col.Color(col.RED, str)
+	"""Check that each version has a change log
+
+	Print an error if something is wrong.
+	"""
+	col = terminal.Color()
+	if self.get('version'):
+	    changes_copy = dict(self.changes)
+	    for version in range(1, int(self.version) + 1):
+		if self.changes.get(version):
+		    del changes_copy[version]
+		else:
+		    if version > 1:
+			str = 'Change log missing for v%d' % version
+			print col.Color(col.RED, str)
+	    for version in changes_copy:
+		str = 'Change log for unknown version v%d' % version
+		print col.Color(col.RED, str)
+	elif self.changes:
+	    str = 'Change log exists, but no version is set'
+	    print col.Color(col.RED, str)

     def MakeCcFile(self, process_tags, cover_fname, raise_on_error):
-        """Make a cc file for us to use for per-commit Cc automation
-
-        Also stores in self._generated_cc to make ShowActions() faster.
-
-        Args:
-            process_tags: Process tags as if they were aliases
-            cover_fname: If non-None the name of the cover letter.
-            raise_on_error: True to raise an error when an alias fails to match,
-                False to just print a message.
-        Return:
-            Filename of temp file created
-        """
-        # Look for commit tags (of the form 'xxx:' at the start of the subject)
-        fname = '/tmp/patman.%d' % os.getpid()
-        fd = open(fname, 'w')
-        all_ccs = []
-        for commit in self.commits:
-            list = []
-            if process_tags:
-                list += gitutil.BuildEmailList(commit.tags,
-                        raise_on_error=raise_on_error)
-            list += gitutil.BuildEmailList(commit.cc_list,
-                    raise_on_error=raise_on_error)
-            list += get_maintainer.GetMaintainer(commit.patch)
-            all_ccs += list
-            print >>fd, commit.patch, ', '.join(list)
-            self._generated_cc[commit.patch] = list
-
-        if cover_fname:
-            cover_cc = gitutil.BuildEmailList(self.get('cover_cc', ''))
-            print >>fd, cover_fname, ', '.join(set(cover_cc + all_ccs))
-
-        fd.close()
-        return fname
+	"""Make a cc file for us to use for per-commit Cc automation
+
+	Also stores in self._generated_cc to make ShowActions() faster.
+
+	Args:
+	    process_tags: Process tags as if they were aliases
+	    cover_fname: If non-None the name of the cover letter.
+	    raise_on_error: True to raise an error when an alias fails to match,
+		False to just print a message.
+	Return:
+	    Filename of temp file created
+	"""
+	# Look for commit tags (of the form 'xxx:' at the start of the subject)
+	fname = '/tmp/patman.%d' % os.getpid()
+	fd = open(fname, 'w')
+	all_ccs = []
+	for commit in self.commits:
+	    list = []
+	    if process_tags:
+		list += gitutil.BuildEmailList(commit.tags,
+			raise_on_error=raise_on_error)
+	    list += gitutil.BuildEmailList(commit.cc_list,
+		    raise_on_error=raise_on_error)
+	    list += get_maintainer.GetMaintainer(commit.patch)
+	    all_ccs += list
+	    print >>fd, commit.patch, ', '.join(list)
+	    self._generated_cc[commit.patch] = list
+
+	if cover_fname:
+	    cover_cc = gitutil.BuildEmailList(self.get('cover_cc', ''))
+	    print >>fd, cover_fname, ', '.join(set(cover_cc + all_ccs))
+
+	fd.close()
+	return fname

     def AddChange(self, version, commit, info):
-        """Add a new change line to a version.
+	"""Add a new change line to a version.
-        This will later appear in the change log.
+	This will later appear in the change log.

-        Args:
-            version: version number to add change list to
-            info: change line for this version
-        """
-        if not self.changes.get(version):
-            self.changes[version] = []
-        self.changes[version].append([commit, info])
+	Args:
+	    version: version number to add change list to
+	    info: change line for this version
+	"""
+	if not self.changes.get(version):
+	    self.changes[version] = []
+	self.changes[version].append([commit, info])

     def GetPatchPrefix(self):
-        """Get the patch version string
-
-        Return:
-            Patch string, like 'RFC PATCH v5' or just 'PATCH'
-        """
-        version = ''
-        if self.get('version'):
-            version = ' v%s' % self['version']
-
-        # Get patch name prefix
-        prefix = ''
-        if self.get('prefix'):
-            prefix = '%s ' % self['prefix']
-        return '%sPATCH%s' % (prefix, version)
+	"""Get the patch version string
+
+	Return:
+	    Patch string, like 'RFC PATCH v5' or just 'PATCH'
+	"""
+	version = ''
+	if self.get('version'):
+	    version = ' v%s' % self['version']
+
+	# Get patch name prefix
+	prefix = ''
+	if self.get('prefix'):
+	    prefix = '%s ' % self['prefix']
+	return '%sPATCH%s' % (prefix, version)
diff --git a/tools/patman/settings.py b/tools/patman/settings.py
index 122e8fd..0b233ff 100644
--- a/tools/patman/settings.py
+++ b/tools/patman/settings.py
@@ -18,7 +18,7 @@ the "dest" of the option parser from patman.py.
 _default_settings = {
     "u-boot": {},
     "linux": {
-        "process_tags": "False",
+	"process_tags": "False",
     }
 }

@@ -71,78 +71,78 @@ class _ProjectConfigParser(ConfigParser.SafeConfigParser):
     [('am_hero', 'True')]
     """
     def __init__(self, project_name):
-        """Construct _ProjectConfigParser.
-
-        In addition to standard SafeConfigParser initialization, this also loads
-        project defaults.
-
-        Args:
-            project_name: The name of the project.
-        """
-        self._project_name = project_name
-        ConfigParser.SafeConfigParser.__init__(self)
-
-        # Update the project settings in the config based on
-        # the _default_settings global.
-        project_settings = "%s_settings" % project_name
-        if not self.has_section(project_settings):
-            self.add_section(project_settings)
-        project_defaults = _default_settings.get(project_name, {})
-        for setting_name, setting_value in project_defaults.iteritems():
-            self.set(project_settings, setting_name, setting_value)
+	"""Construct _ProjectConfigParser.
+
+	In addition to standard SafeConfigParser initialization, this also loads
+	project defaults.
+
+	Args:
+	    project_name: The name of the project.
+	"""
+	self._project_name = project_name
+	ConfigParser.SafeConfigParser.__init__(self)
+
+	# Update the project settings in the config based on
+	# the _default_settings global.
+	project_settings = "%s_settings" % project_name
+	if not self.has_section(project_settings):
+	    self.add_section(project_settings)
+	project_defaults = _default_settings.get(project_name, {})
+	for setting_name, setting_value in project_defaults.iteritems():
+	    self.set(project_settings, setting_name, setting_value)

     def get(self, section, option, *args, **kwargs):
-        """Extend SafeConfigParser to try project_section before section.
-
-        Args:
-            See SafeConfigParser.
-        Returns:
-            See SafeConfigParser.
-        """
-        try:
-            return ConfigParser.SafeConfigParser.get(
-                self, "%s_%s" % (self._project_name, section), option,
-                *args, **kwargs
-            )
-        except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
-            return ConfigParser.SafeConfigParser.get(
-                self, section, option, *args, **kwargs
-            )
+	"""Extend SafeConfigParser to try project_section before section.
+
+	Args:
+	    See SafeConfigParser.
+	Returns:
+	    See SafeConfigParser.
+	"""
+	try:
+	    return ConfigParser.SafeConfigParser.get(
+		self, "%s_%s" % (self._project_name, section), option,
+		*args, **kwargs
+	    )
+	except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
+	    return ConfigParser.SafeConfigParser.get(
+		self, section, option, *args, **kwargs
+	    )

     def items(self, section, *args, **kwargs):
-        """Extend SafeConfigParser to add project_section to section.
-
-        Args:
-            See SafeConfigParser.
-        Returns:
-            See SafeConfigParser.
-        """
-        project_items = []
-        has_project_section = False
-        top_items = []
-
-        # Get items from the project section
-        try:
-            project_items = ConfigParser.SafeConfigParser.items(
-                self, "%s_%s" % (self._project_name, section), *args, **kwargs
-            )
-            has_project_section = True
-        except ConfigParser.NoSectionError:
-            pass
-
-        # Get top-level items
-        try:
-            top_items = ConfigParser.SafeConfigParser.items(
-                self, section, *args, **kwargs
-            )
-        except ConfigParser.NoSectionError:
-            # If neither section exists raise the error on...
-            if not has_project_section:
-                raise
-
-        item_dict = dict(top_items)
-        item_dict.update(project_items)
-        return item_dict.items()
+	"""Extend SafeConfigParser to add project_section to section.
+
+	Args:
+	    See SafeConfigParser.
+	Returns:
+	    See SafeConfigParser.
+	"""
+	project_items = []
+	has_project_section = False
+	top_items = []
+
+	# Get items from the project section
+	try:
+	    project_items = ConfigParser.SafeConfigParser.items(
+		self, "%s_%s" % (self._project_name, section), *args, **kwargs
+	    )
+	    has_project_section = True
+	except ConfigParser.NoSectionError:
+	    pass
+
+	# Get top-level items
+	try:
+	    top_items = ConfigParser.SafeConfigParser.items(
+		self, section, *args, **kwargs
+	    )
+	except ConfigParser.NoSectionError:
+	    # If neither section exists raise the error on...
+	    if not has_project_section:
+		raise
+
+	item_dict = dict(top_items)
+	item_dict.update(project_items)
+	return item_dict.items()

 def ReadGitAliases(fname):
     """Read a git alias file.
     This is in the form used by git:
@@ -151,31 +151,31 @@ def ReadGitAliases(fname):
     alias wd Wolfgang Denk

     Args:
-        fname: Filename to read
+	fname: Filename to read
     """
     try:
-        fd = open(fname, 'r')
+	fd = open(fname, 'r')
     except IOError:
-        print "Warning: Cannot find alias file '%s'" % fname
-        return
+	print "Warning: Cannot find alias file '%s'" % fname
+	return

     re_line = re.compile('alias\s+(\S+)\s+(.*)')
     for line in fd.readlines():
-        line = line.strip()
-        if not line or line[0] == '#':
-            continue
-
-        m = re_line.match(line)
-        if not m:
-            print "Warning: Alias file line '%s' not understood" % line
-            continue
-
-        list = alias.get(m.group(1), [])
-        for item in m.group(2).split(','):
-            item = item.strip()
-            if item:
-                list.append(item)
-        alias[m.group(1)] = list
+	line = line.strip()
+	if not line or line[0] == '#':
+	    continue
+
+	m = re_line.match(line)
+	if not m:
+	    print "Warning: Alias file line '%s' not understood" % line
+	    continue
+
+	list = alias.get(m.group(1), [])
+	for item in m.group(2).split(','):
+	    item = item.strip()
+	    if item:
+		list.append(item)
+	alias[m.group(1)] = list

     fd.close()

@@ -183,25 +183,25 @@ def CreatePatmanConfigFile(config_fname):
     """Creates a config file under $(HOME)/.patman if it can't find one.

     Args:
-        config_fname: Default config filename i.e., $(HOME)/.patman
+	config_fname: Default config filename i.e., $(HOME)/.patman

     Returns:
-        None
+	None
     """
     name = gitutil.GetDefaultUserName()
     if name == None:
-        name = raw_input("Enter name: ")
+	name = raw_input("Enter name: ")

     email = gitutil.GetDefaultUserEmail()

     if email == None:
-        email = raw_input("Enter email: ")
+	email = raw_input("Enter email: ")

     try:
-        f = open(config_fname, 'w')
+	f = open(config_fname, 'w')
     except IOError:
-        print "Couldn't create patman config file\n"
-        raise
+	print "Couldn't create patman config file\n"
+	raise

     print >>f, "[alias]\nme: %s <%s>" % (name, email)
     f.close();

@@ -218,44 +218,44 @@ def _UpdateDefaults(parser, config):
     say.
     Args:
-        parser: An instance of an OptionParser whose defaults will be
-            updated.
-        config: An instance of _ProjectConfigParser that we will query
-            for settings.
+	parser: An instance of an OptionParser whose defaults will be
+	    updated.
+	config: An instance of _ProjectConfigParser that we will query
+	    for settings.
     """
     defaults = parser.get_default_values()
     for name, val in config.items('settings'):
-        if hasattr(defaults, name):
-            default_val = getattr(defaults, name)
-            if isinstance(default_val, bool):
-                val = config.getboolean('settings', name)
-            elif isinstance(default_val, int):
-                val = config.getint('settings', name)
-            parser.set_default(name, val)
-        else:
-            print "WARNING: Unknown setting %s" % name
+	if hasattr(defaults, name):
+	    default_val = getattr(defaults, name)
+	    if isinstance(default_val, bool):
+		val = config.getboolean('settings', name)
+	    elif isinstance(default_val, int):
+		val = config.getint('settings', name)
+	    parser.set_default(name, val)
+	else:
+	    print "WARNING: Unknown setting %s" % name

 def Setup(parser, project_name, config_fname=''):
     """Set up the settings module by reading config files.

     Args:
-        parser: The parser to update
-        project_name: Name of project that we're working on; we'll look
-            for sections named "project_section" as well.
+	parser: The parser to update
+	project_name: Name of project that we're working on; we'll look
+	    for sections named "project_section" as well.
+	config_fname: Config filename to read ('' for default)
     """
     config = _ProjectConfigParser(project_name)
     if config_fname == '':
-        config_fname = '%s/.patman' % os.getenv('HOME')
+	config_fname = '%s/.patman' % os.getenv('HOME')

     if not os.path.exists(config_fname):
-        print "No config file found ~/.patman\nCreating one...\n"
-        CreatePatmanConfigFile(config_fname)
+	print "No config file found ~/.patman\nCreating one...\n"
+	CreatePatmanConfigFile(config_fname)

     config.read(config_fname)

     for name, value in config.items('alias'):
-        alias[name] = value.split(',')
+	alias[name] = value.split(',')

     _UpdateDefaults(parser, config)
diff --git a/tools/patman/terminal.py b/tools/patman/terminal.py
index 597d526..f00c132 100644
--- a/tools/patman/terminal.py
+++ b/tools/patman/terminal.py
@@ -28,10 +28,10 @@ class Color(object):

         Args:
             enabled: True if color output should be enabled. If False then this
-                class will not add color codes at all.
+		class will not add color codes at all.
         """
         self._enabled = (colored == COLOR_ALWAYS or
-            (colored == COLOR_IF_TERMINAL and os.isatty(sys.stdout.fileno())))
+		(colored == COLOR_IF_TERMINAL and os.isatty(sys.stdout.fileno())))

     def Start(self, color, bright=True):
         """Returns a start color code.
@@ -44,8 +44,8 @@ class Color(object):
             otherwise returns empty string
         """
         if self._enabled:
-            base = self.BRIGHT_START if bright else self.NORMAL_START
-            return base % (color + 30)
+	    base = self.BRIGHT_START if bright else self.NORMAL_START
+	    return base % (color + 30)
         return ''

     def Stop(self):
@@ -56,7 +56,7 @@ class Color(object):
             returns empty string
         """
         if self._enabled:
-            return self.RESET
+	    return self.RESET
         return ''

     def Color(self, color, text, bright=True):
@@ -71,10 +71,10 @@ class Color(object):
             returns text with color escape sequences based on the value of color.
         """
         if not self._enabled:
-            return text
+	    return text
         if color == self.BOLD:
-            start = self.BOLD_START
+	    start = self.BOLD_START
         else:
-            base = self.BRIGHT_START if bright else self.NORMAL_START
-            start = base % (color + 30)
+	    base = self.BRIGHT_START if bright else self.NORMAL_START
+	    start = base % (color + 30)
         return start + text + self.RESET
diff --git a/tools/patman/test.py b/tools/patman/test.py
index 8fcfe53..a433535 100644
--- a/tools/patman/test.py
+++ b/tools/patman/test.py
@@ -21,8 +21,8 @@ class TestPatch(unittest.TestCase):
     """
     def testBasic(self):
-        """Test basic filter operation"""
-        data='''
+	"""Test basic filter operation"""
+	data='''
 From 656c9a8c31fa65859d924cd21da920d6ba537fad Mon Sep 17 00:00:00 2001
 From: Simon Glass
@@ -44,7 +44,7 @@ Signed-off-by: Simon Glass
  arch/arm/cpu/armv7/tegra2/ap20.c | 57 ++----
  arch/arm/cpu/armv7/tegra2/clock.c | 163 +++++++++++++++++
 '''
-        expected='''
+	expected='''
 From 656c9a8c31fa65859d924cd21da920d6ba537fad Mon Sep 17 00:00:00 2001
 From: Simon Glass
@@ -59,26 +59,26 @@ Signed-off-by: Simon Glass
  arch/arm/cpu/armv7/tegra2/ap20.c | 57 ++----
  arch/arm/cpu/armv7/tegra2/clock.c | 163 +++++++++++++++++
 '''
-        out = ''
-        inhandle, inname = tempfile.mkstemp()
-        infd = os.fdopen(inhandle, 'w')
-        infd.write(data)
-        infd.close()
+	out = ''
+	inhandle, inname = tempfile.mkstemp()
+	infd = os.fdopen(inhandle, 'w')
+	infd.write(data)
+	infd.close()

-        exphandle, expname = tempfile.mkstemp()
-        expfd = os.fdopen(exphandle, 'w')
-        expfd.write(expected)
-        expfd.close()
+	exphandle, expname = tempfile.mkstemp()
+	expfd = os.fdopen(exphandle, 'w')
+	expfd.write(expected)
+	expfd.close()

-        patchstream.FixPatch(None, inname, series.Series(), None)
-        rc = os.system('diff -u %s %s' % (inname, expname))
-        self.assertEqual(rc, 0)
+	patchstream.FixPatch(None, inname, series.Series(), None)
+	rc = os.system('diff -u %s %s' % (inname, expname))
+	self.assertEqual(rc, 0)

-        os.remove(inname)
-        os.remove(expname)
+	os.remove(inname)
+	os.remove(expname)

     def GetData(self, data_type):
-        data='''
+	data='''
 From 4924887af52713cabea78420eff03badea8f0035 Mon Sep 17 00:00:00 2001
 From: Simon Glass
 Date: Thu, 7 Apr 2011 10:14:41 -0700
@@ -168,73 +168,73 @@ index 0000000..2234c87
 --
 1.7.3.1
 '''
-        signoff = 'Signed-off-by: Simon Glass \n'
-        tab = ' '
-        indent = ' '
-        if data_type == 'good':
-            pass
-        elif data_type == 'no-signoff':
-            signoff = ''
-        elif data_type == 'spaces':
-            tab = ' '
-        elif data_type == 'indent':
-            indent = tab
-        else:
-            print 'not implemented'
-        return data % (signoff, tab, indent, tab)
+	signoff = 'Signed-off-by: Simon Glass \n'
+	tab = ' '
+	indent = ' '
+	if data_type == 'good':
+	    pass
+	elif data_type == 'no-signoff':
+	    signoff = ''
+	elif data_type == 'spaces':
+	    tab = ' '
+	elif data_type == 'indent':
+	    indent = tab
+	else:
+	    print 'not implemented'
+	return data % (signoff, tab, indent, tab)

     def SetupData(self, data_type):
-        inhandle, inname = tempfile.mkstemp()
-        infd = os.fdopen(inhandle, 'w')
-        data = self.GetData(data_type)
-        infd.write(data)
-        infd.close()
-        return inname
+	inhandle, inname = tempfile.mkstemp()
+	infd = os.fdopen(inhandle, 'w')
+	data = self.GetData(data_type)
+	infd.write(data)
+	infd.close()
+	return inname

     def testGood(self):
-        """Test checkpatch operation"""
-        inf = self.SetupData('good')
-        result = checkpatch.CheckPatch(inf)
-        self.assertEqual(result.ok, True)
-        self.assertEqual(result.problems, [])
-        self.assertEqual(result.errors, 0)
-        self.assertEqual(result.warnings, 0)
-        self.assertEqual(result.checks, 0)
-        self.assertEqual(result.lines, 67)
-        os.remove(inf)
+	"""Test checkpatch operation"""
+	inf = self.SetupData('good')
+	result = checkpatch.CheckPatch(inf)
+	self.assertEqual(result.ok, True)
+	self.assertEqual(result.problems, [])
+	self.assertEqual(result.errors, 0)
+	self.assertEqual(result.warnings, 0)
+	self.assertEqual(result.checks, 0)
+	self.assertEqual(result.lines, 67)
+	os.remove(inf)

     def testNoSignoff(self):
-        inf = self.SetupData('no-signoff')
-        result = checkpatch.CheckPatch(inf)
-        self.assertEqual(result.ok, False)
-        self.assertEqual(len(result.problems), 1)
-        self.assertEqual(result.errors, 1)
-        self.assertEqual(result.warnings, 0)
-        self.assertEqual(result.checks, 0)
-        self.assertEqual(result.lines, 67)
-        os.remove(inf)
+	inf = self.SetupData('no-signoff')
+	result = checkpatch.CheckPatch(inf)
+	self.assertEqual(result.ok, False)
+	self.assertEqual(len(result.problems), 1)
+	self.assertEqual(result.errors, 1)
+	self.assertEqual(result.warnings, 0)
+	self.assertEqual(result.checks, 0)
+	self.assertEqual(result.lines, 67)
+	os.remove(inf)

     def testSpaces(self):
-        inf = self.SetupData('spaces')
-        result = checkpatch.CheckPatch(inf)
-        self.assertEqual(result.ok, False)
-        self.assertEqual(len(result.problems), 1)
-        self.assertEqual(result.errors, 0)
-        self.assertEqual(result.warnings, 1)
-        self.assertEqual(result.checks, 0)
-        self.assertEqual(result.lines, 67)
-        os.remove(inf)
+	inf = self.SetupData('spaces')
+	result = checkpatch.CheckPatch(inf)
+	self.assertEqual(result.ok, False)
+	self.assertEqual(len(result.problems), 1)
+	self.assertEqual(result.errors, 0)
+	self.assertEqual(result.warnings, 1)
+	self.assertEqual(result.checks, 0)
+	self.assertEqual(result.lines, 67)
+	os.remove(inf)

     def testIndent(self):
-        inf = self.SetupData('indent')
-        result = checkpatch.CheckPatch(inf)
-        self.assertEqual(result.ok, False)
-        self.assertEqual(len(result.problems), 1)
-        self.assertEqual(result.errors, 0)
-        self.assertEqual(result.warnings, 0)
-        self.assertEqual(result.checks, 1)
-        self.assertEqual(result.lines, 67)
-        os.remove(inf)
+	inf = self.SetupData('indent')
+	result = checkpatch.CheckPatch(inf)
+	self.assertEqual(result.ok, False)
+	self.assertEqual(len(result.problems), 1)
+	self.assertEqual(result.errors, 0)
+	self.assertEqual(result.warnings, 0)
+	self.assertEqual(result.checks, 1)
+	self.assertEqual(result.lines, 67)
+	os.remove(inf)

 if __name__ == "__main__":
diff --git
a/tools/reformat.py b/tools/reformat.py index 7e03890..61306d0 100755 --- a/tools/reformat.py +++ b/tools/reformat.py @@ -49,7 +49,7 @@ try: ["ignore-case","default","split="]) except getopt.GetoptError as err: print str(err) # will print something like "option -a not recognized" - sys.exit(2) + sys.exit(2) for o, a in opts: if o in ("-s", "--split"): diff --git a/tools/scripts/make-asm-offsets b/tools/scripts/make-asm-offsets index c686976..4c33756 100755 --- a/tools/scripts/make-asm-offsets +++ b/tools/scripts/make-asm-offsets @@ -22,6 +22,6 @@ SED_CMD="/^->/{s:->#\(.*\):/* \1 */:; \ echo " *" echo " */" echo "" - sed -ne "${SED_CMD}" $1 + sed -ne "${SED_CMD}" $1 echo "" echo "#endif" ) > $2 diff --git a/tools/ubsha1.c b/tools/ubsha1.c index c003f9a..1041588 100644 --- a/tools/ubsha1.c +++ b/tools/ubsha1.c @@ -63,10 +63,10 @@ int main (int argc, char **argv) sha1_csum ((unsigned char *) data, len, (unsigned char *)output); printf ("U-Boot sum:\n"); - for (i = 0; i < 20 ; i++) { - printf ("%02X ", output[i]); - } - printf ("\n"); + for (i = 0; i < 20 ; i++) { + printf ("%02X ", output[i]); + } + printf ("\n"); /* overwrite the sum in the bin file, with the actual */ lseek (ifd, SHA1_SUM_POS, SEEK_END); if (write (ifd, output, SHA1_SUM_LEN) != SHA1_SUM_LEN) {