Re: [Question] About the Gator and the Power Probe

2012-08-31 Thread Paul Larson
There's documentation here:
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0482h/ch13.html

And there's a video of the process here:
http://www.youtube.com/watch?v=aDStdtopy_g

You'll need a full DS-5 license, not just the community edition.  I was
never able to work out my licensing issues (even though it said my license
was valid), so I never got this up and running, but I think if you can get
past that, you should be able to do a capture with it just fine.

Thanks,
Paul Larson

On Fri, Aug 31, 2012 at 5:19 AM, Hongbo Zhang hongbo.zh...@linaro.org wrote:

 Hi all,
 I want to set up only the power probe, without any other Streamline
 functions. Is that possible?
 It seems the IP Address field in the Capture Options window must be filled
 in, so the gator driver and daemon need to be compiled, but I get a
 compile error.

 Can I set up just the power probe?
 If not, is there a version of gator that compiles without errors?

 Thanks.


 ___
 linaro-dev mailing list
 linaro-dev@lists.linaro.org
 http://lists.linaro.org/mailman/listinfo/linaro-dev


___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: Can't view the lava information from android-build for some builds

2012-08-20 Thread Paul Larson
I'm seeing the same thing.
On Aug 20, 2012 9:25 PM, YongQin Liu yongqin@linaro.org wrote:

 Hi, Paul

 I have tried with the Shift+Reload and Ctrl+R about 10 times for each,
 but the problem still exists.
 I have reported it as a bug here:
 https://bugs.launchpad.net/linaro-android-infrastructure/+bug/1039319


 Thanks,
 Yongqin Liu

 On 17 August 2012 19:11, Paul Sokolovsky paul.sokolov...@linaro.org
 wrote:
  Hello,
 
  On Fri, 17 Aug 2012 11:18:36 +0800
  YongQin Liu yongqin@linaro.org wrote:
 
  Hi,
 
  Just found that I can't view the LAVA results on the android-build page
  for some builds, but not all of them; I can still view the information
  of some builds from before.
  say the builds of
 
 https://android-build.linaro.org/builds/~linaro-android/snowball-jb-gcc47-igloo-stable-blob/
 .
  For build #28, even after I click the "login to LAVA" link and log
  in, I still get the information shown in the NG.png attachment file.
  But for build #26, I can see the LAVA information listed, like the
  attached OK.png file.
 
  I was not logged into LAVA when I started, logged in from build details
  page, and am able to see test results for both #26 and #28 (also tried
  #25-#29). So, my guess would be that the browser caches content - I've seen
  such an issue on a few occasions on android-build.linaro.org. So, please
  try Shift+Reload in the browser (i.e. click the reload button with Shift held),
  or Ctrl+R a few times in a row and see if it helps. If you still see that
  issue, I'd suggest opening bug at
  https://bugs.launchpad.net/linaro-android-infrastructure and Infra's
  maintenance engineers will look into it.
 
 
  --
  Best Regards,
  Paul
 
  Linaro.org | Open source software for ARM SoCs
  Follow Linaro: http://www.facebook.com/pages/Linaro
  http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog



 --
 Thanks,
 Yongqin Liu
 ---
 #mailing list
 linaro-andr...@lists.linaro.org
 http://lists.linaro.org/mailman/listinfo/linaro-android
 linaro-validat...@lists.linaro.org
 http://lists.linaro.org/pipermail/linaro-validation

 ___
 linaro-dev mailing list
 linaro-dev@lists.linaro.org
 http://lists.linaro.org/mailman/listinfo/linaro-dev

___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: Nexus 7/Jellybean Kernel Config

2012-07-24 Thread Paul Larson
+Zach +Andy
Last I heard we were waiting on them to make the first batch and get them
to us, but it had to be a decent sized run so it was going to be split with
someone else. Zach or Andy may have more recent news though.
On Jul 24, 2012 2:54 PM, Christian Robottom Reis k...@linaro.org wrote:

 On Sun, Jul 08, 2012 at 09:40:46AM +0100, Dave Pigott wrote:
 
   Shouldn't require a kernel change. Just a configuration with a really
 annoyingly low default. I am concerned with the explosion in the number of
  mmc partitions in android though...
  
  
  So am I. It means that, until the sd mux lands, we will have to have a
  special kernel build of the lava master images to support more
  partitions. We already ran out on imx53 and origen so we can't support
  the userdata partition.

 What's the status of this fabled SD MUX thingamajig?
 --
 Christian Robottom Reis, Engineering VP
 Brazil (GMT-3) | [+55] 16 9112 6430 | [+1] 612 216 4935
 Linaro.org: Open Source Software for ARM SoCs

___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: Using ARM Energy probe and PandaBoard

2012-07-12 Thread Paul Larson
I've been looking a bit at how to get this running under DS-5, but I
haven't found much documentation.  Best thing I've found so far has been
http://www.youtube.com/watch?v=aDStdtopy_g
I tried going through this, and even found caiman on my box at
/usr/local/DS-5/bin/caiman
However, when I try to replicate what they did here, I get a trace ok, but
no power data in my graph.  You may want to give it a try though, maybe it
will work for you.

On Thu, Jul 12, 2012 at 11:51 AM, Dechesne, Nicolas n-deche...@ti.com wrote:

 hi there,

 at HK connect i learned about the ARM energy probe (
 http://www.arm.com/about/newsroom/arm-enables-energy-aware-coding-with-the-arm-energy-probe-and-ds-5-toolchain.php).
 Since then we have ordered some probes, and I was wondering if someone in
 Linaro is working on enabling power measurements? I would be interested in
 DS-5 or even command-line tools to retrieve the information, ideally from
 Panda...

 thx

 ___
 linaro-dev mailing list
 linaro-dev@lists.linaro.org
 http://lists.linaro.org/mailman/listinfo/linaro-dev


___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: Nexus 7/Jellybean Kernel Config

2012-07-07 Thread Paul Larson
Shouldn't require a kernel change. Just a configuration with a really
annoyingly low default. I am concerned with the explosion in the number of
mmc partitions in android though...
On Jul 7, 2012 12:09 PM, Christian Robottom Reis k...@linaro.org wrote:

 On Wed, Jul 04, 2012 at 10:31:30PM +0200, Marcin Juszkiewicz wrote:
  W dniu 04.07.2012 22:19, Christian Robottom Reis pisze:
   Hey there,
  
   I stumbled onto a thread on xdadevelopers where they are looking at
   the Nexus 7; apart from partition and df output, there's a 3.1.10
 kernel
   config, which I'm attaching here for convenience and analysis.
  
   ls -l /dev/block/platform/sdhci-tegra.3/by-name/
  ..
   lrwxrwxrwx root root 2012-06-28 11:51 MDA -> /dev/block/mmcblk0p8
   lrwxrwxrwx root root 2012-06-28 11:51 UDA -> /dev/block/mmcblk0p9
 
  Patched kernel to have 9 partitions on mmc...

 Why is this not upstream?
 --
 Christian Robottom Reis, Engineering VP
 Brazil (GMT-3) | [+55] 16 9112 6430 | [+1] 612 216 4935
 Linaro.org: Open Source Software for ARM SoCs

 ___
 linaro-dev mailing list
 linaro-dev@lists.linaro.org
 http://lists.linaro.org/mailman/listinfo/linaro-dev

___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: [PATCH] pm-qa: run sanity check before running test cases

2012-06-27 Thread Paul Larson
 +#!/bin/bash

I suspect that stuff like this is going to be unfriendly to the
androidification work that is going on for pmqa.  Hongbo, do all the
scripts need to be converted over to run under the minimal shell that
android supports?
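For context, this is the kind of conversion in question. A hedged, illustrative comparison (the variable names are made up for this example, not taken from pm-qa) of a bash-only test construct against a POSIX-portable version that a minimal Android shell could also run:

```shell
# Bash-only: [[ ]] and == are not guaranteed on Android's minimal shell:
#   if [[ $governor == ondemand ]]; then ...
# POSIX-portable equivalent, using [ ] and a single =:
governor="ondemand"
if [ "$governor" = "ondemand" ]; then
    result="pass"
else
    result="fail"
fi
echo "$result"
```

The same kind of audit applies to arrays, `local`, and `$(( ))` extensions that bash tolerates but stripped-down shells may not.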

Thanks,
Paul Larson
___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: Can we use mmcblk0p8?

2012-06-18 Thread Paul Larson
On Mon, Jun 18, 2012 at 8:32 AM, Marcin Juszkiewicz 
marcin.juszkiew...@linaro.org wrote:

 W dniu 18.06.2012 15:02, YongQin Liu pisze:
  Hi, Saugata
 
  We have a problem to use the mmcblk0p8 partition, do you know if we
  can use it, and how can we use it?
 
  I created the 8th partition with fdisk command and the 8th partition
  is listed in the output of fdisk -l command but it is not listed in
   /proc/partitions file, even after reboot. And also I can't use
   mkfs.vfat -n sdcard /dev/mmcblk0p8 to format it.
 
  Could you give me some help about this?

 Check /dev/mmcblk1p0 node number for answer. IIRC there is limited
 amount of partitions on MMC cards which Linux handles (a bit stupid
 limit imho).

Yes it is stupid, and it's bitten us before [1].  The better thing is to
rethink your partitioning and see if you can come up with a better way to
lay it out.  It's worked for us so far, but it's been tight on the lava
side due to the number of partitions that android wants, and especially
with the annoying hardcoded partition numbers that android uses.
The max partitions can be changed, but it requires rebuilding the kernel with
a larger MMC_BLOCK_MINORS on *every* machine that you will ever need to mess with
that mmc card on.  So if you are building a master image on your laptop,
you'll need to rebuild that one, the kernels for all platform images you
want to boot on it, etc.

[1] http://linuxtesting.blogspot.com/2011/03/max-partitions-on-mmc.html
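For reference, the knob in question is the kernel's CONFIG_MMC_BLOCK_MINORS option. Its default of 8 covers the whole-disk node plus 7 partition nodes, which is why mmcblk0p8 never shows up in /proc/partitions. A minimal sketch of the change (the value 16 here is just an example, not a recommendation):

```
# Kernel .config fragment: allow up to 16 minors per MMC device
# (the whole-disk node plus 15 partitions), instead of the default 8.
CONFIG_MMC_BLOCK_MINORS=16
```

As Paul notes above, every kernel that touches the card needs the same setting, which is what makes this change so painful in practice.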

Thanks,
Paul Larson
___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: [Linaro-validation] Can we use mmcblk0p8?

2012-06-18 Thread Paul Larson
On Mon, Jun 18, 2012 at 5:03 PM, Michael Hudson-Doyle 
michael.hud...@linaro.org wrote:

 Paul Larson paul.lar...@linaro.org writes:

  On Mon, Jun 18, 2012 at 8:32 AM, Marcin Juszkiewicz 
  marcin.juszkiew...@linaro.org wrote:
 
  W dniu 18.06.2012 15:02, YongQin Liu pisze:
   Hi, Saugata
  
   We have a problem to use the mmcblk0p8 partition, do you know if we
   can use it, and how can we use it?
  
   I created the 8th partition with fdisk command and the 8th partition
   is listed in the output of fdisk -l command but it is not listed in
   /proc/partitions file, even after reboot And also I can't use
   mkfs.vfat -n sdcard /dev/mmcblk0p8 to format.
  
   Could you give me some help about this?
 
  Check /dev/mmcblk1p0 node number for answer. IIRC there is limited
  amount of partitions on MMC cards which Linux handles (a bit stupid
  limit imho).
 
  Yes it is stupid, and it's bitten us before [1].  The better thing is to
  rethink your partitioning and see if you can come up with a better way to
  lay it out.  It's worked for us so far, but it's been tight on the lava
  side due to the number of partitions that android wants, and especially
  with the annoying hardcoded partition numbers that android uses.
  The max partitions can be changed, but it requires rebuilding the kernel with
  a larger MMC_BLOCK_MINORS on *every* machine that you will ever need to mess with
  that mmc card on.  So if you are building a master image on your laptop,
  you'll need to rebuild that one, the kernels for all platform images you
  want to boot on it, etc.

 Gah.  I don't think insisting every system involved in LAVA has a custom
 kernel can really work :/

 One thing I _think_ we could do would be to share the boot and testboot
 partitions -- we could install the kernels to be tested in
 /boot/test-kernel or something and carefully tweak the commands we pass
 to uboot to load that kernel.  Sounds like a fiddle, but might work?

Yes, we've discussed that very thing before but it was a lower priority
because we didn't need to extend the partition numbers that far.  Not sure
why we are needing to do it now, but it was always assumed that if we hit a
situation that made it necessary, this would be the most straightforward
thing to do until we have the necessary hardware/software in place to image
a complete, recoverable system onto the SD externally in a generic way.
Basically, we just replace the reformatting of the boot partition with
removal of a testboot directory, and point at the directory for grabbing
the new kernel and initrd.
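From U-Boot's point of view, the scheme being discussed might look something like this (the addresses, device numbers, and the test-kernel path are purely illustrative, not the actual LAVA boot commands):

```
# Normal case: boot the master image's own kernel from the boot partition.
fatload mmc 0:1 0x80200000 uImage
bootm 0x80200000

# Test case: boot the kernel under test from a subdirectory of the same
# shared boot partition, instead of a dedicated testboot partition.
fatload mmc 0:1 0x80200000 test-kernel/uImage
fatload mmc 0:1 0x81600000 test-kernel/uInitrd
bootm 0x80200000 0x81600000
```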

One thing I was thinking is whether we could munge the boot script when we
install it to this partition, replacing the paths in some automagic way.
This would solve the issue of making sure we always have the latest
bootargs and dtb files for booting, and get us one step closer to parity
with a manually imaged system.

Thanks,
Paul Larson
___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: [RFC PATCH v1] AudiVal (Audio Validation Suite for Linux)

2012-05-15 Thread Paul Larson
Cool, does this replace the existing e2daudiotest, I guess? Also, has it
already been shown to work on all of the listed boards?

On Tue, May 15, 2012 at 9:00 AM, Harsh Prateek Bora
harsh.b...@linaro.org wrote:

AudiVal (Audio Validation Suite for Linux)

 This is an attempt to automate and integrate various audio related tests
 which
 can help validate audio on various boards supported by Linaro. Motivation
 for
 this project comes from various audio tests listed at:
 https://wiki.linaro.org/TomGall/LinaroAudioFeatureIndex

 The git repo for this project can be cloned from:
 git://git.linaro.org/people/harshbora/audival.git

 Various files required by this script are available in the git repo.

 Requesting all to test this script on various boards that you may have
 access
 to and share feedback to make it better.

 TODO: Add tests for Audio over USB, Bluetooth.

 Signed-off-by: Harsh Prateek Bora harsh.b...@linaro.org

 diff --git a/AudiVal.py b/AudiVal.py
 new file mode 100755
 index 000..7d56c4e
 --- /dev/null
 +++ b/AudiVal.py
 @@ -0,0 +1,247 @@
 +#!/usr/bin/env python
 +#
 +# Audival : Audio Validation Suite for Linux.
 +#
 +# Author: Harsh Prateek Bora
 +# harsh.b...@linaro.org
 +# ha...@linux.vnet.ibm.com
 +
 +import os
 +import sys
 +import getopt
 +from subprocess import * # for calling external programs
 +import commands # deprecated since python 2.6, Python 3.0 uses subprocess
 +
 +def usage():
 +    print "============================================="
 +    print "AudiVal: Audio Validation Suite for Linux"
 +    print "============================================="
 +    print "Usage:"
 +    print sys.argv[0], "[--check-info]"
 +    print
 +    print "Supported HW: PandaBoard (ES), BeagleBoard (XM), i.MX53, i.MX6, Origen, Snowball"
 +    sys.exit(1)
 +
 +SpeakerJack = {
 +    'GenuineIntel': 'Analog',
 +    'Panda':        'Headset',
 +    'Beagle':       'TWL4030',
 +    'i.MX53':       'HiFi',
 +    'i.MX6':        'HiFi',
 +    'ORIGEN':       'Pri_Dai',
 +    'Snowball':     'Headset'
 +}
 +
 +MicJack = {
 +    'GenuineIntel': 'Analog',
 +    'Panda':        'Headset', # not sure though, arecord doesn't work for me!
 +    'Beagle':       'TWL4030',
 +    'i.MX53':       'HiFi',
 +    'i.MX6':        'HiFi',
 +    'ORIGEN':       'Pri_Dai',
 +    'Snowball':     'Headset'
 +}
 +
 +# As and when HDMI out/in device string differs, we'll need 2 dictionaries.
 +HDMI_Audio = {
 +    'GenuineIntel': 'HDMI',
 +    'Panda':        'HDMI',
 +    'Beagle':       'HDMI',
 +    'i.MX53':       'HDMI',
 +    'i.MX6':        'HDMI', # audio out only supported, audio in not supported.
 +    'ORIGEN':       'HDMI',
 +    'Snowball':     'hdmi'  # odd one, lowercase
 +}
 +
 +Audio_Devices = {
 +    'playback': 'aplay-l.log',
 +    'capture':  'arecord-l.log'
 +}
 +
 +def get_device_list(logfile):
 +    fobj = open(logfile)
 +    return fobj
 +
 +def get_card_device_info_by_name(devicestr, fobj):
 +    """Helper routine to get card, device number by interface name"""
 +    optxt = fobj.readlines()
 +    card = ""
 +    device = ""
 +    for line in optxt:
 +        if devicestr in line:
 +            cardtext, colon, text = line.partition(':')
 +            pre, sep, card = cardtext.partition(' ')
 +            devtext, colon, text = text.partition(':')
 +            pre, sep, device = devtext.partition('device ')
 +            break
 +    hwstr = 'plughw:' + card + ',' + device
 +    if card == "" and device == "":
 +        return None
 +    return hwstr
 +
 +def speaker_test_audio_jack(board):
 +    fobj = get_device_list(Audio_Devices['playback'])
 +    print "speaker-test over audio jack .."
 +    headset_jack_id = get_card_device_info_by_name(SpeakerJack[board], fobj)
 +    fobj.close()
 +    if headset_jack_id is None:
 +        print "No Audio Jack found !"
 +        return
 +    call("date > speaker-test-jack.log", shell=True)
 +    cmdstr = "speaker-test -D " + headset_jack_id + " -t s -c 2 -l 1 >> speaker-test-jack.log 2>&1"
 +    call(cmdstr, shell=True)
 +    print "If you heard beeps from left and right speaker, test passed."
 +
 +def aplay_over_jack(board):
 +    fobj = get_device_list(Audio_Devices['playback'])
 +    headset_jack_id = get_card_device_info_by_name(SpeakerJack[board], fobj)
 +    print "Testing aplay over audio jack .."
 +    fobj.close()
 +    if headset_jack_id is None:
 +        print "No Audio Jack found !"
 +        return
 +    cmdstr = "aplay login.wav -D " + headset_jack_id
 +    call(cmdstr, shell=True)
 +    print "If you heard a stereo sound, test passed."
 +    # Check dmesg to see if there is an error or I2C error
 +    call("dmesg | tail | grep error", shell=True)
 +    print "Note: Error, if any, shall be displayed above."
 +
 +def arecord_and_aplay_over_audio_jack(board):
 +    # Record the WAV stream and play it back.
 +    fobj = get_device_list(Audio_Devices['capture'])
 +    mic_jack_id = ""

Re: Linaro recommended (tm) brand of SD card?

2012-05-07 Thread Paul Larson
In the lab, we use the SanDisk Extreme cards, IIRC.  What I hear from
others, and see personally, is that most Class 10 cards seem to work OK,
but Kingston cards are generally avoided.

On Mon, May 7, 2012 at 7:15 PM, Michael Hudson-Doyle 
michael.hud...@linaro.org wrote:

 Hi,

 The SD card I routinely use for testing which I got at some Linaro
 meeting or other has fallen apart (physically), so I'm on the hunt for a
 new one.  Does anyone have a recommendation of a brand of card I should
 be looking for?  For LAVA stuff, it needs to be at least 8 gigs.

 Cheers,
 mwh

 ___
 linaro-dev mailing list
 linaro-dev@lists.linaro.org
 http://lists.linaro.org/mailman/listinfo/linaro-dev

___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: Call for testing Linaro Android 12.04 candidates

2012-04-10 Thread Paul Larson
Looking through the spreadsheets linked below, it doesn't appear that any
results have been logged so far.  Please log results in the spreadsheet
ASAP, and ping me and/or the Android team if you run into issues.

Thanks,
Paul Larson

On Mon, Apr 9, 2012 at 10:15 AM, Fathi Boudra fathi.bou...@linaro.org wrote:

 We have prepared Linaro Android 12.04 candidates. Spreadsheets have
 been updated.
 Please, run the tests that have a 'w' after them. Thanks!

 Testers, builds and spreadsheets
 

 Jon

 https://android-build.linaro.org/builds/~linaro-android/vexpress-ics-gcc46-armlt-stable-open-12.04-release/

 https://docs.google.com/a/linaro.org/spreadsheet/ccc?key=0AkxwyUNxNaAadExQdHNxTnR5SFZCQzJnN1ZtQ2ZhWkE

 Chenglie

 https://android-build.linaro.org/builds/~linaro-android/origen-ics-gcc46-samsunglt-stable-blob-12.04-release/

 https://docs.google.com/a/linaro.org/spreadsheet/ccc?key=0AkxwyUNxNaAadDRDVl9TSHUweUk3eG9ndk9sNGxUVnc

 Bernard

 https://android-build.linaro.org/builds/~linaro-android/imx6-ics-gcc46-freescalelt-stable-open-12.04-release/

 https://docs.google.com/a/linaro.org/spreadsheet/ccc?key=0AkxwyUNxNaAadDE1MlJmY19PQzlJOUY5OXk0SWJYT1E

 Vishal

 https://android-build.linaro.org/builds/~linaro-android/panda-ics-gcc46-tilt-tracking-blob-12.04-release/

 https://docs.google.com/a/linaro.org/spreadsheet/ccc?key=0AkxwyUNxNaAadGVWd3pZazdaRUU0MnRnWmgwbVhTR0E

 Botao

 https://android-build.linaro.org/builds/~linaro-android/imx53-ics-gcc46-freescalelt-stable-open-12.04-release/

 https://docs.google.com/a/linaro.org/spreadsheet/ccc?key=0AkxwyUNxNaAadDE1MlJmY19PQzlJOUY5OXk0SWJYT1E

 Mathieu

 https://android-build.linaro.org/builds/~linaro-android/snowball-ics-gcc46-igloo-stable-blob-12.04-release/

 https://docs.google.com/a/linaro.org/spreadsheet/ccc?key=0AkxwyUNxNaAadEF1NXVhT3dQWnZsTHBydnpiWVB4Umc

 Amit

 https://android-build.linaro.org/builds/~linaro-android/vexpress-rtsm-ics-gcc46-armlt-stable-open-12.04-release/
 No spreadsheet for vexpress-rtsm build.

 ___
 linaro-dev mailing list
 linaro-dev@lists.linaro.org
 http://lists.linaro.org/mailman/listinfo/linaro-dev

___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: [Linaro-validation] Toolchain builds in LAVA

2012-03-28 Thread Paul Larson
On Wed, Mar 28, 2012 at 10:03 AM, Andrew Stubbs andrew.stu...@linaro.org wrote:

 On 28/03/12 03:26, Michael Hope wrote:

 Hi there.  The GCC build time is approaching 24 hours and our five
 Panda boards can't keep up.  Sounds like a good time to LAVAise the
 toolchain build process a bit more...


 As you know, I've been doing some experiments with this over the last few
 months. I was blocked by a LAVA bug for a while, but that's been fixed now.

 Here's the latest test run (done by Le Chi Thu):

  http://validation.linaro.org/lava-server/dashboard/permalink/bundle/d73af579ed77957615bd3db2d9055d82bb14299e/

 The test fails due to a GCC build failure:

  /usr/include/linux/errno.h:4:23: fatal error: asm/errno.h: No such
  file or directory

 This surprised me, because I thought it was using the same rootfs you had
 on the ursa boards, but I've not had time to do anything about it yet.

Looks like something that can likely be resolved by adding a dependency for
the test.  However, if you need, or if it would be more convenient to have
a custom rootfs for this, that's certainly an option.  Nothing says we
necessarily have to run these tests on nano, developer, etc... but if it's
interesting to make it possible for later running this as part of a
platform release test, it might be better to make them generic so that they
don't depend on a custom rootfs.

Thanks,
Paul Larson
___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: [Linaro-validation] Toolchain builds in LAVA

2012-03-27 Thread Paul Larson
On Tue, Mar 27, 2012 at 9:26 PM, Michael Hope michael.h...@linaro.org wrote:

 Hi there.  The GCC build time is approaching 24 hours and our five
 Panda boards can't keep up.  Sounds like a good time to LAVAise the
 toolchain build process a bit more...

 Mike Hudson and I talked about doing a hybrid LAVA/cbuild as a first
 step where LAVA manages the boards and cbuild does the build.  The
 idea is:

  * There's a image with the standard toolchain kernel, rootfs, build
 environment, and build scripts
  * The LAVA scheduler manages access to the boards
  * The LAVA dispatcher spins up the board and then invokes the cbuild
 scripts to build and test GCC

 This gives me a fixed environment and re-uses the existing build
 scripts.  In the future these can be replaced with different LAVA
 components.  There's a few details:

  * cbuild self updates on start so the image can stay fixed
  * Full results are pushed by cbuild to 'control'
  * The dispatcher records the top level log and an overall pass/fail
  * No other bundles are generated

 Most of this fits in with the existing LAVA features.  There's a few
 additions like a 'run command' JSON action, passing the board name to
 the instance, and passing the job name as an argument.  These seem OK.

 I'd like some of the boards to be fitted with fast USB flash drives
 for temporary storage.  SD cards are slow and unreliable especially
 when GCC is involved.

It just so happens that some of the panda boards already have usb drives
attached, and have the tag "usb-flash-drive", so that jobs can be scheduled
in such a way that you can basically say "give me some panda with a usb
flash drive".  Right now, it's just a few, but the plan is to add the usb
sticks to all boards soon.  Dave, do we have them now and we're just
planning to put this in as part of the move?
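A hedged sketch of what requesting such a tagged board might look like in a dispatcher job file (the field names and values here are illustrative of the JSON job format of the time, not the exact schema in use):

```json
{
    "job_name": "gcc-build-and-test",
    "device_type": "panda",
    "tags": ["usb-flash-drive"],
    "timeout": 86400
}
```

The scheduler would then match the job only against boards carrying that tag, which is exactly the "just give me some panda with a usb flash drive" behavior described above.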

Thanks,
Paul Larson
___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


LAVA Downtime Planned

2012-03-15 Thread Paul Larson
This is just an early notification that the Linaro validation farm will be
physically moving to a new site next week.  Unfortunately, the network
cables won't stretch that far, so it will mean some downtime. :)

Here's the tentative plan:

Wednesday, March 21st - the rack with the toolchain server will be shut
down, crated and sent off
Thursday, March 22nd - pretty much everything else will go offline (main
LAVA server, development boards)
Friday, March 23rd - physical move will happen, and if all goes well (what
could possibly go wrong?!) the main site and as many of the boards as
possible will go back online in their new home
Saturday, March 24th to Sunday, March 25th - finish configuring and
bringing up cloud services, toolchain server, etc.

There's another bit of good news here, that the lab will be getting a much
needed upgrade in the internet connection. A partial (but significant one)
at first, and a bigger one in about a month or so with redundant
connections.

Bear in mind that this is just the tentative plan, Dave or I will send out
a reminder and update as the date gets closer, or if something changes.

Thanks,
Paul Larson
___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: How to participate in the connect remotely

2012-02-06 Thread Paul Larson
Connect.linaro.org/events/remote-participation has links to each of the
rooms with their google hangout sessions.  You may want to try the blue
room one in advance.  It may need a bit of fine tuning today, and it will
be interesting to see what kind of load we get and the effect of it, but it
was working pretty well in the tests we did with it last night.

Thanks,
Paul Larson
On Feb 6, 2012 5:54 AM, Zygmunt Krynicki zygmunt.kryni...@linaro.org
wrote:

 Hi.

 I'm curious how/if remote participation is going to work during this
 connect. Unlike past events we will not have the advantage of the Canonical
 IS teams that made everything just work. This probably includes the
 microphones in each room, the IRC channels, the projectors and the magic
 wifi.

 Is there any general way to participate remotely during this event or
 should I just bug my friends to place a laptop on a chair and run google
 hangouts?

 Thanks
 ZK

 --
 Zygmunt Krynicki
 Linaro Validation Team

___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Validation at the Linaro Connect

2012-02-03 Thread Paul Larson
Interested in Validation and going to the Linaro connect?  Here are some of
the sessions already scheduled that you might be interested in on the
Validation track:

Automated bootloader testing in LAVA
https://blueprints.launchpad.net/lava-dispatcher/+spec/linaro-validation-q112-bootloader-testing

Discuss what needs to be done still on celery, and usage
https://blueprints.launchpad.net/lava-lab/+spec/linaro-validation-q112-lava-celery

LAVA Server Admin tools
https://blueprints.launchpad.net/lava-server/+spec/linaro-validation-q112-admin-tool

Discuss test plans for big.LITTLE
https://blueprints.launchpad.net/lava-test/+spec/linaro-validation-q112-big-little-testplan

Monitoring and managing device health in LAVA
https://blueprints.launchpad.net/lava-lab/+spec/linaro-validation-q112-device-health-monitoring

Additionally, here are some sessions in other tracks that you may like:

Ubuntu LEB and LAVA: Current status and future planning for proper
image testing and validation
https://blueprints.launchpad.net/linaro-ubuntu/+spec/linaro-platforms-q112-lava-and-ubuntu-leb-testing-validation

Linaro's Kernel CI Process discussion and Feedback
https://blueprints.launchpad.net/linaro/+spec/linaro-kernel-ci-q112-discussion

End to End Demo of Enabling Something for LAVA
https://blueprints.launchpad.net/linaro/+spec/linaro-training-q112-end-to-end-lava-demo

Make sure to check
http://summit.linaro.org/lcq1-12/track/linaro-validation/ throughout
the week for changes and additions.

If you are around in the afternoons and want to help out on LAVA
development, or want help getting started in it, swing by the validation
hacking sessions and talk to us.

See you next week!
___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: DisableSuspend.apk

2012-01-16 Thread Paul Larson
 The shell "service call power ..." command is probably just a runtime
 setting, so we have to run it on every boot, no?

 Can we add this to a generic place in lava-android-test (or
 dispatcher) to ensure we call this first thing every time when booting
 up a unit?

 I shoved it temporarily into my dispatcher running locally.  A better
solution going forward is to probably allow for some board-defined extra
setup steps that run right after boot.  In this case, I also had to add a
sleep for about 15 seconds because this service doesn't seem to be
available as soon as the board is booted.

It does work great though, and I'm able to keep the board up and running
with it!  Thanks for figuring this out Yongqin!

-Paul Larson
___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: snowball does not compile on linux-next

2012-01-03 Thread Paul Larson
On Tue, Jan 3, 2012 at 7:20 AM, Christian Robottom Reis k...@linaro.org wrote:

 On Tue, Jan 03, 2012 at 08:19:37AM +0100, Linus Walleij wrote:
  On Mon, Jan 2, 2012 at 4:25 PM, Christian Robottom Reis k...@linaro.org
 wrote:
   On Mon, Jan 02, 2012 at 01:59:10PM +0100, Linus Walleij wrote:
   On Mon, Jan 2, 2012 at 12:49 PM, Daniel Lezcano
   daniel.lezc...@linaro.org wrote:
  
the kernel does not compile when
CONFIG_CLKSRC_DBX500_PRCMU_SCHED_CLOCK is set.
  
   Should this be in our default configs, or one of our tested configs, so
   we can catch this in a CI loop?
 
  It should be in the defconfig already, the problem showed up
  for Daniel exactly because it was configured in.

It doesn't seem to be one of the currently tested configs - it sounds like
it should be added though (infrastructure added to cc list).


 Okay, then the question is what do we need to do to make a box go red
 and have people notified when it breaks?

The box would go red if it were tested, but the mechanisms for registering,
subscribing to, and getting notifications on events does not yet exist so
it would require someone to check for things like this.  It's on our radar
as something that needs to be added.

Thanks,
Paul Larson
___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: Powering up snowball over USB

2011-12-13 Thread Paul Larson
Possibly a kernel bug?  Lee, anything on snowball that would cause it to
not go through with the reboot when on usb power?

Thanks,
Paul Larson

On Tue, Dec 13, 2011 at 9:50 AM, Dave Pigott dave.pig...@linaro.org wrote:

 Hi all,

 Sorry for the wide distribution, but I've got a rather curious problem.

 I'm trying to get a Snowball V11 PDK working in the validation lab, and
 the only way to get it to boot on power-on is to power it over USB and
 control the power through a USB power adaptor. While this works admirably,
 the problem is that when I try to soft-reboot I get the usual:

 Broadcast message from root@master
 (/dev/ttyAMA2) at 15:48 ...

 The system is going down for reboot NOW!
  * Asking all remaining processes to terminate...              [ OK ]
  * All processes ended within 1 seconds                        [ OK ]
  * Deconfiguring network interfaces...                         [ OK ]
  * Deactivating swap...                                        [ OK ]
 umount: /run/lock: not mounted
  * Will now restart
 [ 1279.346801] Restarting system.


 and then nothing. It just hangs and never comes back until I power cycle
 it.

  BTW: Sometimes I get the whole of the "Restarting system" message,
 sometimes just half of it.

 Has anybody any idea why this might be the case and, if so, what I can do
 about it?

 Thanks

 Dave

  Dave Pigott
 Validation Engineer
 T: +44 1223 45 00 24 | M +44 7940 45 93 44
 Linaro.org http://www.linaro.org/ | Open source software for ARM SoCs
 Follow Linaro: Facebook http://www.facebook.com/pages/Linaro |
 Twitter http://twitter.com/#!/linaroorg | Blog http://www.linaro.org/linaro-blog/






Re: [Samsung] Origen boards in Lava

2011-12-13 Thread Paul Larson
On Thu, Dec 8, 2011 at 10:40 PM, Sree kumar sreekuma...@samsung.com wrote:

  Thanks a lot for the update.

 We are interested in the number of test cases and the quality of the test
 cases.

 1.Can LAVA detect critical bugs related to UMM?

The Multimedia Working Group is currently working on some tests for UMM on
their CMA test.  We'll be looking at getting some regular test runs with
that when it's complete.  More information on that is available here:
https://wiki.linaro.org/OfficeofCTO/MemoryManagement/Validation/CMATest


 2.Is it useful in testing power management?

For power management, we have the ability to run tests that don't require
any specific instrumentation, but we do not yet have devices in the lab to
allow advanced power-consumption measurements in our LAVA lab instance.
Once we have such devices, we can look at the board-level modifications
needed to connect them, and at the extensions LAVA will need in order to
make use of them.  We have been working with the Power Management team to
find a suitable solution that matches our needs for scalability and
usability in the lab.


 Do you have the details of test coverage? If “yes”, please share.

 Kernel- how many test cases? which area is covered?

Based on the good feedback you gave us during Connect, the kernel WG has
started to look into this more thoroughly on the kernel test front, as have
the landing teams, but they will not yet have those tests in the test pool
before the end of December.

LAVA is the validation *infrastructure* to automatically deploy Linux
images to Linaro supported development boards, run tests on them, and
collect the results into a human readable dashboard system.  For the actual
tests, we *strongly* encourage contributions from members and working
groups so that we can ensure their components and devices are well-tested
under LAVA.  Additionally, we plan to make test cases a focus topic at the
next Linaro Connect, and will do what we can to continue to pick up new
tests that we know will be generally useful.

Thanks,
Paul Larson


Re: lava-deployment-tool is now (also) on github

2011-12-08 Thread Paul Larson
On Thu, Dec 8, 2011 at 5:46 AM, Alexander Sack a...@linaro.org wrote:

 On Wed, Dec 7, 2011 at 9:15 PM, Zygmunt Krynicki 
 zygmunt.kryni...@linaro.org wrote:

 Hi folks.

 So there's been some justified uproar about me moving some stuff to
 github without asking. I wanted to let you all know that we have a
 Linaro organization on github (ping danilos to get your github account
 included there). I've created the validation team there and allowed it
 to push/pull to https://github.com/Linaro/lava-deployment-tool I also
 have my own fork.


 LAVA is hosted on Launchpad. Unless you can convince the whole team to move
 everything to git, please don't do one-man shows like this, and put this on
 bzr as well...

 Thanks for applying common sense.

Zygmunt and I discussed this yesterday and I was very clear that we are not
splitting up development of components across lp and github without more
compelling reasons, and communication and decision in advance.

To be very clear, the official repository for this project[1], as all other
LAVA related projects[2] will be on Launchpad in bzr.  I don't see any
reason to change that in the foreseeable future, but if it does, it will be
communicated well in advance.

Thanks,
Paul Larson

[1] http://launchpad.net/lava-deployment-tool
[2] http://launchpad.net/lava


Re: Master images are a mess

2011-12-08 Thread Paul Larson
On Wed, Dec 7, 2011 at 7:27 PM, Michael Hudson-Doyle 
michael.hud...@canonical.com wrote:

 On Wed, 7 Dec 2011 11:44:05 -0600, Paul Larson paul.lar...@linaro.org
 wrote:
  Sure, we could provide a command line tool for looking up those things
  in
  the lava database, and give admins an easy interface to just say take me
  to the console of this machine, or hardreset this machine.  If we did
  that, and also added attached serial multiplexing, we will have...
  rewritten conmux. :)

 We don't need multiplexing though, and having a persistent daemon seems
 to me to be the source of much of the (well, my) angst with conmux.

This has come in handy for me before when debugging locally, or dealing
with boards that need serial or usb attachment.


Re: Master images are a mess

2011-12-07 Thread Paul Larson
 anyone without a good plan of how to get them
standing again.


 6) We should drop conmux. As in the lab we already have TCP/IP sockets
 for the serial lines we could just provide my example serial-tcp
 script as lava-serial service that people with directly attached
 boards would use. We could get a similar lava-power service if that
 would make sense. The lava-serial service could be started as an
 instance for all USB/SERIAL adapters plugged in if we really wanted
 (hello upstart!). The lava-power service would be custom and would
 require some config but it is very rare. Only lab and me have
 something like that. Again it should be instance based IMHO so I can
 say: 'start lava-power CONF=/etc/lava-power/magic-hack.conf' and see
 LAVA know about a power service. One could then say that a particular
 board uses a particular serial and power services.

Another good topic for the connect I think.  It needs a lot of fleshing
out, but it sounds like you are basically suggesting we should recreate the
functionality of conmux in LAVA.  Conmux does two primary things for us:
1. It gives us a single interface for dealing with a variety of boards.
2. It provides a safer, more convenient way of dealing with console and
power on boards.

To console in to a board and hard-reset the power with conmux:
 $ conmux-console panda01
 $ ~$hardreset

Without it?
 [lookup wiki or database to find the console server/port]
 $ telnet console01 7001
 [notice it's hung]
 ^] quit
 [lookup wiki or db to find the pdu server/port]
 $ telnet pdu01
 [go through long menu driven interface to tell it to reset the port]
 ^] quit
 $ telnet console01 7001
 [still hung... notice that you accidentally reset some other board that
someone was running a test on]

Sure, we could provide a command line tool for looking up those things in
the lava database, and give admins an easy interface to just say take me
to the console of this machine, or hardreset this machine.  If we did
that, and also added attached serial multiplexing, we will have...
rewritten conmux. :)
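As a purely hypothetical sketch (the board table, names, and command shape
below are invented for illustration; no such tool existed in LAVA at the
time), the lookup half of such a tool could start as small as:

```python
import argparse

# Hypothetical sketch of the "look up console/power for a board" CLI
# discussed above.  The board table and service names are invented for
# illustration only -- in practice this data would live in the LAVA database.
BOARDS = {
    "panda01": {"console": ("console01", 7001), "pdu": ("pdu01", 4)},
}

def lookup(board, kind):
    """Return (host, port-or-outlet) for a board's console or pdu entry."""
    try:
        return BOARDS[board][kind]
    except KeyError:
        raise SystemExit("unknown board or service: %s/%s" % (board, kind))

def main(argv=None):
    parser = argparse.ArgumentParser(
        description="find the console or power service for a board")
    parser.add_argument("kind", choices=["console", "pdu"])
    parser.add_argument("board")
    args = parser.parse_args(argv)
    host, port = lookup(args.board, args.kind)
    print("%s:%s" % (host, port))

if __name__ == "__main__":
    main()
```

The point is only that the lookup itself is trivial; the parts conmux
already solves are the multiplexing and the safe console/power pairing.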

That being said, I think it could still be useful because it can give us
an API to call from within LAVA, rather than having to call out to a
command line tool.  However, I don't see a huge urgency for it at the
moment.

Thanks,
Paul Larson


Re: Phasing out Ubuntu packages for LAVA

2011-12-06 Thread Paul Larson
This mostly affects the server side components, yes.  I'm open to the idea
of continuing packages for the client-side tools, if it would be useful to
others.

Thanks,
Paul Larson

On Tue, Dec 6, 2011 at 5:12 AM, Alexander Sack a...@linaro.org wrote:

 On Tue, Dec 6, 2011 at 9:46 AM, Alexandros Frantzis 
 alexandros.frant...@linaro.org wrote:

 On Tue, Dec 06, 2011 at 10:16:22AM +0200, Fathi Boudra wrote:
  The deployment of LAVA using Ubuntu packages is deprecated in favor of
 the
  dedicated LAVA deployment tool [1]. LAVA installation is tested and
 supported
  on Ubuntu host from version 10.10 to 11.10 (Maverick to Oneiric) [2].
 
  From 11.12 cycle, the following conditions will be applied:
   * Linaro Validation PPA [3] won't be updated with latest LAVA releases.
   * Bugs affecting packages won't be fixed.
 
  We plan to provide LAVA deployment tool from Linaro Validation PPA.
 
  [1] http://launchpad.net/lava-deployment-tool
  [2] Ubuntu 10.04 LTS (Lucid) isn't supported due to a bug with upstart
  [3] http://launchpad.net/~linaro-validation/+archive/ppa
 

 Does this affect all LAVA packages (eg lava-test, lava-dashboard-tool),
 or just the ones pertaining to the server?


 I would hope that client side tools that have reached a certain degree of
 maturity (read: you don't need to SRU stuff to ubuntu released version
 regularly due to changes on server side) would be still be made available
 in
 packaging form.

 --
 Alexander Sack
 Technical Director, Linaro Platform Teams
 http://www.linaro.org | Open source software for ARM SoCs
 http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog







Re: Phasing out Ubuntu packages for LAVA

2011-12-06 Thread Paul Larson
We only have things in the PPA right now, so SRUs aren't needed anyway.
Are you talking about continuing packages for everything, or just for the
client side tools?

On Tue, Dec 6, 2011 at 9:46 AM, Alexander Sack a...@linaro.org wrote:

 On Tue, Dec 6, 2011 at 3:24 PM, Paul Larson paul.lar...@linaro.org wrote:

 This mostly affects the server side components, yes.  I'm open to the
 idea of continuing packages for the client-side tools, if it would be
 useful to others.

 At best we would continue packaging for the ubuntu development releases,
 but not need to do SRUs regularly to keep them working with new servers...
 you think that's possible to maintain at this point?

 --
 Alexander Sack
 Technical Director, Linaro Platform Teams
 http://www.linaro.org | Open source software for ARM SoCs
 http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog





Re: LAVA upgrade and deployment change

2011-12-05 Thread Paul Larson
Great work, many thanks for moving this along!  This should help us
streamline the process, and also make it easier to set up production as
well as testing environments.

Thanks,
Paul Larson

On Mon, Dec 5, 2011 at 9:10 PM, Michael Hudson-Doyle 
michael.hud...@canonical.com wrote:

 On Mon, 5 Dec 2011 13:04:26 +0100, Zygmunt Krynicki 
 zygmunt.kryni...@linaro.org wrote:
  Hi LAVA users,
 
  The Linaro Validation Team is preparing to change the way we deploy
  LAVA in production.  We expect this will require 1 hour of downtime or
  less on Monday beginning at 22:00 UTC.
  This upgrade will streamline our deployment process, allowing us to
  deploy changes more frequently and quickly deliver incremental
  improvements. This change will also allow others to deploy LAVA in the
  same way, for development or production purposes.
 
  As the API will not be working during the downtime, there is the
  chance that some automatically-submitted jobs (e.g. from
  android-build) will be lost.  We hope to work soon on an approach that
  will avoid this risk.

 This has now been completed.  There are a few bug fixes deployed in the
 new rollout, but the real change of course is that we should be able to
 bring you improvements much more quickly than before.

 The downtime ended up being very short, maybe 10 minutes plus a couple of
 Apache restarts, so we should not have lost many submitted jobs.

 Cheers,
 mwh




lava-deployment-tool

2011-11-16 Thread Paul Larson
As part of an effort to streamline deployment of LAVA, and allow for a more
continuous testing and deployment process, the Linaro Validation Team has
been working on a tool to assist with deployment of LAVA components.  The
intent is to create something that is suitable for production deployments
as well as development/evaluation environments, and even allow for separate
instances running on the same machine.

Not everything is implemented just yet, but we're interested in putting
this out there for wider testing and feedback.  Because this is a very new
tool, and needs to do things like add configuration to apache and add
databases to your system, we highly recommend using a VM or cloud instance
to test this.

To get started, you'll need to get the source from bzr and then check out
the README:
bzr branch lp:lava-deployment-tool

Special thanks to Zygmunt Krynicki for getting this started, and Michael
Hudson-Doyle for being an early guinea pig.

Thanks,
Paul Larson


Re: LAVA validation status

2011-11-14 Thread Paul Larson
On Mon, Nov 14, 2011 at 5:18 AM, Alexander Sack a...@linaro.org wrote:

 Problem is that with CI we cannot really hold off submitting jobs because
 that happens ad-hoc when a build finishes.

Agree, this isn't always easy, but neat to see that infrastructure has a
workaround for it already.


 I understand the need to have a clean lab pipe. However, for that we need
 to work on a mechanism that allows you
 to put submitted jobs into a queue that isn't processed etc.

 Would also be useful to put jobs there during maintenance so the frontend
 doesn't fail submitting it's job.

 There are a few issues to resolve around this.  It would have to be a
separate piece running somewhere else (like a cloud instance).  To make it
more reliable, we'd really need to run more than one of these as backups
for one another so that when/if we need to reboot it, it could be phased.
It would also lack the ability to tell you the job number, since it can't
maintain a connection and give you that information, so there would be no
direct way to reference the job without coming up with another unique
identifier to go by.

A simpler approach in the short term that's been requested already, is the
ability to resubmit a failed job.  This covers more situations than the one
described here.

Thanks,
Paul Larson


Re: Lava testing not specific to a pack/board/release

2011-11-07 Thread Paul Larson
Hi David, first off, thanks for bringing this forward.  We really
appreciate getting additional tests into lava, especially those that the
engineers really care about.

Is this test part of an existing test suite? For instance, ltp?  If so,
then we just need to make sure we are running the right version of ltp to
pick it up.

If it's a whole new test suite, then there are two routes we could go.  In
the short term, I'd help you look at creating a wrapper so that lava-test
can run it.  Ignore the json sections you posted below; that's not
something you need to mess with for this step.  In fact, it's just
something we will need to add to the daily template that gets run on all
images, so it isn't anything you need to worry about at all.  The test
wrapper is just a small bit of Python code that tells lava-test how to
install the test, run it, and parse the results.  Most of them are pretty
simple.
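To make that concrete, here is a rough Python sketch of the shape of such a
wrapper.  Everything in it (the install steps, the run command, and the
assumed "testname: pass|fail" output format) is an invented illustration,
not the real lava-test API:

```python
import re

# Hypothetical wrapper sketch: the three pieces a test wrapper carries are
# how to install the test, how to run it, and how to parse its output.
# All names and the assumed output format are illustrative only.
INSTALL_STEPS = ["bzr branch lp:pthread-tests", "make -C pthread-tests"]
RUN_CMD = "./pthread-tests/run.sh"

# Assumed result-line format: "test_case_id: pass" or "test_case_id: FAIL"
RESULT_LINE = re.compile(
    r"^(?P<test_case_id>[\w.-]+):\s+(?P<result>pass|fail)$", re.IGNORECASE)

def parse_results(output):
    """Turn raw test output into a list of result dicts."""
    results = []
    for line in output.splitlines():
        match = RESULT_LINE.match(line.strip())
        if match:
            results.append({"test_case_id": match.group("test_case_id"),
                            "result": match.group("result").lower()})
    return results
```

The parsing step is usually the only part with any real logic; install and
run are just command lists the framework executes for you.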

Long term, I believe both the kernel and dev platform teams are working on
some test suites to go into lava that will be for the express purpose of
sanity and regression tests in linaro images.

Could you point me at where I might find these pthread tests right now so
that I can have a look?

Thanks,
Paul Larson

On Mon, Nov 7, 2011 at 5:14 AM, David Gilbert david.gilb...@linaro.org wrote:

 On 7 November 2011 09:57, Zygmunt Krynicki zygmunt.kryni...@linaro.org
 wrote:
  W dniu 04.11.2011 15:35, David Gilbert pisze:
 
  Hi,
I've got a pthread test that is the fall out of a bug fix which is a
  good test
  of kernel and libc and would like to add it into Lava.
 
I'm told that I currently have to pick a hardware pack and image - but
  for
  this case what I really want to do is to create a test that can be used
  as a regression test that gets run on whatever the latest kernel is on
  any of a set of platforms (basically anything of Panda speed or
  similar/faster).
 
  LAVA is happy to run your test on any hardware pack and root fs you want.
  The first step is to make your test visible to the stack. To do that you
  should wrap it with our lava-test framework. If your test can be
 compiled to
  a simple executable and invoked to check for correctness then wrapping it
  should be easy as pie. Look at LAVA test documentation lava:
  http://lava-test.readthedocs.org/en/latest/index.html (and report back
 on
  missing pieces).
 
  The next step is to determine when your test should run. I think that
 there
  is nothing wrong with simply running it daily on all the hardware packs.

 Hi,
  Thanks for the response.
  My question was started off by reading some notes from ams; in those
 the example
 has a .json file that has a section that looks like:

    {
        "command": "deploy_linaro_image",
        "parameters": {
            "rootfs": "ROOTFS-URL-HERE",
            "hwpack": "HWPACK-URL-HERE"
        }
    },

 Now you say 'I think that there is nothing wrong with simply running
 it daily on all
 the hardware packs' - what do I change that section to, to indicate that?
 (all hw packs is a bit over the top, it would be good to restrict it a
 little to save
 time, but still, it would be a good start).

 I don't see references to that json file in the docs you link to.

 Dave




validation.linaro.org scheduler working again

2011-10-24 Thread Paul Larson
Upon coming in this morning I had reports of failures to schedule jobs in
LAVA, and was noticing about that same time that the daily jobs that
normally run seemed to be missing.  There was a lingering issue with the
network upgrade on Friday that was causing the problem, but once discovered
it was quickly corrected.  This would also have affected any CI jobs since
then, so if you have any CI jobs with missing results, this will be the
reason for it.  Feel free to re-submit those jobs now if you need them.

Thanks,
Paul Larson


Re: Oneiric image directory on snapshots.linaro.org

2011-10-07 Thread Paul Larson
On Fri, Oct 7, 2011 at 11:35 AM, James Westby james.wes...@canonical.com wrote:

 Note that Paul requests keeping the suffixes, but I guess that's
 predicated on renaming the top level.

The main reason I like having a suffix of some kind is so that it's obvious
which series it's from by the filename.  If someone includes that filename
(without path) in a bug, or just has it lying around their system, it's good
to be able to determine that kind of information having just the name of the
file.

Another thing that this brings up is landing team hwpacks.  Am I correct in
assuming that those kind of exist outside of any series?  Would it, perhaps,
make more sense to separate hwpacks and images out at the top directory, and
break it down by series under that?

Thanks,
Paul Larson


Oneiric image directory on snapshots.linaro.org

2011-10-06 Thread Paul Larson
I know this has come up before, but I'd like to raise it again: Is there any
reason for the lack of consistency between directory layouts for:
http://snapshots.linaro.org/oneiric/
vs.
http://snapshots.linaro.org/11.05-daily/

Would it be possible to rename the oneiric directory to 11.11-daily and
split the hwpacks off into a linaro-hwpacks directory like the 11.04 images?
Also, keeping some kind of -oneiric or -11.11 on the image/hwpack dirs
probably makes sense, but are we planning on changing anything after these
become the default LEB images?

Thanks,
Paul Larson


Re: cpu hotplug test scenarii

2011-09-28 Thread Paul Larson
On Wed, Sep 28, 2011 at 5:18 AM, Daniel Lezcano daniel.lezc...@linaro.org wrote:

 Test 6: do some stress test
 ===

 for each cpu
  * for i in 1 .. 50
   * set cpuX offline
   * set cpuX back online

Not sure how evil you want to get with this, but several years back I wrote
a simple but nasty test that picked a CPU at random and set its state to
the opposite of whatever it happened to be at the time.  It was completely
random; I didn't have anything built in for record/playback, or for running
the same sequence again for reproducibility, but it didn't really need it.
When it broke things, it usually broke them within just a few minutes.
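A minimal sketch of that random-toggle idea, assuming the usual sysfs
hotplug interface; the helper names, dry_run guard, and iteration count are
illustrative (and on most systems cpu0 cannot be offlined, so it is left
out of the example state table):

```python
import random

# Sketch of the "random hotplug" stress idea described above: repeatedly
# pick a CPU at random and flip its online state.  The sysfs write is
# guarded by dry_run so the planning logic can be exercised anywhere;
# writing the real file requires root, and cpu0 usually can't be offlined.
SYSFS = "/sys/devices/system/cpu/cpu{n}/online"

def plan_toggle(cpu_states, rng=random):
    """Choose a random CPU and return (cpu, flipped_state)."""
    cpu = rng.choice(sorted(cpu_states))
    return cpu, 0 if cpu_states[cpu] else 1

def stress(cpu_states, iterations=100, dry_run=True, rng=random):
    """Flip random CPUs on/off; returns the (cpu, state) history."""
    history = []
    for _ in range(iterations):
        cpu, state = plan_toggle(cpu_states, rng)
        if not dry_run:
            with open(SYSFS.format(n=cpu), "w") as f:
                f.write(str(state))
        cpu_states[cpu] = state
        history.append((cpu, state))
    return history
```

Seeding the rng argument would give back the record/playback property the
original throwaway version lacked.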

Thanks,
Paul Larson


Re: Reboot not working with the newly built upstream custom kernel

2011-09-27 Thread Paul Larson
Just out of curiosity, have you tried building an image locally using this
hwpack and booting it?

Thanks,
Paul Larson

On Tue, Sep 27, 2011 at 8:55 AM, Deepti Kalakeri deepti.kalak...@linaro.org
 wrote:


 Hello,

  I am trying to build upstream kernels like linux-linaro-3.0,
  linux (Linus's tree), linux-next, and linux-arm-soc for-next with the
  omap2plus defconfig, create a kernel Debian package, and use that Debian
  package to verify the build. The builds are enabled with the ext3 file
  system.

  The script used to build the kernel, for example, is
 http://bazaar.launchpad.net/~linaro-infrastructure/linaro-ci/lci-build-tools/view/head:/jenkins_kernel_build_inst
 .
  The builddeb script used contains modifications submitted to support cross
  compilation; the changes are available at
  http://permalink.gmane.org/gmane.linux.linaro.devel/7443.
  Other than that, there are no hand-made changes to any files.

 An example hwpack that I created using the linaro-hwpack-replace can be
 downloaded from
 http://ci.linaro.org/kernel_hwpack/panda/hwpack_linaro-panda_20110927-0741_armel_supported.tar.gz
 .
 This consists of the kernel debian package built from linux-linaro-3.0 with
 omap2 defconfig  to work on a panda board.

  [PS: linaro-hwpack-replace, which is part of linaro-image-tools, is a
  tool that replaces the old kernel Debian package in a given hwpack with
  the supplied kernel Debian package.]

 I see that the kernel builds successfully, and the hwpack containing this
 newly built kernel debian package boots fine on the boards, but it fails to
 execute the reboot command and it hangs
 with the following message:

 #

 The system is going down for reboot NOW!
 root@linaro-nano:~# init: tty4 main process (3205) killed by TERM signal



 init: tty5 main process (3211) killed by TERM signal
 init: auto-serial-console main process (3213) killed by TERM signal
 init: tty2 main process (3215) killed by TERM signal
 init: tty3 main process (3218) killed by TERM signal



 init: tty6 main process (3224) killed by TERM signal
 init: openvt main process (3276) killed by TERM signal
 init: plymouth-upstart-bridge main process (3341) terminated with status 1
 [ OK ]ing all remaining processes to terminate...



 [ OK ] processes ended within 1 seconds
 [ OK ]onfiguring network interfaces...
 [ OK ]ctivating swap...
 [ OK ]ounting weak filesystems...
  * Will now restart
 [ 2216.347167] Restarting system.


 #

 The board needs to be hard reset to boot back to the system.
 Has anyone seen this issue before, or do you suspect some problem with the
 kernel I built ??
 I would be very thankful if someone can help me triage the problem.

 Thanks and Regards,
 Deepti.






Re: Hi guys, could you help me to test busybox on your board?

2011-09-21 Thread Paul Larson
Hi Botao, it's really late here and I don't have time to look at it in
depth right now, but I played with it a bit on Panda recently and it seemed
to mostly work.  However, I had to run 'busybox [COMMAND]' to use it; links
for running the commands directly under busybox did not seem to exist.  If
this is still the case, that would be useful to fix.  Also, I noticed that
grep was not enabled.  That's one of the things that would *really* be
useful to have.  Any way we could get grep support enabled?

Thanks,
Paul Larson

On Wed, Sep 21, 2011 at 3:44 AM, Fathi Boudra fathi.bou...@linaro.org wrote:

 -- Forwarded message --
 From: Botao Sun botao@linaro.org
 Date: 21 September 2011 11:42
 Subject: Hi guys, could you help me to test busybox on your board?

 Hi,

 The busybox source code has been integrated to Linaro Android platform
 for all boards, so I'd like to ask help from you to test it. I have
 tested it on my Origen board, and it works well.

 Thank you!


 BR
 Botao Sun




Bootchart results

2011-09-12 Thread Paul Larson
I started working on a results view in LAVA for the bootchart results since
they are now part of the daily runs.  This is using a partial copy of the
data I have locally, so please don't concern yourself too much with the
actual data in it.  A couple of things to point out:

1. The legend is not placed well; that's something I'm not yet sure how to
fix, as my JavaScript-fu is lacking here (zyga, any ideas on this?)
2. The set of results here is not very large.  That's adjustable, and when
it's using live data, there should be a lot more data points to better see
trends.
3. This is purposefully restricted to a single board.  We are not running
benchmarks in a controlled enough manner to make comparison of boards to
one another reasonable, nor is it encouraged, due to the collaborative
nature of Linaro.

In particular I'm looking for opinions on how it would be most useful to
display this data.  This view shows all 4 image types on a single chart.  I
did a previous version that had them separate.  Is there a preference? It
would also be easy to do both on the same page, but perhaps a bit redundant.

Thanks,
Paul Larson


Re: Bootchart results

2011-09-12 Thread Paul Larson
On Mon, Sep 12, 2011 at 3:32 PM, Jesse Barker jesse.bar...@linaro.org wrote:

 Is this time to boot to a prompt?

Someone can correct me if I'm wrong here, but IIRC it looks for getty or
the display manager (or greeter), depending on which runlevel it hits.
http://www.bootchart.org/ has more information on the benchmark itself.

Thanks,
Paul Larson


Re: edge.validation.linaro.org

2011-08-31 Thread Paul Larson
On Wed, Aug 31, 2011 at 6:57 AM, Zygmunt Krynicki 
zygmunt.kryni...@linaro.org wrote:

 Edge is just a point on the roadmap. If staging is fast enough to
 scrap/deploy then we may just use staging all the time.


I was actually thinking it might be nice to have staging as more of a fluid
concept using VMs.  We could set up a semi-permanent staging VM that gets
updated with trunk of everything every {day, week, whatever}, but also have
other VMs available for upgrade testing, and testing individual changes.
 That way, for instance, if you need to test a dispatcher change that
affects a certain type of board you don't have at home, you can simply wait
for one of those boards to become available in the lab, mark it offline,
bring up a vm and do some testing with it.  Likewise, it could be used to
setup and test an invasive change to the dashboard.  All that would be
required is a few static port mappings on the firewall, and a server capable
of handling a few KVM sessions (which I'd like to look at getting soon for
this purpose and others).

Thanks,
Paul Larson


Re: edge.validation.linaro.org

2011-08-31 Thread Paul Larson

 One thing that worries me is KVM's lack of (can anyone correct me if I'm
 wrong) USB pass-through support. This means we may have a hard time reaching
 USB serial adapters or USB-to-device links in general. For that reason my
 testing will use headless VirtualBox instances.

USB pass-through should not be a problem for KVM.  I've never tried it
personally, but it looks doable [1].  Worst case, we could always expose
the USB serial consoles over the network using a tool like we did for the
demo in Budapest, but I don't think it would be necessary.

-Paul Larson
[1]  http://libvirt.org/formatdomain.html#elementsUSB
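For reference, the kind of domain-XML stanza [1] describes for USB
pass-through looks roughly like the following; the vendor/product IDs are
placeholders for whatever lsusb reports for the actual serial adapter:

```xml
<!-- Hypothetical example: pass a host USB serial adapter through to the
     guest. Replace the vendor/product IDs with the ones lsusb reports. -->
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x067b'/>
    <product id='0x2303'/>
  </source>
</hostdev>
```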


Re: ANN: ASTER - Android System Testing Environment and Runtime

2011-08-26 Thread Paul Larson
We've talked about it with Jim already, but it's unclear at the moment
whether it would be suitable as a full replacement for the android test
runner, or something that our test runner would run as a test, and would
help us get tests into the android test runner quickly (since hopefully it
will support quite a few).  Yongqin is taking a look at it now.

Thanks,
Paul Larson

On Fri, Aug 26, 2011 at 7:07 AM, Alexander Sack a...@linaro.org wrote:

 Hi Jim,

 sounds interesting. Would be cool to have this integrated/available in
 linaro android. I guess this would help us design and execute testcases for
  android to be run in LAVA? I wonder how much overlap this has with the
  testrunner framework discussed/developed in Paul's team for android+lava.


 On Thu, Aug 25, 2011 at 2:56 PM, Jim Huang js...@0xlab.org wrote:

 Hello list,

 0xlab announced the new system testing utility for Android.

 -- Forwarded message --
 From: Kan-Ru Chen ka...@0xlab.org
 Date: 2011/8/24
 Subject: [0xlab-discuss] [ANN] ASTER System Testing Environment and
 Runtime
 To: 0xlab-disc...@googlegroups.com


 Hi everyone,

 I am glad to announce our new project, ASTER System Testing Environment
 and Runtime, an automated functional testing tool. This is a tool aimed
 for testers, it includes an easy to use IDE for generating UI test cases
 and a script runner for batch running test cases. The tool is built upon
 the concept of Sikuli desktop automation tool and the Android monkey
 runner engine.

 We gave a talk at COSCUP[1] introducing this project and demonstrated
 the capability of the tool. You can grab the slides from here[2]. The
 project page is here[3] and you can also download the prebuilt preview
 binary at the download page[4]. The documentation is currently lacking
 but I hope the IDE is simple enough for everyone to figure out how to
 use it. In the other hand you always can checkout the source code and
 explore the underlying implementation.

 [1]: http://coscup.org/2011/en/
 [2]: http://0xlab.org/~kanru/COSCUP_aster.svg
 [3]: https://code.google.com/p/aster
 [4]: https://code.google.com/p/aster/downloads/list

 Cheers,
 --
 http://0xlab.org
 Kanru

 ___
 linaro-dev mailing list
 linaro-dev@lists.linaro.org
 http://lists.linaro.org/mailman/listinfo/linaro-dev




 --

  - Alexander




Re: pm-qa integration script

2011-08-17 Thread Paul Larson
Hi, I haven't had a chance to look at it more in depth yet, but a couple of
problems when I tried running it on my system (just my laptop for now,
haven't tried on any board yet)

1. no results were parsed

2. ### cpufreq_09:
### test the load of the cpu does not affect the frequency with 'powersave'
###
cpufreq_09.0/cpu0: checking 'powersave' sets frequency to 1000.0 MHz...
pass
cpufreq_09.1/cpu0: checking 'powersave' frequency 1000.0 MHz is fixed...
 pass
cpufreq_09.0/cpu1: checking 'powersave' sets frequency to 1000.0 MHz...
pass
cpufreq_09.1/cpu1: checking 'powersave' frequency 1000.0 MHz is fixed...
 pass
make[1]: Leaving directory
`/home/plars/.local/share/abrek/installed-tests/pwrmgmt/pm-qa/cpufreq'
make[1]: Entering directory
`/home/plars/.local/share/abrek/installed-tests/pwrmgmt/pm-qa/sched_mc'
rm -f sched_01.log sched_02.log sched_03.log sched_04.log
-e ###
### sched_01:
### test the presence of the 'sched_mc_power_savings' file
###
sched_01.0: checking sched_mc_power_savings exists...
fail
make[1]: *** [sched_01.log] Error 1
make[1]: Leaving directory
`/home/plars/.local/share/abrek/installed-tests/pwrmgmt/pm-qa/sched_mc'
ABREK TEST RUN COMPLETE: Result id is 'pwrmgmt1313589327.0'

Is this just because it stops as soon as it hits one failure?

On Tue, Aug 16, 2011 at 5:42 PM, Daniel Lezcano
daniel.lezc...@linaro.org wrote:


 Hi Paul,

 in attachment the patch to modify the pwrmgmnt script for lava in order
 to take into account the new pm-qa scripts.

 Let me know if everything is ok or not.

 Thanks for your help !
  -- Daniel

 --
  http://www.linaro.org/ Linaro.org | Open source software for ARM SoCs

 Follow Linaro:  http://www.facebook.com/pages/Linaro Facebook |
 http://twitter.com/#!/linaroorg Twitter |
 http://www.linaro.org/linaro-blog/ Blog




Re: pm-qa integration script

2011-08-17 Thread Paul Larson
On Wed, Aug 17, 2011 at 10:16 AM, Daniel Lezcano
daniel.lezc...@linaro.org wrote:


 On 08/17/2011 04:57 PM, Paul Larson wrote:
  Hi, I haven't had a chance to look at it more in depth yet, but a couple
 of
  problems when I tried running it on my system (just my laptop for now,
  haven't tried on any board yet)

 Ok, thanks for looking at.

  1. no results were parsed

 I am not sure I follow. What is the expected result? Does it mean the
 regular expression in the lava script could be wrong?

Yes, I've fixed part of it; I will get the rest fixed later tonight when
I'm back at the hotel.


  Is this just because it stops as soon as it hits one failure?

 Oh, right. That can be easily fixed by replacing make check with
 make -k check?

So should it always run as make -k check?


Re: pm-qa integration script

2011-08-17 Thread Paul Larson
Ok, I sorted this out, but I noticed a couple of problems still.
1. even with the -k, it errors out after a while
2. There are still some test results with duplicate test_case_id:
cpufreq_05.0/cpu1: checking 'ondemand' directory exists...
 pass
cpufreq_05.0/cpu1: checking 'conservative' directory exists...
 pass
cpufreq_05.0/cpu1: checking 'ondemand' directory is not there...
 pass

(I don't know if those are the only ones, just some I happened to spot by
chance)

Thanks,
Paul Larson


Re: lava-dispatcher extensions

2011-07-22 Thread Paul Larson
On Fri, Jul 22, 2011 at 6:05 PM, David Schwarz david.schw...@calxeda.com wrote:


   Several of us at Calxeda have been working on extensions to
 lava-dispatcher with an eye toward using it as a general test management
 engine not just for individual platforms, but for whole clusters of
 machines as well.  We've broken our extensions into three feature
 branches, which can be found at https://code.launchpad.net/~david-schwarz:

Hi David, good to hear from you again.  I'm glad Calxeda is finding LAVA
useful.  There's some good stuff here, we'll get it merged quickly.  I
 would like to talk more with you sometime about some of the other changes
we have coming, and some of the things you are working on as well.

Thanks,
Paul Larson


LAVA 2011.07 Release

2011-07-21 Thread Paul Larson
The Linaro Validation team is pleased to announce the latest release of
LAVA, for the 2011.07 milestone.

LAVA is the Linaro Automated Validation Architecture that Linaro is
deploying to automate the testing of Linaro images and components on
supported development boards.

One of the biggest changes you'll see this month, is the UI for the
dashboard got an overhaul.  You can now view entire bundles that were
submitted, with the test runs organized underneath.  You can also sort
columns to easily see failures, filter large result tables, and change the
number of items displayed per page.  On the scheduler, we added a basic UI
to let you see the status of boards and jobs, and also the ability to
scheduler jobs by device type.  The dispatcher has better error handling and
preliminary support for Snowball boards added, and lava-test now streams
results while the test is running.  The list of bugs and blueprints that
were completed for this release can be found here:
https://launchpad.net/lava/+milestone/2011.07

The release pages with release notes, highlights, changelogs, and downloads
can be found at:

 * lava-dashboard -
https://launchpad.net/lava-dashboard/linaro-11.11/2011.07
 * lava-dashboard-tool -
https://launchpad.net/lava-dashboard-tool/linaro-11.11/2011.07
 * lava-dispatcher -
https://launchpad.net/lava-dispatcher/linaro-11.11/2011.07
 * lava-scheduler -
https://launchpad.net/lava-scheduler/linaro-11.11/2011.07
 * lava-server - https://launchpad.net/lava-server/linaro-11.11/2011.07
 * lava-test - https://launchpad.net/lava-test/linaro-11.11/2011.07
 * lava-tool - https://launchpad.net/lava-tool/linaro-11.11/2011.07
 * linaro-python-dashboard-bundle -
https://launchpad.net/linaro-python-dashboard-bundle/linaro-11.11/2011.07
 * linaro-django-xmlrpc -
https://launchpad.net/linaro-django-xmlrpc/+milestone/2011.07

For more information about installing, running, and developing on LAVA,
see: https://wiki.linaro.org/Platform/Validation/LAVA/Documentation

To get a preview of what's coming next month take a look at:
https://launchpad.net/lava/+milestone/2011.08
We have some good things coming soon, such as out-of-tree test support in
lava-test, subscription to be notified of test results, improvements in the
scheduler UI, and a website facelift to make current testing and results
more visible.

Thanks,
Paul Larson


Re: ppas, version numbers, releases and validation.linaro.org

2011-07-11 Thread Paul Larson
(If you reply, reply to this one, not the previous message I sent, this one
fixes the linaro-dev email address)
On Mon, Jul 11, 2011 at 4:35 AM, Zygmunt Krynicki 
zygmunt.kryni...@linaro.org wrote:

 In short: ~zygaN is the thing we can increment. We should KEEP and perhaps
 change the name to ~lava (but this has to be coordinated, as ~lava < ~zyga).
 There are three possible scenarios which this system correctly handles:

 Right, but we can make that change anytime the upstream version gets
bumped.  So if it has an upstream version component that gets bumped, then
feel free to change over to the ~lava designation as soon as you make a
release that bumps the upstream version.  Otherwise, if it's a component
that uses a YYYY.MM version only, then we will wait until the
2011.07 release later this month to switch.
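For context on why the switch needs coordinating: in Debian-style version comparison, a '~lava' suffix sorts before '~zyga' (after the shared tilde, plain letter-by-letter comparison applies, and 'l' < 'z'), so renaming without an upstream version bump would look like a downgrade. A minimal illustration of just this narrow case, not dpkg's full algorithm (which also special-cases digits and non-letter characters):

```python
def tilde_suffix_sorts_before(a, b):
    """True if '~'-prefixed suffix a sorts before b, for the narrow
    case of purely alphabetic suffix bodies, where Debian-style
    ordering reduces to plain string comparison."""
    assert a.startswith("~") and b.startswith("~")
    return a[1:] < b[1:]
```

So a package versioned 1.0~lava1 would be considered older than an installed 1.0~zyga1, which is why the rename is tied to bumping the upstream version.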

-Paul Larson


Re: Please test Android RC builds

2011-07-04 Thread Paul Larson
On Sun, Jul 3, 2011 at 9:13 PM, Zach Pfeffer zach.pfef...@linaro.org wrote:

 Yeah. Those results seem rational. Would you test:


 https://android-build.linaro.org/builds/~linaro-android/panda-11.06-release/#build=3

This one is currently running




 https://android-build.linaro.org/builds/~linaro-android/leb-panda-11.06-release/

This one is identical to the above




 https://android-build.linaro.org/builds/~linaro-android/beagle-11.06-release/#build=2

This one is complete, results can be easily found at the usual place based
on the url:
http://validation.linaro.org/launch-control/dashboard/reports/android-runs/


 ...and update the build note on each page with the validation results?

What is this build note and where is it displayed?


 Here's how to update the description from Paul Sokolovsky:

 1. Make sure you're registered with Jenkins:
 https://android-build.linaro.org/jenkins/people/
 2. If you aren't, sign up at
 https://android-build.linaro.org/jenkins/signup and let me know your
 username, so I can give you appropriate permissions (Zach, this point
 is for you)
 3. Browse to needed job from Jenkins frontpage:
 https://android-build.linaro.org/jenkins/
 4. Once you at job's page (e.g.
 https://android-build.linaro.org/jenkins/job/linaro-android_beagle/),
 click add description/edit description in the top right corner, and
 edit the description. Full HTML is allowed.
 5. Click Submit and see if it looks OK. Repeat editing if needed.
 Otherwise, the description is immediately available at frontend's page
 for the job. (Though for your own browser, you sometime need to Ctrl+R
 it to refresh cache).

Is there an easier way of doing this?  I don't think anyone wants to be
doing this for every single test.  Our goal was to automate this, so we
really need a way to link the results back in some automated fashion, or
they need to just be searched by the person looking for them.

Please add the validation stuff to the bottom.

I held off on this for the time being, pending the answer to my question
above.  This appeared to be a pretty generic place associated with the whole
build, not that specific build id.  Is there a better place to add it?


 Would you do this for all validation results from builds we manual
 request tested from here on out?

If it's one or two a month, sure.  Or better yet, we'll soon be able to let
you schedule your own jobs.  We're deploying the new
lava-server/lava-dashboard this week, and on top of that will be the
scheduler as well.  This would enable you to kick off your own jobs for
one-offs like this.

Thanks,
Paul Larson


Re: Please test Android RC builds

2011-07-04 Thread Paul Larson
On Mon, Jul 4, 2011 at 12:34 PM, Paul Sokolovsky paul.sokolov...@linaro.org
 wrote:

 On Mon, 4 Jul 2011 12:29:19 -0500
 Paul Larson paul.lar...@linaro.org wrote:

 []
  Is there an easier way of doing this?  I don't think anyone wants to
  be doing this for every single test.  Our goal was to automate this,
  so we really need a way to link the results back in some automated
  fashion, or they need to just be searched by the person looking for
  them.

 My idea was that LAVA will be our CI Central. I.e., if one wants just
 builds, one would look at android-build, but if one wants all the info,
 and validation results in particular, one would look into LAVA. Is
 that a viable idea at all?

But LAVA doesn't have information about the builds.  So it seems to me we
want to somehow get that information back to the location that does have
information about the builds -or- make it easy for people to find results
for the build they care about.  Can you give more detail about what you are
suggesting?

Thanks,
Paul Larson


Re: [RFC pm-qa 0/2] tests for cpufreq

2011-07-01 Thread Paul Larson
On Fri, Jul 1, 2011 at 9:08 AM, Daniel Lezcano daniel.lezc...@linaro.org wrote:


 On 07/01/2011 02:24 AM, Paul Larson wrote:
  On Thu, Jun 30, 2011 at 11:24 PM, Daniel Lezcano
  daniel.lezc...@linaro.org wrote:
 
  When all the tests are finished, I would like to switch LAVA over to the
  new test suite execution, if possible.
 
  Given that the old tests are broken at the moment and disabled, any
 reason
  we shouldn't switchover now?

 Ok, let's switch. Apart from the message output, which I have to change,
 is there something I should change to integrate with LAVA (e.g. for the
 'make check' invocation)?

 Nope, that should be just fine.  Point me at a version when you make those
changes, and I'll make a test definition for it in lava-test and we can try
it out.
Thanks,
Paul Larson


Re: [RFC pm-qa 0/2] tests for cpufreq

2011-06-30 Thread Paul Larson
On Wed, Jun 29, 2011 at 11:35 PM, Daniel Lezcano
daniel.lezc...@linaro.org wrote:

 These tests are used to test the cpufreq driver on ARM architecture.
 As the cpufreq is not yet complete, the test suite is based on the cpufreq
 sysfs API exported on intel architecture, assuming it is consistent across
 architecture.

 Hi Daniel, are these built on top of the previous pmqa testsuite so that
they will work automatically?  Or do we need to make updates to the test
definition in lava-test?  As long as they are going into the same git tree,
and don't do anything that changes the way the results were parsed, they
should be fine, but I wanted to make sure.

Thanks,
Paul Larson


Re: [RFC pm-qa 0/2] tests for cpufreq

2011-06-30 Thread Paul Larson
On Thu, Jun 30, 2011 at 10:44 AM, Daniel Lezcano
daniel.lezc...@linaro.org wrote:

 I attached the result of these scripts on a fully working
 cpufreq framework on my Intel box. That will show the output of the tests.

 But the cpufreq is not complete on a pandaboard, so results won't be
 really nice.

 Looking at your results answered some of my questions, at least. It seems to
have a very different format for the output than the previous tests.  Are
the older ones replaced by this, extended by it, or is this just a
completely separate testsuite?  If it's to be considered a different
testsuite, that makes some sense, as the type of tests here seem to be
consistent pass/fail sorts of tests.  However I have a few concerns for
having it be easily parsed into results that can be stored automatically:

for 'cpu0':
 checking scaling_available_governors file presence ...PASS
 checking scaling_governor file presence ...   PASS
 for 'cpu1':
 checking scaling_available_governors file presence ...PASS
 checking scaling_governor file presence ...   PASS

 ...
Heading1
 test_id1
 test_id2
Heading2
 test_id1
 test_id2
This is notoriously tricky to deal with: the parsing has to track which
heading it's under, and modify the test_id (or some attribute of it) to
designate how it differs from other testcases with the exact same name.
It can be done, but since you have complete control over how you output
results, the format can easily be changed in a way that is both easy to
parse and easy for a human to read.
What might be easier is:
cpu0_scaling_available_governors_file_exists: PASS
cpu0_scaling_governor_file_exists: PASS
cpu1_scaling_available_governors_file_exists: PASS
cpu1_scaling_governor_file_exists: PASS
...
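A flat, one-result-per-line format like the one suggested above can be picked up with a single regular expression. A minimal sketch (the dict keys are illustrative, not the actual lava-test schema):

```python
import re

# Matches lines of the form "<test_case_id>: <PASS|FAIL>"
LINE_RE = re.compile(r"^(?P<test_case_id>\S+):\s+(?P<result>PASS|FAIL)\s*$")

def parse_flat_results(output):
    """Collect result dicts from flat 'name: PASS' lines, ignoring
    any other output mixed into the log."""
    results = []
    for line in output.splitlines():
        match = LINE_RE.match(line.strip())
        if match:
            results.append({
                "test_case_id": match.group("test_case_id"),
                "result": match.group("result").lower(),
            })
    return results
```

No heading tracking is needed because each line already carries its full, unique test case name.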

Another thing that I'm curious about here is...

 saving governor for cpu0 ...  DONE

Is that a result? Or just an informational message?  That's not clear, even
as a human reader.

deviation 0 % for 2333000 is ...  VERY
 GOOD

Same comments as above about having an easier to interpret format, but the
result here: VERY GOOD - what does that mean?  What are the other possible
values?  Is this simply another way of saying PASS?  Or should it actually
be a measurement reported here?

Thanks,
Paul Larson




Re: lava-dev-tool + lava-server installation/development and local data

2011-06-30 Thread Paul Larson
On Thu, Jun 30, 2011 at 3:01 AM, Michael Hudson-Doyle 
michael.hud...@linaro.org wrote:

 It's one of these things I've been vaguely meaning to look into for ages
 -- can you run postgres as an unprivileged user like this for testing?
 I assume it's possible in some sense, but if e.g. it involves
 recompiling postgres or any thing like that it's not really practical.
 But I would be happy to only support postgres.

 If we're just talking about doing this testing periodically, and not every
time you run the unit tests, could we just do it in a chroot?

-Paul Larson


LAVA 2011.06 Release

2011-06-30 Thread Paul Larson
The Linaro Validation team is pleased to announce the first full release of
LAVA, for the 2011.06 milestone.

LAVA is the Linaro Automated Validation Architecture that Linaro is
deploying to automate the testing of Linaro images and components on
supported development boards.

The release pages with release notes, highlights, changelogs, and downloads
can be found at:

 * lava-dashboard -
https://launchpad.net/lava-dashboard/linaro-11.11/2011.06
 * lava-dashboard-tool -
https://launchpad.net/lava-dashboard-tool/linaro-11.11/2011.06
 * lava-dispatcher -
https://launchpad.net/lava-dispatcher/linaro-11.11/2011.06
 * lava-scheduler -
https://launchpad.net/lava-scheduler/linaro-11.11/2011.06
 * lava-scheduler-tool -
https://launchpad.net/lava-scheduler-tool/linaro-11.11/2011.06
 * lava-server - https://launchpad.net/lava-server/linaro-11.11/2011.06
 * lava-test - https://launchpad.net/lava-test/linaro-11.11/2011.06
 * lava-tool - https://launchpad.net/lava-tool/linaro-11.11/2011.06
 * linaro-python-dashboard-bundle -
https://launchpad.net/linaro-python-dashboard-bundle/linaro-11.11/2011.06
 * linaro-django-xmlrpc -
https://launchpad.net/linaro-django-xmlrpc/+milestone/2011.06

For more information about installing, running, and developing on LAVA, see:
https://wiki.linaro.org/Platform/Validation/LAVA/Documentation

Thanks,
Paul Larson


Re: Please test Android RC builds

2011-06-30 Thread Paul Larson
http://validation.linaro.org/launch-control/dashboard/attachments/2755/

Tested on the released panda-leb version; here's the serial log showing the
same errors I mentioned in the previous results.  Is this a known problem?


Re: [RFC pm-qa 0/2] tests for cpufreq

2011-06-30 Thread Paul Larson
On Thu, Jun 30, 2011 at 11:24 PM, Daniel Lezcano
daniel.lezc...@linaro.org wrote:

 When all the tests are finished, I would like to switch LAVA over to the
 new test suite execution, if possible.

Given that the old tests are broken at the moment and disabled, any reason
we shouldn't switchover now?


 Will the following format be ok ?

 test_01/cpu0 : checking scaling_available_frequencies file ... PASS
 ...

That would probably translate internally to something like:
{test_id=test_01_cpu0, message=checking scaling_available_frequencies
file, result=PASS}
Is that ok?  It seems like something we should have no trouble making sense
of later.  Also, the exact output is saved as an attachment.
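The translation described above could be done with one pattern per line; a sketch under the assumption that the format stays exactly 'id/cpu : message ... RESULT' (the field names mirror the message above, not necessarily the real lava-test parser):

```python
import re

PATTERN = re.compile(
    r"^(?P<id>\w+)/(?P<cpu>\w+)\s*:\s*(?P<message>.*?)\s*\.\.\.\s*(?P<result>PASS|FAIL)\s*$"
)

def to_test_result(line):
    """Translate one pm-qa output line into a result dict, or None
    if the line is not in the agreed format."""
    match = PATTERN.match(line)
    if match is None:
        return None
    return {
        "test_id": "%s_%s" % (match.group("id"), match.group("cpu")),
        "message": match.group("message"),
        "result": match.group("result").lower(),
    }
```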


 The result for a test case is PASS or FAIL.

 We support unknown as a result as well, if that helps you at all.
 Sometimes results can be indeterminate, or unimportant (odd as that may
sound, sometimes the message is really what you're after, as illustrated by
some of the previous pm qa tests)

But under some circumstances, we need to do some extra work where a
 failure does not mean the test case failed but the pre-requisite for the
 test case is not met.

...and we also support skip as a result.  That seems like a correct use
for it.
 You don't have to report it literally as skip, you can call it oink for
all we care, and just provide a translation table that converts whatever
your result strings are to {pass, fail, skip, unknown}.  For example, see
the test definition for ltp.
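A translation table of the sort mentioned can be as simple as a dict mapping the suite's own vocabulary onto the four canonical results; the keys below (including 'oink') are just the examples from this thread, not a real definition:

```python
# Map a test suite's own result strings onto the canonical four:
# pass, fail, skip, unknown.
RESULT_MAP = {
    "PASS": "pass", "GOOD": "pass", "VERY GOOD": "pass",
    "FAIL": "fail", "BAD": "fail", "VERY BAD": "fail",
    "oink": "skip",
}

def canonical_result(raw):
    """Translate a raw result string; anything unrecognised is 'unknown'."""
    return RESULT_MAP.get(raw.strip(), "unknown")
```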


 deviation 0 % for 2333000 is ...  VERY
  GOOD
 
  Same comments as above about having an easier to interpret format, but
 the
  result here: VERY GOOD - what does that mean?  What are the other
 possible
  values?  Is this simply another way of saying PASS?  Or should it
 actually
  be a measurement reported here?

 Yep, I agree it is an informational message and should go to a logging
 file. I will stick to a simple result 'PASS' or 'FAIL' and I will let
 the user to read the documentation of the test in the wiki page to
 understand the meaning of these messages (GOOD, VERY GOOD...).

Keeping to the template you mentioned earlier, I wonder if we could do
something like this:
deviation_0_for_2333000: VERY GOOD ... PASS
(are the numbers there consistent and useful as a test case name?  I'm
assuming so here)
That would allow you to capture VERY GOOD as details in the message (one
of the fields we keep).  Also, your test could be smart enough to know that
good, or verygood = pass, while bad, verybad = fail.  Possibly I'm making a
lot of assumptions here, but I think you see what I mean.

Keeping to a consistent results format in your output is a good practice,
and makes this *much* easier for capturing the data important to you.  Of
course, anything is possible.  We have some tests with rather elaborate
results and for those, the test definition just inherits from a base class
and defines its own parser.  If you're not feeling very pythonic, you could
provide your own parser as part of the test download written in shell, c,
ruby, go, whatever, that just acts as a filter, then have it just read it
all in directly.  There are lots of options, but I'm of the opinion that a
consistent format makes it easier on humans looking at it as much as machine
parsers.  And since you have control over that, easiest to do it now. :)

Thanks,
Paul Larson


Re: LAVA Documentation

2011-06-29 Thread Paul Larson
On Tue, Jun 28, 2011 at 9:03 PM, Christian Robottom Reis k...@linaro.org wrote:

 On Tue, Jun 28, 2011 at 09:55:07AM +0100, Paul Larson wrote:
  I've started on some basic documentation for LAVA at
  https://wiki.linaro.org/Platform/Validation/LAVA/Documentation

 Good job! I think you should probably split the setting up section from
 the writing jobs and hacking parts, since they are pretty different
 audiences.

Good point.  To an extent, they are kinda the same at this point in time,
because a lot of the questions we get are about:
1. how do I set it up to demo/test it in my environment
2. how do I create a job to test it out
3. how do I add support for XYZ

However as we go forward, I can see how all of these sections should grow,
and probably be targeted better at their intended audience and crosslink to
other sections as required.


 From the developer's perspective, I can think of two initial use cases: I
 am in a WG and want to see a test run for this branch I'm working on, and I
 am in a Platform unit or Landing team and I want to see how an image
 we're generating is behaving on a certain piece of hardware. Can we make
 it easy for these people to figure out what to do?

For case 1, we really need tests to support those kinds of loads.  We can
install images, and in the not so distant future we hope to even have
support for injecting kernels. This will capture the needs of some, but not
all WGs once they have tests in place.  We're also in talks currently with
the toolchain WG for instance to figure out how we can best proceed for
things they need.

For case 2, I think a lot of that centers around reports.  We added a new
report for this release which gives a better snapshot of the status of the
latest test run of a particular image type, on each board type [1].  I'm
sure we'll come up with other, better ways to look at the data, but this is
a good starting point.  Reports are still a *bit* difficult to write, but
we're considering how we can better document that, provide more examples,
and make it easier in other ways as well.

Thanks,
Paul Larson

[1]
http://validation.linaro.org/launch-control/dashboard/reports/boot-status-overview/
(N.B. This actually tracks job completion status at the moment, which should
not be equated to whether the board booted.  A failure just means that
*something* caused the test run to end prematurely.  Could have been a
failed test, or even just a timeout.)


LAVA Documentation

2011-06-28 Thread Paul Larson
I've started on some basic documentation for LAVA at
https://wiki.linaro.org/Platform/Validation/LAVA/Documentation
This is mostly aimed at those who are trying to set it up for themselves, or
get started on development for it.  However I'm also starting to cover
example jobs, and will expand that section further to list the available
actions and the parameters they can take.  It's a wiki, so feel free to add
to it, or let me know if there's a particular section you have questions
about and want to see expanded sooner.

Thanks,
Paul Larson


Re: FYI: Linaro Validation showcase video at Computex 2011

2011-06-23 Thread Paul Larson
Neat!  Any feedback from those who saw it?

On Fri, Jun 24, 2011 at 12:00 AM, Jim Huang jim.hu...@linaro.org wrote:

 Hello list,

 During the first week of June, we prepared the technical showcase[1]
 about Linaro powered devices and projects including LAVA[1].

 To emphasize how LAVA works, we just uploaded another demo video:
http://www.youtube.com/watch?v=_3dT68MOzz0

 It starts at 2:27.

 Sincerely,
 -jserv

 [1] The overview video: ARM at Computex 2011 with Linaro
http://www.youtube.com/user/ARMflix#p/a/9ABC7CFA0491638F/0/9RzbAt27qic
 [2] https://wiki.linaro.org/Platform/Validation/LAVA




Re: LAVA Dashboard user forum

2011-06-21 Thread Paul Larson
On Mon, Jun 20, 2011 at 9:35 PM, Zach Pfeffer zach.pfef...@linaro.org wrote:

 Frans is going to work to pull these together. There's a BP at:


 https://blueprints.launchpad.net/linaro-android/+spec/linaro-android-connect-android-build-to-lava

 -Zach

 On 20 June 2011 18:47, Michael Hudson-Doyle michael.hud...@linaro.org
 wrote:

  The build should also probably wait until the test is done and get a
  pass fail, this should help set us up for Gerrit integration in the
  near future.
 
  I guess the naive implementation of this would leave the EC2 instance
  running while the test runs which would be wasteful... but that's
  something for Paul to figure out.
 

Would that really be necessary? Or does the process go more like:
1. topic branch created with testing requested, intended for inclusion if
everything is ok
2. do testing
3. if tests pass, merge (is this manual?)

The blueprint mentioned doesn't seem to cover the reverse path for Gerrit to
somehow interpret those test results and do something with them.  Should it?
 If so, I think we need to better understand how that piece works from the
Gerrit side.

Thanks,
Paul Larson


Re: Listing android tests in LAVA

2011-06-20 Thread Paul Larson
On Mon, Jun 20, 2011 at 4:20 AM, Paul Sokolovsky paul.sokolov...@linaro.org
 wrote:

   Can you tell me if I should do some kind of filtering/processing?

 Yes, apparently. Can you point me to the code which queues and sets
 'android.url' attribute on a test input?

  It's a script that I'm just hacking together as a temporary solution while
we have to poll.  You can either point it at a url of the type Zach wanted,
or point it at the build name and build id and it will find the necessary
pieces.  At this point, it would probably just choke if you pointed it at a
toolchain build.  If it didn't, lava would because it doesn't have any way
to deal with toolchain artifacts at the moment.  So, for right now it's safe
to assume that any results we have are for android, not for a android-build
built toolchain.

The reason for pointing at the url in the metadata, was because that's how
Zach said he wanted to search for results.

Thanks,
Paul Larson


Re: Renaming miscellaneous python projects that currently abuse linaro

2011-06-17 Thread Paul Larson
On Fri, Jun 17, 2011 at 9:35 AM, Zygmunt Krynicki 
zygmunt.kryni...@canonical.com wrote:

 Hi.

 I was wondering if anyone would object to namespace cleanup of our python
 projects.

I think it probably only makes sense to have Linaro in the name if there is
something specific to Linaro about it.  I think for most of these, they
could be generally used by anyone, so it makes more sense to leave that out.


 For linaro-django-pagination
 This project is a fork of dead/unmaintained django-pagination. The upstream
 author has rejected my requests to hand over ownership.

Is he interested in the changes you made as a patch, rather than a full
takeover of the project?  What are the main differences?  If it's so
different that it warrants a whole new project name, then we should name it
something that makes it more clear what the difference is, rather than just
good.  If it's not that different, then it would be better for us to work
with the current django-pagination project and see how we can merge our
changes.


Thanks,
Paul Larson


Re: Design and implementation of lava-ci tool

2011-06-15 Thread Paul Larson
On Wed, Jun 15, 2011 at 2:56 PM, Zygmunt Krynicki 
zygmunt.kryni...@linaro.org wrote:

 I was writing it this morning. It actually pulls all of the branches that
 compose lava and gives you an ability to install them in a managed
 virtualenv instance.

 Great!



  It seems to me,
 that on some things, this could be overkill, or not really make it that
 much
 simpler.  For instance... in the case of lava-test you may want to also
 install python-apt, but there are other things like usbutils.  In the case
 of the dispatcher, it can also run completely from the source branch.


 So there are three possible cases:

 1) A component has no dependencies and runs from tree (awesome!)
 2) A component depends on something we make
 3) A component depends on something others make

 But what about where other non-python packages need to be installed for it
to work properly? That was the main thing I was trying to sort out.  Does
the recipe have a rule to do this?  In the case where post-install
configuration is needed, does it point the user to instructions for how to
do that?

Thanks,
Paul Larson


abrek - lava-test

2011-06-09 Thread Paul Larson
As you may have noticed in a recent email thread, the validation team is in
the process of making some structural changes to the LAVA project.  For
finding all the projects for Linaro Validation that we are working on, the
easiest option is to go to the main LAVA project group page [1] on
Launchpad, which links to each of them.

The last project name change should happen today, and that will be the
rename of abrek to lava-test.  What this means, is that if you currently
pull abrek for use in any other test framework, it will need to be fixed to
pull lp:lava-test instead.  We will make sure this change is reflected in
lava-dispatcher so that there are no interruptions in daily tests that are
running in the validation farm.  However, if you are running your own
dispatcher you will want to update your installation with the update, which
should land in the next 24 hours once the rename of abrek is complete.

Thanks,
Paul Larson

[1] http://launchpad.net/lava


Re: [PowerManagement] Cpufreq tests scenarii

2011-06-02 Thread Paul Larson
On Wed, Jun 1, 2011 at 4:10 AM, Daniel Lezcano daniel.lezc...@linaro.orgwrote:

 On 06/01/2011 09:59 AM, Paul Larson wrote:

 I have a board at home I can try to reproduce with if someone on the pm
 team
 doesn't.  I probably won't be able to get to it until later in the week
 though.


 Ok, thanks very much. I will discuss with the PM team and I will try to get
 a beagleboardXM.
 That will probably take more than one week for me.

 Do you confirm this problem happens only on the beagleXM ?

The only other place I was able to test it was panda, and it worked fine
there - no problem.


Re: reorganzing the lava projects on launchpad

2011-06-01 Thread Paul Larson
Any good suggestions for a new name for abrek? I don't want to encourage
bike shedding on this, so unless someone has a much better suggestion, I
would suggest we just go with lava-test (skip the -tool) and be done with
it.

Thanks,
Paul Larson
On May 31, 2011 7:55 PM, Michael Hudson-Doyle michael.hud...@linaro.org
wrote:
 On Tue, 31 May 2011 13:42:56 +0800, Spring Zhang spring.zh...@linaro.org
wrote:
 Seems lava-dispatcher online now, some blueprints change from lava to
 lava-dispatcher,

https://blueprints.launchpad.net/lava-dispatcher/+spec/linaro-platforms-o-lava-dispatcher-improvements
,
 also lava-server and lava-scheduler.

 Ah yes, I should have followed up on this thread when I made the
 changes. So far:

 * lava was renamed to lava-dispatcher
 * lava-scheduler was created, but is only a skeleton
 * launch-control was renamed to lava-dashboard, but there is a redirect
 in place
 * launch-control-tool was renamed to lava-dashboard-tool (with redirect)
 * lava was created as a new project group, and all the above projects
 added to it.

 I haven't renamed abrek yet.

 Cheers,
 mwh


Re: [PowerManagement] Cpufreq tests scenarii

2011-06-01 Thread Paul Larson
I have a board at home I can try to reproduce with if someone on the pm team
doesn't.  I probably won't be able to get to it until later in the week
though.

Thanks,
Paul Larson
On May 31, 2011 5:38 PM, Daniel Lezcano daniel.lezc...@linaro.org wrote:
 On 05/31/2011 08:48 PM, Paul Larson wrote:
 Since there is no lp project (that I am aware of) for the pmqa tests, I
 think I just reported it to Amit directly at the time. Iirc, he said to
 follow up on it when he had someone working on the qa tests, which
appears
 to be you, so I'm doing that now. :) Do you have a place where bugs for
it
 should live?

 No, not yet. Let me discuss with the power management team about
 creating a launchpad project.


 We could put it against abrek I suppose, but it's not really,
 it's a bug in the testsuite, not the frameworks.

 What I was seeing at the time, was avail_freq02 would never exit on
 beagleXM. It should take only a few seconds to complete I think, based on
 what I saw on panda. But on beagleXM it would sit in this state for
hours:

 I doubt the problem is coming from the test suite. At the first glance I
 think it raises a kernel bug.
 It is not normal to have an userspace program blocked in uninterruptible
 state and moreover some kthread blocked too.
 It is probable there is a domino effect here with a dangling lock in the
 kernel.

 Is it possible to have the kernel version where this problem appears ?
 Does it happen with beagleXM only or with more boards ? If the former,
 it will be hard for me to reproduce the problem as I don't have it. Is
 there a solution for that ? (eg. accessing a boards farm + a magic
 finger to reboot the boards).

 root@linaro:~# abrek run pwrmgmt
 [ 241.163391] INFO: task kworker/0:1:24 blocked for more than 120
seconds.
 [ 241.170379] echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables
 this message.
 [ 241.178680] INFO: task avail_freq02.sh:1078 blocked for more than 120
 seconds.
 [ 241.186218] echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables
 this message.

 [ 361.186889] INFO: task kworker/0:1:24 blocked for more than 120
seconds.
 [ 361.193878] echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables
 this message.
 [ 361.202209] INFO: task avail_freq02.sh:1078 blocked for more than 120
 seconds.
 [ 361.209747] echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables
 this message.

 Can you reproduce the problem but with hung_task_panic set to 1, so we
 will have a full stack trace and the more context.

 Thanks
 -- Daniel




Re: [PowerManagement] Cpufreq tests scenarii

2011-05-31 Thread Paul Larson
Has the bug that caused pm qa tests to hang on beagleXM been fixed yet?  We
had these tests running daily at one time, but had to disable them because
they were completely hanging the boards.

Thanks,
Paul Larson

On Tue, May 31, 2011 at 10:15 AM, Daniel Lezcano
daniel.lezc...@linaro.orgwrote:


 Here is a bunch of scenarii I am planning to integrate to the pm-qa
 package.

 Any idea or comment will be appreciated.

 Note the test cases are designed for a specific host configured to
 have a minimal number of services running on it and without any pending
 cron
 jobs. This pre-requisite is needed in order to not alter the expected
 results.

 Thanks
  -- Daniel

 cpufreq:
 

 (1) test the cpufreq framework is available
 check the following files are present in the sysfs path:
  /sys/devices/system/cpu/cpu[0-9].*
  - cpufreq/scaling_available_frequencies
  - cpufreq/scaling_cur_freq
  - cpufreq/scaling_setspeed

 There are also several other files:
  - cpufreq/cpuinfo_max_freq
  - cpufreq/cpuinfo_cur_freq
  - cpufreq/cpuinfo_min_freq
  - cpufreq/cpuinfo_transition_latency
  - cpufreq/stats/time_in_state
  - cpufreq/stats/total_trans
  - cpufreq/stats/trans_table
  - ...

 Should we do some testing on that or do we assume it is not up to
 Linaro to do that as being part of the generic cpufreq framework ?

 (2) test the change of the frequency is effective in 'userspace' mode

- set the governor to 'userspace' policy
- for each frequency and cpu
- write the frequency
- wait at least cpuinfo_transition_latency
- read the frequency
- check the frequency is the expected one
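As a sketch, check (2) could look like the Python below. The sysfs paths are the standard cpufreq ones and `cpuinfo_transition_latency` is reported in nanoseconds; the function names are invented here, and pm-qa itself is shell-based, so treat this as an illustration rather than the actual test:

```python
import os
import time

SYSFS = "/sys/devices/system/cpu"

def parse_frequencies(text):
    """scaling_available_frequencies is a space-separated list of kHz values."""
    return [int(f) for f in text.split()]

def check_userspace_setspeed(cpu="cpu0"):
    """Write each available frequency via scaling_setspeed, read it back after
    waiting out the transition latency, and report {freq_khz: matched}."""
    base = os.path.join(SYSFS, cpu, "cpufreq")

    def read(name):
        with open(os.path.join(base, name)) as f:
            return f.read().strip()

    def write(name, value):
        with open(os.path.join(base, name), "w") as f:
            f.write(value)

    write("scaling_governor", "userspace")
    latency_s = int(read("cpuinfo_transition_latency")) / 1e9  # sysfs reports ns
    results = {}
    for freq in parse_frequencies(read("scaling_available_frequencies")):
        write("scaling_setspeed", str(freq))
        time.sleep(latency_s)  # wait at least the transition latency
        results[freq] = int(read("scaling_cur_freq")) == freq
    return results
```

Running it needs root and a board with cpufreq enabled; the parsing helper is the only part that is hardware-independent.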

 (3) test that changing the frequency affects the performance of a test
 program

(*) Write a simple program which takes a cpu list as parameter in
order to set the affinity on it. This program computes the number
of cycles (eg. a simple counter) it did in 1 second and display
the result. In case more than one cpu is specified, a process is
created for each cpu, setting the affinity on it.

(**) - set the governor to userspace policy
 - set the frequency to the lower value
 - wait at least cpuinfo_transition_latency
  - run the test (*) program for each cpu, combine them and
   concatenate the result to a file
 - for each frequency rerun (**)
 - check the result file contains noticeably increasing values
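The "(*)" helper described above, a per-cpu counter pinned with affinity, can be sketched as follows. The names are invented, and `os.sched_setaffinity` is Linux-specific (and newer than the Python of this era, which would have shelled out to `taskset` or used ctypes):

```python
import os
import time
from multiprocessing import Process, Queue

def count_for(seconds, cpu, out):
    """Pin this process to one cpu, then count loop iterations for `seconds`."""
    os.sched_setaffinity(0, {cpu})          # 0 means the calling process
    deadline = time.monotonic() + seconds
    n = 0
    while time.monotonic() < deadline:
        n += 1
    out.put((cpu, n))

def run_on_cpus(cpus, seconds=1.0):
    """Run one counting process per requested cpu; return {cpu: iterations}."""
    q = Queue()
    procs = [Process(target=count_for, args=(seconds, c, q)) for c in cpus]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return dict(q.get() for _ in procs)
```

At a lower frequency the loop completes fewer iterations in the same interval, which is exactly the "noticeably increasing values" the scenario checks for.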

 (3) test the load of the cpu affects the frequency with 'ondemand'

(*) write a simple program which does nothing more than consuming cpu
 (no syscall)

for each cpu
 - set the governor to 'ondemand'
 - run (*) in background
 - wait at least cpuinfo_transition_latency *
 nr_scaling_available_frequencies
 - read the frequency
 - kill (*)
 - check the frequency is equal to the higher frequency available

 - wait at least cpuinfo_transition_latency *
 nr_scaling_available_frequencies
 - read the frequency
 - check the frequency is the lowest available

 (4) test the load of the cpu does not affect the frequency with 'userspace'

for each cpu
 - set the governor to 'userspace'
 - set the frequency between min and max frequencies
 - wait at least cpuinfo_transition_latency *
 nr_scaling_available_frequencies
 - run (*) in background
 - wait at least cpuinfo_transition_latency *
 nr_scaling_available_frequencies
 - read the frequency
 - kill (*)
 - check the frequency is equal to the one we set

 (5) test the load of the cpu does not affect the frequency with 'powersave'

for each cpu
 - set the governor to 'powersave'
 - wait at least cpuinfo_transition_latency *
 nr_scaling_available_frequencies
 - check the frequency is the lowest available
 - run (*) in background
 - wait at least cpuinfo_transition_latency *
 nr_scaling_available_frequencies
 - read the frequency
 - kill (*)
 - check the frequency is the lowest available

 (6) test the load of the cpu affects the frequency with 'conservative'

for each cpu
 - set the governor to 'conservative'
 for each freq step
 - set the up_threshold to the freq step
 - wait at least cpuinfo_transition_latency *
 nr_scaling_available_frequencies
 - run (*) in background
 - wait at least cpuinfo_transition_latency *
 nr_scaling_available_frequencies
 - read the frequency
 - check the frequency is equal to the highest we have with the freq_step
 - kill (*)




Re: [PowerManagement] Cpufreq tests scenarii

2011-05-31 Thread Paul Larson
Since there is no lp project (that I am aware of) for the pmqa tests, I
think I just reported it to Amit directly at the time.  Iirc, he said to
follow up on it when he had someone working on the qa tests, which appears
to be you, so I'm doing that now. :)  Do you have a place where bugs for it
should live?  We could put it against abrek I suppose, but it isn't really
abrek's: it's a bug in the testsuite, not the framework.

What I was seeing at the time, was avail_freq02 would never exit on
beagleXM.  It should take only a few seconds to complete I think, based on
what I saw on panda.  But on beagleXM it would sit in this state for hours:

root@linaro:~# abrek run pwrmgmt
[  241.163391] INFO: task kworker/0:1:24 blocked for more than 120 seconds.
[  241.170379] echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables
this message.
[  241.178680] INFO: task avail_freq02.sh:1078 blocked for more than 120
seconds.
[  241.186218] echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables
this message.

[  361.186889] INFO: task kworker/0:1:24 blocked for more than 120 seconds.
[  361.193878] echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables
this message.
[  361.202209] INFO: task avail_freq02.sh:1078 blocked for more than 120
seconds.
[  361.209747] echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables
this message.

Thanks,
Paul Larson

On Tue, May 31, 2011 at 12:25 PM, Daniel Lezcano
daniel.lezc...@linaro.orgwrote:

 On 05/31/2011 05:46 PM, Paul Larson wrote:

 Has the bug that caused pm qa tests to hang on beagleXM been fixed yet?
  We
 had these tests running daily at one time, but had to disable them because
 they were completely hanging the boards.


 Oh, I was not aware of such bug. Do you have a pointer to this bug please ?
 What do you mean when you say the they were 'hanging the boards' ?
 Is the board unresponsive or the qa tests are blocked ?


  Thanks,
 Paul Larson

 On Tue, May 31, 2011 at 10:15 AM, Daniel Lezcano
 daniel.lezc...@linaro.orgwrote:

  Here is a bunch of scenarii I am planning to integrate to the pm-qa
 package.

  Any idea or comment will be appreciated.

 Note the test cases are designed for a specific host configured to
 have a minimal number of services running on it and without any pending
 cron
 jobs. This pre-requisite is needed in order to not alter the expected
 results.

 Thanks
  -- Daniel

 cpufreq:
 

 (1) test the cpufreq framework is available
 check the following files are present in the sysfs path:
  /sys/devices/system/cpu/cpu[0-9].*
  -  cpufreq/scaling_available_frequencies
  -  cpufreq/scaling_cur_freq
  -  cpufreq/scaling_setspeed

 There are also several other files:
  -  cpufreq/cpuinfo_max_freq
  -  cpufreq/cpuinfo_cur_freq
  -  cpufreq/cpuinfo_min_freq
  -  cpufreq/cpuinfo_transition_latency
  -  cpufreq/stats/time_in_state
  -  cpufreq/stats/total_trans
  -  cpufreq/stats/trans_table
  -  ...

 Should we do some testing on that or do we assume it is not up to
 Linaro to do that as being part of the generic cpufreq framework ?

 (2) test the change of the frequency is effective in 'userspace' mode

- set the governor to 'userspace' policy
- for each frequency and cpu
- write the frequency
- wait at least cpuinfo_transition_latency
- read the frequency
- check the frequency is the expected one

  (3) test that changing the frequency affects the performance of a test
  program

(*) Write a simple program which takes a cpu list as parameter in
order to set the affinity on it. This program computes the number
of cycles (eg. a simple counter) it did in 1 second and display
the result. In case more than one cpu is specified, a process is
created for each cpu, setting the affinity on it.

(**) - set the governor to userspace policy
 - set the frequency to the lower value
 - wait at least cpuinfo_transition_latency
   - run the test (*) program for each cpu, combine them and
   concatenate the result to a file
 - for each frequency rerun (**)
  - check the result file contains noticeably increasing values

 (3) test the load of the cpu affects the frequency with 'ondemand'

(*) write a simple program which does nothing more than consuming cpu
 (no syscall)

for each cpu
 - set the governor to 'ondemand'
 - run (*) in background
 - wait at least cpuinfo_transition_latency *
 nr_scaling_available_frequencies
 - read the frequency
 - kill (*)
 - check the frequency is equal to the higher frequency available

 - wait at least cpuinfo_transition_latency *
 nr_scaling_available_frequencies
 - read the frequency
 - check the frequency is the lowest available

 (4) test the load of the cpu does not affect the frequency with
 'userspace'

for each cpu
 - set the governor to 'userspace'
 - set the frequency between min and max

Re: designing the scheduler command line api

2011-05-30 Thread Paul Larson
I think the long form of specifying the server is not terrible for now.
After all, aliases would make it a non-issue anyway.  Pulling from an
environment variable should be simple, and iirc has already been done for
lc-tool.  I think it's unlikely though, that one would commonly submit to
moreb than one server frequently.  In the long run, I think it would be nice
to consider a command to store a default server to submit the jobbto, if
none is specified.

Assuming the auth token has been previously stored, and the default server
specified, that would simplify the command line to something like:
lava-tool submit-job test.json
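Assuming the token lives in a file and the server speaks XML-RPC (which LAVA does), the flow behind that command could be sketched as below. The `scheduler.submit_job` method name and the URL shape are assumptions for illustration, not the tool's confirmed API, and the real tool stores tokens more carefully than a plain file:

```python
import json
import xmlrpc.client  # the Python 2 of the era spelled this xmlrpclib

def endpoint(user, host, token_file):
    """Build an authenticated server URL from a previously stored token."""
    with open(token_file) as f:
        token = f.read().strip()
    return "https://%s:%s@%s/RPC2/" % (user, token, host)

def submit_job(user, host, token_file, job_path):
    """Submit a JSON job definition; returns whatever the server replies."""
    with open(job_path) as f:
        job = f.read()
    json.loads(job)  # fail early, locally, on malformed JSON
    server = xmlrpc.client.ServerProxy(endpoint(user, host, token_file))
    return server.scheduler.submit_job(job)
```

Because the token only ever travels inside the URL built in-process, it never appears in ps output or shell history, which is the disclosure concern raised earlier in the thread.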

Thanks,
Paul Larson
On May 29, 2011 11:54 PM, Michael Hudson-Doyle michael.hud...@linaro.org
wrote:
 Hi all,

 I'm working on the infrastructure that will underlie the scheduler
 command line api. Zygmunt and I have the technical side understood I
 think, but what I want to think aloud about is how the command line
 should work for a user. In particular, I wonder how much state the
 command line tool should save.

 The basic command line entry point is submit a job. For now at least,
 the job will be a JSON file, so the simplest possible invocation would
 be:

 $ lava-tool submit-job test.json

 But this doesn't specify anything about where the job should go. The
 least effort solution would be to require a full specification on the
 command line:

 $ lava-tool submit-job --farm https://mwhudson:$
to...@validation.linaro.org test.json

 This is clearly a bad idea though: tokens will be long and unwieldy and
 passing it on the command line means they will be disclosed in ways that
 we should avoid (they'll appear in ps output for all users and could
 easily end up in log files).

 So we need a way of storing tokens. The easiest thing to do would be to
 store a token for a (username, host) pair, so you'd run a command like:

 $ lava-tool auth-add --token-file token-file https://mwhudson@
validation.linaro.org

 Then you'd still need to specify the username and host on every
 invocation:

 $ lava-tool submit-job --farm https://mwhud...@validation.linaro.org test.json

 Would this be too unwieldy do you think? For scripted or cronjobbed
 invocations its probably fine, for the command line it might be a bit
 awkward. But I don't know how much call there is for a genuinely
 interactive tool here.

 I guess the point I've argued myself to here is that I should implement
 this long form for now, and if a need arises add more conveniences (for
 example, reading from an environment variable or adding a command to set
 the defaults). Does anyone disagree too strongly with that?

 Cheers,
 mwh



Re: Blueprint dilemma: validation website vs dashboard improvements

2011-05-25 Thread Paul Larson
I don't think we need to overcomplicate that piece.  The validation website
links to reports on the lava dashboard, and should provide some grouping of
the data by teams, so that teams can have their own weather report sort of
page, or link to the things they care about.  If needs be, we can create a
separate project for this and store the pages in bzr.

-Paul Larson

On Wed, May 25, 2011 at 11:35 AM, Zygmunt Krynicki 
zygmunt.kryni...@linaro.org wrote:

 Hi guys.

 After looking hard at those blueprints and our discussions during UDS I'm
 getting this impression that the line between the two is blurred.

  It would significantly help to frame the problem if we could determine the
  architecture of the validation website and how it differs from lava
  (especially after we did lava-server), where it ends up living (which
  project?), and so on.

 Best regards
 ZK



Re: LTP test result collecting

2011-05-02 Thread Paul Larson
http://validation.linaro.org/launch-control/dashboard/test-runs/fccb906e-7301-11e0-816b-2e4070f01206/ has
the results from a recent run I did on panda in the validation lab.  I
do restrict the default set of tests in ltp to skip a few of the nastier IO
ones: they run fine on fast machines with sata disks, but on our dev boards
they just needlessly pound the sd cards, often for long enough to make you
think the system is completely unresponsive.

-Paul Larson

On Mon, May 2, 2011 at 3:59 AM, Shawn Guo shawn@freescale.com wrote:

 I'm drafting the blueprint [1], and need your favors to collect LTP
 (Linux Test Project [2]) result on boards that Linaro supports with
 Natty kernel running on.

 The example steps of the testing (I did on i.mx51 babbage with Beta-2
 linaro-n-developer) are documented on Whiteboard of [1].  Please help
 run the test on the boards you have except i.mx51, and send me the result.
 I appreciate the effort.

 [1]
 https://blueprints.launchpad.net/linux-linaro/+spec/linaro-kernel-o-ras-fix-ltp
 [2] http://ltp.sourceforge.net/

 --
 Regards,
 Shawn




Re: [lava] Android image support

2011-04-13 Thread Paul Larson
On Wed, Apr 13, 2011 at 11:25 PM, Alexander Sack a...@linaro.org wrote:
 On Wed, Apr 13, 2011 at 12:52 PM, Jeremy Chang jeremy.ch...@linaro.org
wrote:
 Hi, list:
Just want to let you know, especially for the validation team members,
I am trying to do Android support on lava, so I created a branch,
 lp:~jeremychang/lava/android-support
Hi, Paul, maybe you can help me give it a review. Thanks!

 Paul is travelling  I assume he will be able to review this on friday.
Yes, trying to catch up with email, I'll take a look at it as soon as I can.
 And thanks for working on this!  I'm looking forward to seeing what you've
got!


 .
I am on my personal environment, so I changed some configs to fit
 my environment for testing. Also I documented them here in
 https://wiki.linaro.org/JeremyChang/Sandbox/LavaAndroidValidation
On the partitioning... is all of that really necessary?  Is there any way to
get this down?  To support this, we'll need to modify MMC_BLOCK_MINORS for
the 10.11 kernels that we have for the master images on most of the boards,
as well as on any laptop used to create these images.  Would be nice if
there were some easy way around this.

 17 -MASTER_STR = root@master:
  18 +MASTER_STR = root@linaro:
  19  #Test image recognization string
  20  TESTER_STR = root@linaro:

This is ill-advised, as you now can't tell which image you're booted into.
Better to just set the hostname on your master image to master.


Basically I would like to a make a prototype based on the existing
 Lava to do the testing/Validation for Android from the deployment to
 submit_result. Now I already made it do the continual process of
 starting from the deployment, reboot to Android, do basic test and
 simple monkey test.

   Now what I'm thinking about, also the trickiest item, is how to
 present the test results of various Android validations to the dashboard
 [1]. Due to platform differences, abrek (python code) could not run on
 the Android target, so the json bundle for the dashboard needs to be
 generated in lava. I think abrek could not be used for android
 validation, benchmarking and testing.
Why is there an android version of test_abrek here?  It doesn't seem to
belong, but filling in run_adb_shell_command looks like it would be useful.

 For now - to get past this point - you could just implement an
 action/command that is monkey_results_to_json.
Zygmunt did some looking at conversion scripts for gcc tests a while back.
You should talk to him as I think he has a good feel for the good and
bad approaches to this problem since he's been through some of the pain with
it already.  But yes, abrek will convert it, but some other conversion piece
will be needed since we don't use abrek here.

 my understanding is that attachment should be used to attach
 additional information ... the main results should be in the main
 json part. basically result (boolean) and/or measurement (float)
 from what i recall.
Attachments can be used when you need to preserve the whole result as it
was, logfiles, even binaries.  However, launch control can't easily tell you
much about them... just dump them out for you.  For instance, we plan to
store the serial log in an attachment.  It's not something we need the
dashboard to query on, but it would be useful to have if we get a failure,
and want to investigate further.  Parsing the test results, and storing them
in measurements and results makes them easier to query for, so that graphs
and reports can later be generated.
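A stripped-down bundle showing that split: queryable results and measurements in the main JSON, the raw log preserved as an attachment. The field names follow the dashboard bundle format of the time only approximately; treat the exact schema and the version string as assumptions for illustration:

```python
import base64
import json

# One test run: a pass/fail result, a measurement, and the serial log
# kept verbatim as an attachment for later investigation.
bundle = {
    "format": "Dashboard Bundle Format 1.2",  # illustrative version string
    "test_runs": [{
        "test_id": "monkey",
        "test_results": [
            {"test_case_id": "boot", "result": "pass"},
            {"test_case_id": "events-per-sec", "result": "pass",
             "measurement": 412.0, "units": "events/s"},
        ],
        "attachments": [{
            "pathname": "serial.log",
            "mime_type": "text/plain",
            "content": base64.b64encode(b"[ 0.000000] Booting Linux...").decode("ascii"),
        }],
    }],
}

serialized = json.dumps(bundle, indent=2)
```

The dashboard can graph and filter on the `test_results` entries, while the attachment is just dumped back out on request.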

Regarding to generating the bundle, I will check what's the way to
 generate the reasonable software_context and hardware_context for
 android.
I wouldn't worry about this too much right now, it's not a critical thing to
do.

Thanks again for working on this! I'll look at it more in depth when I get
home.
Thanks,
Paul Larson


Re: Automated kernel testing.

2011-04-07 Thread Paul Larson
On Thu, Apr 7, 2011 at 3:24 AM, David Gilbert david.gilb...@linaro.org wrote:
 I'm curious; do we have any interaction with the autotest project - it
 seems it's whole point is automated kernel testing,.
 http://autotest.kernel.org/ and test.kernel.org
I did some looking at it, but what we are trying to provide is much
bigger than just kernel testing by itself.  That being said, I think
some of the tests in the autotest client would be useful to run.
There's a couple of ways we could do it.  The preference would be to
pick the important ones, and enable abrek to work with them.  We could
also possibly look at running the autotest client directly under lava,
but would require a bit of extra work to get the results into the
right format.

Thanks,
Paul Larson



Re: Automated kernel testing.

2011-04-06 Thread Paul Larson
Hi Chris, we actually have a project in progress called LAVA to do something
very similar to this.  Our initial target is to test full linaro images and
hardware packs, however we are also very interested in extending this to
include running with just an updated kernel in a known good image to test
things like upstream and landing team kernels.  I'm away from my computer
right now, but I would be very interested in having your input on this if you
are interested.  I should be around later this evening or tomorrow if you'd
like to chat further on irc.  Look for plars on freenode.

Thanks,
Paul Larson
On Apr 6, 2011 6:50 PM, Chris Ball c...@laptop.org wrote:
 Hi Linaro folks,

 I'd like to set up an automated testing environment to help me maintain
 the MMC subsystem: just a few boards of different architectures booting
 each day's linux-next and running basic performance and integrity tests
 without any manual intervention involved. I've got access to remote
 power switches, so I think it'll be pretty easy to get running.

 I was thinking that this will probably find generic kernel bugs as they
 appear in linux-next, and was wondering whether Linaro has any interest
 in such testing -- if so, do you have any preferred software to use for
 it, or any resources I should take a look at to avoid duplicating work?
 I've been looking at using autotest, but it seems a bit abandoned
 lately.

 Would be grateful for any pointers or ideas. Thanks!

 - Chris.
 --
 Chris Ball c...@laptop.org http://printf.net/
 One Laptop Per Child



Re: LAVA integration effort for Android validation

2011-03-15 Thread Paul Larson
On Mon, Mar 14, 2011 at 11:22 AM, Jeremy Chang jeremy.ch...@linaro.orgwrote:

 Hi, list:
   I am working on the Validation part for Android, which mainly
 involves integrating existing benchmark and testing suites into
 abrek, like 0xbench and android CTS. My current thought is that if these
 benchmarks/tests can integrate with abrek, then it should be not much
 extra effort to integrate them further into LAVA.

As discussed before, I think abrek is probably not the right choice for
running tests under android.  Abrek was designed to work on linaro images
running python in the client, specifically it was designed to run under
Linaro or Ubuntu images, before we had even considered android at all.  We
probably need some kind of mechanism for running the tests from the server
over adb instead, then outputting the results into a json bundle that the
dashboard can understand.



Though I have some questions regarding to validation.

How to detect the early fail through serial console? like
 detecting the failure during kernel booting stage also before running
 init or getting the shell. I think this will be a job issued by the
 dispatcher though I don't know how this checking mechanism will be
 done. Would there be anything here needed to do specifically for
 Android?

Right, the dispatcher currently handles this.  We are using pexpect to drive
things over the serial console.  So once the system is booting, we expect to
see a valid shell prompt within some reasonable timeout period.  If we
don't, then we know the boot failed.  I expect the bit that detects the
shell prompt may need to be changed somewhat for android.  Perhaps for
android we need to wait for a valid adb devices output? or a working adb
shell?
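The dispatcher drives this with pexpect; stripped to its essence, "expect a prompt within a timeout" is the loop below. This is a stdlib sketch with a blocking read, so real serial handling needs the non-blocking I/O that pexpect provides:

```python
import re
import subprocess
import time

def wait_for_prompt(proc, pattern, timeout):
    """Accumulate proc's stdout until `pattern` matches or `timeout` elapses.

    Returns the captured output on success, or None, which the dispatcher
    would treat as a failed boot."""
    deadline = time.monotonic() + timeout
    regex = re.compile(pattern)
    buf = ""
    while time.monotonic() < deadline:
        chunk = proc.stdout.read(1)  # serial consoles trickle bytes one at a time
        if chunk:
            buf += chunk
            if regex.search(buf):
                return buf
        elif proc.poll() is not None:
            break  # stream closed before the prompt ever appeared
    return None
```

For android the `pattern` would change to whatever marks a usable system, e.g. a line from `adb devices` instead of a shell prompt.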



How to connect the devices in the farm? I checked the wiki page,
 Platform/Validation/Specs/HardwareSetup [1]. The network will be used.
 I am thinking what's needed for setting up a network environment for
 this? The device is needed to fetch a fixed ip by dhcp just after
 booting up? USB gadget is an alternative for Android and in most
 situation could be more convenient for personal testing. adb (Android
 Debug Bridge, running on host side) can connect to a device through
 USB or TCP/IP.

USB gadget is hard for us, if we have a lot of machines.  Network would be
better, but can we rely on it to be working?  Maybe we just make that a
requirement for running android tests.  I don't currently have it set up,
but my current thinking is that we want to define a new network for the
validation machines, perhaps 10.1.x.x, and set up static assignments for
them in a dhcp server running on the control node.  This way, they could
either be configured statically, or via dhcp.
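For the static-assignment idea, the control node's dhcpd.conf would carry one host block per board. A hypothetical sketch, with the subnet, MAC address, and addresses all invented for illustration:

```
subnet 10.1.0.0 netmask 255.255.0.0 {
    range 10.1.200.1 10.1.200.254;        # fallback pool for unregistered boards
}

host panda01 {
    hardware ethernet 00:11:22:33:44:55;  # the board's MAC address (invented)
    fixed-address 10.1.1.10;              # always hand this board the same IP
}
```

A board booting with DHCP then always receives the same address, and a board configured statically in the same range behaves identically from the dispatcher's point of view.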


Thanks,
Paul Larson


Re: [PATCH 2/2 android/device/linaro/common] mount mmc partitions instead of mtd partitions

2011-03-14 Thread Paul Larson
On Mon, Mar 14, 2011 at 9:37 AM, Jeremy Chang jeremy.ch...@linaro.orgwrote:

 +# mount mmc partitions
 +mount ext4 mmc@blk0p3 /system
 +mount ext4 mmc@blk0p3 /system ro remount
 +mount ext4 mmc@blk0p5 /cache
 +mount ext4 mmc@blk0p6 /data
 +mount ext4 mmc@blk0p7 /sdcard

 In my understanding, this will require a very particular sd card partition
layout, which seems to use quite a few partitions.  Is this documented
somewhere?  I'm trying to think about whether it will be possible to make
this coexist with Linaro image testing in our validation environment where
we currently have 2 partitions for booting a master image, and two
partitions for booting a new linaro image we wish to test.

Thanks,
Paul Larson


Re: [PATCH 2/2 android/device/linaro/common] mount mmc partitions instead of mtd partitions

2011-03-14 Thread Paul Larson

 Hi, Paul:

  There's a wiki page here,
 https://wiki.linaro.org/Platform/Android/DevBoards
   Also I have created a branch
 lp:~jeremychang/linaro-image-tools/android for bug #724207.
   Besides using linaro-media-create to deploy the android contents,
 indeed I am curious about how we should create the 2 partitions for
 master image for automatic validation. Regarding to deployment, is
 there any effort we should do to integrate android to LAVA?
   Thanks.

 Do you think the sizes listed there will be adequate for long?  They seem a
bit small, but then again I know android is a lot more stripped down than
most of our linaro images.

You don't need to worry about creating the master image part.  That's
basically a piece that we put on the machine to make sure we have some
stable image to boot back into.  The only part that concerned me here was
the static mapping of partition numbers to mountpoints.  There are a couple
of different ways I could see this working:
1. with the current way we do things, we typically either do:
mmcblk0p1 - boot (for master image)
mmcblk0p2 - root (for master image)
then either p3, p4 for testboot, testrootfs, or an extended partition, with
2 or more logical partitions labeled testboot, testrootfs.
We could adjust this slightly and do:
mmcblk0p1 - boot
mmcblk0p2 - root
mmcblk0p3 - androidsystem
mmcblk0p4 [extended]
  mmcblk0p5 - androidcache
  mmcblk0p6 - androiddata
  mmcblk0p7 - androidsdcard / testrootfs (this could be the bulk of
remaining space used by both, depending on which image type we are booting)
  mmcblk0p8 - testboot

We would need to try it, but it might make a good solution for the short
term.
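As a rough illustration only, the layout in option 1 could be handed to sfdisk with something like the annotated input below. The sizes are invented guesses that nobody has validated, and the `#` annotations are for the reader, not literal sfdisk input.

```
,64M,c      # p1: boot (master image)
,1G,L       # p2: root (master image)
,512M,L     # p3: androidsystem
,,E         # p4: extended partition
,256M,L     # p5: androidcache
,512M,L     # p6: androiddata
,2G,L       # p7: androidsdcard / testrootfs
,64M,L      # p8: testboot
```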

2. Long term, I think we want to eventually get to a system where we are
loading a minimal initramfs from somewhere.  We're currently looking into a
custom hardware device that would allow us to load this from one sdcard,
then switch over to another for building images on.  If we get something
like that in place, then we can completely wipe out a test sd card that
would allow us to put whatever we want on it.  There are a lot of details to
work out with this, so I don't see it happening this cycle, but I think it's
where we want to be eventually.

Thanks,
Paul Larson


Re: [PATCH 2/2 android/device/linaro/common] mount mmc partitions instead of mtd partitions

2011-03-14 Thread Paul Larson
On Mon, Mar 14, 2011 at 12:05 PM, Paul Larson paul.lar...@linaro.orgwrote:


 1. with the current way we do things, we typically either do:
 mmcblk0p1 - boot (for master image)
 mmcblk0p2 - root (for master image)
 then either p3, p4 for testboot, testrootfs, or an extended partition, with
 2 or more logical partitions labeled testboot, testrootfs.
 We could adjust this slightly and do:
 mmcblk0p1 - boot
 mmcblk0p2 - root
 mmcblk0p3 - androidsystem
 mmcblk0p4 [extended]
   mmcblk0p5 - androidcache
   mmcblk0p6 - androiddata
   mmcblk0p7 - androidsdcard / testrootfs (this could be the bulk of
 remaining space used by both, depending on which image type we are booting)
   mmcblk0p8 - testboot

 We would need to try it, but it might make a good solution for the short
 term.

I see one thing that could cause a problem here.  I tried actually creating
an SD card with this partitioning scheme and it seems to choke on the last
partition.  Looking at the mmc block driver, it seems the default value for
MMC_BLOCK_MINORS is 8, so we only see the first 7 (one is for the whole
disk) partitions under /dev.  Even if we change this default in the kernel,
I'm not sure if uboot would still have a problem with it or not.
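The arithmetic behind that limit, as I understand it: the minors allocated per card include one for the whole-disk node, so the number of partition device nodes is one less than MMC_BLOCK_MINORS.

```python
# With the kernel default MMC_BLOCK_MINORS=8, one minor number is taken
# by the whole-disk node (mmcblk0), leaving nodes for only 7 partitions
# (mmcblk0p1..p7); an 8th partition gets no /dev entry.
MMC_BLOCK_MINORS = 8
visible_partitions = MMC_BLOCK_MINORS - 1  # minus the whole-disk node
print("partitions visible under /dev:", visible_partitions)
```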

Thanks,
Paul Larson


Re: problem with linaro-media-create

2011-03-04 Thread Paul Larson
On Fri, Mar 4, 2011 at 2:09 AM, Spring Zhang spring.zh...@linaro.orgwrote:


 Ok.  Which wiki page did you look at?  We should be directing maverick
 users
 to enable the linaro-maintainers/tools ppa, which includes both the
 linaro-image-tools and qemu-linaro packages.

 I can find linaro-image-tools, but no qemu-linaro on maverick, and I added
 PPA.

 qemu-linaro is just the source package name.  If you already have the
necessary qemu bits installed, just update.  Otherwise, iirc you need
qemu-user-static or qemu-kvm-extras-static.

Thanks,
Paul Larson


Re: linaro-omap kernel 1003.6 boot failure

2011-02-22 Thread Paul Larson
On Tue, Feb 22, 2011 at 7:09 AM, Frederic Turgis frederic.tur...@linaro.org
 wrote:

 Hi,

 To perform some systemtap checking, I needed kernel package with debug
 symbols, which is available in 1003.6 version from
 http://ddebs.ubuntu.com/pool/universe/l/linux-linaro-omap/
 So I did some apt-get update, install
 linux-image-2.6.37-1003-linaro-omap followed by flash-kernel
 2.6.37-1003-linaro-omap.
 Boot fails at some point (before setting up omap_hsuart in an OK log
 from 2.6.37-1002)

 Any known issue or thing I am doing wrong ?

 Hi Fred, this is a known bug.  You can track the progress of it here:
https://bugs.launchpad.net/ubuntu/+source/linux-linaro/+bug/720055

Thanks,
Paul Larson


Re: Booting problem on Panda board

2011-02-21 Thread Paul Larson
On Mon, Feb 21, 2011 at 6:26 AM, Avik Sil avik@linaro.org wrote:

 On Monday 21 February 2011 03:13 PM, Andy Green wrote:

 On 02/21/2011 08:20 AM, Somebody in the thread at some point said:

 Hi,

 I've been trying to boot Linaro on Panda board following the
 instructions given in https://wiki.linaro.org/Boards/Panda. I've used
 linaro-natty-headless-tar-20110216-0.tar.gz and
 hwpack_linaro-panda_20110217-0_armel_supported.tar.gz. After writing the
 image successfully on an SD card I tried booting the board from it and
 got a kernel panic: http://paste.ubuntu.com/569954

 Please let me know how to get out of this problem.


 There's a bug for it here with some more comment on it FWIW

 https://bugs.launchpad.net/ubuntu/+source/linux-linaro/+bug/720055

  After reverting to an older headless image and hwapck (of 20110214), I
 could get rid of this problem but unable to get the shell prompt:
 http://paste.ubuntu.com/570020. The system stops there.

 Also, pressing the PWRON_RESET button does not reboot the system.

Hi Avik, I suspect you are now hitting the power off bug [1].  It seems to
strike at random places in the boot for different people/images.

Thanks,
Paul Larson

[1] https://bugs.launchpad.net/ubuntu/+source/linux-linaro/+bug/708883


Re: Please review blueprint linaro-n-validation-job-dispatcher

2011-02-14 Thread Paul Larson
On Mon, Feb 14, 2011 at 5:58 AM, Mirsad Vojnikovic 
mirsad.vojniko...@linaro.org wrote:


 I have a question regarding test job json file and messages from Scheduler
 to Dispatcher: is this file created by Scheduler and put in the message
 queue where Dispatcher will fetch it from, or how should this work?

 That's one possibility, yes, that the scheduler could create this from user
input.  Another of course, is that a user could submit a job control file
directly to the scheduler rather than dealing with selecting options.


Re: /latest link now active on snapshots.linaro.org

2011-02-14 Thread Paul Larson
On Mon, Feb 14, 2011 at 10:39 AM, Amit Kucheria amit.kuche...@linaro.orgwrote:

 The currently link needs the following navigation:


 http://snapshots.linaro.org/11.05-daily/linaro-developer/latest/0/images/tar/linaro-n-developer-tar-20110214-0.tar.gz

 Perhaps it would also be possible to simplify that to


 http://snapshots.linaro.org/11.05-daily/linaro-developer/latest/linaro-n-developer.tar.gz?

Only if linaro-n-developer.tar.gz is a symlink to
linaro-n-developer-tar-DATESTAMP-BUILDID.tar.gz.  It is still useful to have
some easy way to see which image we are pulling, even if it means pulling it
out of the filename.  Another alternative, would be if we could drop some
buildstamp file in the image directory to pull this from.

Another thought too... if we are already going to this extent to make it
easier for people to stay up to date with the latest image, does it make
sense to go ahead and enable creation of .zsync files for the images?
Granted, these aren't nearly the size of an iso, but some of our folks with
slower connections might appreciate it.

Thanks,
Paul Larson


Validation team weekly status

2011-02-09 Thread Paul Larson
Wiki version:
https://wiki.linaro.org/Platform/Validation/Status/2011-02-10

 * Period: (20110203-20110209)
 * PM: TBD
 * Past reports : https://wiki.linaro.org/Platform/Validation
 * Burndown information : http://status.linaro.org

== Key Points for wider discussion ==
 * None

== Team Highlights ==
 * Zygmunt and Deepti's latest dashboard changes have been merged. This
brings in chart support for the toolchain working group as well as uniform
object ownership management.
 * Discussions with outside groups on participation in LAVA development
 * Lab hardware arrived in Cambridge, being worked on this week
 * Talk on LAVA submitted for ELC
 * Weekly/Alpha-2 qatracker testing on several boards

== Upcoming Deliverables ==
 * No immediate milestones due.

=== Blueprint Milestones ===
 * No immediate milestones due

== Risks / Issues ==
 * No Project manager currently - Paul taking care of status, meetings, etc.
until this is resolved

== Miscellaneous ==
 * Spring Zhang on leave February 1st - 11th.
 * Paul Larson on US holiday Feb 21
 * Mirsad is still at 50%, will move to alternating days schedule (marked on
his calendar)


Re: Please review blueprint linaro-n-validation-job-dispatcher

2011-02-09 Thread Paul Larson
On Mon, Jan 31, 2011 at 11:58 AM, Paul Larson paul.lar...@linaro.orgwrote:

  2. One queue TOTAL. One queue may seem like a bottleneck, but I don't
 think it has to be in practice.  One process can monitor that queue, then
 launch a process or thread to handle each new job that comes in.

 I think RabbitMQ message can have the ability to include the board
 information in it to use one queue, and also we can use different queues for
 different types of boards.

 Right, but my question was, if you are already encoding that information in
 the job stream, what is the advantage to having a queue for each board type,
 rather than a single queue?





 Job description:
 I'd like to see some more detail here.  Can you provide an example of a
 job file that would illustrate how you see it working?  We also need to
 specify the underlying mechanisms that will handle parsing what's in the job
 file, and calling [something?] to do those tasks.  What we have here feels
 like it might be a bit cumbersome.

 I added a detailed one on the spec, like:

1.

["Beagle-1",
 "http://snapshots.linaro.org/11.05-daily/linaro-hwpacks/imx51/20110131/0/images/hwpack/hwpack_linaro-imx51_20110131-0_armel_unsupported.tar.gz",
 "http://snapshots.linaro.org/11.05-daily/linaro-headless/20110131/0/images/tar/linaro-natty-headless-tar-20110131-0.tar.gz",
 900, ["abrek", "LTP", 180], ["none", "echo 100 > abc", 1], ... ]

2.

["IMX51-2", "20100131", 900, ["abrek", "LTP", 180], ["none", "echo 100 > abc", 1], ... ]

 This looks almost like JSON, which is what Zygmunt was originally pushing
 for IIRC.  If we are already going down that path, seems it would be
 sensible to take it a step further and have defined sections.  For example:

 Tests: [
 {
 "name": "LTP",
 "testsuite": "abrek",
 "subtest": "ltp",  <- not thrilled about that name "subtest",
 but can't think of something better to call it at the moment
 "timeout": 180,
 "reboot_after": True
 },
 {
 "name": "echo test",
 "testsuite": "shell",
 "timeout": 20,
 "reboot_after": False
 }
 ]
 What do you think?  Obviously, there would be other sections for things
 like defining the host characteristics, the image to deploy, etc.





 Other questions...
 What if we have a dependency, how does that get installed on the target
 image? For example, if I want to do a test that requires bootchart be
 installed before the system is booted, we should be able to specify that,
 and have it installed before booting.

 What about installing necessary test suites?  How do we tell it, in
 advance, what we need to have installed, and how does it get on the image
 before we boot, or before we start testing?

 I think validation tools, test suites and necessary files are likely to
 install after test image deployment.

 Yes, clearly they have to be installed after deployment, but we may also
 need to consider them installing BEFORE we actually boot the test image.

 Had another thought on this tonight, I didn't hear much back on the
previous proposal, but how about something more like this as an example of
what a job description might look like.  In this scenario, there would be
some metadata for the job itself, such as timeout values, etc.  Then steps
that would define commands with parameters.  That would allow us to be more
flexible, and add support for more commands without changing the format.
Hopefully the mailer doesn't mangle this too badly. :)

{
  "job_name": "foo",
  "target": "panda01",
  "timeout": 18000,
  "steps": [
    {
      "command": "deploy",
      "parameters": {
        "rootfs": "http://snapshots.linaro.org/11.05-daily/linaro-developer/20110208/0/images/tar/linaro-n-developer-tar-20110208-0.tar.gz",
        "hwpack": "http://snapshots.linaro.org/11.05-daily/linaro-hwpacks/panda/20110208/0/images/hwpack/hwpack_linaro-panda_20110208-0_armel_supported.tar.gz"
      }
    },
    {
      "command": "boot_test_image"
    },
    {
      "command": "test_abrek",
      "parameters": {
        "test_name": "ltp"
      }
    },
    {
      "command": "submit_results",
      "parameters": {
        "server": "http://dashboard.linaro.org",
        "stream": "panda01-ltp"
      }
    }
  ]
}
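As a quick sanity check that this steps/parameters shape is easy to consume, here is a toy walk over such a job description in Python. The handler names and log messages are hypothetical stand-ins, not real dispatcher code.

```python
import json

# Toy handlers keyed by command name; a real dispatcher would do the
# actual deploy/boot/test work instead of returning log strings.
HANDLERS = {
    "deploy": lambda **p: f"deploying rootfs={p.get('rootfs')}",
    "boot_test_image": lambda **p: "booting test image",
    "test_abrek": lambda **p: f"running abrek test {p.get('test_name')}",
    "submit_results": lambda **p: f"submitting to {p.get('server')}",
}

def run_job(job_json):
    """Parse a job description and run each step's command in order."""
    job = json.loads(job_json)
    log = []
    for step in job["steps"]:
        handler = HANDLERS[step["command"]]
        log.append(handler(**step.get("parameters", {})))
    return log

# A trimmed-down job in the proposed format.
example = '''{
  "job_name": "foo",
  "target": "panda01",
  "timeout": 18000,
  "steps": [
    {"command": "boot_test_image"},
    {"command": "test_abrek", "parameters": {"test_name": "ltp"}}
  ]
}'''
```

Adding support for a new command is then just adding a handler, without changing the job file format.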

Thanks,
Paul Larson


Re: problem with linaro-media-create

2011-02-07 Thread Paul Larson
On Mon, Feb 7, 2011 at 8:25 AM, Aneesh V ane...@ti.com wrote:

 Hi,

 I am trying to prepare an MMC card to boot up Panda with Linaro daily
 build image.

 I downloaded linaro-image-tools from:
 https://launchpad.net/linaro-image-tools

Hi Aneesh, this is a known problem.  For the moment, if you are not using
natty, the easiest workround is to simply upgrade to the version of
qemu-kvm-extras-static in natty.

Thanks,
Paul Larson


LAVA scheduler spec

2011-02-04 Thread Paul Larson
Hi Mirsad, I'm looking at the recent edits to
https://wiki.linaro.org/Platform/Validation/Specs/ValidationScheduler and
wanted to start a thread to discuss.  Would love to hear thoughts from
others as well.

We could probably use some more in the way of implementation details, but
this is starting to take shape pretty well, good work.  I have a few
comments below:

 Admin users can also cancel any scheduled jobs.
Job submitters should be allowed to cancel their own jobs too, right?

I think in general, the user stories need tweaking.  Many of them center
around automatic scheduling of jobs based on some event (adding a machine,
adding a test, etc).  Based on the updated design, this kind of logic would
be in the piece we were referring to as the driver.  The scheduler shouldn't
be making those decisions on its own, but it should provide an interface for
both humans to schedule jobs (web, cli) as well as an API for machines
(driver) to do this.

 should we avoid scheduling image tests twice because a hwpack is coming in
after images or vv.
Is this a question?  Again, I don't think that's the scheduler's call.  The
scheduler isn't deciding what tests to run, and what to run them on.  In
this case, assuming we have the resources to pull it off, running the new
image with the old, and the new hwpack would be good to do.

 Test job definition
Is this different from the job definition used by the dispatcher?  Please
tell me if I'm missing something here, but I think to schedule something,
you only really need two blobs of information:
1a. specific host to run on
   -OR-
1b. (any/every system matching given criteria)
This one is tricky, and though it sounds really useful, my personal
feeling is that it is of questionable value.  In theory, it lets you make
more efficient use of your hardware when you have multiple identical
machines.  In practice, what I've seen on similar systems is that humans
typically know exactly which machine they want to run something on.  Where
it might really come in to play is later when we have a driver automatically
scheduling jobs for us.
2. job file - this is the piece that the job dispatcher consumes.  It could
be handwritten, machine generated, or created based on a web form where the
user selects what they want.

 Test job status
One distinction I want to make here is job status vs. test result.  A failed
test can certainly have a complete job status.
Incomplete, as a job status, just means that the dispatcher was unable to
finish all the steps in the job.  For instance, a better example would be
if we had a test that required an image to be deployed, booted, and a test
run on it.  If we tried to deploy the image and hit a kernel panic on
reboot, that is an incomplete job because it never made it far enough to run
the specified test.

 Link to test results in launch-control
If we tie this closely enough with launch-control, it seems we could just
communicate the job id to the dispatcher so that it gets rolled up with the
bundle.  That way the dashboard would have a backlink to the job, and could
create the link to the bundle once it is deserialized.  Just a different
option if it's easier.  I don't see an obvious advantage to either approach.

Thanks,
Paul Larson


Re: linaro-n-validation-job-dispatcher

2011-02-02 Thread Paul Larson
Moving this thread to the linaro-dev mailing list because I'd like it to be
public discussion, and specifically to involve Rob since we talked about his
interest in getting involved with LAVA development and especially the job
dispatcher.

2. One queue TOTAL. One queue may seem like a bottleneck, but I don't think
 it has to be in practice.  One process can monitor that queue, then launch a
 process or thread to handle each new job that comes in.

 I think RabbitMQ message can have the ability to include the board
 information in it to use one queue, and also we can use different queues for
 different types of boards.

Right, but my question was, if you are already encoding information about
which host to run on in the job stream, what is the advantage to having a
queue for each board type, rather than a single queue?
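The single-queue idea can be sketched in a few lines: every job message carries its target board, so one queue serves all board types and a monitor process routes each job as it arrives. This is an illustrative toy using Python's in-process queue, not RabbitMQ.

```python
import queue

# One queue total; the target board travels inside the message.
job_queue = queue.Queue()

def submit(target, job):
    """Enqueue a job message tagged with the board it should run on."""
    job_queue.put({"target": target, "job": job})

def dispatch_one():
    """Take the next message and route it; a real monitor would spawn a
    worker process/thread bound to the named board here."""
    msg = job_queue.get()
    return (msg["target"], msg["job"])
```

The per-board-type queues buy nothing this doesn't already provide, since the routing key is in the message either way.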





 Job description:
 I'd like to see some more detail here.  Can you provide an example of a
 job file that would illustrate how you see it working?  We also need to
 specify the underlying mechanisms that will handle parsing what's in the job
 file, and calling [something?] to do those tasks.  What we have here feels
 like it might be a bit cumbersome.

 I added a detailed one on the spec, like:

1.

["Beagle-1",
 "http://snapshots.linaro.org/11.05-daily/linaro-hwpacks/imx51/20110131/0/images/hwpack/hwpack_linaro-imx51_20110131-0_armel_unsupported.tar.gz",
 "http://snapshots.linaro.org/11.05-daily/linaro-headless/20110131/0/images/tar/linaro-natty-headless-tar-20110131-0.tar.gz",
 900, ["abrek", "LTP", 180], ["none", "echo 100 > abc", 1], ... ]

2.

["IMX51-2", "20100131", 900, ["abrek", "LTP", 180], ["none", "echo 100 > abc", 1], ... ]

 This looks almost like JSON, which is what Zygmunt was originally pushing
for IIRC.  If we are already going down that path, seems it would be
sensible to take it a step further and have defined sections.  For example:

Tests: [
{
"name": "LTP",
"testsuite": "abrek",
"subtest": "ltp",  <- not thrilled about that name "subtest", but
can't think of something better to call it at the moment
"timeout": 180,
"reboot_after": True
},
{
"name": "echo test",
"testsuite": "shell",
"timeout": 20,
"reboot_after": False
}
]
What do you think?  Obviously, there would be other sections for things like
defining the host characteristics, the image to deploy, etc.





 Other questions...
 What if we have a dependency, how does that get installed on the target
 image? For example, if I want to do a test that requires bootchart be
 installed before the system is booted, we should be able to specify that,
 and have it installed before booting.

 What about installing necessary test suites?  How do we tell it, in
 advance, what we need to have installed, and how does it get on the image
 before we boot, or before we start testing?

 I think validation tools, test suites and necessary files are likely to
 install after test image deployment.

Yes, clearly they have to be installed after deployment, but we may also
need to consider them installing BEFORE we actually boot the test image.


Thanks,
Paul Larson


Re: Panda Board power supplies

2011-02-02 Thread Paul Larson

 Semi-relatedly, I lost the .au/.nz plug for my beagle xM power supply at

 the rally, so if someone wants to mail me theirs or bring it to Budapest
 ... :-)

I can bring you mine in Budapest :)


Re: do recent linaro-headless dailies work on your beagle xM?

2010-12-15 Thread Paul Larson
I can reproduce this on qemu as well, it looks like the last working hwpack
was 20101207, and 20101208 fails.  Has anyone filed a bug for this already?

Thanks,
Paul Larson


Re: New problem with BeagleBoard

2010-11-22 Thread Paul Larson
Hi Ira, just out of curiosity, have you tried this with the 10.11 release
images?  I know some people tested that with Beagle XM and it was reported
to work, so it may be worth seeing if it's something with your board, or
something in u-boot that has broken since then.

Thanks,
Paul Larson


Re: Call for testing: Linaro 10.11 FINAL

2010-11-09 Thread Paul Larson
On Tue, Nov 9, 2010 at 5:12 PM, john stultz johns...@us.ibm.com wrote:


 So I know it's not on the testing list, but the qemu instructions on the
 wiki page fail, as they don't include an omap3 hwpack.

 Feel free to update the wiki :)


 Further, after providing an omap3 hwpack to build the image, the
 resulting image seems to have partition table issues when booted with
 qemu:

 [   24.961090] EXT3-fs (mmcblk0p2): mounted filesystem with ordered data
 mode
 Begin: Running /scripts/local-bottom ... done.
 done.
 Begin: Running /scripts/init-bottom ... done.
 fsck from util-linux-ng 2.17.2
 rootfs: The filesystem size (according to the superblock) is 1030168 blocks
 The physical size of the device is 1030118 blocks
 Either the superblock or the partition table is likely to be corrupt!


 rootfs: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
 mountall: fsck / [224] terminated with status 4
 mountall: Filesystem has errors: /
 Errors were found while checking the disk drive for /
 Press F to attempt to fix the errors, I to ignore, S to skip mounting or M
 for manual recovery

 That bug *should* be fixed as of a few weeks ago:

revno: 141
fixes bug(s): https://launchpad.net/bugs/658511
committer: matt.wad...@linaro.org
branch nick: align-image
timestamp: Wed 2010-10-13 12:35:44 -0600
message:
  Round the size of the raw disk image up to a multiple of 256K
  so it is an exact number of SD card erase blocks in length.
  Otherwise Linux under qemu cannot access the last part of the
  card and is likely to complain that the last partition on the
  disk has been truncated.

Is your branch up to date?  If so, what are the command line options you are
using to create the image?
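For reference, the rounding that commit describes amounts to ceiling division by the 256K erase-block size; roughly:

```python
# 256K SD-card erase block, per the commit message quoted above.
ERASE_BLOCK = 256 * 1024

def align_image_size(size_bytes):
    """Round a raw image size up to a whole number of erase blocks."""
    blocks = -(-size_bytes // ERASE_BLOCK)  # ceiling division
    return blocks * ERASE_BLOCK
```

If the image were not rounded this way, qemu could lose access to the tail of the card and the last partition would look truncated, which matches the fsck symptom above.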

Thanks,
Paul Larson


Re: Call for testing: Linaro 10.11 FINAL

2010-11-09 Thread Paul Larson
The size correction in lmc happens at the end, so my best guess is that it's
a qemu issue.  I'd put a bug in against that, and we should probably release
note it as well.

-Paul Larson

On Nov 9, 2010 6:58 PM, john stultz johns...@us.ibm.com wrote:

On Tue, 2010-11-09 at 17:47 -0600, Paul Larson wrote:
 On Tue, Nov 9, 2010 at 5:12 PM, john stultz ...
Sure, if the bug below doesn't warrant removing it.



 Further, after providing an omap3 hwpack to build the image, the re...
Yep. bzr pulled today. I'm on rev 165.

I've just gone back testing older images and hwpacks I've used in the
past and am seeing the same issue there, so it does seem like it's
connected to the linaro-media-create scripts.

Here's the command I'm using, just as listed on the wiki:
#linaro-media-create --image_file beagle_sd.img --image_size 4G --swap_file
200 --dev beagle --binary
linaro-images/linaro-m-headless-tar-20101108-2.tar.gz  --hwpack
linaro-images/hwpack_linaro-omap3_20101109-1_armel_supported.tar.gz


Per our irc discussion, I tried reproducing this by omitting the
--image_size 4G portion, and indeed it seems that worked and the
resulting image boots fine. Similarly it works fine using 8G. Something
just seems off using 4G.

Looking at linaro-media-create to see if I can narrow anything down.

thanks
-john


Quick test of images from a different build farm

2010-10-13 Thread Paul Larson
Hi, I have a request for some testing on a few images that have been
produced in a different build farm.  The goal of this testing is not really
to validate the images themselves, but to make sure that we aren't seeing
anything broken that looks like it was due to the new build environment.
One thing to note, is that the test images I have for this are from
20101006, so I don't expect to see significant differences from the weekly
testing on the 7th, last week.

For convenience, I set up a milestone on the Linaro QA Tracker [1] under the
heading New build farm to track the results of this testing.  I would
greatly appreciate if someone has a few spare cycles and can help by loading
a few of these images on the boards they have available, and give it a quick
try.

The images for testing are located at:
http://people.canonical.com/~plars/linaro/images/

Instructions for installing these images are the same as for the ones that
we've produced in the past.  For reference, those instructions are posted
at: http://wiki.linaro.org/Source/ImageInstallation.

Thanks,
Paul Larson

[1] http://qatracker.linaro.org/


Re: Weekly Linaro image testing

2010-09-27 Thread Paul Larson
On Mon, Sep 27, 2010 at 3:56 PM, Jamie Bennett jamie.benn...@linaro.orgwrote:

 Hi,

 We have only 43 (or 42 depending on your timezone) days until the Linaro
 final release [1] and
 as we get closer, the need for more structured testing is essential. An
 idea that has been
 incubating for some time is to have a weekly 'test day' where individuals
 can spend a couple
 of hours downloading, installing and testing a Linaro image on their
 hardware. Which image(s)
 you can test will obviously depend on the hardware available but currently
 we produce:

  * OMAP3 Headless
  * OMAP3 ALIP
  * OMAP3 Plasma Handset
  * OMAP3 EFL Netbook
  * Versatile Express Headless
  * UX500 Headless

 I will work on getting testcases written up on the wiki for this similar to
what is at https://wiki.linaro.org/Platform/QA/TestCases/HeadlessOMAP before
this Thursday.  We currently have OMAP3 Headless and VExpress Headless
written up, and John Rigby has agreed to help me get a Ux500 version written
since he has a board.  Anyone who has knowledge of the other images, I
welcome your input in writing up test instructions for them.

Thanks,
Paul Larson


Re: hudson instance for daily u-boot builds

2010-09-07 Thread Paul Larson
On Tue, Sep 7, 2010 at 2:41 PM, Loïc Minier loic.min...@linaro.org wrote:

Hey

  Per the topic quoted below (kernel + u-boot release process), I've
  setup the awesome hudson CI server on my personal system until we can
  move it to the datacenter; it's accessible at http://hudson.dooz.org/

This looks great!  Just out of curiosity, did you look at anything else such
as buildbot maybe?  I'm curious what reasons you had for choosing hudson
over other alternatives.

Thanks,
Paul Larson