Re: 'menu' in OFW on an XO-1
got the same result with 2 x XO-1 Q2E45 (dextrose439dg and 373pyg)

have not used 'menu' before, always used 'help' to see options

Tony

> With firmware Q2E45 on an XO-1 (yes, I finally got it unlocked!), I
> type 'menu' at the ok prompt.
>
> On an XO-1.5, I get a very useful list of icons that run different
> hardware tests. On this XO-1, I only get an array of square outlines.
> The first (top-left) is blue. The others are black. Clicking on them
> does nothing.
>
> Is that expected behaviour?

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel
'menu' in OFW on an XO-1
With firmware Q2E45 on an XO-1 (yes, I finally got it unlocked!), I type 'menu' at the ok prompt.

On an XO-1.5, I get a very useful list of icons that run different hardware tests. On this XO-1, I only get an array of square outlines. The first (top-left) is blue. The others are black. Clicking on them does nothing.

Is that expected behaviour?
Re: Discovering the XO's local timezone in a bash script
Well, 'My Settings' -> 'Date & Time' allows the user to specify the local timezone (I always do this) - but I'm not sure which routines actually make use of that setting.

It can be extracted by (all on one line):

gconftool-2 --direct --config-source=xml:readwrite:/home/olpc/.gconf/desktop/sugar/date --get /timezone

but then you have to decode that string (e.g., America/Chicago).

mikus
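A minimal sketch of that extraction step, assuming a Sugar install where gconftool-2 and that gconf path exist; the fallback to UTC for machines where the key was never set is my assumption, not something the thread confirms:

```shell
#!/bin/sh
# Sketch: read the Sugar timezone key, falling back to UTC when
# gconftool-2 is missing or the key has never been set (assumption).
tz=$(gconftool-2 --direct \
     --config-source=xml:readwrite:/home/olpc/.gconf/desktop/sugar/date \
     --get /timezone 2>/dev/null || true)
echo "${tz:-UTC}"
```

The value that comes back (e.g. America/Chicago) is a standard Olson zone name, so no further decoding is needed beyond handing it to the TZ environment variable.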
Re: Discovering the XO's local timezone in a bash script
On 03/12/2011 10:21 PM, Samuel Greenfeld wrote:
> The XO image OLPC supplies defaults to UTC. Users can select a time zone
> offset in Sugar if they want, but it is purely a numerical control at
> last check and does not allow you to choose a setting that is regional
> (like "America/New York" or "EST5EDT", which include Daylight Savings
> support).

ok so not automatic, but I can still work with that. I'll make sure to tell people testing solar to please set that to the value for their region.

> Sugar reports only relative times in its core GUI, so I don't know how
> common it is for deployments or other users to actually change this
> setting. Setting a time zone other than UTC with 10.1.3 and prior may
> also expose a flaw where the offset is repetitively applied every
> reboot, shifting the clock.

Apparently not very common. Grepping through my collection of 1217 power log files I only found 61 of them that had that value set to something other than GMT or UTC. Granted, most of those files were generated by me and I didn't know about that control, but I also have a lot of logs from users.

Thanks for the info.

--
Richard A. Smith
One Laptop per Child
Re: Discovering the XO's local timezone in a bash script
The XO image OLPC supplies defaults to UTC. Users can select a time zone offset in Sugar if they want, but it is purely a numerical control at last check and does not allow you to choose a setting that is regional (like "America/New York" or "EST5EDT", which include Daylight Savings support).

Sugar reports only relative times in its core GUI, so I don't know how common it is for deployments or other users to actually change this setting. Setting a time zone other than UTC with 10.1.3 and prior may also expose a flaw where the offset is repetitively applied every reboot, shifting the clock.

Perhaps proper time zone support (with the optional frame clock some deployments like) could be part of the 11.2.0 release if there is a good reason to include it. But that's a design discussion for the Sugar-devel list, as I don't quite know why Sugar avoids showing actual times in the first place. If we decide to include a Calendar app in 11.2.0, that might also be good justification for making time zone support easier to use.

--- SJG

On 3/12/2011 10:05 PM, Richard A. Smith wrote:
> I've added a new feature in my power logging processor that allows the
> plotting of power input vs time of day. To do this I have to know the
> local timezone so I can translate the UTC datestamp back to the local time.
>
> `date` says the XO's timezone is set to UTC so I'll have to get it from
> somewhere else.
>
> Can someone give me a quick rundown of how we are managing timezones so
> I know what to look at to determine this?
Re: Discovering the XO's local timezone in a bash script
Last I knew, we used standard Linux conventions for timezones, and Sugar called the standard Linux commands (via sudo) to set the timezone. But that should make 'date' report the correct local time (unless you use '-u'), so maybe someone broke that sometime in the past two years. Check /etc/timezone?
 --scott

--
( http://cscott.net/ )
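A hedged sketch of that check. The paths here are the conventional ones (Debian-style systems write /etc/timezone; others leave /etc/localtime as a symlink into /usr/share/zoneinfo) — which of these the XO build actually populates is exactly the open question in this thread:

```shell
#!/bin/sh
# Try the usual places a Linux system records its zone name.
if [ -s /etc/timezone ]; then
    cat /etc/timezone
elif [ -L /etc/localtime ]; then
    # Strip the zoneinfo prefix from the symlink target,
    # e.g. /usr/share/zoneinfo/America/Chicago -> America/Chicago
    readlink /etc/localtime | sed 's|.*/zoneinfo/||'
else
    echo UTC    # nothing found; assume the clock is on UTC
fi
```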
Discovering the XO's local timezone in a bash script
I've added a new feature in my power logging processor that allows the plotting of power input vs time of day. To do this I have to know the local timezone so I can translate the UTC datestamp back to the local time.

`date` says the XO's timezone is set to UTC so I'll have to get it from somewhere else.

Can someone give me a quick rundown of how we are managing timezones so I know what to look at to determine this?

--
Richard A. Smith
One Laptop per Child
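For the conversion itself, once a zone name is in hand no special decoding is needed: Olson names are valid TZ values, so GNU date (as on the XO's Fedora-based builds) can translate a UTC datestamp directly. A sketch, with a made-up datestamp for illustration:

```shell
#!/bin/sh
# Convert a UTC datestamp from a log to local time by setting TZ.
utc_stamp='2011-03-13 03:00:00 UTC'
TZ='America/Chicago' date -d "$utc_stamp" '+%Y-%m-%d %H:%M %Z'
```

With tzdata installed, this prints the instant in Chicago local time, DST rules included, which is the translation the power-log plots need.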
Re: Memory replacement
On Sat, Mar 12, 2011 at 5:51 PM, Arnd Bergmann wrote:
> I've had four cards with a Sandisk label that had unusual characteristics
> and manufacturer/OEM IDs that refer to other companies, three Samsung ("SM")
> and one unknown ("BE", possibly Lexar). In all cases, Sandisk support
> has confirmed from photos that the cards are definitely fake.

Please see the blog post I cited in the email immediately prior to yours, which discusses this situation precisely. Often the cards are not actually "fake" -- they may even be produced on the exact same equipment as the usual cards, but "off the books" during hours when the factory is officially closed. This sort of thing is very, very widespread, and fakes can come even via official distribution channels. (Discussed in bunnie's post.)

> However, they have apparently managed to make them work well
> for random access by using some erase blocks as SLC (writing only
> the pages that carry the most significant bit in each cell) and
> by doing log-structured writes in there, something that apparently
> others have not figured out yet. Also, as I mentioned, they
> consistently use a relatively large number of open erase blocks.
> I've measured both effects on SD cards and USB sticks.

You've been lucky.

> I believe you can get this level of sophistication only from
> companies that make the NAND flash, the controller and the card:
> Sandisk, Samsung and Toshiba.
> Other brands that just get the controllers and the flash chips
> from whoever sells them cheaply (Kingston, Adata, Panasonic,
> Transcend, ...) apparently don't get the really good stuff.

You're giving the OEMs too much credit. As John says, unless you arrange for a special SKU, even the "first source" companies will give you whatever they've got cheap that day.

>> How we deal with this is constant testing and getting notification from
>> the manufacturer that they are changing the internals (unfortunately,
>> we aren't willing to pay the premium to have a special SKU).
>
> Do you have test results somewhere publicly available? We are currently
> discussing adding some tweaks to the Linux MMC drivers to detect cards
> with certain features, and to do some optimizations in the block layer
> for common ones.

http://wiki.laptop.org/go/NAND_Testing

But the testing wad is talking about is really *on the factory floor*: regular sampling of chips as they come into the factory to ensure that the chips *you are actually about to put into the XOs* are consistent. Relying on manufacturing data reported by the chips is not reliable.
 --scott

--
( http://cscott.net/ )
Re: Memory replacement
On Friday 11 March 2011 18:28:49 John Watlington wrote:
>
> On Mar 11, 2011, at 5:35 AM, Arnd Bergmann wrote:
>
> > I've tested around a dozen media from them, and while you are right
> > that they use rather different algorithms and NAND chips inside, all
> > of them can write to at least 5 erase blocks before getting into
> > garbage collection, which is really needed for ext3 file systems.
> >
> > Contrast this with Kingston cards, which all use the same algorithm
> > and can only write data linearly to one erase block at a time, resulting
> > in one or two orders of magnitude higher internal write amplification.
> >
> > Most other vendors are somewhere in between, and you sometimes get
> > fake cards that don't do what you expect, such as a bunch of Samsung
> > microSDHC cards that I have which are labeled Sandisk on the outside.
>
> Those aren't fakes. That is what I'm trying to get across.

I've had four cards with a Sandisk label that had unusual characteristics and manufacturer/OEM IDs that refer to other companies, three Samsung ("SM") and one unknown ("BE", possibly Lexar). In all cases, Sandisk support has confirmed from photos that the cards are definitely fake. They also explained that all authentic cards (possibly fake ones as well, but I have not seen them) will be labeled "Made in China", not "Made in Korea" or "Made in Taiwan" as my fake ones are, and that authentic microSD cards have the serial number on the front side, not on the back.

> > I've also seen some really cheap noname cards outperform similar-spec'd
> > Sandisk cards, both regarding maximum throughput and the garbage collection
> > algorithms, but you can't rely on that.
>
> My point is that you can't rely on Sandisk either.
>
> I've been in discussion with both Sandisk and Adata about these issues,
> as well as constantly testing batches of new SD cards from all major
> vendors.
>
> Unless you pay a lot extra and order at least 100K, you have no
> control over what they give you. They don't just change NAND chips,
> they change the controller chip and its firmware. Frequently.
> And they don't update either the SKU number, part marking or the
> identification fields available to software. The manufacturing batch
> number printed on the outside is the only thing that changes.

I agree that you cannot rely on specific behavior to stay the same with any vendor. One thing I noticed, for instance, is that many new Sandisk cards are using TLC (three-level cell) NAND, which is inherently slower and cheaper than the regular two-level MLC used in older cards or those from other vendors. However, they have apparently managed to make them work well for random access by using some erase blocks as SLC (writing only the pages that carry the most significant bit in each cell) and by doing log-structured writes in there, something that apparently others have not figured out yet. Also, as I mentioned, they consistently use a relatively large number of open erase blocks. I've measured both effects on SD cards and USB sticks.

I believe you can get this level of sophistication only from companies that make the NAND flash, the controller and the card: Sandisk, Samsung and Toshiba. Other brands that just get the controllers and the flash chips from whoever sells them cheaply (Kingston, Adata, Panasonic, Transcend, ...) apparently don't get the really good stuff.

> How we deal with this is constant testing and getting notification from
> the manufacturer that they are changing the internals (unfortunately,
> we aren't willing to pay the premium to have a special SKU).

Do you have test results somewhere publicly available? We are currently discussing adding some tweaks to the Linux MMC drivers to detect cards with certain features, and to do some optimizations in the block layer for common ones.

Arnd
Re: Memory replacement
Canonical related blog post: http://www.bunniestudios.com/blog/?p=918

Mandatory reading for anyone who has to deal with flash memory.
 --scott

--
( http://cscott.net/ )
Re: Memory replacement
On Mar 11, 2011, at 5:35 AM, Arnd Bergmann wrote:

> I've tested around a dozen media from them, and while you are right
> that they use rather different algorithms and NAND chips inside, all
> of them can write to at least 5 erase blocks before getting into
> garbage collection, which is really needed for ext3 file systems.
>
> Contrast this with Kingston cards, which all use the same algorithm
> and can only write data linearly to one erase block at a time, resulting
> in one or two orders of magnitude higher internal write amplification.
>
> Most other vendors are somewhere in between, and you sometimes get
> fake cards that don't do what you expect, such as a bunch of Samsung
> microSDHC cards that I have which are labeled Sandisk on the outside.

Those aren't fakes. That is what I'm trying to get across.

> I've also seen some really cheap noname cards outperform similar-spec'd
> Sandisk cards, both regarding maximum throughput and the garbage collection
> algorithms, but you can't rely on that.

My point is that you can't rely on Sandisk either.

I've been in discussion with both Sandisk and Adata about these issues, as well as constantly testing batches of new SD cards from all major vendors.

Unless you pay a lot extra and order at least 100K, you have no control over what they give you. They don't just change NAND chips, they change the controller chip and its firmware. Frequently. And they don't update either the SKU number, part marking or the identification fields available to software. The manufacturing batch number printed on the outside is the only thing that changes.

How we deal with this is constant testing and getting notification from the manufacturer that they are changing the internals (unfortunately, we aren't willing to pay the premium to have a special SKU).

Cheers,
wad
Re: [OLPC-AU] XO-1 developer key does not work
On Sat, 2011-03-12 at 02:36 -0500, C. Scott Ananian wrote:
> On Sat, Mar 12, 2011 at 2:16 AM, Mikus Grinbergs wrote:
> >> why am I getting different readings for each method?
> >
> > My guess is that file /home/.devkey.html was copied in from some other
> > system, and shows the serial number and UUID of the copied-from system.
>
> It would be interesting to investigate how /home/.devkey.html got onto
> the machine -- ie, what build you used, and how you installed it onto
> the device -- in order to prevent this problem from recurring.

Sounds like one of the side-effects of cloning the machine that had requested a key.

Jerry
Re: [OLPC-AU] XO-1 developer key does not work
On 12 March 2011 18:36, C. Scott Ananian wrote:
> On Sat, Mar 12, 2011 at 2:16 AM, Mikus Grinbergs wrote:
>>> why am I getting different readings for each method?
>>
>> My guess is that file /home/.devkey.html was copied in from some other
>> system, and shows the serial number and UUID of the copied-from system.

All I can say is that this XO has been lying around for a couple of years. I'm not really sure what happened, but at least I know to trust the collection key method over the others. I have another 20+ XO-1s to unlock as well, so that's the best way for me.

Sridhar