2010/1/24 Ted Dunning <ted.dunn...@gmail.com>:
> The other take-away was that threading looks plausible for SGD, but full-on
> map-reduce, except on somewhat randomized shards of features, probably isn't
> that useful.  Even sharding may not be very useful, since different mappers (or
> reducers) may just mostly redo the same work.

Yes, I had the same overall feeling about the map-reducibility of the
training. Good threading/multicore support sounds more like a good
mid-term objective for online learners in Mahout. MapReduce is still
very interesting for feature extraction on large training/testing
datasets (hashing text is still CPU intensive and completely
parallelizable, since each document can be hashed independently).
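
For illustration, here is roughly what the per-document hashing step
looks like in plain Java. This is only a minimal sketch: the class name,
the fixed dimension, the naive tokenization and the use of
String.hashCode() are simplifying assumptions, not what the actual
Mahout encoders do.

    import java.util.Locale;

    /** Minimal hashed bag-of-words extractor (illustrative sketch only). */
    public class HashedTextEncoder {

      private final int numFeatures;

      public HashedTextEncoder(int numFeatures) {
        this.numFeatures = numFeatures;
      }

      /** Turns a document into a fixed-size feature vector via token hashing. */
      public double[] encode(String document) {
        double[] features = new double[numFeatures];
        // naive tokenization; real code would use a proper analyzer
        for (String token : document.toLowerCase(Locale.ENGLISH).split("\\W+")) {
          if (token.isEmpty()) {
            continue;
          }
          // hash the token into one of numFeatures buckets; collisions are accepted
          int index = (token.hashCode() & Integer.MAX_VALUE) % numFeatures;
          features[index] += 1.0;
        }
        return features;
      }

      public static void main(String[] args) {
        double[] v = new HashedTextEncoder(1000)
            .encode("hashing text is CPU intensive but parallelizable");
        int nonZero = 0;
        for (double x : v) {
          if (x > 0) {
            nonZero++;
          }
        }
        System.out.println("non-zero buckets: " + nonZero);
      }
    }

Since no global dictionary is needed, each mapper can hash its own shard
of documents without any coordination, which is why this step maps so
well onto MapReduce.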

> On Sun, Jan 24, 2010 at 5:37 AM, Olivier Grisel
> <olivier.gri...@ensta.org> wrote:
>
>> The main takeaway point is that averaging for linear models is possible
>> but not as interesting as horizontal feature sharding, which
>> experimentally works for both linear and non-linear models.
>>
>> The second takeaway point is that Vowpal Wabbit looks more and more
>> unbeatable :)
>
> Unbeatable perhaps.  But that also makes it pretty important to have in
> Mahout.

Yes, sure. Speaking of which, I did some more work to wrap the online
logistic regression model into a multi-label document classifier:

   http://github.com/ogrisel/mahout/commits/MAHOUT-228

A simple unit test with a toy dataset of around 20 sentences, each
categorized into 0 to 3 categories, confirms that the model converges
towards a good F1 measure. I will now work on a larger dataset in the
examples package based on the Wikipedia extractor.
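
To make the wrapping concrete, here is a minimal one-vs-rest sketch of
what such a multi-label wrapper can look like on top of Mahout's
OnlineLogisticRegression: one independent binary model per label, with
predicted probabilities thresholded at classification time. The class
name, the threshold parameter and the hyper-parameter values are
illustrative assumptions, and the actual MAHOUT-228 branch may be
organized differently.

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.mahout.classifier.sgd.L1;
    import org.apache.mahout.classifier.sgd.OnlineLogisticRegression;
    import org.apache.mahout.math.Vector;

    /**
     * One-vs-rest multi-label wrapper: one binary SGD model per label.
     * Illustrative sketch only, not the MAHOUT-228 implementation.
     */
    public class MultiLabelClassifierSketch {

      private final List<OnlineLogisticRegression> models =
          new ArrayList<OnlineLogisticRegression>();
      private final double threshold;

      public MultiLabelClassifierSketch(int numLabels, int numFeatures, double threshold) {
        this.threshold = threshold;
        for (int i = 0; i < numLabels; i++) {
          // 2 categories per model: "has label i" vs "does not have label i"
          models.add(new OnlineLogisticRegression(2, numFeatures, new L1())
              .lambda(1.0e-4)
              .learningRate(1.0));
        }
      }

      /** Online update: labels[i] is true if the document carries label i. */
      public void train(Vector features, boolean[] labels) {
        for (int i = 0; i < models.size(); i++) {
          models.get(i).train(labels[i] ? 1 : 0, features);
        }
      }

      /** Predicted label set: every label whose estimated probability exceeds the threshold. */
      public boolean[] classify(Vector features) {
        boolean[] predicted = new boolean[models.size()];
        for (int i = 0; i < models.size(); i++) {
          predicted[i] = models.get(i).classifyScalar(features) > threshold;
        }
        return predicted;
      }
    }

One-vs-rest keeps every per-label model a plain binary logistic
regression, which is also why per-label precision/recall and the F1
measure mentioned above are the natural evaluation metrics.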

-- 
Olivier
http://twitter.com/ogrisel - http://code.oliviergrisel.name
