Re: mailbox over HDFS/HBase

2011-05-25 Thread Eric Charles

Yes, let's go this way (I will put some links in the initial GSoC JIRA).

Your exams first! You will have plenty of time once they are finished :)

First, I think you need to get a good grasp of the James
mailbox implementations and the HBase API. The mailbox-hbase implementation
can happen just after the samples/tests you would do.


Tks,
- Eric


On 24/05/2011 21:21, Ioan Eugen Stan wrote:

To summarize the whole discussion above:

- We will use HBase, with HBase API
- we will use [1] to centralize the information about how the emails
are handled (what is immutable, the flags, etc.)
- I will try to define a data model/ schema designed for HBase / NoSQL
storage and submit it to discussion on the Hadoop/HBase mailing list
- start writing some code.


[1] https://issues.apache.org/jira/browse/MAILBOX-72

Note: I am also in the middle of my exam session, so I will not be dedicating all
of my time to the project. The session will end on June 10, but I
plan to have some things working by then.




-
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org



Re: mailbox over HDFS/HBase

2011-05-25 Thread Robert Burrell Donkin
On Wed, May 25, 2011 at 1:10 PM, Eric Charles e...@apache.org wrote:
 Your exams first! You will have plenty of time once they are finished :)

+1

Robert




Re: mailbox over HDFS/HBase

2011-05-25 Thread Robert Burrell Donkin
On Tue, May 24, 2011 at 8:07 PM, Ioan Eugen Stan stan.ieu...@gmail.com wrote:
  (my observation)

 Kind of.. you often see an IMAP client to do some big FETCH on the first
 connect to see if there are changes in the mailbox. Like

 a FETCH 1:* (FLAGS)

 This will hopefully get improved when Apache James IMAP supports the
 CONDSTORE[a] and QRESYNC[b] extensions. But that's on my todo list ;)
 Unfortunately this will need to change the API of the current mailbox release
 (0.2), but that's not something you should care about atm. Just use
 the 0.2 release for your development

 So I guess I should read the IMAP RFC to see how data is going to be
 accessed in order to make the data model just right.

The way clients use IMAP isn't deducible from the RFCs. So don't
spend too much time trying to analyse the RFC...

Robert




Re: mailbox over HDFS/HBase

2011-05-25 Thread Robert Burrell Donkin
On Tue, May 24, 2011 at 12:24 PM, Eric Charles e...@apache.org wrote:
snip

 On 24/05/2011 07:44, Norman wrote:

 snip
 - users usually access the last 50-100 emails (my observation)

 Kind of.. you often see an IMAP client to do some big FETCH on the
 first connect to see if there are changes in the mailbox. Like

 a FETCH 1:* (FLAGS)


 Yes, I regularly see that when I debug some IMAP traffic with Wireshark. The
 full fetch can take some time for large mailboxes...

Even pushing the data over the wire for these FETCHes takes a while. I
had it in mind to use streaming paged retrieval with asynchronous
writing to solve this...

Robert




Re: mailbox over HDFS/HBase

2011-05-25 Thread Eric Charles

Yes, data transfer over the network is for now the main cause of latency.

However, there are still some optimizations to implement for large
queries. For now, retrieval is batched (batches of 100, I think), but a solution
like the one you propose would be better (depending on the mailbox
implementation supporting streaming).
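The paged approach being discussed can be sketched in plain Java. This is illustrative only; the method names and the in-memory backend are assumptions, not the James mailbox API. Flags are loaded and written out in fixed-size pages, so the server never materialises the whole mailbox for a `FETCH 1:* (FLAGS)`.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class PagedFlagsFetch {
    static final int PAGE_SIZE = 100; // batch size mentioned in the thread

    // Simulated backend lookup: returns flag lines for uids in [from, to).
    public static List<String> loadFlags(int from, int to, int total) {
        List<String> page = new ArrayList<>();
        for (int uid = from; uid < to && uid <= total; uid++) {
            page.add(uid + " (FLAGS (\\Seen))");
        }
        return page;
    }

    // Streams "FETCH 1:* (FLAGS)" results page by page to the writer,
    // instead of building the full response in memory. Returns page count.
    public static int fetchAllFlags(int totalMessages, Consumer<String> writer) {
        int pages = 0;
        for (int from = 1; from <= totalMessages; from += PAGE_SIZE) {
            for (String line : loadFlags(from, from + PAGE_SIZE, totalMessages)) {
                writer.accept(line); // could be an asynchronous channel write
            }
            pages++;
        }
        return pages;
    }
}
```

The `Consumer` stands in for the asynchronous response writer Robert mentions; in a real server it would push each page to the socket while the next page loads.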


Tks,
- Eric

On 25/05/2011 16:53, Robert Burrell Donkin wrote:

On Tue, May 24, 2011 at 12:24 PM, Eric Charles e...@apache.org wrote:
snip


On 24/05/2011 07:44, Norman wrote:

snip

- users usually access the last 50-100 emails (my observation)


Kind of.. you often see an IMAP client to do some big FETCH on the
first connect to see if there are changes in the mailbox. Like

a FETCH 1:* (FLAGS)


Yes, I regularly see that when I debug some IMAP traffic with Wireshark. The
full fetch can take some time for large mailboxes...


Even pushing the data over the wire for these FETCHes takes a while. I
had it in mind to use streaming paged retrieval with asynchronous
writing to solve this...

Robert








Re: mailbox over HDFS/HBase

2011-05-24 Thread Eric Charles

On 24/05/2011 07:51, Norman wrote:


2. If we store each folder in a file, we may have fewer performance
issues on read (larger file), but we face the issue that we cannot
alter the content (only append!!). So it does not sound like an option.

Well, we could just keep some kind of info about which mails are deleted and
skip them while reading from the file. This would still need to clean up
deleted messages later somehow. Not sure if it makes sense
given the complexity it will introduce..



Yep, I also thought to maintain a list of expunged/deleted mails per 
mailbox, but that's not the most performant solution.


It's true that SequenceFile [1] only allows append, but MapWritable 
[2] implements java.util.Map, so you've got put, get, remove...


If we have a MapWritable per mailbox, we will need to open/close it 
frequently (based on user SELECT), and this may not be performant (I don't 
know?). Also, with this approach, we are more in a KeyValue storage 
approach, and we may be better off taking a real KeyValue store to get 
all the needed functionality (scan, ...).
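As a rough illustration of the constraint being discussed (plain Java, not the Hadoop SequenceFile API; all names here are made up): an append-only message log per mailbox with a tombstone set of deleted uids, so reads skip deleted mail, plus a later compaction pass that rewrites the live messages into a new file.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class AppendOnlyMailbox {
    private final List<String> log = new ArrayList<>();    // entry i holds uid i+1
    private final Set<Integer> deleted = new HashSet<>();  // tombstones

    // Append is the only write operation, mirroring the HDFS restriction.
    public int append(String message) {
        log.add(message);
        return log.size(); // uid of the stored message
    }

    // Delete never rewrites the log; it only records a tombstone.
    public void delete(int uid) { deleted.add(uid); }

    // Reads skip tombstoned entries, as proposed in the thread.
    public List<String> readAll() {
        List<String> live = new ArrayList<>();
        for (int uid = 1; uid <= log.size(); uid++) {
            if (!deleted.contains(uid)) live.add(log.get(uid - 1));
        }
        return live;
    }

    // "Cleanup deleted messages later somehow": rewrite into a fresh log.
    public AppendOnlyMailbox compact() {
        AppendOnlyMailbox fresh = new AppendOnlyMailbox();
        for (String m : readAll()) fresh.append(m);
        return fresh;
    }
}
```

The cost Norman worries about shows up in `compact()`: reclaiming space means rewriting the whole mailbox file, which is exactly the bookkeeping HBase already does internally.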


Tks,
- Eric


[1] 
http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/io/SequenceFile.Writer.html
[2] 
http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/io/MapWritable.html






Re: mailbox over HDFS/HBase

2011-05-24 Thread Eric Charles

See my comments inline.
Tks,
- Eric

On 24/05/2011 07:44, Norman wrote:

snip


First, about email:
- emails are essentially immutable. Once created they are not modified.
- meta information is read/write (like the status - read/unread);
maybe other stuff, I still have to get up to date.

The only read-write you need to care about are the FLAGS. Nothing else
is allowed to get changed once the mail is stored.
So you have:
- Append message + metadata
- Delete message + metadata
- Change FLAGS which is stored as metadata



Very good summary :)
I would also add the mailbox to the message metadata.
Maybe it's implicit when you say message, but depending on the choices, the 
way we'll implement it may vary completely. The mailbox of a message is 
r/w because a user can move a message from one mailbox to another.
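The split described here could be modelled like this (a hypothetical sketch, not the James datamodel): the content is immutable once stored, while the FLAGS and the owning mailbox are the only read-write parts, so a move rewrites metadata only.

```java
import java.util.EnumSet;
import java.util.Set;

public class StoredMessage {
    public enum Flag { SEEN, ANSWERED, FLAGGED, DELETED, DRAFT }

    private final byte[] content;   // immutable after append
    private String mailbox;         // r/w: a user can move the message
    private final Set<Flag> flags = EnumSet.noneOf(Flag.class); // r/w

    public StoredMessage(String mailbox, byte[] content) {
        this.mailbox = mailbox;
        this.content = content.clone(); // defensive copy: nobody mutates it later
    }

    public void setFlag(Flag f) { flags.add(f); }
    public void moveTo(String other) { this.mailbox = other; }

    public String mailbox() { return mailbox; }
    public boolean isSet(Flag f) { return flags.contains(f); }
    public int size() { return content.length; }
}
```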


snip



- you can delete an email, but other than that you can't modify it.
- users usually access the last 50-100 emails (my observation)


Kind of.. you often see an IMAP client to do some big FETCH on the
first connect to see if there are changes in the mailbox. Like

a FETCH 1:* (FLAGS)



Yes, I regularly see that when I debug some IMAP traffic with Wireshark. 
The full fetch can take some time for large mailboxes...



This will hopefully get improved when Apache James IMAP supports the
CONDSTORE[a] and QRESYNC[b] extensions. But that's on my todo list ;)
Unfortunately this will need to change the API of the current mailbox
release (0.2), but that's not something you should care about atm. Just use
the 0.2 release for your development



Yes, let's stick to the 0.2 release so as not to be impacted by upcoming 
changes in trunk.





About HDFS:

- is designed to work well with large data, on the order of magnitude
of GB and beyond. It has a block size of 64 MB. This means fewer disk
seeks when reading a file, because the file is less fragmented.
Bulk reads and writes enable HDFS to perform better: all the
data is in one place, and you have a small number of open file
handles, which means less overhead.
- does not provide random file alteration. HDFS only supports appending
information at the end of an existing file. If you need to modify a
file, the only way to do it is to create a new file with the
modifications.


I thought we could do something similar to maildir, which uses the
filename as a meta-data container.
See [c] and [d]. Not sure about the small file problem here ;)



Yes, I have no experience either with many small files in Hadoop, but let's 
trust what the Hadoop community says and writes :)



HBase:

- is a NoSQL implementation over Hadoop.
- provides the user a way to store information and access it very
easily based on some keys.
- provides a way to modify the files by keeping a log, similar to the
way journal file systems work: it appends all the modifications to a
log file. When certain conditions are met the log file is merged back
into the „database”.



HBase sounds like a good fit ...



+1

HBase is not difficult to install, it is well documented, and the client API is 
very well done. Facebook's messaging system is built upon it.



My conclusions:

Because emails are small and parts of them need to be
changed, storing them in a filesystem that was designed for large
files and does not provide a way to modify these files is not a
sensible thing to do.

I see a couple of choices:

1. we use HBase
2. we keep the meta information in a separate database, outside
Hadoop, but things will not scale very well.
3. we design it on top of HDFS, but essentially we (I) will end up
solving the same problems that HBase solved


Using a separate database for meta-information will only work if we can
store it in a distributed fashion. Otherwise it
just kills all the benefits of Hadoop. Maybe storing the meta-data
in a distributed SOLR index could do the trick, not sure.


The easiest and most straightforward solution is to use HBase. There is
a paper [3] that shows some results with an email store based on
Cassandra, so the approach is proven to work.

I wrote a prototype which uses Cassandra for Apache James Mailbox, which
is not open source (yet?). It works quite well but lacks any
locking, so you need some distributed locking
service like Hazelcast [e]. So using NoSQL should work without problems,
you just need to keep in mind how the data is accessed.


I am thinking of using Gora and avoiding using the HBase API directly.
This would ensure that James could use any NoSQL storage that Gora can
access. What holds me back is that Gora does not seem to be very
active, and it's also incubating, so I may run into things that are not
easy to get out of.


Maybe it's just me, but I still think an ORM mapper just cannot work well
in the NoSQL world, as you need to design your storage around the way you
access the data. I would probably just use the HBase API.


What do you think?


[1] http://comments.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/26022
[2] http://vimeo.com/search/videos/search:cloudera/st/48b36a32
[3] 

Re: mailbox over HDFS/HBase

2011-05-24 Thread Eric Charles

On 24/05/2011 07:44, Norman wrote:

I wrote a prototype which uses Cassandra for Apache James Mailbox, which
is not open source (yet?). It works quite well but lacks any
locking, so you need some distributed locking
service like Hazelcast [e]. So using NoSQL should work without problems,
you just need to keep in mind how the data is accessed.



mailbox-cassandra, interesting ;)


I am thinking of using Gora and avoiding using the HBase API directly.
This would ensure that James could use any NoSQL storage that Gora can
access. What holds me back is that Gora does not seem to be very
active, and it's also incubating, so I may run into things that are not
easy to get out of.


Maybe it's just me, but I still think an ORM mapper just cannot work well
in the NoSQL world, as you need to design your storage around the way you
access the data. I would probably just use the HBase API.


+1




What do you think?


[1] http://comments.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/26022
[2] http://vimeo.com/search/videos/search:cloudera/st/48b36a32
[3] http://ewh.ieee.org/r6/scv/computer/nfic/2009/IBM-Jun-Rao.pdf

Hope it helps,
Norman

[a] http://tools.ietf.org/html/rfc4551
[b] http://tools.ietf.org/search/rfc5162
[c] http://cr.yp.to/proto/maildir.html
[d] http://www.courier-mta.org/imap/README.maildirquota.html
[e] http://www.hazelcast.com/








Re: mailbox over HDFS/HBase

2011-05-24 Thread Ioan Eugen Stan
 (my observation)

 Kind of.. you often see an IMAP client to do some big FETCH on the first
 connect to see if there are changes in the mailbox. Like

 a FETCH 1:* (FLAGS)

 This will hopefully get improved when Apache James IMAP supports the
 CONDSTORE[a] and QRESYNC[b] extensions. But that's on my todo list ;)
 Unfortunately this will need to change the API of the current mailbox release
 (0.2), but that's not something you should care about atm. Just use
 the 0.2 release for your development

So I guess I should read the IMAP RFC to see how data is going to be
accessed in order to make the data model just right.


 Hope it helps,
 Norman

 [a] http://tools.ietf.org/html/rfc4551
 [b] http://tools.ietf.org/search/rfc5162
 [c] http://cr.yp.to/proto/maildir.html
 [d] http://www.courier-mta.org/imap/README.maildirquota.html
 [e] http://www.hazelcast.com/


Oh goodie, more reading. I hope this doesn't become a trend :D.

-- 
Ioan-Eugen Stan




Re: mailbox over HDFS/HBase

2011-05-24 Thread Ioan Eugen Stan
 So:
 - mailbox (immutable: create/read/delete/query)
 - message (immutable: create/read/delete/query)
 - message flags (create/read/update/delete/query)
 - subscriptions (create/read/update/delete/query)

 The mailbox and message datamodel is defined in [1] (please note that
 Header and Property are clearly separate objects).

 The subscription datamodel is defined in [2].

I will check that too.

To summarize the whole discussion above:

- We will use HBase, with HBase API
- we will use [1] to centralize the information about how the emails
are handled (what is immutable, the flags, etc.)
- I will try to define a data model/ schema designed for HBase / NoSQL
storage and submit it to discussion on the Hadoop/HBase mailing list
- start writing some code.


[1] https://issues.apache.org/jira/browse/MAILBOX-72

Note: I am also in the middle of my exam session, so I will not be dedicating all
of my time to the project. The session will end on June 10, but I
plan to have some things working by then.

-- 
Ioan-Eugen Stan




mailbox over HDFS/HBase

2011-05-23 Thread Ioan Eugen Stan
Hello,


I had some discussions with Eric about what will be the best way to
implement the mailbox over HDFS and we agreed that it's better to
inform the list about the situation.

The project idea that I applied for is to implement James mailbox
storage over Hadoop HDFS and one of the first steps was to find the
best way to interact with Hadoop. So I just did that. I have spent the
last week or so trying to figure out the best way to implement the
mailbox over Hadoop. I found the training videos from Cloudera to be
very helpful [2].
I also wrote on the Hadoop mailing list to ask them for an opinion
(before watching the videos). You can read the discussion here [1].

I have come to the conclusion that there is no easy way to implement
the mailbox directly over HDFS, and my opinion is to use HBase, either
directly or over Gora. I will support my statement with some of the
things I found out.

First, about email:
- emails are essentially immutable. Once created they are not modified.
- meta information is read/write (like the status - read/unread);
maybe other stuff, I still have to get up to date.
- you can delete an email, but other than that you can't modify it.
- users usually access the last 50-100 emails (my observation)

About HDFS:

- is designed to work well with large data, on the order of magnitude
of GB and beyond. It has a block size of 64 MB. This means fewer disk
seeks when reading a file, because the file is less fragmented.
Bulk reads and writes enable HDFS to perform better: all the
data is in one place, and you have a small number of open file
handles, which means less overhead.
- does not provide random file alteration. HDFS only supports appending
information at the end of an existing file. If you need to modify a
file, the only way to do it is to create a new file with the
modifications.


HBase:

- is a NoSQL implementation over Hadoop.
- provides the user a way to store information and access it very
easily based on some keys.
- provides a way to modify the files by keeping a log, similar to the
way journal file systems work: it appends all the modifications to a
log file. When certain conditions are met the log file is merged back
into the „database”.
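The log-merge behaviour described above can be mimicked in a few lines. This is a toy sketch; real HBase uses a write-ahead log, MemStores, and HFile compactions. Edits append to a log, reads consult the log before the sorted store, and the log is merged back once a threshold is reached.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class LogMergedStore {
    private final TreeMap<String, String> database = new TreeMap<>(); // sorted store
    private final List<String[]> log = new ArrayList<>();             // pending edits
    private final int flushThreshold;

    public LogMergedStore(int flushThreshold) { this.flushThreshold = flushThreshold; }

    // Writes are appended to the log, which is cheap; no in-place update.
    public void put(String key, String value) {
        log.add(new String[] { key, value });
        if (log.size() >= flushThreshold) flush();
    }

    // Reads check the log newest-first, then fall back to the database.
    public String get(String key) {
        for (int i = log.size() - 1; i >= 0; i--) {
            if (log.get(i)[0].equals(key)) return log.get(i)[1];
        }
        return database.get(key);
    }

    // "When certain conditions are met the log file is merged back".
    public void flush() {
        for (String[] edit : log) database.put(edit[0], edit[1]);
        log.clear();
    }

    public int pendingEdits() { return log.size(); }
}
```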


My conclusions:

Because emails are small and parts of them need to be
changed, storing them in a filesystem that was designed for large
files and does not provide a way to modify these files is not a
sensible thing to do.

I see a couple of choices:

1. we use HBase
2. we keep the meta information in a separate database, outside
Hadoop, but things will not scale very well.
3. we design it on top of HDFS, but essentially we (I) will end up
solving the same problems that HBase solved

The easiest and most straightforward solution is to use HBase. There is
a paper [3] that shows some results with an email store based on
Cassandra, so the approach is proven to work.
I am thinking of using Gora and avoiding using the HBase API directly.
This would ensure that James could use any NoSQL storage that Gora can
access. What holds me back is that Gora does not seem to be very
active, and it's also incubating, so I may run into things that are not
easy to get out of.


What do you think?


[1] http://comments.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/26022
[2] http://vimeo.com/search/videos/search:cloudera/st/48b36a32
[3] http://ewh.ieee.org/r6/scv/computer/nfic/2009/IBM-Jun-Rao.pdf
-- 
Ioan-Eugen Stan




Re: mailbox over HDFS/HBase

2011-05-23 Thread Eric Charles

Hi,

For the immutable mails:

1. If we store each mail in a file, we don't have to alter it, but we 
face a performance issue because reading a small file in Hadoop seems 
expensive (not performant).


2. If we store each folder in a file, we may have fewer performance issues 
on read (larger file), but we face the issue that we cannot alter the 
content (only append!!). So it does not sound like an option.


For associated metadata, maildir offers this functionality by using the 
file name as a metadata container. On change, the file is renamed, adding 
some flags,... which is possible with Hadoop ([4] for example operations 
on HDFS). Once again, at the price of performance for small sizes.
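For reference, the maildir trick looks roughly like this (a simplification; see [c] in the thread for the real spec): the flags live in the file name after the ":2," marker, so setting a flag is just a rename.

```java
public class MaildirName {
    // Adds a one-letter maildir flag (e.g. 'S' for Seen) to a file name,
    // keeping the flag list sorted as the maildir convention requires.
    // The renamed result is what the store would rename the file to.
    public static String addFlag(String name, char flag) {
        int i = name.indexOf(":2,");
        String base = i < 0 ? name : name.substring(0, i);
        String flags = i < 0 ? "" : name.substring(i + 3);
        if (flags.indexOf(flag) >= 0) return name; // flag already set
        char[] all = (flags + flag).toCharArray();
        java.util.Arrays.sort(all);
        return base + ":2," + new String(all);
    }
}
```

On HDFS this would translate into a rename operation per flag change, which is exactly where the small-file performance question comes back in.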


As Robert suggested in [1], a benchmark could be set up, but we would 
need a realistic cluster (numerous hardware machines with a replication 
factor of 3) and a large dataset (millions of mails) to get some 
representative numbers.


On the possible file formats, we have limited options (Hadoop calls 
these Writables): Text or BytesWritable. There are also file-based 
data structures: SequenceFile or MapFile.


I also answered on [1] asking what Hadoop can offer in regard to the Avro 
format (see also [5] on the protocol buffer, an Avro kind-of, usage at 
Twitter). I don't know if the Avro file format changes anything to the 
exposed considerations...


In this Hadoop approach, we also need to ask how we get/query the 
information. Directly read the Hadoop Writable/File via the io API, or 
use a map/reduce job? The map/reduce job result will be stored in an 
output file which must in its turn be read again, which sounds a bit 
too much to me...


Now, if we find all this too challenging and we are not sure we will 
get a performant solution, HBase for example is a proven solution and 
offers structured storage on top of Hadoop.


There are some ORMs around (like the DataNucleus JDO,...) but the HBase 
native API is rich enough and should do the job for us without an 
additional layer.


I am following the Apache Gora incubating mailing list as it seems to 
have much to offer (persistence towards HBase, Cassandra,.. indexing...), 
but lately the project has seemed quiet. This doesn't mean its 
functionality today is not usable for us.


Another question is about the potential usage of the existing Lucene 
index to help us on the queries (for IMAP, currently in the mailbox-store 
project). This would be a nice solution to use, but today the index is 
local (not distributed). It's a work in progress and can evolve towards 
distribution. I don't think we need to decide on this now, but the 
question will come one day.


Tks,

- Eric

[4] 
http://myjavanotebook.blogspot.com/2008/05/hadoop-file-system-tutorial.html
[5] 
http://www.slideshare.net/kevinweil/protocol-buffers-and-hadoop-at-twitter


On 24/05/2011 00:01, Ioan Eugen Stan wrote:

Hello,


I had some discussions with Eric about what will be the best way to
implement the mailbox over HDFS and we agreed that it's better to
inform the list about the situation.

The project idea that I applied for is to implement James mailbox
storage over Hadoop HDFS and one of the first steps was to find the
best way to interact with Hadoop. So I just did that. I have spent the
last week or so trying to figure out the best way to implement the
mailbox over Hadoop. I found the training videos from Cloudera to be
very helpful [2].
I also wrote on the Hadoop mailing list to ask them for an opinion
(before watching the videos). You can read the discussion here [1].

I have come to the conclusion that there is no easy way to implement
the mailbox directly over HDFS, and my opinion is to use HBase, either
directly or over Gora. I will support my statement with some of the
things I found out.

First, about email:
- emails are essentially immutable. Once created they are not modified.
- meta information is read/write (like the status - read/unread);
maybe other stuff, I still have to get up to date.
- you can delete an email, but other than that you can't modify it.
- users usually access the last 50-100 emails (my observation)

About HDFS:

- is designed to work well with large data, on the order of magnitude
of GB and beyond. It has a block size of 64 MB. This means fewer disk
seeks when reading a file, because the file is less fragmented.
Bulk reads and writes enable HDFS to perform better: all the
data is in one place, and you have a small number of open file
handles, which means less overhead.
- does not provide random file alteration. HDFS only supports appending
information at the end of an existing file. If you need to modify a
file, the only way to do it is to create a new file with the
modifications.


HBase:

- is a NoSQL implementation over Hadoop.
- provides the user a way to store information and access it very
easily based on some keys.
- provides a way to modify the files by keeping a log, similar to the
way journal file systems work: it appends all the modifications to a
log file. 

Re: mailbox over HDFS/HBase

2011-05-23 Thread Norman

Hi there,

comments inside...

On 24.05.2011 00:01, Ioan Eugen Stan wrote:

Hello,


I had some discussions with Eric about what will be the best way to
implement the mailbox over HDFS and we agreed that it's better to
inform the list about the situation.

The project idea that I applied for is to implement James mailbox
storage over Hadoop HDFS and one of the first steps was to find the
best way to interact with Hadoop. So I just did that. I have spent the
last week or so trying to figure out the best way to implement the
mailbox over Hadoop. I found the training videos from Cloudera to be
very helpful [2].
I also wrote on the Hadoop mailing list to ask them for an opinion
(before watching the videos). You can read the discussion here [1].


Ok I had a look at this..


I have come to the conclusion that there is no easy way to implement
the mailbox directly over HDFS, and my opinion is to use HBase, either
directly or over Gora. I will support my statement with some of the
things I found out.

First, about email:
- emails are essentially immutable. Once created they are not modified.
- meta information is read/write (like the status - read/unread);
maybe other stuff, I still have to get up to date.
The only read-write you need to care about are the FLAGS. Nothing else 
is allowed to get changed once the mail is stored.

So you have:
 - Append message + metadata
 - Delete message + metadata
 - Change FLAGS which is stored as metadata


- you can delete an email, but other than that you can't modify it.
- users usually access the last 50-100 emails (my observation)


Kind of.. you often see an IMAP client to do some big FETCH on the 
first connect to see if there are changes in the mailbox. Like

a FETCH 1:* (FLAGS)

This will hopefully get improved when Apache James IMAP supports the 
CONDSTORE[a] and QRESYNC[b] extensions. But that's on my todo list ;)
Unfortunately this will need to change the API of the current mailbox 
release (0.2), but that's not something you should care about atm. Just use
the 0.2 release for your development
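To illustrate what CONDSTORE-style resynchronisation would buy (a hypothetical in-memory sketch, not the James IMAP code): each flag update bumps a modification sequence, and a client that remembers its last known MODSEQ gets back only the changed flags instead of a full FETCH 1:* (FLAGS).

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CondstoreSketch {
    private final Map<Long, String> flagsByUid = new LinkedHashMap<>();
    private final Map<Long, Long> modseqByUid = new LinkedHashMap<>();
    private long highestModseq = 0;

    // Every flag change records a new, strictly increasing modseq.
    public void setFlags(long uid, String flags) {
        flagsByUid.put(uid, flags);
        modseqByUid.put(uid, ++highestModseq);
    }

    // Roughly what "FETCH 1:* (FLAGS) (CHANGEDSINCE modseq)" asks for:
    // only flags whose modseq is newer than the client's last known value.
    public Map<Long, String> changedSince(long modseq) {
        Map<Long, String> changed = new LinkedHashMap<>();
        for (Map.Entry<Long, Long> e : modseqByUid.entrySet()) {
            if (e.getValue() > modseq) {
                changed.put(e.getKey(), flagsByUid.get(e.getKey()));
            }
        }
        return changed;
    }

    public long highestModseq() { return highestModseq; }
}
```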



About HDFS:

- is designed to work well with large data, on the order of magnitude
of GB and beyond. It has a block size of 64 MB. This means fewer disk
seeks when reading a file, because the file is less fragmented.
Bulk reads and writes enable HDFS to perform better: all the
data is in one place, and you have a small number of open file
handles, which means less overhead.
- does not provide random file alteration. HDFS only supports appending
information at the end of an existing file. If you need to modify a
file, the only way to do it is to create a new file with the
modifications.

I thought we could do something similar to maildir, which uses the 
filename as a meta-data container.
See [c] and [d]. Not sure about the small file problem here ;)


HBase:

- is a NoSQL implementation over Hadoop.
- provides the user a way to store information and access it very
easily based on some keys.
- provides a way to modify the files by keeping a log, similar to the
way journal file systems work: it appends all the modifications to a
log file. When certain conditions are met the log file is merged back
into the „database”.



HBase sounds like a good fit ...


My conclusions:

Because emails are small and parts of them need to be
changed, storing them in a filesystem that was designed for large
files and does not provide a way to modify these files is not a
sensible thing to do.

I see a couple of choices:

1. we use HBase
2. we keep the meta information in a separate database, outside
Hadoop, but things will not scale very well.
3. we design it on top of HDFS, but essentially we (I) will end up
solving the same problems that HBase solved

Using a separate database for meta-information will only work if we can 
store it in a distributed fashion. Otherwise it
just kills all the benefits of Hadoop. Maybe storing the meta-data 
in a distributed SOLR index could do the trick, not sure.



The easiest and most straightforward solution is to use HBase. There is
a paper [3] that shows some results with an email store based on
Cassandra, so the approach is proven to work.

I wrote a prototype which uses Cassandra for Apache James Mailbox, which 
is not open source (yet?). It works quite well but lacks any 
locking, so you need some distributed locking
service like Hazelcast [e]. So using NoSQL should work without problems, 
you just need to keep in mind how the data is accessed.



I am thinking of using Gora and avoiding using the HBase API directly.
This would ensure that James could use any NoSQL storage that Gora can
access. What holds me back is that Gora does not seem to be very
active, and it's also incubating, so I may run into things that are not
easy to get out of.


Maybe it's just me, but I still think an ORM mapper just cannot work well 
in the NoSQL world, as you need to design your storage around the way you 
access the data. I would probably just use the HBase API.
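One concrete example of designing storage around the access pattern mentioned earlier ("users usually access the last 50-100 emails"): store the uid inverted in the row key, so the newest messages sort first and a "latest N" read becomes a short scan from the mailbox prefix. Purely illustrative, not an agreed James schema.

```java
public class MessageRowKey {
    // Builds a lexicographically sortable row key of the form
    // user/mailbox/invertedUid, zero-padded so string order equals
    // numeric order. Inverting the uid makes the newest mail sort first.
    public static String forMessage(String user, String mailbox, long uid) {
        long inverted = Long.MAX_VALUE - uid;
        return String.format("%s/%s/%019d", user, mailbox, inverted);
    }
}
```

With keys like this, fetching the latest 100 messages of a mailbox is a scan starting at the `user/mailbox/` prefix with a limit of 100, which matches the dominant read pattern without touching older rows.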



What do 

Re: mailbox over HDFS/HBase

2011-05-23 Thread Norman

Hi Eric,

comments inside...

On 24.05.2011 06:08, Eric Charles wrote:

Hi,

For the immutable mails:

1. If we store each mail in a file, we don't have to alter it, but we 
face a performance issue because reading a small file in Hadoop seems 
expensive (not performant).

Seems like this, yeah..

2. If we store each folder in a file, we may have fewer performance 
issues on read (larger file), but we face the issue that we cannot 
alter the content (only append!!). So it does not sound like an option.

Well, we could just keep some kind of info about which mails are deleted and 
skip them while reading from the file. This would still need to clean up 
deleted messages later somehow. Not sure if it makes sense
given the complexity it will introduce..




For associated metadata, maildir offers this functionality by using 
the file name as a metadata container. On change, the file is renamed, 
adding some flags,... which is possible with Hadoop ([4] for example 
operations on HDFS). Once again, at the price of performance for small 
sizes.


As Robert suggested in [1], a benchmark could be set up, but we would 
need a realistic cluster (numerous hardware machines with a replication 
factor of 3) and a large dataset (millions of mails) to get some 
representative numbers.


On the possible file formats, we have limited options (Hadoop calls 
these Writables): Text or BytesWritable. There are also file-based 
data structures: SequenceFile or MapFile.


I also answered on [1] asking what Hadoop can offer in regard to the Avro 
format (see also [5] on the protocol buffer, an Avro kind-of, usage at 
Twitter). I don't know if the Avro file format changes anything to the 
exposed considerations...


In this Hadoop approach, we also need to ask how we get/query the 
information. Directly read the Hadoop Writable/File via the io API, or 
use a map/reduce job? The map/reduce job result will be stored in an 
output file which must in its turn be read again, which sounds a bit 
too much to me...


Now, if we find all this too challenging and we are not sure we will 
get a performant solution, HBase for example is a proven solution and 
offers structured storage on top of Hadoop.


There are some ORMs around (like the DataNucleus JDO,...) but the HBase 
native API is rich enough and should do the job for us without an 
additional layer.


+1, for no ORM ;)



I am following the Apache Gora incubating mailing list as it seems to 
have much to offer (persistence towards HBase, Cassandra,.. 
indexing...), but lately the project has seemed quiet. This 
doesn't mean its functionality today is not usable for us.


Another question is about the potential usage of the existing Lucene 
index to help us on the queries (for IMAP, currently in the mailbox-store 
project). This would be a nice solution to use, but today the index is 
local (not distributed). It's a work in progress and can evolve 
towards distribution. I don't think we need to decide on this now, but 
the question will come one day.

Unfortunately the Lucene index is not complete yet, it's still on my todo 
list ;)




Tks,

- Eric

[4] 
http://myjavanotebook.blogspot.com/2008/05/hadoop-file-system-tutorial.html
[5] 
http://www.slideshare.net/kevinweil/protocol-buffers-and-hadoop-at-twitter


On 24/05/2011 00:01, Ioan Eugen Stan wrote:

Hello,


I had some discussions with Eric about what will be the best way to
implement the mailbox over HDFS and we agreed that it's better to
inform the list about the situation.

The project idea that I applied for is to implement James mailbox
storage over Hadoop HDFS and one of the first steps was to find the
best way to interact with Hadoop. So I just did that. I have spent the
last week or so trying to figure out the best way to implement the
mailbox over Hadoop. I found the training videos from Cloudera to be
very helpful [2].
I also wrote on the Hadoop mailing list to ask them for an opinion
(before watching the videos). You can read the discussion here [1].

I have come to the conclusion that there is no easy way to implement
the mailbox directly over HDFS, and my opinion is to use HBase, either
directly or over Gora. I will support my statement with some of the
things I found out.

First, about email:
- emails are essentially immutable. Once created they are not modified.
- meta information is read/write (like the status - read/unread);
maybe other stuff, I still have to get up to date.
- you can delete an email, but other than that you can't modify it.
- users usually access the last 50-100 emails (my observation)

About HDFS:

- is designed to work well with large data, on the order of magnitude
of GB and beyond. It has a block size of 64 MB. This means fewer disk
seeks when reading a file, because the file is less fragmented.
Bulk reads and writes enable HDFS to perform better: all the
data is in one place, and you have a small number of open file
handles, which means less overhead.
- does not provide random file alteration. HDFS only