I didn't realize Ubuntu synchronized from testing instead of unstable. Will you
please consider a high priority upload to reduce the delay to 3 days?
If it helps
influence your decision, I am a Debian Developer myself. Also, I have appended
python test snippets as requested.
from pygame import *
Besides that small snippet (which may not really hit SDL much) I've written a
real application that also exercises all sort of pygame stuff; sound, windows,
blitting, full-screen mode, etc. Works fine for me.
I'll help as best I can, but let us please get what we have now in
unstable. Do you want me to NMU?
Vincent, I'm concerned about timing. Ubuntu will snapshot Debian on January
12th for their next long term release. If faster scaling is not in place
before then, it will take two extra years to percolate to a derivative
distribution that I care about. Please consider having Debian deploy ahead
of
To clarify, I care a lot about Debian. I also care about a particular
derivative distribution.
--
To UNSUBSCRIBE, email to debian-bugs-dist-requ...@lists.debian.org
with a subject of unsubscribe.
I tried submitting a bug first, messed it up somehow, then resorted
to direct email instead of figuring it out. I have not contacted upstream,
but that is a very good idea. I'll leave that in your hands since Debian
package maintainers invariably have a great working relationship with their
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Format: 1.8
Date: Thu, 08 Dec 2011 16:59:38 -0800
Source: jhove
Binary: jhove
Architecture: source all
Version: 1.6+dfsg-1
Distribution: unstable
Urgency: low
Maintainer: Jeff Breidenbach j...@debian.org
Changed-By: Jeff Breidenbach j...@debian.org
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Format: 1.8
Date: Thu, 08 Dec 2011 16:28:27 -0800
Source: leptonlib
Binary: libleptonica-dev libleptonica leptonica-progs
Architecture: source amd64
Version: 1.68-5
Distribution: unstable
Urgency: low
Maintainer: Jeff Breidenbach j...@debian.org
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Format: 1.8
Date: Thu, 08 Dec 2011 16:34:49 -0800
Source: libwebp
Binary: libwebp-dev libwebp2 webp
Architecture: source amd64
Version: 0.1.3-2
Distribution: unstable
Urgency: low
Maintainer: Jeff Breidenbach j...@debian.org
Changed-By: Jeff
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Format: 1.8
Date: Thu, 08 Dec 2011 16:49:55 -0800
Source: perceptualdiff
Binary: perceptualdiff
Architecture: source amd64
Version: 1.1.1-1
Distribution: unstable
Urgency: low
Maintainer: Jeff Breidenbach j...@debian.org
Changed-By: Jeff Breidenbach j
Julien, thanks for the education, Mehdi, thanks for the NMU. Luk
thanks for being helpful.
And yes, I ... oddly enough ... knew about the SONAME bump in libwebp.
I'll see what I can improve during my next Leptonica upload, maybe at
end of year or early January.
NMUs are always welcome for my
I'm pleased to announce The Mail Archive has passed the 100 million
message mark this past week. It took 13 years and eight generations of
hardware, but we made it.
-Jeff
--
To unsubscribe, send mail to gossip-unsubscr...@jab.org.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Format: 1.8
Date: Mon, 31 Oct 2011 10:15:38 -0700
Source: libwebp
Binary: libwebp-dev libwebp2 webp
Architecture: source amd64
Version: 0.1.3-1
Distribution: unstable
Urgency: low
Maintainer: Jeff Breidenbach j...@debian.org
Changed-By: Jeff
JB> Specifically, I want to blackhole any messages larger
JB> than size X, unless the List-Id field is set to A, B, or C in which
JB> case I'd like the limit raised to Y.
DL> Well, he did say 'blackhole', so in the data acl, a discard stanza can
DL> be added that tests for the size X and list id != what
message_size_limit = ${if match_ip{$sender_host_address}{+printer_ips}{200M}{50M}}
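Following the discard-stanza idea quoted above, such a rule might look roughly like this. This is an untested sketch: the ACL name, the exempt List-Id pattern, and the exact size test are assumptions, not a working configuration.

```text
acl_check_data:
  # hypothetical: silently drop oversized mail unless List-Id matches
  # one of the exempt lists (placeholders listA/listB/listC)
  discard
    condition  = ${if >{$message_size}{50M}}
    !condition = ${if match{$h_list-id:}{listA|listB|listC}}
  accept
```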
Can one vary the size limit based on the List-Id: field? There are a
few attachment heavy mailing lists that I would like to raise the
limit for.
-Jeff
Mailman has a setting for what you want
I am not running mailman, this is all about Exim receiving messages
from elsewhere. Specifically, I want to blackhole any messages larger
than size X, unless the List-Id field is set to A,B, or C in which
case I'd like the limit raised to Y.
-Jeff
Feel free to NMU.
this is the way these sensors are built. There are 'dead' pixels at both
ends. There is nothing you can do about it.
Stef, can you recommend a SANE compatible sensor that goes to the
edge? OptiBook 3600? Something else?
June has been a weird month. Some of the automatic software programs stopped
running, resulting in many search indexes falling behind. Plus we had a
crash this morning causing about 90 minutes downtime. Crazy!
I think I've finally traced the problem to a May 30 operating system update.
It
On Sun, Jun 12, 2011 at 3:11 PM, e-letter inp...@gmail.com wrote:
Unable to use the search feature, e.g. search for text or a
returns 0 results, clearly incorrect.
Works for me, from both the home page and in an individual archive.
Can you please supply the exact search URL that is giving
I assume the indexing is not running at their end for some reason.
That is exactly correct. If I run it by hand it works fine, and that go
link will resolve now. Still looking into why the cron job that kicks
off indexing had trouble. Thanks for the problem report.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Format: 1.8
Date: Wed, 13 Apr 2011 22:19:31 -0700
Source: leptonlib
Binary: libleptonica-dev libleptonica leptonica-progs
Architecture: source amd64
Version: 1.68-4
Distribution: unstable
Urgency: low
Maintainer: Jeff Breidenbach j...@debian.org
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Format: 1.8
Date: Wed, 13 Apr 2011 15:14:03 -0700
Source: libwebp
Binary: libwebp-dev libwebp0 webp
Architecture: source amd64
Version: 0.1.2-1
Distribution: unstable
Urgency: low
Maintainer: Jeff Breidenbach j...@debian.org
Changed-By: Jeff
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Format: 1.8
Date: Mon, 28 Feb 2011 17:25:17 -0800
Source: libwebp
Binary: libwebp-dev libwebp0 webp
Architecture: source amd64
Version: 0.1-1
Distribution: unstable
Urgency: low
Maintainer: Jeff Breidenbach j...@debian.org
Changed-By: Jeff Breidenbach
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Format: 1.8
Date: Tue, 15 Mar 2011 15:53:55 -0700
Source: leptonlib
Binary: libleptonica-dev libleptonica leptonica-progs
Architecture: source amd64
Version: 1.68-2
Distribution: unstable
Urgency: low
Maintainer: Jeff Breidenbach j...@debian.org
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Format: 1.8
Date: Tue, 15 Mar 2011 17:08:46 -0700
Source: leptonlib
Binary: libleptonica-dev libleptonica leptonica-progs
Architecture: source amd64
Version: 1.68-3
Distribution: unstable
Urgency: low
Maintainer: Jeff Breidenbach j...@debian.org
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Format: 1.8
Date: Mon, 14 Mar 2011 14:15:53 -0700
Source: leptonlib
Binary: libleptonica-dev libleptonica leptonica-progs
Architecture: source amd64
Version: 1.68-1
Distribution: unstable
Urgency: low
Maintainer: Jeff Breidenbach j...@debian.org
Perhaps it's time for us to revisit [Lucene for site-wide search]
A few weeks ago I tested Lucene's ability to search across multiple
indexes (MultiSearcher) and it is hopelessly slow; queries take 5
seconds across just a few hundred indexes.
Right now I'm trying index merging
I was doing some experiments today, and managed to briefly knock over
a server in the process. I looked at searching multiple indexes
(Lucene's MultiSearcher) and merging
(IndexWriter.addIndexesNoOptimize). The former is unusably slow. The
latter seems to be on track for about 6 hours if the
Oops, sorry for the more-or-less duplicate message. The extra factor
of 2 in time is because the temporary files are turning out twice as
big as I was expecting. Earl, good suggestion, and no we haven't
explored it (yet).
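For intuition about why consulting many small indexes is slower than one merged index, here is a toy sketch in plain Python (stdlib only, not Lucene): a multi-index search repeats a sorted-merge of per-index hits on every query, which is the step a physically merged index (what addIndexesNoOptimize achieves) has already paid for once.

```python
import heapq

# three toy per-list "indexes": sorted doc ids matching one term in each
indexes = [[1, 5, 9], [2, 5, 8], [3, 7, 9]]

# multi-index search must consult every index and merge the sorted hits;
# with hundreds of indexes the per-index overhead is paid hundreds of times
merged = list(heapq.merge(*indexes))

# after a physical merge, the same hits live in one sorted structure and
# can be read in a single pass
assert merged == sorted(d for idx in indexes for d in idx)
```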
That diffstat is certainly much larger than can be accepted at this stage
in the release. Have you contacted the security team about these? It's
possible to look at getting an update in after the release which fixes
these specific issues.
I contacted the security team January 9th, no response.
Progress report:
We are receiving your messages but our system is failing to recognize
them as belonging to a new list. It took me a while to find them since
gg1 is not logged the same way as regular inbound messages. I will
take the sorting engine out back into the parking lot and try to
talk
Fixed faster than expected. Only a few hundred messages affected
across the entire corpus as far as I can tell, and they are all being
dealt with. One question: the gg1 address is really designed for
Google Groups, not for other stuff. Did you try the regular archival
address first and get some
Ping.
On Sun, Jan 16, 2011 at 6:19 PM, Jeff Breidenbach j...@jab.org wrote:
diffstat: 61 files changed, 1502 insertions(+), 657 deletions(-)
Is all of that necessary to fix the security issues?
No.
However, I do not have the ability to isolate (and especially validate)
just the security fixes. Additionally, it is a conservative release that
should not break any existing
Not touching package due to block request by freeze (contact debian-release
if update is needed)
This mhonarc release fixes multiple security problems. Please propagate it
everywhere.
http://security-tracker.debian.org/tracker/source-package/mhonarc
-Jeff (the package maintainer)
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Format: 1.8
Date: Sun, 09 Jan 2011 17:21:45 -0800
Source: mhonarc
Binary: mhonarc
Architecture: source all
Version: 2.6.18-1
Distribution: unstable
Urgency: low
Maintainer: Jeff Breidenbach j...@debian.org
Changed-By: Jeff Breidenbach j...@debian.org
Additional Item Attachment, bug #20142 (project mhonarc):
File name: data.gz    Size: 5 KB
Reply to this item at:
http://savannah.nongnu.org/bugs/?20142
Follow-up Comment #9, bug #20142 (project mhonarc):
I've added about 500 examples of From fields containing backslashes - all
this data is within the last two weeks. Since by necessity this field contains
email addresses, I recommend deleting the dataset when finished. It is very
easy to
After extensive discussion, upstream is preparing a new release of mhonarc
(the security and related bug fixes are more extensive than the patch
supplied to Debian). I prefer to ship the new release as the security
update, rather than attempt a backport. Happy to discuss if security team
has any
Based on discussion with Earl so far, I think the correct fix is disabling
HTML mail support by default.
Season's greetings.
Thank you all for sticking with The Mail Archive as we close out the
decade. Here's a quick rundown of the trials, tribulations, and
triumphs over the past year.
First, let's talk about infrastructure. Our uptime was 99.57% which is
similar to previous years. This year's main
On Wed, Dec 22, 2010 at 4:46 AM, e-letter inp...@gmail.com wrote:
Please offer the ability to search using international date format
(yyyy-mm-dd) as a search criterion
Done. (Actually, this has always worked).
The interesting question: is it possible? Are the originating
mails stored so that the visible html can be repaired?
A mathematician, a physicist, and an engineer walk into a bar.
The mathematician says, "The raw mail exists; even the old stuff is in
offline cold storage. It can be matched by
The Mail Archive does have to be very aggressive to obfuscate email
addresses, otherwise a lot of people go bonkers. But yes, it is dumb to
break a hyperlink, especially a hyperlink to The Mail Archive. Your feature
request is valid and if you are feeling eager, feel free to send in a patch.
You found a bug. In The Mail Archive's hash calculation, there is an
incorrect urllib.unquote run on the message id. This is escaping the minus
sign, and therefore calculating based on an incorrect message id. We're
going to have to regenerate the entire message-id index after the bug is
fixed.
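The failure mode can be shown in a few lines of Python (Python 3's urllib.parse.unquote is the equivalent of Python 2's urllib.unquote; the message id below is a made-up example, not real archive data):

```python
import hashlib
from urllib.parse import unquote  # Python 2's urllib.unquote equivalent

# hypothetical message id that happens to contain a percent sequence
msg_id = "abc%2Ddef@example.com"

# buggy path: unquote rewrites %2D to '-', so the wrong id gets hashed
buggy = hashlib.sha1(unquote(msg_id).encode()).hexdigest()

# correct path: hash the message id exactly as it appears in the header
fixed = hashlib.sha1(msg_id.encode()).hexdigest()

# any message id containing such a sequence lands under the wrong hash
assert buggy != fixed
```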
The gossip mailing list uses a somewhat obscure and very limited list server
called Enemies of Carlotta. Another fun fact: it runs on a 5 watt NSLU2,
which has a grand total of 32 megabytes of memory. That's less memory than
the very first hardware iteration of The Mail Archive, which was a 90
If you visit the home page on The Mail Archive, you may notice
search is broken. You can not currently search the entire corpus.
We're working on it but it will take some time.
The other search features still work, e.g. search works fine for an
individual archive, and you can still search list
Testing obfuscation, data below:
j...@jab.org
http://j...@jab.org
mailto:j...@jab.org
http://www.mail-archive.com/gossip@jab.org/msg01358.html
http://www.mail-archive.com/gossip@jab.org/
http://www.mail-archive.com/gossip@jab.org
http://mail-archive.com/gossip@jab.org/msg01358.html
I've finally completed localization of advanced search. If you speak a
language other than English, now is a great time to click around the user
interface and see if there are any silly language mistakes. (To get to
advanced search, do a regular search first, then you'll see a link)
Cheers,
Jeff
This is general interest, so we are responding publicly.
We've discovered that many of the advanced query results have been
leaking out through a fractured fiber optic line in the Gulf of
Mexico. It is hard to get a precise measurement, but we believe 13 to
20 thousand bits of information per day
Ok, it only took ... 4 years ... but we now have sort-by-date
available in the advanced search interface. Enjoy.
-Jeff
I wasn't clear. Is there some way to organize the search results?
When I used the trial search engine on the sundial list and typed in
oglesby the 550 results were all
Sorry, no computer available for me to test.
--
parted (and possibly cfdisk) creates invalid partition tables
https://bugs.launchpad.net/bugs/198248
Hi Andrius,
I wasn't aware that there were adult content ads being served at all. Can
you please supply a URL so I can take a look? If you don't want to supply to
the entire group, send to M-A staff at themailarch...@gmail.com. I'm not
keen on adjusting aspects of individual messages because that
An advanced search form would basically have a bunch of fields like
subject and date and from then string them together into a query
syntax described below. Then redirect that query to the standard search URL.
Implementation would most likely be in PHP. Hard part isn't the programming,
it is
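A sketch of that glue, in Python rather than the PHP mentioned above. The field names (subject:, from:, date:) and the /search endpoint are assumptions for illustration, not the site's actual query syntax:

```python
from urllib.parse import urlencode

def build_query(subject=None, sender=None, date=None):
    # string the form fields together into one query-syntax string,
    # then point it at the standard search URL
    parts = []
    if subject:
        parts.append('subject:"%s"' % subject)
    if sender:
        parts.append('from:"%s"' % sender)
    if date:
        parts.append('date:%s' % date)
    return "/search?" + urlencode({"q": " ".join(parts)})
```

Usage would be e.g. build_query(subject="sundial", date="2009-03-22"), redirecting the browser to the returned URL.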
I talked to the author of Leptonica and this is not a bug.
Leptonica is designed to be all or nothing with respect to
headers. He says:
When including headers, use
#include "allheaders.h"
This includes all the headers for the library.
You must also precede this with stdio.h and stdlib.h.
So the
ACK, will apply patch as is when I have a chance. Leptonica author agrees.
Non-maintainer-upload in Debian ok as well.
WONTFIX
This version of PyLucene is old (depends on gcj) and new versions will
require a reworked package. Not worth patching this up. Help with packaging
a new PyLucene appreciated.
Hmmm, works on Ubuntu Hardy.
$ python
Python 2.5.2 (r252:60911, Jul 22 2009, 15:33:10)
[GCC 4.2.4 (Ubuntu 4.2.4-1ubuntu3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from lucene import Document
>>> Document()
Traceback (most recent call last):
  File "<stdin>", line 1,
The list name links to the info page; optionally not displayed if made
redundant by the nature of the logo.
This one requires too much per-list thinking; we'll only consider changes
that are fully automatic. The other parts sound reasonable to me, but it
would be nice to know if other people
I have a web site for the list rules and, very soon, a link to the
archives. Is there a header I can add to my messages that will let the
archiver pick up and display my site URL on its own line?
We'll honor RFC2369 headers. A quick glance suggests List-Help is most
appropriate.
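For instance, an RFC 2369 List-Help header added to outgoing list mail looks like this (the URL is a placeholder):

```text
List-Help: <http://example.com/list-rules>
```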
Also, is
Bring it on. The Mail Archive gets tens of thousands of inbound messages
daily, and currently serves on the order of queries per second. So I don't
think there is any concern about swamping the service given the numbers
mentioned. Doesn't matter to us how many mbox files are involved for
Hi Randy,
Bad news first. It is not such a great idea for mail-archive.com to re-send
mail. That's asking for trouble with respect to abuse and spammers. On the
good side, we can get the same effect if the mail server is in cahoots with
the archiving service. A specification (RFC5604) is in place
Ok, I checked; the message was dropped because it was bigger than our size
limit. We limit inbound message size for a variety of reasons; one is
historically attachment heavy archives consume a lot of resources and -
statistically speaking - are much more likely to be spammy. (There were
some
Thanks for the problem report, Joseph.
Looks like a problem with the RAID filesystem after a power outage at
the datacenter. I'm running some integrity checks, and this is going
to take a while. In the meantime, I've mounted backup disk from two
days ago. This is going to be a little stale (the
I spent this weekend testing. Here is my report.
I applied the kernel patch vmscan: do not unconditionally treat zones
that fail zone_reclaim() as full. Next, I activated
AUFS_CONFIG_HINOTIFY and made sure to only move data with aufs mounted
with udba=inotify. Finally, I set AuSize_DEBLK to (4
the archives, but somehow Google found it, indexed it, and the guy
threatened
me with bloody murder if I didn't take it down.
Yes. It is critical to keep user perception in mind. Specifically, if you
don't keep email addresses off the global search engines, there will be a
deluge of vocal
Looks good to me. You may want to explain in the documentation why
user memory is better than kernel memory. Not everyone knows about
this. Again we are appreciative but will be slow and careful to test;
I haven't compiled my own kernel since Linux 1.1.54 on Slackware.
-Jeff
On Thu, Aug 20, 2009
Nice solution Jordan, but I think about a pythonic way to
fully integrate searchable archives into MM.
If you are interested in PyLucene, I would be very happy to share
maintenance of the relevant Debian package.
-Jeff
How do you think?
I am not qualified to comment on the technical design.
However, I suggest understanding the problem better before doing
serious work. Let me know if I can be of help.
Also, I don't know how important this use case is. I suspect that very
few people work with this many files.
For 1.6 million files, assuming their names are all numbered
sequentially, 1, 2, 3 ... 1599, 1600, about 180MB will be
necessary for them. In this case, you might want to try these values.
#define AuSize_DEBLK (4 * 1024 * 1024)
#define AuSize_NHASH (16 * 1024)
Filenames are like
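As a quick sanity check on the figure quoted above (just arithmetic, not a claim about aufs internals):

```python
# back-of-envelope: 180 MB of aufs bookkeeping for 1.6 million
# sequentially numbered filenames
files = 1_600_000
quoted_total_bytes = 180 * 1024 * 1024
bytes_per_name = quoted_total_bytes / files  # ~118 bytes of metadata per name
```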
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Format: 1.7
Date: Tue, 28 Jul 2009 15:10:11 -0700
Source: leptonlib
Binary: libleptonica-dev libleptonica leptonica-progs
Architecture: source amd64
Version: 1.62-1
Distribution: unstable
Urgency: low
Maintainer: Jeff Breidenbach j...@debian.org
Hi all,
A while ago Earl asked if we'd be willing to share the code driving the
search feature for The Mail Archive (mail-archive.com). I've gone ahead and
put the relevant source code online. That's the good news.
The bad news is you won't be able to just grab this code and add search to a
And are any of you familiar with the legal requirements to remove messages?
ChillingEffects does a pretty good job explaining some of these issues.
http://www.chillingeffects.org
Follow-up Comment #3, bug #18112 (project mhonarc):
Note to self - here's a particularly strong example. Might be breaking for
two separate reasons.
http://www.mail-archive.com/pymol-us...@lists.sourceforge.net/msg06881.html
I upgraded from Ubuntu 8.10 to 9.04 and no longer see any visual
difference for Tamil in Firefox. I guess it was a now-fixed Firefox
issue all along.
On Sun, Mar 22, 2009 at 10:45 PM, Jeff Breidenbach j...@jab.org wrote:
I'm attaching screenshots of what I am seeing - Firefox is claiming
some
OK, ok... I put something together and sent it to you and Earl for review.
I'm not sure about the bounty though since I did say that I'd do this
before.
The patch is awesome and the bounty is yours. Very nice work.
-Jeff
Modifying code to remember last message number is
straight-forward and independent of any file system that
may be in use.
Ok, to spice things up I'm offering a $300 bounty for whoever writes
the patch that gets accepted into the MhonArc codebase.
How can I be most helpful?
-Jeff
Happy Easter and ping... :)
I can work on a patch in the next few days unless Earl tells me to
back off because he's going to do it. :)
Did you revert the patch I've sent which enlarges some aufs parameters?
Not yet. I will revert now.
Valerie Aurora wrote a set of articles recently on the various
union-type filesystems for Linux Weekly News. Are you thinking about
getting involved with union mounts? According to the articles, there
are still some serious problems with readdir().
http://lwn.net/Articles/326818
I built using module-assistant and the Ubuntu 8.04 version of aufs, as
packaged by Julian Andres Klode. It builds fine without the patch.
With the patch, compile fails on the BUILD_BUG_ON line. This is on
x86_64 architecture.
# m-a -k /usr/src/linux-headers-2.6.24-16-server -l 2.6.24-16-server
I grabbed Julian's aufs-source package from Debian Lenny, which is a
little newer. Using it, I was able to successfully patch and compile
and run. No change in performance. Maybe I made a mistake, I am pretty
sleepy.
# cat /sys/module/aufs/version
20080714
$ time ls -U | wc -l # cached, with
I don't see any memory allocation errors in the logs. I did have to
upgrade the kernel slightly yesterday to get all this to work, and it
appears to have splashed relatime mount options everywhere. Also,
while I am now using the 20080714 aufs, my copy of aufs-tools is
completely ancient 20070605.
These large files are created on your first writable branch
(/data4/.aufs.xino). Is the capacity enough?
Also, I don't see this file.
# ls -a /data4
. .. archive .wh..wh.aufs .wh..wh.plink
--
Great, that is a possible performance enhancement for the future, as
very large directories can be slow to read, even when totally cached
by Linux. For the immediate term, I'll talk with the aufs folks to see
if directory reads can get faster.
$ time ls -U /dev/null # cached
real    0m1.471s
Thank you for the response. I suspect this is primarily an algorithm
issue. The aufs directory read takes about 1 minute if the SSD
directory has 10,000 files. It takes only 30 seconds if the SSD
directory has 5000 files. It takes 1 second if the SSD directory has
100 files. Does aufs look at the
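The timings reported above are consistent with a cost roughly linear in the number of SSD-directory entries. This is just arithmetic on the quoted numbers, not a claim about the aufs algorithm:

```python
# files -> seconds, taken from the message above
timings_s = {10_000: 60, 5_000: 30}

# both data points work out to about 6 ms of aufs work per file
per_file_ms = {n: t / n * 1000 for n, t in timings_s.items()}
```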
This question is a little esoteric.
I decided to try improving mhonarc's archiving speed with one of those
whiz-bang solid state drives from Intel. Unfortunately, they are
pretty low capacity and I can't fit all the data. So I decided to go
with a hybrid strategy; all new writes go to the SSD,
I'm seeing a little bit of weird behaviour in the Tamil language.
Check out the subject line on the message page, which is fine. Versus
the same subject line on the index page, which is rendering
incorrectly due to a UTF-8 character being split by whitespace. (This
is the latest message from
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Format: 1.8
Date: Fri, 13 Feb 2009 23:33:01 -0800
Source: jablicator
Binary: jablicator
Architecture: source all
Version: 1.0.1
Distribution: unstable
Urgency: low
Maintainer: Jeff Breidenbach j...@debian.org
Changed-By: Jeff Breidenbach j
Happy New Year.
I finally sat down to update the pylucene and jcc packages for Debian,
and noticed the patch required for setuptools. Always a curve ball
somewhere!
Felix, what is Fedora doing about this? Maybe the easiest thing is for
me to wait for upstream adoption of the patch. Any thoughts
Since Lucene has long been superseded by Lucene2, maybe the right
thing to do is kill off this package entirely. What does the
dependency tree look like these days?
Patch sent to upstream; will make a new package next point release.
Public bug reported:
Description:
Trying to install a Ubuntu from the LiveCD, I get through the first few
stages and then a dialog appears that days starting paritioner. It has
a progress bar that zooms to 50% instantly. After about two seconds, the
animated progress bar stops its animation and