Amirouche Amazigh BOUBEKKI ~ https://hyper.dev
On Mon, 13 Jul 2020 at 21:22, Adam Sanchez wrote:
>
> I have 14T SSD (RAID 0)
>
> On Mon, 13 Jul 2020 at 21:19, Amirouche Boubekki wrote:
> >
> > On Mon, 13 Jul 2020 at 19:42, Adam Sanchez wrote:
> > >
> > > Hi,
>
--
Amirouche ~ https://hyper.dev
On Thu, 11 Jun 2020 at 11:13, David Causse wrote:
>
> Hi,
>
> did you "munge"[0] the dumps prior to loading them?
> As a comparison, loading the munged dump on a WMF production machine (128G
> RAM, 32 cores, SSD drives) takes around 8 days.
>
> 0:
comes up with another sharding strategy, how will edits that
span multiple regions happen?
How will it make entering the wikidata party easier?
I dare to write in the open: it seems to me like we are witnessing an
"Earth is flat vs. Earth is not flat" kind of event.
Thanks for the reply!
Hello Guillaume,
On Fri, 7 Feb 2020 at 14:33, Guillaume Lederrey wrote:
>
> Hello all!
>
> First of all, my apologies for the long silence. We need to do better in
> terms of communication. I'll try my best to send a monthly update from now
> on. Keep me honest, remind me if I fail.
>
availability
>
> ref: https://lists.wikimedia.org/pipermail/wikidata/2019-June/013124.html
The other proposal I made is about replacing both wikibase and blazegraph:
https://meta.wikimedia.org/wiki/Grants:Project/Iamamz3/Prototype_A_Scalable_WikiData
What do you think?
Check out my proposal at
https://meta.wikimedia.org/wiki/Grants:Project/Future-proof_WDQS
I started working on a paper (more will follow) that will document and
support my work, see
https://en.wikiversity.org/wiki/WikiJournal_Preprints/Generic_Tuple_Store#Future-proof_WDQS
Happy Holidays ;-)
On Sun, 22 Dec 2019 at 21:23, Kingsley Idehen wrote:
>
> On 12/22/19 4:17 PM, Kingsley Idehen wrote:
>
> On 12/22/19 3:17 PM, Amirouche Boubekki wrote:
>
Hello all ;-)
I ported the code to Chez Scheme to do an apples-to-apples comparison
between GNU Guile and Chez, and took the time to launch a few queries
against the Virtuoso available in Ubuntu 18.04 (LTS).
Spoiler: the new code is always faster.
The hard disk is SATA, and the CPU is dubbed:
for research purposes.
I agree.
> ... just an idea I thought I would float out there. Perhaps there is
> another (better) way to define a concise dump.
>
> Best,
> Aidan
>
https://github.com/amirouche/nomunofu
What do you think?
___
Wikidata-tech mailing list
Wikidata-tech@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-tech
wget http://hyper.dev/nomunofu-v0.1.4.tar.bz2
The directory is 11G uncompressed.
Grab the source code with the following command:
git clone https://github.com/amirouche/nomunofu
Here is an example Python query that returns at most 5 adverbs:
In [10]: for item in nomunofu.query(
...: (var
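The snippet above is truncated in the archive. Here is a minimal sketch of
what such a query might look like, assuming a hypothetical Python client
exposing Nomunofu and var; the names and identifiers below are illustrative,
not taken from the project:

# Sketch only: the nomunofu Python client API is assumed, not documented here.
from nomunofu import Nomunofu, var  # hypothetical module layout

nomunofu = Nomunofu('http://localhost:8080')  # server started with `make web`

# Bind every subject that is an instance of "adverb" to the variable
# 'uid' and return at most 5 bindings.
for item in nomunofu.query((var('uid'), 'instance of', 'adverb'), limit=5):
    print(item['uid'])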
On Sun, 8 Dec 2019 at 18:52, Amirouche Boubekki wrote:
>
> I am very pleased to announce the immediate availability of nomunofu.
>
> nomunofu is a database server written in GNU Guile that is powered by
> the WiredTiger ordered key-value store.
>
> It allows one to store and query wikidata triples.
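For readers unfamiliar with the technique, here is a toy Python sketch, not
nomunofu's actual Guile code, of the usual way a triple store is laid out on
an ordered key-value store such as WiredTiger: every triple is written under
several orderings, so that any lookup pattern becomes a prefix range scan.

import bisect

class TripleStore:
    # Toy in-memory stand-in for an ordered key-value store.
    def __init__(self):
        self.keys = []  # kept sorted, like the on-disk key space

    def add(self, s, p, o):
        # Index the triple under three orderings so that a known
        # subject, predicate, or object can each seed a range scan.
        for key in (('spo', s, p, o), ('pos', p, o, s), ('osp', o, s, p)):
            bisect.insort(self.keys, key)

    def scan(self, prefix):
        # Jump to the first key >= prefix, then yield while it matches.
        i = bisect.bisect_left(self.keys, prefix)
        while i < len(self.keys) and self.keys[i][:len(prefix)] == prefix:
            yield self.keys[i]
            i += 1

db = TripleStore()
db.add('wikidata', 'used-by', 'google')
for key in db.scan(('spo', 'wikidata')):  # all triples about 'wikidata'
    print(key)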
You can get the code with the following command:
git clone https://github.com/amirouche/nomunofu
After installing GNU Guix [0], you can do:
make init && gunzip test.nt.gz && make index && make web
And in another terminal:
make query
l the text" that does not apply to concept search or wikification.
The most common term for this kind of search is "fuzzy search",
"spell checking", or "autocomplete". The basic algorithm is to
search terms using prefixes.
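As an illustration of the prefix idea (a sketch, not code from any of the
projects discussed here):

import bisect

# A sorted lexicon of terms; in practice this would be built from labels.
terms = sorted(['wiki', 'wikidata', 'wikipedia', 'wiktionary', 'wire'])

def autocomplete(prefix, limit=5):
    # Binary-search the first term >= prefix, then collect terms
    # for as long as they still start with the prefix.
    out = []
    i = bisect.bisect_left(terms, prefix)
    while i < len(terms) and terms[i].startswith(prefix) and len(out) < limit:
        out.append(terms[i])
        i += 1
    return out

print(autocomplete('wiki'))  # ['wiki', 'wikidata', 'wikipedia', 'wiktionary']

Proper fuzzy search or spell checking additionally tolerates typos, for
example by bounding edit distance, but the prefix scan is the usual
starting point.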
I created another draft proposal to create a *prototype* to scale wikidata,
using the tools I have been building, that goes beyond only scaling the
WikiData Query Service. The first quarter should be reserved for WDQS.
As you might have seen, the first proposal
Hello Sebastian and Stas,
On Wed, 12 Jun 2019 at 19:27, Amirouche Boubekki <
amirouche.boube...@gmail.com> wrote:
> Hello Sebastian,
>
> First thanks a lot for the reply. I started to believe that what I was
> saying was complete nonsense.
>
> On Wed, 12 Jun 2019 at 1
On Wed, 12 Jun 2019 at 19:11, Stas Malyshev wrote:
> Hi!
>
> >> So there needs to be some smarter solution, one that we'd be unlikely to
> >> develop in-house
> >
> > Big cat, small fish. As wikidata continues to grow, it will have specific
> > needs.
> > Needs that are unlikely to be solved by
Hello Sebastian,
First thanks a lot for the reply. I started to believe that what I was
saying was complete nonsense.
On Wed, 12 Jun 2019 at 16:51, Sebastian Hellmann <
hellm...@informatik.uni-leipzig.de> wrote:
> Hi Amirouche,
> On 12.06.19 14:07, Amirouche Boubekki wrote:
On Sun, 9 Jun 2019 at 23:18, Amirouche Boubekki <
amirouche.boube...@gmail.com> wrote:
I made a proposal for a grant at
https://meta.wikimedia.org/wiki/Grants:Project/WDQS_On_FoundationDB
Mind the fact that this is not about the versioned quadstore. It is about a
simple triplestore; it is mainly missing bindings for FoundationDB and SPARQL
syntax.
Also, I will probably need help to
("wikidata", "used-by", "google")
That is, one has to create a hyper-edge to be able to query
those facts.
> [2] https://phabricator.wikimedia.org/project/view/1239/
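To make the hyper-edge remark concrete, here is one possible encoding, a
sketch rather than a prescribed design: the statement gets its own
identifier, and the base triple plus any qualifiers hang off that identifier.

import uuid

triples = []

def add(s, p, o):
    triples.append((s, p, o))

def add_hyper_edge(s, p, o, **qualifiers):
    # Reify the statement: mint an identifier for the edge itself,
    # then attach the base triple and any qualifiers to it.
    edge = uuid.uuid4().hex
    add(edge, 'subject', s)
    add(edge, 'predicate', p)
    add(edge, 'object', o)
    for key, value in qualifiers.items():
        add(edge, key, value)
    return edge

# 'since' is an invented qualifier, purely for illustration.
add_hyper_edge('wikidata', 'used-by', 'google', since='2015')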
Best regards,
Amirouche ~ amz3
Hello all,
On Tue, 4 Jun 2019 at 15:46, Marielle Volz wrote:
> Yes, the api is at
> https://www.wikidata.org/w/api.php?action=query&list=search&srsearch=Bush
>
> There's a sandbox where you can play with the various options:
Hi Joshua,
Thanks for your input.
On Thu, 16 May 2019 at 17:02, Joshua Shinavier wrote:
> Hi Amirouche,
>
> The version history and time-travel features sound a lot like the
> "integrated versioning system" of Freebase, circa 2009 when they (Metaweb)
> presented a
Right now, I am working on getting it all together.
https://github.com/awesome-data-distribution/datae/tree/master/docs/SCHEME20XY#abstract
On Fri, May 3, 2019 at 8:19 PM Amirouche Boubekki <
amirouche.boube...@gmail.com> wrote:
GerardM's post triggered my interest to post to the mailing list. As you
might know, I am working on a functional quadstore, that is, a quadstore that
keeps around old versions of the data, like a wiki, but as a
directed acyclic graph. It only stores differences between commits. It relies
on a snapshot of the latest
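The sentence above is cut off in the archive, but the diff-based design it
describes can be sketched; this toy model (assumptions mine, not the actual
datae code) stores only per-commit differences and materializes a snapshot by
replaying them:

class VersionedQuadStore:
    # Toy model: each commit records its parents plus the quads it
    # added and removed relative to its first parent.
    def __init__(self):
        self.commits = {}
        self.counter = 0

    def commit(self, parents, added, removed):
        self.counter += 1
        self.commits[self.counter] = (tuple(parents), frozenset(added),
                                      frozenset(removed))
        return self.counter

    def snapshot(self, cid):
        # Replay diffs from the root to materialize the full quad set
        # at a given commit (merge handling elided for brevity).
        parents, added, removed = self.commits[cid]
        quads = self.snapshot(parents[0]) if parents else frozenset()
        return (quads - removed) | added

store = VersionedQuadStore()
c1 = store.commit([], {('wikidata', 'used-by', 'google', 'graph0')}, set())
c2 = store.commit([c1], set(), {('wikidata', 'used-by', 'google', 'graph0')})
print(store.snapshot(c1))  # the quad is present
print(store.snapshot(c2))  # empty: the second commit removed it

Keeping a materialized snapshot of the most recent commit, as the message
suggests, makes reads of the current version cheap while history stays
compact.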
Hello,
I am investigating this with several other people in the GNU project, as
part of Guix [0].
Our goal is to make our packages easier to discover by our users via
full-text search or structured queries.
Questions:
a) I see Arch and Debian have properties. What would it take to have a
What? Wikidata doesn't track the license of each piece of information?!
--
Amirouche ~ amz3 ~ http://www.hyperdev.fr
On 09/07/2017 at 08:53, Timothy Holborn wrote:
Hi Peter,
Awesome. Yes, this is the sort of thing I was looking to leverage.
I couldn't find the RDF output for WordNet. FWIW, I find this useful:
http://osds.openlinksw.com/
Still very interested to understand how we might further enhance
#Is_CC_the_right_license_for_data.3F
On 20 March 2017 at 21:57, Amirouche <amirou...@hypermove.net> wrote:
Fixed the subject of the mail
Héllo all!
On 02/03/2017 at 10:34, Léa Lacroix wrote:
Hello Amirouche,
Thanks a lot for your interest in this project and your proposal to help.
Currently, the development team is still working on the new datatype
structure for lexemes, and we don't have anything to demo yet.
I don't
Background
DBpedia and Wikidata currently focus primarily on representing factual
knowledge as contained in Wikipedia infoboxes. A vast amount of
information, however, is contained in the unstructured Wikipedia article
texts. With the DBpedia Open Text Extraction Challenge, we aim to
Héllo,
I have been lurking around for some months now. I stumbled upon the
Wiktionary-in-Wikidata project via, for instance, this PDF:
https://upload.wikimedia.org/wikipedia/commons/6/60/Wikidata_for_Wiktionary_announcement.pdf
Now I'd like to help. For that, I want to build a bot to achieve
, what do
you use wikidata for, using your favorite tool? I'd like to
replicate that work using AjguDB and see whether it's up to
the task.
Thanks in advance!
PS: I have written a similar library in Scheme.
[0] https://github.com/amirouche/AjguDB
--
Amirouche ~ amz3 ~ http://www.hyperdev.fr