st if I would be doing ORDER BY id, revision DESC on
the whole table? Because one future query I am working on selects all
rows, but only at their latest (highest) revision. I am curious whether
that will have an effect there.
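For context, the latest-revision query being discussed could be written with PostgreSQL's DISTINCT ON. This is only a sketch with hypothetical table and column names, since the actual schema is not shown here:

```sql
-- One row per id, at its highest revision. With an index on
-- (id, revision DESC), or on (id, revision) scanned backwards,
-- PostgreSQL can satisfy the ORDER BY without an explicit sort.
SELECT DISTINCT ON (id) *
FROM my_table
ORDER BY id, revision DESC;
```

DISTINCT ON is PostgreSQL-specific; whether the primary-key index gets used for this ordering is exactly the question raised above.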
Mitar
[1] https://www.postgresql.org/docs/16/indexes-ordering.html
pretty straightforward?
Mitar
[1]
https://stackoverflow.com/questions/45597101/primary-key-with-asc-or-desc-ordering
[2] https://www.postgresql.org/docs/16/indexes-ordering.html
--
https://mitar.tnode.com/
https://twitter.com/mitar_m
https://noc.social/@mitar
Hi!
Oh, I can use PQparameterStatus to obtain the application_name of the
current connection. It seems it is then not necessary to add this
information to the notice message.
Mitar
On Wed, Mar 27, 2024 at 4:22 PM Mitar wrote:
>
> Hi!
>
> We take care to always set application_name to imp
pplication name (when available) to the error and notice message
fields [2]?
Mitar
[1]
https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-LINE-PREFIX
[2] https://www.postgresql.org/docs/current/protocol-error-fields.html
Hi!
There was no response here. I made the following issue instead:
https://phabricator.wikimedia.org/T360859
Mitar
On Sat, Mar 2, 2024 at 7:24 PM Mitar wrote:
>
> Hi!
>
> Recently, a timestamp with calendarmodel
> https://www.wikidata.org/wiki/Q12138 has been introduced
Mitar created this task.
Mitar added a project: Wikidata.
Restricted Application added a subscriber: Aklapper.
TASK DESCRIPTION
Recently, a timestamp with calendarmodel https://www.wikidata.org/wiki/Q12138
has been introduced into
Wikidata:
https://www.wikidata.org/w/index.php?title
Hi!
Recently, a timestamp with calendarmodel
https://www.wikidata.org/wiki/Q12138 has been introduced into
Wikidata:
https://www.wikidata.org/w/index.php?title=Q105958428=2004936527
How is this possible? I thought that the only allowed values are
Q1985727 and Q1985786?
Mitar
he Content-Length header is added automatically."
"Once the headers have been flushed (...), the request body may be unavailable."
My understanding is that after the main handler returns, none of this
matters anymore. But are there any other similar side effects?
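The two quoted behaviors can be observed directly with net/http test servers. This is a sketch with hypothetical handlers: a small response that is never flushed gets Content-Length added automatically, while an explicit Flush sends the headers early and the response falls back to chunked encoding.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// contentLengths starts two test servers and returns the ContentLength a
// client observes for a small buffered response vs. an explicitly
// flushed one.
func contentLengths() (buffered, flushed int64) {
	// Small response, no explicit flush: net/http buffers it and adds
	// the Content-Length header automatically.
	s1 := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "hello")
	}))
	defer s1.Close()

	// Explicit Flush: the headers go out before the body size is known,
	// so the response falls back to chunked encoding (ContentLength -1).
	s2 := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "hello")
		w.(http.Flusher).Flush()
	}))
	defer s2.Close()

	get := func(url string) int64 {
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		io.Copy(io.Discard, resp.Body)
		return resp.ContentLength
	}
	return get(s1.URL), get(s2.URL)
}

func main() {
	b, f := contentLengths()
	fmt.Println("buffered:", b, "flushed:", f)
}
```

After the handler returns, the server finishes the response either way; the difference only matters for what headers the client already received.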
Mitar
--
Hi!
On Mon, Nov 20, 2023 at 10:26 AM Duncan Harris wrote:
> Why do you care about buffering in Go vs the OS?
Just because I hope that in Go I might have a better chance of knowing
when they are written out than in the OS.
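A minimal sketch of what "knowing when they are written out" can look like in Go, assuming an explicit bufio.Writer in front of the file (the file and names here are illustrative, not from the original thread):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// bufferedCounts shows where bytes sit at each step: in Go's bufio
// buffer, then (after Flush) with the OS, then (after Sync) requested
// to be on stable storage.
func bufferedCounts() (beforeFlush, afterFlush int) {
	f, err := os.CreateTemp("", "buf-demo-")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()

	w := bufio.NewWriter(f)
	w.WriteString("hello\n")
	beforeFlush = w.Buffered() // still in Go; the OS has seen nothing yet

	w.Flush() // hand the bytes to the OS
	f.Sync()  // ask the OS to push them to stable storage
	afterFlush = w.Buffered()
	return beforeFlush, afterFlush
}

func main() {
	b, a := bufferedCounts()
	fmt.Println("buffered before flush:", b, "after:", a)
}
```

With your own bufio.Writer the Go-side buffering is at least observable and controllable; what the OS does after Flush is only constrained by Sync.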
Mitar
by malice) and would like to have some data
on how often that is happening.
Mitar
--
You received this message because you are subscribed to the Google Groups
"golang-nuts" group.
To unsubscribe from
Mitar added a comment.
Awesome! Thanks. This looks really amazing. I am not convinced that we
should introduce a different dump format, but changing the compression
seems to be low-hanging fruit.
TASK DETAIL
https://phabricator.wikimedia.org/T222985
EMAIL PREFERENCES
https
Mitar added a comment.
I think it would be useful to have a benchmark with more options: JSON with
gzip, bzip2 (decompressed with lbzip2), and zstd. And then the same for
QuickStatements. Could you do that?
TASK DETAIL
https://phabricator.wikimedia.org/T222985
Hi!
Done: https://bugs.chromium.org/p/chromium/issues/detail?id=1415291
Mitar
On Wed, Nov 2, 2022 at 11:46 PM Mike Taylor wrote:
>
> Hi Mitar,
>
> This is really good feedback. Would you mind filing a bug at
> crbug.com/new? Feel free to respond here with the link.
>
what is used as an Accept header with link
rel="preload" and as="fetch" and it looks like Chrome always sets
Accept: */*, even if you specify type="application/json".
Mitar
On Sun, Jul 10, 2022 at 8:04 PM Mitar wrote:
>
> Hi!
>
> On Sun, Jul
s not
support all use cases.
Currently it looks to me like the best bet is to move the bearer token
to the Cookie header. That one might be included when doing a preload
through the Link header.
[1] https://bugs.chromium.org/p/chromium/issues/detail?id=962642
[2] https://bugs.chromium.org/p/chromium/issues/d
ts. While HTTP2 push can support such use
cases.
[1] https://bugs.chromium.org/p/chromium/issues/detail?id=962642
Am I missing something obvious about the Link header which would address
those concerns?
Mitar
Mitar closed this task as "Resolved".
Mitar claimed this task.
TASK DETAIL
https://phabricator.wikimedia.org/T278031
Mitar added a comment.
I checked `wikidata-20220620-all.json.bz2` and it now contains the `modified`
field (alongside the other fields which are present in the API).
TASK DETAIL
https://phabricator.wikimedia.org/T278031
limitation of Early Hints seems to be that resources which require an
Authorization header cannot be preloaded, or am I mistaken? With HTTP2 push
you can push such a resource and add a corresponding anticipated header.
Mitar
On Wed, Mar 16, 2022 at 7:05 AM Kenji Baheux
wrote:
> Hi Thomas,
>
>
Hi!
Thanks for noticing and sharing. Another known issue with the HTML dumps
is that categories and templates are not always
extracted: https://phabricator.wikimedia.org/T300124
Mitar
On Tue, Apr 5, 2022 at 12:59 PM Jan Berkel wrote:
>
> Hello,
>
> just a heads-up for
b.com/golang/go/pull/42597
[3] https://github.com/golang/net/pull/96
[4] https://github.com/golang/go/issues/18594
[5] https://github.com/golang/go/issues/51361
Mitar
Mitar
Hi!
I made this ticket [1] to track regaining access to metadata as a dump.
[1] https://phabricator.wikimedia.org/T301039
Mitar
On Tue, Feb 8, 2022 at 2:32 AM Platonides wrote:
>
> The metadata used to be included in the image table, but it was changed 6
> months ago out to Externa
uot;:"tt:609531648","text":"tt:609531649"}}
But that table itself does not seem to be available as a dump? Or am I
missing or misunderstanding something?
[1] https://www.mediawiki.org/wiki/Manual:Text_table
Mitar
On Fri, Feb 4, 2022 at 6:54 AM Ariel Glenn
/mediawiki
Mitar
On Thu, Feb 3, 2022 at 9:13 AM Mitar wrote:
>
> Hi!
>
> I see. Thanks.
>
>
> Mitar
>
> On Thu, Feb 3, 2022 at 7:17 AM Ariel Glenn WMF wrote:
> >
> > The media/file descriptions contained in the dump are the wikitext of the
> > r
Hi!
I see. Thanks.
Mitar
On Thu, Feb 3, 2022 at 7:17 AM Ariel Glenn WMF wrote:
>
> The media/file descriptions contained in the dump are the wikitext of the
> revisions of pages with the File: prefix, plus the metadata about those pages
> and revisions (user that made the edi
s
there a dump which contains that information? And what is "media/file
descriptions" then? Wiki pages of files?
[1] https://www.mediawiki.org/wiki/API:Imageinfo
Mitar
Mitar added a comment.
I would vote for simply including hashes in dumps. They would make dumps
bigger, but they would be consistent with the output of `EntityData`, which
currently includes hashes for all snaks.
TASK DETAIL
https://phabricator.wikimedia.org/T174029
Mitar added a comment.
Just a followup from somebody coming to Wikidata dumps in 2021: it is really
confusing that dumps do not include hashes, especially because `EntityData`
seems to show them now for all snaks (main, qualifiers, references). So when
one is debugging this, using
ole list if you need that.
Mitar
___
Wikidata mailing list -- wikidata@lists.wikimedia.org
To unsubscribe send an email to wikidata-le...@lists.wikimedia.org
that would be great. Of course, even better would
be to prevent the insertion (because in 99% of cases it means somebody is
blindly inserting a default zero value).
[1]
https://www.wikidata.org/w/index.php?title=Special:Contributions/Mitar==500=Mitar
Mitar
On Mon, Jan 10, 2022 at 4:50 PM Lydia Pintscher
wrot
://gitlab.com/tozd/go/mediawiki
Any feedback is welcome.
Mitar
___
Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org
/Wikibase/master/php/md_docs_topics_json.html
Mitar
? Are they information? Can they be safely
ignored? Should those claims be updated in Wikidata to remove those
fields?
I can provide a list of those if anyone is interested.
[1] https://doc.wikimedia.org/Wikibase/master/php/md_docs_topics_json.html
Mitar
k
trace every time you called any of its methods, even when the error
you were wrapping already had a stack trace. Most functions in this
package add a stack trace only if the error does not already have one.
The only exception is Wrap, which records it again.
Mitar
you use
`errors.Is` to determine which of the base errors happened and map that to
a message for the end user, in their language. So in a way,
github.com/cockroachdb/errors has too much stuff for me, and I prefer
something leaner.
Mitar
On Mon, Jan 3, 2022 at 7:27 AM Gaurav Maheshwari
wrote:
>
&g
codewise and very familiar human wise.
Now you can use Errorf to wrap an existing error, format the
error message, and record a stack trace, all at once.
https://gitlab.com/tozd/go/errors
Check it out. Any feedback is welcome.
Mitar
Hi!
Thank you for the reply. I made the following tasks:
https://phabricator.wikimedia.org/T298436
https://phabricator.wikimedia.org/T298437
Mitar
On Sat, Jan 1, 2022 at 6:07 PM Ariel Glenn WMF wrote:
>
> Hello Mitar! I'm glad you are finding the Wikimedia Enterprise dumps
articles.
Also, is there an API endpoint or Special page which can return the
same JSON for a single Wikipedia page? The JSON structure looks very
useful by itself (e.g., not in bulk).
Mitar
On Tue, Oct 19, 2021 at 4:57 PM Ariel Glenn WMF wrote:
>
> I am pleased to announce that Wik
Mitar added a comment.
I learned today that Wikipedia has a nice approach with a multistream bz2
archive <https://dumps.wikimedia.org/enwiki/> and additional file with an
index, which tells you an offset into the bz2 archive you have to decompress as
a chunk to access particula
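The offset-index technique can be sketched in Go. Note an assumption: Go's standard library can only decompress bzip2, not compress it, so this sketch uses gzip streams as a stand-in; the random-access idea via recorded offsets is the same as with the multistream bz2 dump plus its index file:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// readChunk builds a tiny multistream archive (independently compressed
// streams concatenated), records the byte offset of each stream, and
// then decompresses only the chunk at the given index.
func readChunk(chunks []string, i int) string {
	var archive bytes.Buffer
	var offsets []int64
	for _, chunk := range chunks {
		offsets = append(offsets, int64(archive.Len()))
		zw := gzip.NewWriter(&archive)
		zw.Write([]byte(chunk))
		zw.Close() // each Close ends one independent stream
	}

	// Random access: start reading at the recorded offset; nothing
	// before it ever needs to be decompressed.
	zr, err := gzip.NewReader(bytes.NewReader(archive.Bytes()[offsets[i]:]))
	if err != nil {
		panic(err)
	}
	zr.Multistream(false) // stop at the end of this one stream
	out, err := io.ReadAll(zr)
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	chunks := []string{"first article", "second article"}
	fmt.Println(readChunk(chunks, 1))
}
```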
16. So I can assume the
post is simply false?
Mitar
are installed
as CLI tools?
Mitar
[1] https://groups.google.com/g/golang-nuts/c/frh9zQPEjUk/m/9tnVPAegDgAJ
Hi!
On Sat, Nov 6, 2021 at 2:43 PM Tom Lane wrote:
> Mitar writes:
> > Anyone? Any way to determine the number of affected rows in a statement
> > trigger?
>
> Check the size of the transition relation.
Yes, this is what we are currently doing, but it looks very
ineffic
" OR "UPDATE 3",
> but I don't see how to access that from the trigger. I might have to submit
> a patch for that if nobody else knows a way to get it. (Hopefully somebody
> will respond with the answer...?)
Anyone? Any way to determine the number of affected rows in
>> operate by row.
>
> That is not true
Sorry to be imprecise. In this thread I am interested in statement
triggers, so I did not mention this explicitly. Statement triggers do
not have NEW and OLD, but you can combine them with a row-level rule
and then the two work well together.
Mit
ere would be the way to go? So
one could write:
CREATE TRIGGER my_trigger AFTER UPDATE ON my_table FOR EACH STATEMENT
WHEN AFFECTED <> 0 EXECUTE FUNCTION my_table_func();
Mitar
Hi!
On Wed, Oct 27, 2021 at 1:16 AM Mark Dilger
wrote:
> If Mitar finds that suppress_redundant_updates_trigger is sufficient, that
> may be a simpler solution. Thanks for mentioning it.
>
> The suppress_redundant_updates_trigger uses memcmp on the old and new rows.
> I don't
n of style or is this a better approach than my:
PERFORM * FROM old_values LIMIT 1;
IF FOUND THEN ...
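A fuller sketch of this workaround as a statement-level trigger with a transition relation; all names here are illustrative, not from the original thread:

```sql
-- Hypothetical sketch: run the real work only when at least one row
-- was actually affected, checked via the OLD TABLE transition relation.
CREATE OR REPLACE FUNCTION my_table_func() RETURNS trigger AS $$
BEGIN
  PERFORM * FROM old_values LIMIT 1;
  IF FOUND THEN
    -- at least one row was updated; do the real work here
    RAISE NOTICE 'rows were affected';
  END IF;
  RETURN NULL;  -- return value is ignored for AFTER ... STATEMENT
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_trigger
  AFTER UPDATE ON my_table
  REFERENCING OLD TABLE AS old_values
  FOR EACH STATEMENT EXECUTE FUNCTION my_table_func();
```

The proposed `WHEN AFFECTED <> 0` syntax would let the planner skip the function call entirely; today the check has to live inside the function body as above.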
Mitar
is per statement, not per row, so I do not think your approach works
there. This is why I am making a more complicated check inside the
trigger itself.
Mitar
is generally fast, but probably because it can use indices. I am not
sure how fast *= is, given that it compares binary representations.
What is others' experience with this operator?
Mitar
[1]
https://www.postgresql.org/docs/current/functions-comparisons.html#COMPOSITE-TYPE-COMPARISON
has a unique index column, if that helps.
Mitar
Mitar added the comment:
I think the issue is that it is hard to subclass it. Ideally, the call to
open would be made through a new _open method, which one could then easily
override in a subclass if/when needed.
--
nosy: +mitar
Mitar added a comment.
In fact, this is not a problem, see
https://phabricator.wikimedia.org/T222985#7164507
pbzip2 is problematic: it cannot decompress files in parallel unless they
were compressed with pbzip2. But lbzip2 can. So using lbzip2 makes
decompression of single-file dumps fast. So
Mitar added a comment.
OK, so it seems the problem is in pbzip2: it is not able to decompress in
parallel unless the compression was also done with pbzip2. But lbzip2 can
decompress all of them in parallel.
See:
$ time bunzip2 -c -k latest-lexemes.json.bz2 > /dev/null
r
Mitar added a comment.
Are you saying that existing wikidata json dumps can be decompressed in
parallel if using lbzip2, but not pbzip2?
TASK DETAIL
https://phabricator.wikimedia.org/T222985
Mitar added a comment.
I am realizing that maybe the problem is just that the bzip2 compression is
singlestream rather than multistream. Moreover, using newer compression
algorithms like zstd might reduce decompression time even further, removing
the need for multiple files altogether. See https
Mitar added a comment.
As a reference see also this discussion
<https://www.wikidata.org/wiki/Wikidata_talk:Database_download#Dumps_cannot_be_decompressed_in_parallel>.
I think the problem with bzip2 is that it is currently singlestream so one
cannot really decompress it in pa
Mitar added a comment.
Are you sure `lastrevid` works like that for the whole dump? I think the
dump is made from multiple shards, so it might be that `lastrevid` is not
consistent across all items?
TASK DETAIL
https://phabricator.wikimedia.org/T209390
so now I am searching
for other explanations for the results of my benchmark.
Mitar
mpression-ility would apply to both JSONB and JSON column
types, no? Moreover, it looks like JSONB column type ends up larger on
disk.
Mitar
.9207249263213
Size: { pg_total_relation_size: '4597833728' }
[1] https://gitlab.com/mitar/benchmark-pg-json/-/blob/master/example.json
Mitar
] https://gitlab.com/mitar/benchmark-pg-json
Mitar
Mitar added a comment.
Thank you for redirecting me to this issue. As I mentioned in T278204
<https://phabricator.wikimedia.org/T278204> my main motivation is in fact not
downloading in parallel, but processing in parallel. Just decompressing that
large file takes half a day on my m
Mitar added a comment.
I realized I have exactly the same need as a poster on StackOverflow: get a
dump and then use the real-time feed to keep it updated. But you have to know
where to start with the real-time feed through EventStreams, using historical
consumption
<ht
Mitar updated the task description.
TASK DETAIL
https://phabricator.wikimedia.org/T278204
Mitar created this task.
Mitar added projects: Wikidata, Dumps-Generation.
Restricted Application added a project: wdwb-tech.
TASK DESCRIPTION
My understanding is that dumps are currently in fact already produced by
multiple shards and then combined into one file. I wonder why simply multiple
Mitar added a comment.
I see that the API does return the `modified` field:
https://www.wikidata.org/w/api.php?action=wbgetentities&format=json&ids=Q1
TASK DETAIL
https://phabricator.wikimedia.org/T278031
Mitar added a comment.
Personally, I would love to have for each item in the dump a timestamp when
it was created and a timestamp when it was last modified.
Related: https://phabricator.wikimedia.org/T278031
TASK DETAIL
https://phabricator.wikimedia.org/T209390
Restricted Application added a project: wdwb-tech.
TASK DETAIL
https://phabricator.wikimedia.org/T209390
This is still present for me in Ubuntu 20.04, so I do not think it is
resolved.
Moreover, over time the memory usage of the app grows substantially. I
suspect there is a memory leak.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
This is still a problem in Ubuntu 20.04.
I am also noticing high CPU usage, and the UI often shows the "this app is
frozen, kill it?" message.
--
You received this bug notification because you are a member of Ubuntu
Desktop Bugs, which is subscribed to gnome-calendar in Ubuntu.
https://bugs.launchpad.net/bugs/1770886
Title:
you
gave up on that work?
Mitar
Hi!
On Thu, Jul 2, 2020 at 7:51 PM Mark Dilger wrote:
> I expect these issues to be less than half what you would need to resolve,
> though much of the rest of it is less clear to me.
Thank you for this insightful input. I will think it over.
Mitar
ortant (not just the last state) because it
allows one to merge with a potentially changed local state in the web
app while it was offline. So in a way it is logical replication and
replay, but at the database-to-client level.
[1] https://eng.uber.com/postgres-to-mysql-migration/
Mitar
n’t just nice-to-have features.
Oh, I forgot about that. ctid is still just 32 bits? So then for such
table with permanent MVCC this would have to be increased, to like 64
bits or something. Then one would not have to do wrap-around
protection, no?
Mitar
have to run a custom version of PostgreSQL
or is this possible through an extension of sorts?
Mitar
Change by Mitar :
--
nosy: +mitar
___
Python tracker
<https://bugs.python.org/issue22848>
I can confirm this is not working on Bionic. vainfo output:
libva info: VA-API version 1.1.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_1
error: failed to resolve
des are able to utilize the full gigabit if such demand arises? So that
I can ensure that they are ready and available? And that there is not
some other bottleneck somewhere on the nodes themselves?
Mitar
all installations
there is (except for the wire format, which has limitations). So having
to map to it and back, without the developer having to think about it,
might be the best solution.
Mitar
se clients often have optimized JSON readers, which can beat any other
binary serialization format. In node.js, it is simply the fastest way
there is to transfer data:
https://mitar.tnode.com/post/in-nodejs-always-query-in-json-from-postgresql/
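The kind of query being described, where the server serializes rows to JSON so the client's optimized JSON parser does all the decoding, might look like this (table and column names are illustrative):

```sql
-- Aggregate the whole result set into a single JSON value;
-- coalesce covers the empty-result case.
SELECT coalesce(json_agg(t), '[]'::json) AS rows
FROM (SELECT id, data FROM my_table) AS t;
```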
Mitar
ear how to automatically parse that.
What I am missing is a way to automatically parse composite types.
Those are generally not completely arbitrary, but are defined by the
query, not by the data.
What would be the next step to move this further in some direction?
Mitar
many data types.
I think RowDescription should be extended to provide full recursive
metadata about all data types. That would be the best way to do it.
Mitar
ded to provide information for
composite types as well, recursively. In that way you would not even
have to go and fetch additional information from other types,
potentially hitting race conditions.
Mitar
message and then reading descriptions in pg_type table.
>
> Is there some other way to get full typing information of the result I
> am assuming is available to PostreSQL internally?
Mitar
ised
that it says "Many bioinformatics researchers use SQLite in this way."
With a limit of 2000 columns this is a very strange claim. I would love
to see a reference here and see how they do that. I might learn
something new.
Mitar
the main use case would be sub-sampling rows.
Mitar
On Thu, Oct 17, 2019 at 4:11 PM Donald Griggs wrote:
>
> So if character-separated values (CSV-ish) were originally your preferred
> import format, would using that format for the blob's work for you?
>
> E.g., Suppose you need to inde
d decoding cells over multiple rows?
Mitar
On Thu, Oct 17, 2019 at 3:38 PM Hick Gunter wrote:
>
> I have the impression that you still do not grasp the folly of a 100k column
> schema.
>
> See the example below, which only has 6 fields. As you can see, each field
> requires a
embed, that approach would be useful. Like composite value
types.
Mitar
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
t modifying the original data too
much. I do hear suggestions to do such a transformation, but that is
less ideal for our use case.
Mitar
Hi!
In that case we would have to define a standard BLOB storage format,
slightly defeating the idea of using SQLite to define such a standard,
future-proof format. :-)
Mitar
On Thu, Oct 17, 2019 at 11:19 AM Hick Gunter wrote:
>
> Since your data is at least mostly opaque in the
1 - 100 of 1570 matches