And "full legal name" How about my dad, whose full name was Dr. John
Michael Patrick Dennis Emmet O'Gorman, PhD. How many rules does that
break? I've fought many companies over that apostrophe in my life.
Governments tend to throw it away, but it's on my old passport and birth
certificate.
On Thu, Jun 14, 2018 at 8:58 AM, Richard Hipp wrote:
> ...
>
> So there you have it: If you want to harass someone by sending them
> thousands of subscription confirmations, there is now a website to
> assist you. Do we need any further evidence that the heart of man is
> deceitful above all
On Sun, May 13, 2018 at 9:01 AM, Dennis Clarke
wrote:
> On 05/13/2018 11:57 AM, Kevin O'Gorman wrote:
>
>> The arguments here are simplified
>>
>
>
> Will you stop top posting please?
>
> I am trying to follow along here about some x86 boxen stuff but
>
The arguments here are simplified, and assume some things that may or may
not be true. The server I keep in my garage has 16 real cores, 32
threads. More importantly, it uses DDR4 memory, which I think means there
are 4 channels to memory that can be used in parallel -- perhaps not on
exactly th
It's not clear to me why reads must be serialized at all. Maybe this could
be re-thought? Maybe there should be a way to tell SQLite that a certain
DB or table is to be read-only and unserialized?
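As a sketch of what I mean, assuming a database file named positions.db and a table named positions (both names are just for illustration): Python can already open a connection that promises not to write, even if that by itself doesn't remove the serialization.

    import sqlite3

    # Open the file read-only via a URI so SQLite knows no writes will happen.
    conn = sqlite3.connect("file:positions.db?mode=ro", uri=True)

    # Extra safety: reject any statement that would modify the database.
    conn.execute("PRAGMA query_only = ON")

    count, = conn.execute("SELECT count(*) FROM positions").fetchone()
    print(count)
    conn.close()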
On Sun, May 13, 2018 at 7:15 AM, Keith Medcalf wrote:
>
> Say Hi to Gene!
>
> https://en.wikipedi
Oops. Wrong list. Should go to a Django group. I noticed as soon as I
sent this. Please ignore.
On Fri, Apr 27, 2018 at 2:58 PM, Kevin O'Gorman
wrote:
> I've got a working site, but I made a copy of the database in order to do
> some development work.
> I've hit
I've got a working site, but I made a copy of the database in order to do
some development work.
I've hit a snag that looks like a problem in the data.
I've written a management command to show the problem:
from django.core.management.base import BaseCommand, CommandError
# Stuff for the library
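A minimal skeleton of this kind of management command, with a hypothetical app myapp and model Position standing in for the real names:

    from django.core.management.base import BaseCommand, CommandError

    from myapp.models import Position   # hypothetical app and model


    class Command(BaseCommand):
        help = "Show rows that exhibit the data problem"

        def handle(self, *args, **options):
            # Count rows whose parent reference is unexpectedly missing.
            bad = Position.objects.filter(parent__isnull=True).count()
            self.stdout.write("suspect rows: %d" % bad)
            if bad:
                raise CommandError("data problem detected")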
ot an index on ppos, then you will be wasting time
> >recreating
> >> the index for each query.
> >>
> >> You will probably need to increase the cache size beyond the paltry
> >> default in order for the entire btree structures to be cached in
> >RAM -
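In concrete terms the suggestion amounts to something like this, where the table name pos and the cache size are placeholders:

    import sqlite3

    conn = sqlite3.connect("positions.db")       # file name is illustrative

    # A negative cache_size is a size in KiB; this asks for roughly 2 GiB of page cache.
    conn.execute("PRAGMA cache_size = -2000000")

    # Build the index once, up front, instead of letting every query sort on ppos.
    conn.execute("CREATE INDEX IF NOT EXISTS idx_ppos ON pos(ppos)")
    conn.commit()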
On Sun, Nov 26, 2017 at 12:02 AM, Clemens Ladisch
wrote:
> Kevin O'Gorman wrote:
> > I wrote a super simple program to read the file and count how many
> > records are already there. I got impatient waiting for it so I killed
> > the process and added an output
On Sun, Nov 26, 2017 at 1:39 AM, Simon Slavin wrote:
>
>
> On 26 Nov 2017, at 3:13am, Kevin O'Gorman wrote:
> >
> > I've got a database of some 100 million records, and a file of just over
> > 300 thousand that I want represented in it. I wanted to check ho
I'm pretty new at SQLite, so this may seem obvious to you. Be kind.
I'm using Python on Ubuntu Linux 16.04 LTS, and the sqlite that is built
into Python. The database
is using WAL.
I've got a database of some 100 million records, and a file of just over
300 thousand that I want represented in it
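The check itself is simple in outline; a sketch, assuming a main table positions with a text column pos and a file with one position per line (all names hypothetical):

    import sqlite3

    conn = sqlite3.connect("game.db")            # illustrative file name
    conn.execute("CREATE TEMP TABLE wanted(pos TEXT PRIMARY KEY)")

    with open("new_positions.txt") as f:
        conn.executemany("INSERT OR IGNORE INTO wanted(pos) VALUES (?)",
                         ((line.strip(),) for line in f))

    already, = conn.execute(
        "SELECT count(*) FROM wanted WHERE pos IN (SELECT pos FROM positions)"
    ).fetchone()
    print("already present:", already)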
On Sat, Sep 30, 2017 at 11:41 PM, Clemens Ladisch
wrote:
> Kevin O'Gorman wrote:
> > my latest trial run ended with a segmentation fault
>
> Really a segmentation fault? What is the error message?
>
What such things always say: "segmentation fault (core dumped)"
[mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On
> Behalf Of Kevin O'Gorman
> Sent: Saturday, September 30, 2017 3:55 PM
> To: sqlite-users
> Subject: [sqlite] Seg fault with core dump. How to explore?
>
> > Here's my prime suspect: I'm usin
one of the two main
tables does contain primary keys (integer autoincrement primary key) of the
other.
I'm a little leery of switching on account of one crash, as it may well be
an overreaction.
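For context, the relationship between the two tables is roughly this (names and columns here are simplified stand-ins):

    import sqlite3

    conn = sqlite3.connect(":memory:")           # throwaway example
    conn.executescript("""
        CREATE TABLE positions(
            id  INTEGER PRIMARY KEY AUTOINCREMENT,
            pos TEXT UNIQUE
        );
        -- the other main table refers to positions by that autoincrement key
        CREATE TABLE moves(
            parent INTEGER,   -- positions.id of the position moved from
            child  INTEGER    -- positions.id of the position moved to
        );
    """)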
On Sat, Sep 30, 2017 at 4:30 PM, Simon Slavin wrote:
>
>
> On 30 Sep 2017, a
I'm testing new code, and my latest trial run ended with a segmentation
fault after about 5 hours.
I'm running Python 3.5 and its standard sqlite3 module on Xubuntu 16.04.3
LTS. The code is short -- about 300 lines.
This particular program is merging two databases. The result has reached
25 GB,
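One cheap way to get at least a Python-level traceback out of a crash like this is the standard faulthandler module; a minimal sketch (the file name is made up):

    import faulthandler
    import sqlite3

    # Print a Python traceback to stderr if the process dies on SIGSEGV and friends.
    faulthandler.enable()

    conn = sqlite3.connect("merged.db")          # illustrative file name
    # ... the long-running merge logic would go here ...
    conn.close()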
2017 at 10:38 AM, Richard Hipp wrote:
> On 2/21/17, Kevin O'Gorman wrote:
> > I'm not at all sure this is the right place to ask, but as it only comes
> up
> > when I'm running one of
> > my sqlite jobs, I thought I'd start here. I'm running Python
I'm not at all sure this is the right place to ask, but as it only comes up
when I'm running one of
my sqlite jobs, I thought I'd start here. I'm running Python 3.5 scripts
in Linux 16.04.1 using the sqlite3 package. Machine is Core i5, 32GB RAM.
Some of my stuff takes a while to run, and I like
t', 4, 14, 0, '', '00', None)
(20, 'Next', 3, 6, 0, '', '01', None)
(21, 'Close', 1, 0, 0, '', '00', None)
(22, 'Close', 3, 0, 0, '', '00', None)
(23, 'Close', 4, 0, 0, '', '00
On Wed, Feb 1, 2017 at 6:35 PM, Richard Hipp wrote:
> On 2/1/17, Kevin O'Gorman wrote:
> > I have a database of positions and moves in a strategic game, and I'm
> > searching for unsolved positions that have been connected to an immediate
> > ancestor. I
I have a database of positions and moves in a strategic game, and I'm
searching for unsolved positions that have been connected to an immediate
ancestor. I'm using Python 3.5.2, and the code looks like
#!/usr/bin/env python3
"""Output positions that are reachable but unsolved at census 18 or grea
On Sat, Jan 14, 2017 at 5:04 PM, Simon Slavin wrote:
>
> On 15 Jan 2017, at 1:01am, Kevin O'Gorman wrote:
>
> > Update: the integrity check said "ok" after about 1/2 hour.
> > the record count now takes about 4 seconds -- maybe I remembered wrong
> and
>
Update: the integrity check said "ok" after about 1/2 hour.
The record count now takes about 4 seconds -- maybe I remembered wrong and
it always took this long, but I wasn't stopping it until it had hung for
several minutes.
Color me baffled.
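For anyone reproducing this, the two operations amount to the following (file and table names illustrative):

    import sqlite3

    conn = sqlite3.connect("game.db")

    # Full structural check of the database; on an 11 GB file this can take a while.
    print(conn.execute("PRAGMA integrity_check").fetchone()[0])

    # The record count that was hanging earlier.
    print(conn.execute("SELECT count(*) FROM positions").fetchone()[0])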
On Sat, Jan 14, 2017 at 4:49 PM, Kevin
I've got a database that has acted strangely from time to time. Or
actually a series of them, since I erase and build from scratch sometimes,
as I'm just starting this project.
Anyway, the latest is that the DB is about 11 GB. It's pretty simple, just
2 main tables and maybe 5 indexes, no foreig
On Fri, Jan 13, 2017 at 3:34 AM, Clemens Ladisch wrote:
> Kevin O'Gorman wrote:
> > On Tue, Jan 10, 2017 at 11:29 PM, Clemens Ladisch
> wrote:
> >> Kevin O'Gorman wrote:
> >>> If I go on to the second table, it appears to finish normally, but
>
On Tue, Jan 10, 2017 at 7:52 PM, Simon Slavin wrote:
>
> On 11 Jan 2017, at 3:28am, Kevin O'Gorman wrote:
>
> > I have a modest amount of data that I'm loading into an SQLite database
> for
> > the first time. For the moment it contains just two tables and a f
On Tue, Jan 10, 2017 at 11:29 PM, Clemens Ladisch
wrote:
> Kevin O'Gorman wrote:
> > If I go on to the second table, it appears to finish normally, but when I
> > try to look at the database with sqlite3, a command-line tool for
> > interacting with SQLite, it sa
This is a problem I don't quite know how to report in a way that will be
useful.
I'm using Python 3.5 and its built-in sqlite3 package.
I have a modest amount of data that I'm loading into an SQLite database for
the first time. For the moment it contains just two tables and a few
indices, nothing
o me.
I just don't know how to bifurcate between the power supply and something
on the mobo USB interface. And it hasn't happened again.
On Wed, Dec 7, 2016 at 5:18 PM, Kevin O'Gorman
wrote:
> Good feedback. I haven't done a memory check on that machine in a
> while..
mory, just an undetected memory fault
> which will cause random AHTBL.
>
> > -Original Message-
> > From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> > On Behalf Of Kevin O'Gorman
> > Sent: Sunday, 4 December, 2016 09:21
> > To: SQLite ma
suspecting a partial power failure. I don't know enough about mobos
and USB to diagnose whether the problem was on the mobo or the power supply.
Creepy. I had to do a hard reset to get things going again, and it's been
running fine for a day now.
On Mon, Nov 21, 2016 at 9:51 AM, Kevi
On Mon, Nov 21, 2016 at 9:41 AM, Roger Binns wrote:
> On 19/11/16 08:08, Kevin O'Gorman wrote:
> > System with problems: Running Xubuntu Linux 16.04.1, Python 3.5.2.
> [...]
> > System without this problem: Running Ubuntu Linux 14.04.5, Python 3.4.3.
>
> You are good
On Fri, Nov 18, 2016 at 10:18 AM, James K. Lowden
wrote:
> On Fri, 18 Nov 2016 08:55:11 -0800
> "Kevin O'Gorman" wrote:
>
> > All of the python code is a single thread. The closest I come
> > is a few times where I use subprocess.Popen to create what amou
Ran Memtest86+ 5.01, two complete passes, with no errors.
On Sat, Nov 19, 2016 at 8:19 AM, Kevin O'Gorman
wrote:
>
>
> On Sat, Nov 19, 2016 at 8:11 AM, Simon Slavin
> wrote:
>
>>
>> On 19 Nov 2016, at 4:08pm, Kevin O'Gorman
>> wrote:
>>
>
On Sat, Nov 19, 2016 at 8:11 AM, Simon Slavin wrote:
>
> On 19 Nov 2016, at 4:08pm, Kevin O'Gorman wrote:
>
> > I have two different machines running this stuff. Only one is having the
> > seg faults, but they are different enough that this does not convince me
> &
On Fri, Nov 18, 2016 at 3:19 PM, James K. Lowden
wrote:
> On Fri, 18 Nov 2016 10:56:37 -0800
> Roger Binns wrote:
>
> > Popen calls fork (it seems like you are doing Unix/Mac, not Windows).
> > fork() duplicates the process including all open file descriptors.
> > One or more of those descriptor
On Fri, Nov 18, 2016 at 8:38 AM, Roger Binns wrote:
> On 17/11/16 19:14, Kevin O'Gorman wrote:
> > SO: I need help bifurcating this problem. For instance, how can I tell
> if
> > the fault lies in SQLite, or in python? Or even in the hardware, given
> that
> > th
On Fri, Nov 18, 2016 at 3:11 AM, Simon Slavin wrote:
> Forgot to say ...
>
> Most of these problems result from attempting to reuse memory you've
> already released. Even if the error is happening inside a SQLite routine,
> it will be because you passed it a pointer to an SQLite connection which
I ran this thing 3 times with identical inputs; it is deterministic, but it
failed after 66, 128, and 96 minutes respectively. Each run started with no
database at all and got a single input from which the rest is
calculated. The calculations are cached (in flat files), so it never
got to th
se or if
> it is just read-only. If the others only need read-only, let them access a
> copy of the database while you make your changes in another copy, then just
> swap the databases when done. -- Darren Duncan
>
>
> On 2016-10-15 1:21 PM, Kevin O'Gorman wrote:
>
>>
I'm new to this, and working in Python's sqlite3. So be patient, and don't
expect me to know too much. This is also a personal hobby, so there's
nobody else for me to ask.
I've got a database of some tens of millions of positions in a board
game. It may be over a billion before I'm done (work
TE
transactions. They're quick, but conflicts are fairly likely, so it's
probably the right solution.
On Tue, Sep 20, 2016 at 3:02 PM, Richard Hipp wrote:
> On 9/20/16, Kevin O'Gorman wrote:
> > Surely, Mr. Hipp is an authority, but I'm slightly puzzled by this
>
habit of always doing isolation_level = None, and doing everything
> explicitly, but as long as you know what's going on then you're good.
>
>
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of Kevin O'
ptions?
On Tue, Sep 20, 2016 at 10:09 AM, Richard Hipp wrote:
> On 9/20/16, Kevin O'Gorman wrote:
> > c.execute("BEGIN TRANSACTION IMMEDIATE")
> >
> > and is IMMEDIATE the right thing, or do I need EXCLUSIVE.
>
> IMMEDIATE is the right thing. That le
I think I understand the basics of SQL and ACID properties, but I'm new to
SQLite and not really experienced in any of these. So I'm having some
trouble figuring out the detailed consequences of IMMEDIATE, EXCLUSIVE and
DEFERRED and the autocommit mode of Python's sqlite3.
I expect my transaction
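For reference, the explicit pattern under discussion looks like this in Python's sqlite3, with autocommit disabled so BEGIN/COMMIT are entirely in my hands (file, table, and column names are stand-ins):

    import sqlite3

    # isolation_level=None keeps the sqlite3 module from issuing BEGINs on its own.
    conn = sqlite3.connect("game.db", isolation_level=None)

    pos = "x" * 64                                   # stand-in for a real position string

    conn.execute("BEGIN IMMEDIATE")                  # take the write lock up front
    try:
        conn.execute("UPDATE positions SET solved = 1 WHERE pos = ?", (pos,))
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise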
It seems to me that the simplest, most portable approach for this sort of
thing would be to have the SELECT create a temporary table of the desired
actions, and not apply them until after the select has concluded. This
would work in any database -- it does not depend on precise semantics of
WAL,
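Roughly what I have in mind, in Python against a hypothetical positions table (the queued column is equally made up):

    import sqlite3

    conn = sqlite3.connect("game.db")            # illustrative file name

    # Phase 1: while the big SELECT runs, only record what should change.
    conn.execute("""
        CREATE TEMP TABLE todo AS
        SELECT pos FROM positions WHERE census >= 18 AND solved IS NULL
    """)

    # Phase 2: apply the changes only after the read has finished.
    with conn:                                   # wraps the update in a transaction
        conn.execute("UPDATE positions SET queued = 1 "
                     "WHERE pos IN (SELECT pos FROM todo)")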
On Wed, Aug 10, 2016 at 6:50 AM, Jonathan Moules <
jonathan-li...@lightpear.com> wrote:
> Hi List,
>I'm using Python's sqlite3 library to access a SQLite db. I'd like to
> set the location for the temporary databases in a platform agnostic fashion
> (*nix or Windows).
>
> This page - https://w
On Sun, Aug 7, 2016 at 11:11 PM, Dan Kennedy wrote:
> On 08/08/2016 02:03 AM, Dominique Pellé wrote:
>
>> Kevin O'Gorman wrote:
>>
>> CREATE INDEX has two problems:
>>> 1) poor default location of temporary storage.
>>> 2) gets wedged on very large
On Mon, Aug 8, 2016 at 2:41 AM, Philip Newton
wrote:
> On 7 August 2016 at 22:37, Kevin O'Gorman wrote:
> > I use the LTS (long-term support) version of Ubuntu, and like not having
> to
> > keep up with all the latest. My current 14.04 is at end-of-life
>
> LTS ar
On Sun, Aug 7, 2016 at 12:03 PM, Dominique Pellé
wrote:
> Kevin O'Gorman wrote:
>
> > CREATE INDEX has two problems:
> > 1) poor default location of temporary storage.
> > 2) gets wedged on very large indexes.
> >
> > I'm using the sqlite that
umented. I just guessed it since the sort utility honors it and I
thought it was possible sort was being used under the covers. It's not,
but it all worked out okay.
Does anybody know where the actual defaults and controlling environment
variables are documented, by operating system? Or
On Sat, Aug 6, 2016 at 2:49 PM, Kevin O'Gorman
wrote:
> On Sat, Aug 6, 2016 at 2:09 AM, Dan Kennedy wrote:
>
>> On 08/06/2016 09:52 AM, Kevin O'Gorman wrote:
>>
>>> On Fri, Aug 5, 2016 at 2:03 PM, Dan Kennedy
>>> wrote:
>>>
>>>
On Sat, Aug 6, 2016 at 2:09 AM, Dan Kennedy wrote:
> On 08/06/2016 09:52 AM, Kevin O'Gorman wrote:
>
>> On Fri, Aug 5, 2016 at 2:03 PM, Dan Kennedy
>> wrote:
>>
>> On 08/06/2016 03:28 AM, Kevin O'Gorman wrote:
>>>
>>>
On Fri, Aug 5, 2016 at 2:03 PM, Dan Kennedy wrote:
> On 08/06/2016 03:28 AM, Kevin O'Gorman wrote:
>
>> On Fri, Aug 5, 2016 at 1:08 PM, David Raymond
>> wrote:
>>
>> ..
>>
>> Apart from the default location of the files, it reads like your n
On Fri, Aug 5, 2016 at 3:03 PM, Darren Duncan
wrote:
> On 2016-08-04 7:27 AM, Jim Callahan wrote:
>
>> Steps
> >> Agree with Darren Duncan and Dr. Hipp: you may want to have at least 3
>> separate steps
>> (each step should be a separate transaction):
>>
>> 1. Simple load
>> 2. Create additional col
untested error, handled poorly.
>
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of Kevin O'Gorman
> Sent: Friday, August 05, 2016 3:41 PM
> To: SQLite mailing list
> Subject: Re: [sqlite] newbie has w
On Fri, Aug 5, 2016 at 12:30 PM, Igor Korot wrote:
> Hi, Kevin,
>
> On Fri, Aug 5, 2016 at 3:18 PM, Kevin O'Gorman
> wrote:
> > Okay, I followed some of the advice y'all gave and got some results.
> >
> > 1. The original problem was compromised by malfor
CREATE INDEX has two problems:
1) poor default location of temporary storage.
2) gets wedged on very large indexes.
I'm using the sqlite that came with Xubuntu 14.04, I think it's version
3.8.2.
I created a table, and used .import to populate it with records, about 1.4
billion of them. The resul
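One workaround for problem 1 is to point SQLite's temporary files at a filesystem with room before running CREATE INDEX; a sketch, under the assumption that /data/tmp has the space (all names illustrative):

    import os
    import sqlite3

    # SQLITE_TMPDIR is consulted when SQLite creates its temporary files on Unix,
    # so set it before the big CREATE INDEX starts.
    os.environ["SQLITE_TMPDIR"] = "/data/tmp"    # illustrative path with plenty of space

    conn = sqlite3.connect("positions.db")       # illustrative file name
    conn.execute("PRAGMA cache_size = -1000000") # roughly 1 GiB of page cache
    conn.execute("CREATE INDEX IF NOT EXISTS idx_ppos ON pos(ppos)")
    conn.commit()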
few hours I killed it manually. This is a deal-killer.
So the questions are: Where do bug reports go? I seem to be running 3.8.2;
is this fixed in any later version?
On Thu, Aug 4, 2016 at 9:27 AM, Kevin O'Gorman
wrote:
> The metric for feasibility is coding ease, not runtime. I'
9 AM, R Smith wrote:
>
>
> On 2016/08/04 5:56 PM, Kevin O'Gorman wrote:
>
>> On Thu, Aug 4, 2016 at 8:29 AM, Dominique Devienne
>> wrote:
>>
>>
>> It's even less dense than that. Each character has only 3 possible
>> values,
>> and th
On Thu, Aug 4, 2016 at 8:29 AM, Dominique Devienne
wrote:
> On Thu, Aug 4, 2016 at 5:05 PM, Kevin O'Gorman
> wrote:
>
> > 3. Positions are 64 bytes always, so your size guesses are right. They
> are
> > in no particular order. I like the suggestion of a separat
" does not complete; then may want to load a fixed
> number of rows into separate tables (per Darren Duncan) and then combine
> using an APPEND
> or a UNION query (doing so before steps 2 and 3).
>
> HTH
>
> Jim Callahan
> Data Scientist
> Orlando, FL
>
>
>
>
I'm working on a hobby project, but the data has gotten a bit out of hand.
I thought I'd put it in a real database rather than flat ASCII files.
I've got a problem set of about 1 billion game positions and 187GB to work
on (no, I won't have to solve them all) that took about 4 hours for a
generato
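For scale, a loader of this sort typically batches inserts into explicit transactions rather than inserting one row at a time; a sketch with a simplified one-column schema and made-up file names:

    import sqlite3

    conn = sqlite3.connect("positions.db")       # illustrative
    conn.execute("PRAGMA journal_mode = WAL")
    conn.execute("CREATE TABLE IF NOT EXISTS positions(pos TEXT PRIMARY KEY)")

    def batches(lines, size=100000):
        """Yield lists of single-column rows, `size` rows at a time."""
        batch = []
        for line in lines:
            batch.append((line.rstrip("\n"),))
            if len(batch) == size:
                yield batch
                batch = []
        if batch:
            yield batch

    with open("positions.txt") as f:             # one 64-character position per line
        for batch in batches(f):
            with conn:                           # one transaction per batch
                conn.executemany("INSERT OR IGNORE INTO positions(pos) VALUES (?)", batch)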