On Thu, 01 Jun 2006 23:08:13 -0500, Kareem Badr <[EMAIL PROTECTED]>
wrote:
> Does anyone know what's going on here? Is this just user error?
It's just integer division.
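[Not part of the original thread: a minimal sketch of the integer-division behavior using Python's sqlite3 module. The literal operands are made up for illustration; the point is that SQLite truncates when both operands are integers.]

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Two integer operands: SQLite truncates toward zero (integer division).
int_result = con.execute("SELECT 7 / 2").fetchone()[0]
print(int_result)   # 3

# Promote one operand to REAL to get a fractional result.
real_result = con.execute("SELECT 7 * 1.0 / 2").fetchone()[0]
print(real_result)  # 3.5
```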
D:\sqlite>sqlite3 :memory:
SQLite version 3.3.4
Enter ".help" for instructions
sqlite> create table bob (f1 varchar, f2 varchar
I tried that, but the fields ended up getting set to '', rather than 0.
The fields are defined as varchar, remember.
*But*, I don't think the NULL values are causing the problems, because
the query runs fine when more than 1 row was returned. And my query
actually returned all the rows, includ
Kareem Badr wrote:
> Bill KING wrote:
>> Kareem Badr wrote:
>>
>>> I should mention that free_space and size are defined as varchars, for
>>> some reason. Not sure if that has anything to do with the issues I'm
>>> seeing.
>>>
>>>
>> Do any of these fields contain NULL? That will definitely
Bill KING wrote:
Kareem Badr wrote:
I should mention that free_space and size are defined as varchars, for
some reason. Not sure if that has anything to do with the issues I'm
seeing.
Do any of these fields contain NULL? That will definitely have an
impact upon both joins and upon c
Kareem Badr wrote:
> I should mention that free_space and size are defined as varchars, for
> some reason. Not sure if that has anything to do with the issues I'm
> seeing.
>
Do any of these fields contain NULL? That will definitely have an
impact upon both joins and upon calculations. (when usi
I should mention that free_space and size are defined as varchars, for
some reason. Not sure if that has anything to do with the issues I'm
seeing.
Yet, the following query:
SELECT d.name, disks.name, sum(free_space), sum(size) FROM devices d
LEFT OUTER JOIN disks ON (d.id=disks.computer_id AND disks.name='G:')
WHERE d.name='kensho';
Gives the following result:
kensho|G:|206140624896|237003182080
Which I can use to calculate the percen
This is probably my own SQL rustiness, but can someone explain this to me?
If I run this query:
SELECT d.name, disks.name, free_space/size FROM devices d LEFT OUTER
JOIN disks ON (d.id=disks.computer_id AND disks.name='G:') WHERE
d.name='kensho' ;
I get the following result:
kensho|G:|0.8697
Not the question. You probably chose to store it in an SQL DBMS so that
you can use the DML to access the data in some way. Do you want to
access individual array rows, individual array elements or just store
and retrieve the entire array?
jt wrote:
From pysqlite http://initd.org/tracker/py
Steve Green <[EMAIL PROTECTED]> wrote:
> the unary + operator in front of the two utime terms did cause the optimal
> index to be used... Can you explain why this works?
A unary + in front of a column name disqualifies the term from
consideration by the optimizer, and hence from use by indices.
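[A sketch of the unary-+ trick, using the schema and indices from Steve Green's post. EXPLAIN QUERY PLAN output varies across SQLite versions, so the example only demonstrates that the + changes what the optimizer may use, never the result set; the sample rows are invented.]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE data(utime INT, r_id INT, u_id INT, data1 INT, data2 INT);
    CREATE UNIQUE INDEX pk_data ON data(utime, r_id, u_id);
    CREATE INDEX ix_data_ut ON data(u_id, utime);
    INSERT INTO data VALUES (100, 1, 1, 10, 20), (200, 1, 1, 30, 40);
""")

# Plain column reference: the optimizer is free to pick pk_data for the range.
plan_plain = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM data WHERE utime >= 100 AND utime < 300"
).fetchall()

# Unary + hides the column from the optimizer, steering it away from pk_data.
plan_unary = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM data WHERE +utime >= 100 AND +utime < 300"
).fetchall()

# The trick changes the plan, not the answer.
rows_plain = con.execute(
    "SELECT * FROM data WHERE utime >= 100 AND utime < 300").fetchall()
rows_unary = con.execute(
    "SELECT * FROM data WHERE +utime >= 100 AND +utime < 300").fetchall()
print(rows_plain == rows_unary)  # True
```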
Th
Steve Green <[EMAIL PROTECTED]> wrote:
> CREATE UNIQUE INDEX pk_data on data( utime, r_id, u_id );
> ...
> CREATE INDEX ix_data_ut on data( u_id, utime );
Wouldn't a unique index on (u_id, utime, r_id) get you
the pk constraint and better performance on your query
with no other index to confuse t
Richard,
Thanks for the reply. I did run ANALYZE and that didn't help. However,
the unary + operator in front of the two utime terms did cause the optimal
index to be used... Can you explain why this works?
I also realized that if I change the "where" clause to something like
where u_id >= 0 a
Steve Green <[EMAIL PROTECTED]> wrote:
[...]
>
> select u_id, sum( data1 ), sum( data2 )
> from data where utime >= 1146441600 and utime < 114912
> group by u_id
> order by sum( data1 ) desc
> limit 10
>
[...]
>
> So, is there a way that I can get sqlite to use the optimal index
> without ha
Sorry, I forgot to mention that I'm using sqlite v3.3.4 on redhat linux
v7.3
Steve
Steve Green wrote:
Hi,
I have the following schema
CREATE TABLE data(
utime int4,
r_id int2,
u_id int4,
data1 int8,
data2 int8
);
Each row is uniquely defined by utime, r_id, and u_id, so
Hi,
I have the following schema
CREATE TABLE data(
utime int4,
r_id int2,
u_id int4,
data1 int8,
data2 int8
);
Each row is uniquely defined by utime, r_id, and u_id, so I have the
following index
CREATE UNIQUE INDEX pk_data on data( utime, r_id, u_id );
This index is also
From pysqlite http://initd.org/tracker/pysqlite.
On 6/2/06, John Stanton <[EMAIL PROTECTED]> wrote:
Python newsgroup wrote:
> Hi,
>
> What is the most efficient way to enter python binary data such as lists or
> dictionaries in to sqlite? Has anyone had any experiences with this? We
> will
> be
Jay Sprenkle schreef:
On 5/31/06, René Tegel <[EMAIL PROTECTED]> wrote:
Eugene Wee schreef:
>
> Oh, and if one's email address is [EMAIL PROTECTED], according to
> one of the first few emails I received from the mail manager, another
> way to unsubscribe is to send an email to:
> [EMAIL PROTECTE
Python newsgroup wrote:
Hi,
What is the most efficient way to enter python binary data such as lists or
dictionaries in to sqlite? Has anyone had any experiences with this? We
will
be inserting a list of lists of integers into our database.
For example:
[[1,2,3],[1,4,6],[1,1,1],[2,4,6],[12,32,
Try the pickle module, it dumps nearly any datastruct (except file
object) in a string.
You can load it back afterwards.
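[A sketch of the pickle suggestion with today's sqlite3 module, which accepts bytes directly as a BLOB (the 2006-era pysqlite needed a buffer() wrapper). The table name "arrays" is invented for the example.]

```python
import pickle
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE arrays (id INTEGER PRIMARY KEY, payload BLOB)")

data = [[1, 2, 3], [1, 4, 6], [1, 1, 1], [2, 4, 6]]

# Serialize the whole structure to a byte string and store it as a BLOB.
con.execute("INSERT INTO arrays (payload) VALUES (?)",
            (pickle.dumps(data),))

# Load it back; the round trip reproduces the original object.
blob = con.execute("SELECT payload FROM arrays").fetchone()[0]
restored = pickle.loads(blob)
print(restored == data)  # True
```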
On 6/1/06, Python newsgroup <[EMAIL PROTECTED]> wrote:
Hi,
What is the most efficient way to enter python binary data such as lists or
dictionaries in to sqlite? Has anyone h
Hi,
What is the most efficient way to enter python binary data such as lists or
dictionaries in to sqlite? Has anyone had any experiences with this? We will
be inserting a list of lists of integers into our database.
For example:
[[1,2,3],[1,4,6],[1,1,1],[2,4,6],[12,32,4],...,[1,3,4]]
Any sugges
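[The pickle approach stores the list opaquely; if the individual elements need to be queryable, the alternative is one row per inner list via executemany. A sketch, with an invented table name "triples":]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE triples (a INT, b INT, c INT)")

rows = [[1, 2, 3], [1, 4, 6], [1, 1, 1], [2, 4, 6], [12, 32, 4], [1, 3, 4]]

# executemany binds each inner list as one row, all in a single transaction.
con.executemany("INSERT INTO triples VALUES (?, ?, ?)", rows)
con.commit()

count = con.execute("SELECT count(*) FROM triples").fetchone()[0]
print(count)  # 6
```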
Don't know if this helps:
http://support.microsoft.com/kb/Q296264
Looks like you need to turn optimistic locking off. Same is true with MS
Access if the file is located on a share.
http://support.microsoft.com/default.aspx?scid=KB;EN-US;q300216&;
>// Try and lock the file for writing
>ULARGE_INTEGER bytesToLock.QuadPart = 2147483648; // 2^31
>if (LockFile(h, 0, 0, bytesToLock.LowPart,
>bytesToLock.HighPart) == 0) { ...
>}
Sorry, finger trouble :)
Should read:
// Try and lock the file for writing
ULARGE_INTEGER bytesToLock;
bytesToLock.QuadPart = 2147483648; // 2^31
Hi,
This is more of a heads up than anything else. I'm developing an app
that writes to files using the Windows API. I actually used some code
from the SQLite file library hence this message.
The code snippet is:
// Try to open existing file
h = CreateFileA(path,
GENERIC_WRITE,
0,
NULL,
CREATE_
Nikki Locke <[EMAIL PROTECTED]> wrote:
>
> Given that I am using SQLite.Net, if I execute this pragma as the first
> command
> passed over the connection, before creating any tables, will it work? Or do I
> really have to execute it "before creating the database"?
I think you're confusing creatin
On Thu, 2006-06-01 at 09:28 -0700, Richard Battagline wrote:
> How do you unsubscribe?
>
Send an email to [EMAIL PROTECTED] which will return all the
information about the list or check your sqlite mailbox for the welcome
message.
--
G. Roderick Singleton <[EMAIL PROTECTED]>
PATH tech
Hi Richard,
Send an email to [EMAIL PROTECTED]
Actually, I sent an email to:
[EMAIL PROTECTED]
Hopefully it has resulted in an unsubscribe confirmation message sent to you,
and replying to that email should unsubscribe you from this list.
Regards,
Eugene Wee
Richard Battagline wrote:
How d
On 6/1/06, Richard Battagline <[EMAIL PROTECTED]> wrote:
How do you unsubscribe?
Send an email to [EMAIL PROTECTED]
--
Nemanja Corlija <[EMAIL PROTECTED]>
Richard Battagline <[EMAIL PROTECTED]> wrote:
> How do you unsubscribe?
Read the headers of any message from the group.
I was reading sqlite3_open documentation earlier this week and noticed
that the docs say:
"The returned sqlite3* can only be used in the same thread in which it
was created. It is an error to call sqlite3_open() in one thread then
pass the resulting database handle off to another thread to use. Th
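[Python's sqlite3 module enforces the same thread-affinity rule by default (check_same_thread=True), so the documented error is easy to reproduce. A sketch, not part of the original thread:]

```python
import sqlite3
import threading

con = sqlite3.connect(":memory:")  # check_same_thread=True by default
errors = []

def use_elsewhere():
    # Using the handle from a second thread is rejected by the wrapper.
    try:
        con.execute("SELECT 1")
    except sqlite3.ProgrammingError as exc:
        errors.append(exc)

t = threading.Thread(target=use_elsewhere)
t.start()
t.join()
print(len(errors))  # 1: the handle refused to work in another thread
```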
How do you unsubscribe?
That seems to work perfectly.
How is the performance, compared to changing the queries to check for
NULL explicitly?
I usually use the coalesce() function to fix this. For
example:
SELECT * FROM table WHERE coalesce(test_field,'') != 'test value';
The "coalesce(test_field,'')" will conve
"Peter Cunderlik" <[EMAIL PROTECTED]> wrote:
> Hello everybody,
>
> I'd like to ask a couple of silly questions before newcomers like me
> get moderated. :-) I've browsed through the documentation and this
> mailing list, trying to understand issues with multithreading. I'd
> like if someone could
Kareem Badr <[EMAIL PROTECTED]> wrote:
> I read the section on the SQLite website regarding the inconsistencies
> of how NULL values are handled in various databases. I didn't see
> anything that mentioned what I am running into, though.
>
> In a nutshell, the following queries do not return row
D. Richard Hipp wrote:
> 3.2.2 will refuse to read a database file created by 3.3.5
> unless version 3.3.5 was compiled with -DSQLITE_DEFAULT_FILE_FORMAT=1
> or the "PRAGMA legacy_file_format=ON" pragma is used prior to creating
> the database. But if the database is created in a way that 3.2.2
On 6/1/06, Kareem Badr <[EMAIL PROTECTED]> wrote:
I read the section on the SQLite website regarding the inconsistencies
of how NULL values are handled in various databases. I didn't see
anything that mentioned what I am running into, though.
In a nutshell, the following queries do not return ro
The behavior you are seeing is what is expected. NULL only ever matches
the IS NULL criterion. It doesn't pass equality, inequality, greater-than,
or less-than tests.
The best option is to define the field as NOT NULL. NULLs cause all
manner of trouble, and the best option is usually to avoid de
I read the section on the SQLite website regarding the inconsistencies
of how NULL values are handled in various databases. I didn't see
anything that mentioned what I am running into, though.
In a nutshell, the following queries do not return rows when test_field
is NULL. It seems counter-int
Hello everybody,
I'd like to ask a couple of silly questions before newcomers like me
get moderated. :-) I've browsed through the documentation and this
mailing list, trying to understand issues with multithreading. I'd
like if someone could confirm my conclusions.
1. As of SQLite 3.3.5, there i
yuyen wrote:
> And the error message is:
> "execute err: SQLSTATE[HY000]: General error: 8 attempt to write a
> readonly database"
>
> So the problem is why the database becomes read only.
>
This time, you get another message :)
Are you sure the Web Server user (Apache or...) has write access to yo
Hi, DJ
I try the following code:
--
try{
$dsn = 'sqlite:counter/cntdata.sq3';
$myConn = new PDO($dsn);
$myConn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$j = $myConn->exec('update test_item set balqty = 100');
}catch (PDOException $e){
die('execute err: '.$e->getMessage());
}
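[The "attempt to write a readonly database" error is easy to reproduce outside PHP. A sketch using Python's sqlite3 with a read-only URI open (Python 3.4+), which fails a write the same way a permissions problem on the web server would; the table and column names follow the thread:]

```python
import os
import sqlite3
import tempfile

# Create a database file with one table.
path = os.path.join(tempfile.mkdtemp(), "cntdata.sq3")
con = sqlite3.connect(path)
con.execute("CREATE TABLE test_item (balqty INT)")
con.commit()
con.close()

# Reopen read-only; any write now raises the familiar error.
ro = sqlite3.connect("file:%s?mode=ro" % path, uri=True)
message = ""
try:
    ro.execute("UPDATE test_item SET balqty = 100")
except sqlite3.OperationalError as exc:
    message = str(exc)
print(message)  # attempt to write a readonly database
```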
"Anthireddy, Srinivasreddy IN BLR SISL" <[EMAIL PROTECTED]> wrote:
> Hello,
>
> This is Srinivas Reddy A, working for SIEMENS INDIA.
>
> I wrote one application to take the performance measurements in C++
> using sqlite_open(), sqlite_exec(), sqlite_close() APIs how you people
> given one
>Are you volunteering for the human moderator job?
>I'm sure that Dr. Richard Hipp's time is worth much more doing
>something (perhaps anything) other than moderating all messages posted
>here. I also think any human moderator will produce long delays in
>posting some messages while they wait fo
yuyen wrote:
> HI, JC
> I use php_do_sqlite with PHP 5.1.4 and Sqlite 3.2.8 on Windows XP. The
> following is my coding:
>
> try{
> $dsn = 'sqlite:counter/cntdata.sq3';
> $myConn = new PDO($dsn);
> $myConn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
>
> $myPre = $myConn->prepare('upd
This one time, at band camp, "yuyen" <[EMAIL PROTECTED]> wrote:
> HI, JC
> I use php_do_sqlite with PHP 5.1.4 and Sqlite 3.2.8 on Windows XP. The
> following is my coding:
-8< snip
try {
$dbh = new PDO("sqlite:counter/cntdata.sq3");
/*** echo a message s