As documented, recursive table expressions treat all columns as varchar. Is
there any way to put the required casts in a subexpression, rather than
repeating them throughout the expression? Seems like it needs another table
expression, but there doesn't appear to be a way to nest them.
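One partial workaround, sketched with made-up names under the assumption that the casts are only needed outside the recursion: wrap the recursive expression once and cast in the outer select. It doesn't help with casts inside the recursive member itself, which may be exactly where you need them.

```sql
WITH RECURSIVE t(n) AS (
    SELECT '1'
    UNION ALL
    SELECT n + 1 FROM t WHERE n < 10
)
-- cast once here, rather than at every use of t.n downstream
SELECT CAST(n AS INT) AS n FROM t
```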
For the data file problem: got the same result. The sampler in visualvm still
shows all the time being consumed in PageStore.getPage.
Whatever the problem, it seems to be dramatically worse in 1.4. ;)
On Sunday, December 11, 2016 at 12:13:09 PM UTC-8, Brian Craft wrote:
How can I select pageStore in 1.4? I haven't found the option in the docs.
If I've written the file with 1.3, perhaps 1.4 will find an engine setting
in the db?
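For what it's worth, in 1.4 the storage engine can, if I remember correctly, be chosen in the database URL, and an existing PageStore file should then be opened as such:

```
jdbc:h2:/some/file/path;MV_STORE=FALSE
```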
So I understand, is getPage() fetching from disk? Using a profiler,
getPage() is the top hot-spot. I'm not sure why the profiles
Also, if I set MVCC=TRUE instead, the behavior is the same except the lock
is in Command.executeQuery.
On Saturday, December 10, 2016 at 1:17:39 PM UTC-8, Brian Craft wrote:
Using an older version, 1.3.175, PageStore engine, with
MULTI_THREADED=TRUE, when multiple threads are running read queries, most
end up blocked in getPage.
iostat shows the disk never above 1% usage, r/s stuck at 0, cpu 95% idle,
and java thread dump shows most everything blocked in getPage.
Is there a way to parameterize the values in WHERE ... IN? Like IN (?).
I'm trying this, passing in an array of String, [Ljava.lang.String, but it
returns zero rows. Also tried passing [Ljava.lang.Object, with the same
result.
--
You received this message because you are subscribed to the Google Groups H2
Database group.
where t.id = z.id
and pass the array as parameter.
But this question raises up often and I believe we can fix this relatively
easy by allowing `IN ?` expression where parameter is expected to be an
array. Thoughts?
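Until an `IN ?` form exists, the TABLE-function trick from the performance docs can be sketched like this (the table name t and its columns are hypothetical):

```sql
-- on the JDBC side: prep.setObject(1, new Object[] { "a", "b", "c" });
SELECT t.*
FROM TABLE(x VARCHAR = ?) p
INNER JOIN t ON t.id = p.x
```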
Sergi
2015-06-14 19:55 GMT+03:00 Brian Craft craft...@gmail.com
Is there any way to make use of a case sensitive index when doing a case
insensitive query?
I can, in fact, 'unroll' the matches if they're not very long, like
select * where x = 'foo' union
select * where x = 'foO' union
select * where x = 'fOo' union ...
which is much, much faster than doing
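An alternative to unrolling, assuming adding a column is acceptable (all names here are hypothetical): keep a case-folded computed column, index that, and query it instead of x.

```sql
ALTER TABLE t ADD COLUMN x_upper VARCHAR AS UPPER(x);
CREATE INDEX idx_x_upper ON t(x_upper);
-- case-insensitive lookup that can use the index:
SELECT * FROM t WHERE x_upper = UPPER('foo');
```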
I'm at a loss to explain this behavior. Wondering if anyone else has an
idea. The app has a swing ui, and h2 with c3p0. On occasion the h2 jobs
start hanging, and remain stuck until the ui window takes focus. After it
takes focus, things are fine until much later, when the condition recurs.
IllegalArgumentException("Non-empty directory: " + file.getFileName());
return file;
The problem seems to be that there are directories that are not empty.
But I don't think that's from H2. What is the complete exception message?
Regards,
Thomas
On Thu, Dec 4, 2014 at 6:10 PM, Brian Craft
The h2 jar doesn't have entries for directories. This seems to be causing
problems with down-stream build tools. Building an uber-jar, then passing
through pack200, it throws a non-empty directory exception. I suspect
it's the uber-jar build that is broken, but am also wondering why the h2
I haven't tried to reproduce this, but posting in case anyone knows
off-hand what happened.
I have an app that runs h2 in embedded mode, and I did the following steps
(reconstructed from memory shell log):
1) start app with AUTO_SERVER in the db flags, load a bunch of data
2) start h2 console
DriverManager.getConnection("jdbc:default:connection")
But note that you'll have to enable this feature by adding
;DEFAULT_CONNECTION=TRUE to your H2 URL.
On 2014-08-18 08:31 PM, Brian Craft wrote:
Managing prepared statements, which are allocated per-connection.
On Sunday, August 17, 2014 11:22:38 PM UTC-7, Noel Grandin wrote:
Your question has already been answered, if you read the linked docs
carefully, but why exactly are you trying to get
the original connection anyway?
I'm doing a select like the following, with a user function B:
SELECT B1 FROM (SELECT B(1, x) AS B1, B(2, x) AS B2 FROM system_range(0,
389)) WHERE (B2 IN (?,?,?...))
B is getting called about 130k times. There are about 680 items in the IN
clause. So, it gets called a number of times equal to
- 0.1)) IS ?1: B IN(3, 4, 5, 6, 7)
*/
/* scanCount: 6 */
WHERE B IN(3, 4, 5, 6, 7)
Is that saying it scanned 108 rows total? Or is it 102*6?
On Friday, August 15, 2014 10:06:24 AM UTC-7, Brian Craft wrote:
also
http://docs.oracle.com/javadb/10.5.1.1/devguide/cdevspecial29620.html
Regards,
Thomas
On Tue, Aug 12, 2014 at 1:30 AM, Brian Craft craft...@gmail.com wrote:
Opening a connection, I get an object like
JdbcConnection conn4: url=jdbc:h2:/some/file/path user=
but my
On Monday, July 14, 2014 5:13:26 AM UTC-7, Roger Thomas wrote:
H2 does not currently support cursors, but it does support variables so
rather than use a subselect you could write your example as
SET @average = (SELECT AVG(i) FROM TABLE(i INT = (1,2,3,4)));
SELECT i - @average AS result FROM TABLE(i INT = (1,2,3,4));
I suspect this is more of a sql question, but is there a way to get a
function of an aggregate and a row, like removing the mean from the value
of a column?
This throws an error, saying i must be in a group by:
select i - avg(i) from table (i int = (1,2,3,4))
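For reference, the scalar-subquery form of the same query avoids the group-by error, at the cost of repeating the table expression:

```sql
SELECT i - (SELECT AVG(i) FROM TABLE(i INT = (1,2,3,4))) AS result
FROM TABLE(i INT = (1,2,3,4))
```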
group by returns the average
where n < 10) select
avg(n) from T
On Sunday, July 13, 2014 9:01:31 AM UTC-7, Brian Craft wrote:
I haven't tried 1.4, yet. In 1.3 it really looks like a row size issue to
me. Given the same amount of underlying data, splitting it across rows of
different lengths gives very odd results. I would expect that the overhead
of storing rows would go down as the number of rows goes down (with
Trying to better understand how implicit joins work, I tried this:
select * from table(a int = (1,2,3)), table(b int = (3,4,5))
which gives different results than
select * from (select * from table(a int = (1,2,3))), (select * from
table(b int = (3,4,5)))
Not affecting anything, but it was
Is there any way to test a parameterized query from the console?
select * from foo where x = ?
and pass in the parameter somehow? Or do I have to replace all the
parameters in-place?
--
You received this message because you are subscribed to the Google Groups H2
Database group.
To
Heh... Or a possibly simpler explanation: is 400 bytes the minimum overhead
of a row in h2, or related in an unfortunate way to a block size? I just
found that the 2x problem goes away when I change it to 4000.
On Friday, June 20, 2014 9:20:04 PM UTC-7, Brian Craft wrote:
Following up on the large db files, I ran a few tests, loading into this
table:
CREATE TABLE IF NOT EXISTS `scores` (`id` INT NOT NULL PRIMARY KEY,`scores`
VARBINARY(400) NOT NULL)
I varied the settings of LOG, and UNDO_LOG, used one csvread or a sequence
of INSERT statements, inside a
Regards,
Thomas
On Wednesday, June 4, 2014, Brian Craft craft...@gmail.com wrote:
It's 1.3. I saw the earlier thread about MVStore disk space, which is a
concern because the data inflation in PageStore is already very large. I
posted an example some months ago. I have a branch
Is there a reliable way to stop the db when it's stuck on a data load
that's taking too long? Very frequently it becomes unresponsive to ^C, or
other signals. kill -9 is the only thing I've found that works, which
leaves the db in a difficult state.
This happens with LOG=1, so not just a LOG=0
? With version 1.3.x
(using the PageStore), LOG=0 is unstable, as documented.
Version 1.4.x (using the MVStore) on the other hand is known to use more
disk space right now, this will be improved in future versions.
Regards,
Thomas
On Tuesday, June 3, 2014, Brian Craft craft...@gmail.com wrote:
Running a recovery, the resulting db file is 5x larger, and still growing.
I just had a bulk load deadlock. I'm not sure how to interrupt these
things, but started with kill -INT, then kill -SEGV, then kill -KILL. After
that the db could not be opened, and would dump a bunch of lines like this
But there's no indication on start that the previous
shutdown wasn't clean, unless I check for the lock file, or something.
On Friday, May 30, 2014 5:43:39 AM UTC-7, Brian Craft wrote:
UNDO_LOG controls rollback, and LOG controls durability? Is that right?
If I'm doing inserts that should be atomic, but the data is already
persisted elsewhere, can I safely set LOG=0? If we lose power I don't care
that h2 drops data on the floor, so long as it's atomic. I can detect that
the
I just exited the app with ^c, and h2 gave me this:
^CException in thread Thread-5 java.lang.NoClassDefFoundError:
org/h2/store/FileStoreInputStream
at
org.h2.store.LobStorageBackend.removeAllForTable(LobStorageBackend.java:219)
at
Ah, that is very likely. Thanks! I didn't realize the jar would be
re-opened after launch.
On Friday, May 30, 2014 7:10:10 AM UTC-7, Noel Grandin wrote:
On 2014-05-30 15:54, Brian Craft wrote:
I just exited the app with ^c, and h2 gave me this:
That looks like the kind of error
How can I set the cache for a primary key sequence? I haven't found a way
to specify the cache when creating the column. Also haven't found a way to
specify the sequence for the column, so I can't create the sequence with
the right cache and then create a column that uses it.
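A sketch of doing it the other way around, with made-up names (the DEFAULT (NEXT VALUE FOR ...) form is what H2's generated DDL uses, per the SCORES example elsewhere in this thread): create the sequence with the desired cache first, then reference it in the column default.

```sql
CREATE SEQUENCE score_seq CACHE 1000;
CREATE TABLE scores(
    id INT DEFAULT (NEXT VALUE FOR score_seq) NOT NULL PRIMARY KEY,
    score VARBINARY(400) NOT NULL
);
```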
Also, the alter
PUBLIC.PROBES ADD CONSTRAINT PUBLIC.CONSTRAINT_8D2C2 FOREIGN
KEY(EID) REFERENCES PUBLIC.EXPERIMENTS(ID) NOCHECK [23505-176]
and so on for several pages.
Backing off to 171 allows it to connect again.
Not sure where to go from here.
On Wednesday, May 28, 2014 3:13:46 PM UTC-7, Brian Craft wrote:
How can I set the cache for a primary key sequence? I haven't found a way
to specify the cache when creating the column. Also haven't found a way to
specify the sequence for the column, so I can't create the sequence
see where it is blocked?
Also which version is this, and what does your DB URL look like?
On 2014-05-25 20:24, Brian Craft wrote:
I'm doing a large insert into .. select * from CSVREAD ..., which is
in a transaction. It seems to be blocking
readers, regardless of whether MVCC is enabled
loaded batches. So, if a transaction per batch is too
aggressive, or blocks readers, then we need to implement not finished
loading flags on various tables that the readers can use to filter out the
half-loaded batches.
Ah, it's actually somefile;CACHE_SIZE=65536;;MVCC=TRUE
Not sure if the double semicolon is a problem.
b.c.
On Tuesday, May 27, 2014 10:45:29 AM UTC-7, Brian Craft wrote:
Version 1.3.171. The only option I'm setting is MVCC, like
somefile;MVCC=TRUE. I'll try to get a thread dump in a few
qtp507284179-81 is the reader, which seems to be blocked indefinitely.
clojure-agent-send-off-pool-5 is the writer, doing insert into .. select
csvread
db is jdbc:h2:/data/cancer/cavm-testing;CACHE_SIZE=65536;MVCC=TRUE
Not sure what either of these do. The docs:
When using DIRECT, then the results from the query are directly applied in
the target table without any intermediate step.
When using SORTED, b-tree pages are split at the insertion point. This can
improve performance and reduce disk usage.
What
I'm doing a large insert into .. select * from CSVREAD ..., which is in a
transaction. It seems to be blocking readers, regardless of whether MVCC is
enabled. Is this expected? If I understand the threads on MVCC, it will
allow readers to execute concurrent with a running transaction, but
In larger dbs, the limiting operation for large inserts is allocating
sequence values for the primary key. What's a good strategy for allocating
sequence values in bulk? Only a single thread updates the db, so there's no
concern of values being allocated elsewhere during the inserts.
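Since only one thread writes, one way to sketch block allocation (hypothetical sequence name): give the sequence a large increment and hand out the intervening values client-side.

```sql
CREATE SEQUENCE ids INCREMENT BY 1000;
-- each call reserves v .. v+999; assign the rest in the application:
SELECT NEXT VALUE FOR ids;
```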
Trying to understand the array data type, I'm running this on the web
console:
select table(name array=(('foo','bar'), ('ack','blah')))
First time, the result is:
(((foo, bar)), ((ack, blah)))
(1 row, 0 ms)
Second time the result is:
General error: java.lang.NullPointerException
Doing a data load, I notice in the trace file (level 3) it's writing about
5000 of these per second:
03-14 11:10:03 lock: 2 shared read lock requesting for PROBEMAPS
03-14 11:10:03 lock: 2 shared read lock requesting for PROBEMAPS
03-14 11:10:03 lock: 2 shared read lock requesting for PROBEMAPS
Is there a way to csvread a varbinary column? I tried dumping it as hex
(from another source), but h2 barfs about half way through. It seems to be
doing the hex conversion, but then exits with
Hexadecimal string with odd number of characters:
Nevermind... the file wasn't being flushed, so the line was truncated.
On Friday, March 14, 2014 6:26:43 PM UTC-7, Brian Craft wrote:
shutdown compact recovers about 8% of the space.
On Friday, November 1, 2013 10:56:02 AM UTC-7, Thomas Mueller wrote:
Hi,
Did you use the SQL statement SHUTDOWN COMPACT?
http://h2database.com/html/grammar.html#shutdown
Regards,
Thomas
On Fri, Nov 1, 2013 at 6:54 PM, Brian Craft craft
seeks...
will do an ssd test next), but the nearly 3X increase in data size in h2 is
problematic for data at this scale.
On Friday, October 11, 2013 9:15:48 AM UTC-7, Brian Craft wrote:
Here's the plan for a select. In the next to last step the scan count
matches the number of matching rows (494). The last step is a left join on
a primary key. The number of rows doesn't change, but the scan count
doubles, which I find curious. Why is that?
SELECT
GENE,
I,
Are indexes always created for foreign key constraints? I just noticed in
the information_schema.indexes table that I apparently have a number of
duplicate indexes because I didn't realize indexes were being created
automatically when I added a foreign key constraint.
I gather this behavior
Ah, just found in the grammar docs: The required indexes are automatically
created if required.
Still curious how common this is in different dbs.
On Thursday, October 24, 2013 4:22:44 PM UTC-7, Brian Craft wrote:
How does varchar indexing work? Does it index the entire length of the
column? And if so, does this mean the data size doubles?
Anyone know what causes this? Dropping an index errors out because it's
part of a constraint, only it isn't really.
drop index PROBE_NAME;
Index PROBE_NAME belongs to a constraint; SQL statement:
drop index PROBE_NAME
Is this still an issue? I'm seeing something very much like this: a loader
process that started out running quickly has now slowed dramatically, with
75% of the cpu being given to PageStore.checkpoint() while fetching blocks
of 100 ids from a sequence.
I'm using 1.3.171.
I'm storing a bunch of 400 byte varbinary objects in a table like so:
CREATE CACHED TABLE PUBLIC.SCORES(
ID INT DEFAULT (NEXT VALUE FOR
PUBLIC.SYSTEM_SEQUENCE_0D2CC30B_0ED7_4EB5_A0D4_6448B031FC02) NOT NULL
NULL_TO_DEFAULT SEQUENCE
PUBLIC.SYSTEM_SEQUENCE_0D2CC30B_0ED7_4EB5_A0D4_6448B031FC02
locality of reference, because all of the data belonging to a single table is
now packed tightly together.
On 2013-10-10 06:04, Brian Craft wrote:
Copying the db to a new db with PAGE_SIZE of 8k lowered the average
query time by a factor of 20: from 10 sec to 0.5 sec.
Do interleaved inserts into two tables result in the data being more spatially
distributed on the disk than writing first one table, then the other?
On Tuesday, October 8, 2013 5:41:57 PM UTC-7, Brian Craft wrote:
Oh. accu is just a running sum?
On Tuesday, October 8, 2013 10:24:11 AM UTC-7, Thomas Mueller wrote:
Hi,
So, if accu is less than 100%, the rest of the time went where?
To other queries or statements that are below that line.
And if self is less than accu in hprof it has to do with
I believe this can also be caused by the OS block caching, which won't be
affected by restarting H2. I see this behavior:
1) Run query in embedded app, it takes 15 sec
2) Shut down and run the same query in console, it takes 200 ms
3) Run it in the app again after restarting, it takes 200ms
4)
I'm wondering if these numbers look typical of h2, or if I still have some
bottleneck I haven't identified.
I have a probes table (mentioned in other threads) with one index over two
columns: eid (int) and name (varchar). (eid, name) is unique. For each eid
there might be 10k-500k rows.
The
database-wide lock.
Most of the upper layers run under a connection-specific lock, which
means that multiple connections manage to exhibit quite a bit of
concurrency.
On 2013-10-06 06:29, Brian Craft wrote:
Reading over the archive on the subject of threads has left me mostly
still confused
cool.
On Monday, October 7, 2013 8:23:56 AM UTC-7, Noel Grandin wrote:
On 2013-10-07 17:22, Brian Craft wrote:
You could write it as a join, instead, but that's often much slower,
perhaps because it joins the whole table before applying the where
criteria.
No, our joins
On Monday, October 7, 2013 7:47:40 AM UTC-7, Noel Grandin wrote:
On 2013-10-07 16:39, Brian Craft wrote:
Regarding the normal range of an IN query, what other way would you
write a query that retrieves hundreds of rows by their keys?
That's a good point, there really isn't any other way
Are the fields in the trace file documented? Like so:
/*SQL l:1829 #:564 t:5*/
What are l, #, and t?
On Monday, October 7, 2013 11:18:24 AM UTC-7, Brian Craft wrote:
From the performance docs:
PreparedStatement prep = conn.prepareStatement(
    "SELECT * FROM TABLE(X INT=?) T INNER JOIN TEST ON T.X=TEST.ID");
prep.setObject(1, new Object[] { 1, 2 });
ResultSet rs = prep.executeQuery();
What is the TABLE(X INT=?) bit? I don't see anything like this in the
Thanks!
On Monday, October 7, 2013 12:49:53 PM UTC-7, Thomas Mueller wrote:
Hi,
See http://h2database.com/html/functions.html#table
Regards,
Thomas
On Mon, Oct 7, 2013 at 9:06 PM, Brian Craft craft...@gmail.com wrote:
the CPU time in Java.
How are self accu related to time? What are they percentages of?
Do you know java -Xrunhprof? It's basically the same. See
http://docs.oracle.com/javase/7/docs/technotes/samples/hprof.html
Regards,
Thomas
On Mon, Oct 7, 2013 at 8:54 PM, Brian Craft craft
a problem with the data or the
query. Do you have an index on that column? See also
http://h2database.com/html/performance.html#explain_plan
Regards,
Thomas
On Sun, Oct 6, 2013 at 6:29 AM, Brian Craft craft...@gmail.com wrote:
Is this syntax peculiar to h2, and if not, is there a name for it? It's
like a table literal, or something.
On Monday, October 7, 2013 12:59:45 PM UTC-7, Brian Craft wrote:
Reading over the archive on the subject of threads has left me mostly still
confused about how h2 handles concurrency, so I've been doing some tests,
instead.
I started with largish queries that all have a where ... in clause with
500 values (like where x in [v0, v1, v2, v499]), which ran
Elapsed time: 7899.676 msecs
Elapsed time: 474.536 msecs
Elapsed time: 919.515 msecs
On Monday, September 16, 2013 9:31:33 AM UTC-7, Brian Craft wrote:
Is there a way to show the current values of the various tuning parameters,
like cache_size?
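The current values should show up in the information schema, e.g.:

```sql
SELECT * FROM INFORMATION_SCHEMA.SETTINGS WHERE NAME = 'CACHE_SIZE'
```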
One more data point: if I copy the table by doing a bunch of inserts from the
application code, this performance difference disappears. It's something
peculiar to insert into .. select
On Friday, September 13, 2013 5:22:05 PM UTC-7, Brian Craft wrote:
Added the h2 profiler calls. Here's
I'm testing different insert scenarios.
I was surprised to find that doing insert into .. select * from .. from a
large table is an order of magnitude slower than running individual
insert commands for each row from a file. That is, if I load file ->
table A with insert commands, then do insert
On Friday, September 13, 2013 4:19:52 AM UTC-7, Noel Grandin wrote:
On 2013-09-13 13:12, Brian Craft wrote:
On Friday, September 13, 2013 4:55:33 AM UTC-7, Noel Grandin wrote:
On 2013-09-13 13:53, Brian Craft wrote:
Could it be java -client vs -server? I'm running the insert into ...
select interactively, and I see now that the process has -client. The file
loader I'm starting w/o
I'll try a profiler later today, but a quick question: the table has a
varbinary column. Could the insert..select be converting this to hex and
back, as it does in other scenarios?
On Friday, September 13, 2013 6:52:47 AM UTC-7, Brian Craft wrote:
To eliminate variables I might have
select * from scores
the tables look like this:
CREATE TABLE IF NOT EXISTS `scores` (
`id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
`expScores` VARBINARY(400) NOT NULL)
On Friday, September 13, 2013 9:46:02 AM UTC-7, Brian Craft wrote:
I could put together a test case in clojure. Do you want that?
I'm not sure what it will tell you, since it's not the loader code that is
slow.
On Friday, September 13, 2013 10:08:39 AM UTC-7, Noel Grandin wrote:
If you can post the test-case code, I can look at it on monday.
So, between 10k rows and 100k rows the performance of the insert into ..
select goes from being around half the 1st table load time, to being
around 30x the 1st table load time.
On Friday, September 13, 2013 11:43:11 AM UTC-7, Brian Craft wrote:
On Friday, September 13, 2013 1:22:13 PM UTC-7, Brian Craft wrote:
Here's a trivial clojure script to invoke this behavior. It uses sql korma
to access the db, but the code should be obvious even if you don't know
this library.
The game here is to create a table, put some rows of varbinary
Thanks!
On Thursday, September 12, 2013 9:39:16 AM UTC-7, Thomas Mueller wrote:
Hi,
For memory problems:
What I usually first use is jmap -histo pid to get an overview. It
usually works with recent versions of Java, sometimes you have to try a few
times and sometimes you have to use -F.
I don't understand the difference between these two. The docs on the set
commands are pretty sparse, but both mention transactions. Are they
explained anywhere?
generated by jmap, and it
would consistently freeze up when tracing particular objects. I suspect it
was incorrectly handling circular dependencies in these cases.
Are there any better tools for jvm memory debugging?
On Tuesday, September 10, 2013 1:21:51 PM UTC-7, Brian Craft wrote:
I've ended up using a boolean loaded column, which basically works: set
to true when all transactions have succeeded.
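The loaded-column pattern, sketched with hypothetical names:

```sql
ALTER TABLE batches ADD COLUMN loaded BOOLEAN DEFAULT FALSE;
-- readers filter out half-loaded batches:
SELECT * FROM batches WHERE loaded;
-- flip once every commit for the batch has succeeded:
UPDATE batches SET loaded = TRUE WHERE id = 42;
```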
On Sep 10, 2013 1:32 AM, Noel Grandin noelgran...@gmail.com wrote:
On 2013-09-06 20:15, Brian Craft wrote:
I need to load about 1G of data into an existing db, while
it is getting OOM on import? It shouldn't do that, should it?
I've imported several GB of data before, I did get OOM, but increasing
heap size to around 1GB worked for me. I didn't need to go to crazy
sizes...
On 10/09/2013 4:32 PM, Noel Grandin wrote:
On 2013-09-06 20:15, Brian Craft wrote
it to work.
On Tuesday, September 10, 2013 12:51:00 PM UTC-7, Brian Craft wrote:
I do it with a bunch of inserts. The tables already exist. It happens even
if I use UNDO_LOG=0, which I think disables the undo log.
I'll try jmap.
On Tuesday, September 10, 2013 11:50:43 AM UTC-7, Thomas Mueller
Is it possible to specify a constraint that multiple columns should be
unique together? I thought a unique(a, b) constraint would do it but
apparently not.
create table `foo` (`id` int, `current` boolean)
alter table `foo` add constraint `id_current` unique(`id`, `current`)
insert into `foo`
ack, nevermind. User error.
On Sunday, September 8, 2013 12:15:53 PM UTC-7, Brian Craft wrote:
I need to load about 1G of data into an existing db, while maintaining data
coherence. Wrapping the inserts in one transaction results in out-of-memory
problems in the jvm. I increased the max heap size to 8g w/o improvement. I
can split it into a bunch of smaller commits, which works fine, but
What are you trying to do with your custom aggregate?
On 2013-07-26 00:34, Brian Craft wrote:
does as well).
That would look as follows: select group_bconcat(binData, orderColumn) from
...
Regards,
Thomas
On Fri, Jul 26, 2013 at 5:05 PM, Brian Craft craft...@gmail.com wrote:
Here's the query with standard group_concat and cast:
SELECT CAST(GROUP_CONCAT(`expScores
I'm unable to use order by with a custom aggregate function. E.g with
group_concat, you can do
group_concat(`foo` order by `bar`)
however this throws a syntax error with my aggregate function.
even tried to run that, on current SVN head, it
generates a parse error at the cast operator.
Why are you trying to use the cast operator in this context?
On 2013-07-20 06:33, Brian Craft wrote:
create aggregate GROUP_BCONCAT for group_bconcat2 :: nil
With some digging, I found java.sql.Types.
On Saturday, July 20, 2013 1:38:53 PM UTC-7, Brian Craft wrote:
The docs say inputTypes is the SQL type of the parameters. What does
that mean? It's actually an array of integers. Is there an enumeration that
maps to sql types?
Any idea what causes this?
create aggregate GROUP_BCONCAT for group_bconcat2 :: nil
JdbcSQLException:
Message: General error: group_bconcat2; SQL statement:
create aggregate GROUP_BCONCAT for group_bconcat2 [5-171]
SQLState: HY000
Error Code: 5
ClassFormatError Duplicate method