Re: [firebird-support] Is it possible to insert Image via sql statement?

2016-09-26 Thread Mike Ro miker...@gmail.com [firebird-support]

On 27/09/2016 04:22, trsk...@yahoo.com [firebird-support] wrote:


Is it possible to insert an image into a FB database directly via an SQL
statement using isql or FlameRobin?





Hi, yes it is possible with FlameRobin.

1. Insert a new record in the normal way (for example, right click on
   the table, select "Browse data" and Grid->Insert row).
2. Now in the "Data" area of the window you will have a "[null]" in the
   column containing your blob field.
3. Right click on this [null] and there will be an option "Import BLOB
   from file".
4. Pick the image file you want to insert and the "[null]" will change
   to "[BINARY]".
5. Commit the insert and the image should be in your database.

Note: If you want to view the image, right click on the "[BINARY]" text 
and choose "Save BLOB to file..."


Hope this helps.
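A note on isql, since the original question mentioned it: plain isql has no command to load a binary file into a blob. A string literal can populate a blob column, which is fine for text blobs but not for binary images. The table and column names below are hypothetical.

```sql
-- Hypothetical table with a blob column.
CREATE TABLE PICTURE (ID INTEGER NOT NULL PRIMARY KEY, IMG BLOB SUB_TYPE 0);

-- This works in isql, but only for textual content - isql cannot read a
-- local image file into the literal, so for real images use FlameRobin
-- (as above) or a client API that writes the blob in segments.
INSERT INTO PICTURE (ID, IMG) VALUES (1, 'placeholder text, not image bytes');
```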


Re: [firebird-support] Help needed with grouping, GROUP BY and LIST()

2016-08-29 Thread Mike Ro miker...@gmail.com [firebird-support]
On 29/08/2016 00:28, 'Leyne, Sean' s...@broadviewsoftware.com 
[firebird-support] wrote:


What’s performance like?

So far I have only tested with a very limited set of data (approx. 30 
records) so haven't noticed any performance issues. FlameRobin reports 
sub 0.01s times!


The maximum number of input rows is only likely to be in the high 
hundreds / low thousands and the query won't be run very often, so even if it 
took 20 seconds it wouldn't be a problem.


Mike


Re: [firebird-support] Help needed with grouping, GROUP BY and LIST()

2016-08-28 Thread Mike Ro miker...@gmail.com [firebird-support]

Sean,


Have a look at my reply to your second question, about using CTE, that 
will be part of your answer for this problem as well.


FYI: LIST() doesn't observe ORDER BY...

Thanks to your hint and example CTE I was able to get my query fully 
working as follows. Sorry, there are a couple of name changes since the 
original question and I dispensed with the J_RESULT_GROUP table as a 
test can only belong to 0 or 1 groups.


-- Create the CTE
with

-- CTE 1 is for a flow instance
FLOW_INSTANCE(FI_ID, F_NAME, F_PATH)
as
(
select jfi.ID, f.NAME, n1.NAME || ' to ' || n2.NAME
from J_FLOW_INSTANCE jfi
join FLOW f on f.ID = jfi.FLOW
join NODE n1 on n1.ID = jfi.SRC
join NODE n2 on n2.ID = jfi.DEST
),

-- CTE 2 is for a flow test
FLOW_TEST(FT_ID, FI_ID, F_NAME, F_PATH, FT_MODE, RES_CHAR, RES_PROP, TGROUP)
as
(
select jft.ID, fi.FI_ID, fi.F_NAME, fi.F_PATH, m.NAME, c.NAME, 
p.NAME, jft.TGROUP

from J_FLOW_TEST jft
join FLOW_INSTANCE fi on fi.FI_ID = jft.INSTANCE
join MODE m on m.ID = jft.MODE
join J_TEST_RESULT_CHARS jtrc on jtrc.TEST = jft.ID
join CHARACTERISTIC c on c.ID = jtrc.RESULT
join J_CHAR_PROPERTY jcp on jcp.CHARACTERISTIC = c.ID
join PROPERTY p on p.ID = jcp.PROPERTY
)

-- Now the main query, which is the union of those tests that do not
-- belong to any group and the grouping of those that are in a group


-- 1) Find all test results that have not been added to a group
-- (FLOW_TEST.TGROUP is null) and combine them by test ID

select ftr_ng.FT_ID as FT_ID,
ftr_ng.FI_ID as FI_ID,
ftr_ng.F_NAME as F_NAME,
ftr_ng.FT_MODE,
ftr_ng.F_PATH as FI_PATH,
list(distinct ftr_ng.RES_CHAR) as RES_CHARS,
list(distinct ftr_ng.RES_PROP) as RES_PROPS,
'No group' as T_GROUP
from FLOW_TEST ftr_ng
where ftr_ng.TGROUP is null
group by 1, 2, 3, 4, 5

union

-- 2) Find all the test results that are in a group and combine each
-- group into a single row

select  list(distinct ftr_g.FT_ID) as FT_ID,
list(distinct ftr_g.FI_ID) as FI_ID,
list(distinct ftr_g.F_NAME) as F_NAME,
ftr_g.FT_MODE,
list(distinct ftr_g.F_PATH) as FI_PATH,
list(distinct ftr_g.RES_CHAR) as RES_CHARS,
list(distinct ftr_g.RES_PROP) as RES_PROPS,
ftr_g.TGROUP
from FLOW_TEST ftr_g
where ftr_g.TGROUP is not null
group by 4, 8

Thanks again,

Mike



Re: [firebird-support] Help needed with grouping, GROUP BY and LIST()

2016-08-26 Thread Mike Ro miker...@gmail.com [firebird-support]

Hi Sean,


Look at using Common Table Expressions (CTE)

Here is a rough sample based on your details



Thank you for the pointer to CTE and the example.

I have only looked at CTE briefly once before (for representing tree 
structures), so it will take me some time to do some reading and to 
dissect your example.


BR Mike


Re: [firebird-support] Help needed with grouping, GROUP BY and LIST()

2016-08-26 Thread Mike Ro miker...@gmail.com [firebird-support]

Hi Sean, thank you for your answer.



Some questions before I can give a suggestion:

> TEST_ID  JFI_ID  F_NAME  N1_NAME  N2_NAME  MODE    CHARACTERISTIC  PROPERTY
> 1        1       FLOW_1  NODE_A   NODE_B   MODE_1  HIGH FLOW       > 10 litres / sec
> 1        1       FLOW_1  NODE_A   NODE_B   MODE_1  HIGH FLOW       > 50 gals / hour
> 1        1       FLOW_1  NODE_A   NODE_B   MODE_1  LOW LOSS        Pump current < 30
> 2        2       FLOW_2  NODE_C   NODE_D   MODE_1  LOW FLOW        < 5 litres / sec
> 44       3       FLOW_1  NODE_C   NODE_D   MODE_2  LOW FLOW        < 5 litres / sec

> Q1. How can I group and LIST() to produce a single row for the first test case
> (FLOW_1 from NODE_A to NODE_B with MODE_1). The result would look
> something like this:

> 1  1  FLOW_1  NODE_A  NODE_B  MODE_1  HIGH_FLOW, LOW_LOSS  > 10 litres / sec, > 50 gals / hour, Pump current < 30

Q: Did you mean to exclude the 2nd "HIGH_FLOW" characteristic value 
from the summary?


Yes, the same information should not be repeated within a single 
'field'. The second "HIGH_FLOW" characteristic is only in the original 
result set because it has 2 properties (> 10 litres / sec and > 50 gals 
/ hour - sorry contrived I know!) via the J_CHAR_PROPERTY table.


Q: Do you require the relationship/relative position between the 
"characteristic" and "property" values be maintained in the summary 
result?



It would be nice, but not essential. Currently the 'properties' of a 
'characteristic' are not ordered to keep the example simple, but I could 
add a SEQ field to define the ordering and in that case it would be good 
for the LIST() to observe the order.
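A sketch of the SEQ idea, for what it's worth: LIST() gives no ordering guarantee, but a commonly used (engine-dependent, not guaranteed) workaround is to feed it from a pre-ordered derived table. Names follow the schema in this thread; SEQ is the hypothetical ordering column on J_CHAR_PROPERTY.

```sql
select t.CHAR_NAME, list(t.PROP_NAME) as PROPS
from (
    select c.NAME as CHAR_NAME, p.NAME as PROP_NAME
    from J_CHAR_PROPERTY jcp
    join CHARACTERISTIC c on c.ID = jcp.CHARACTERISTIC
    join PROPERTY p on p.ID = jcp.PROPERTY
    order by c.NAME, jcp.SEQ  -- SEQ: hypothetical ordering column
) t
group by t.CHAR_NAME;
```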


Mike


[firebird-support] Help needed with grouping, GROUP BY and LIST()

2016-08-26 Thread Mike Ro miker...@gmail.com [firebird-support]

I am modeling some abstract flow tests.

A 'flow instance' is defined as some sort of flow from Node A to Node B. 
This is simply modeled with 3 tables as follows (I have left out the 
alter table add constraints for brevity):


CREATE TABLE FLOW
(
  ID integer NOT NULL PRIMARY KEY,
  NAME varchar(20)
);

CREATE TABLE NODE
(
  ID integer NOT NULL PRIMARY KEY,
  NAME varchar(20)
);

CREATE TABLE J_FLOW_INSTANCE
(
  ID integer NOT NULL PRIMARY KEY,
  FLOW integer, -- FOREIGN KEY (FLOW) REFERENCES FLOW (ID)
  SRC integer, -- FOREIGN KEY (SRC) REFERENCES NODE (ID)
  DEST integer -- FOREIGN KEY (DEST) REFERENCES NODE (ID)
);

Each test represents a flow instance with a test mode:

CREATE TABLE MODE
(
  ID integer NOT NULL PRIMARY KEY,
  NAME varchar(20)
);

CREATE TABLE J_FLOW_TEST
(
  ID integer NOT NULL PRIMARY KEY,
  INSTANCE integer, -- FOREIGN KEY (INSTANCE) REFERENCES J_FLOW_INSTANCE (ID)
  MODE integer -- FOREIGN KEY (MODE) REFERENCES MODE (ID)
);

The results of each test are measured in terms of multiple 
characteristics, i.e. High Flow, Low Loss or High Flow, High Loss etc.


CREATE TABLE CHARACTERISTIC
(
  ID integer NOT NULL PRIMARY KEY,
  NAME varchar(20)
);

CREATE TABLE J_TEST_RESULT_CHARS
(
  TEST integer, -- FOREIGN KEY (TEST) REFERENCES J_FLOW_TEST (ID)
  RESULT integer -- FOREIGN KEY (RESULT) REFERENCES CHARACTERISTIC (ID)
);

With characteristics being defined by multiple properties:

CREATE TABLE PROPERTY
(
  ID integer NOT NULL PRIMARY KEY,
  NAME varchar(20)
);

CREATE TABLE J_CHAR_PROPERTY
(
  CHARACTERISTIC integer, -- FOREIGN KEY (CHARACTERISTIC) REFERENCES CHARACTERISTIC (ID)
  PROPERTY integer -- FOREIGN KEY (PROPERTY) REFERENCES PROPERTY (ID)
);
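For anyone wanting to reproduce the example, sample rows inferred from the result set shown below (illustrative only, covering just the first test case) could be loaded like this:

```sql
INSERT INTO FLOW VALUES (1, 'FLOW_1');
INSERT INTO NODE VALUES (1, 'NODE_A');
INSERT INTO NODE VALUES (2, 'NODE_B');
INSERT INTO MODE VALUES (1, 'MODE_1');
INSERT INTO J_FLOW_INSTANCE VALUES (1, 1, 1, 2);  -- FLOW_1 from NODE_A to NODE_B
INSERT INTO J_FLOW_TEST VALUES (1, 1, 1);         -- test 1, instance 1, MODE_1
INSERT INTO CHARACTERISTIC VALUES (1, 'HIGH FLOW');
INSERT INTO J_TEST_RESULT_CHARS VALUES (1, 1);
INSERT INTO PROPERTY VALUES (1, '> 10 litres / sec');
INSERT INTO PROPERTY VALUES (2, '> 50 gals / hour');
INSERT INTO J_CHAR_PROPERTY VALUES (1, 1);
INSERT INTO J_CHAR_PROPERTY VALUES (1, 2);
```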

I can query the results perfectly using the following query:

select jft.ID as TEST_ID, JFI_ID, F_NAME, N1_NAME, N2_NAME, m.NAME as 
MODE, rc.NAME as CHARACTERISTIC, p.NAME as PROPERTY

from J_FLOW_TEST jft
join
(
select jfi.ID as JFI_ID, f.NAME as F_NAME, n1.NAME as N1_NAME, 
n2.NAME as N2_NAME

from J_FLOW_INSTANCE jfi
join FLOW f on f.ID = FLOW
join NODE n1 on n1.ID = SRC
join NODE n2 on n2.ID = DEST
)
on JFI_ID = jft.INSTANCE
join MODE m on m.ID = jft.MODE
join J_TEST_RESULT_CHARS jtrc on jtrc.TEST = jft.ID
join CHARACTERISTIC rc on rc.ID = jtrc.RESULT
join J_CHAR_PROPERTY jcp on jcp.CHARACTERISTIC = rc.ID
join PROPERTY p on p.ID = jcp.PROPERTY
order by TEST_ID, F_NAME, N1_NAME, N2_NAME, MODE, CHARACTERISTIC, PROPERTY

Which gives me results like this:

TEST_ID  JFI_ID  F_NAME  N1_NAME  N2_NAME  MODE    CHARACTERISTIC  PROPERTY
1        1       FLOW_1  NODE_A   NODE_B   MODE_1  HIGH FLOW       > 10 litres / sec
1        1       FLOW_1  NODE_A   NODE_B   MODE_1  HIGH FLOW       > 50 gals / hour
1        1       FLOW_1  NODE_A   NODE_B   MODE_1  LOW LOSS        Pump current < 30
2        2       FLOW_2  NODE_C   NODE_D   MODE_1  LOW FLOW        < 5 litres / sec
44       3       FLOW_1  NODE_C   NODE_D   MODE_2  LOW FLOW        < 5 litres / sec

I have 2 questions:

Q1. How can I group and LIST() to produce a single row for the first 
test case (FLOW_1 from NODE_A to NODE_B with MODE_1). The result would 
look something like this:


1  1  FLOW_1  NODE_A  NODE_B  MODE_1  HIGH_FLOW, LOW_LOSS  > 10 litres / sec, > 50 gals / hour, Pump current < 30


Q2. How can I create a further table J_RESULT_GROUP and query for 
arbitrarily grouping results together where any distinct elements are 
LIST()ed whilst common elements are 'GROUP'ed (i.e. appear only once). 
For example if J_RESULT_GROUP had 2 rows:


GROUP  TEST
1      2
1      44

It would give:

TEST_ID  JFI_ID  F_NAME          N1_NAME  N2_NAME  MODE            CHARACTERISTIC  PROPERTY
2,44     2,3     FLOW_1, FLOW_2  NODE_C   NODE_D   MODE_1, MODE_2  LOW_FLOW        < 5 litres / sec


I am sorry for the long question, but I have struggled to phrase it in 
simple terms.


TIA for any help!




Re: [firebird-support] Compiling clients with MinGW-W64 ibase.h, STATUS and __x86_64__

2016-03-30 Thread Mike Ro miker...@gmail.com [firebird-support]
On 30/03/2016 20:42, Dimitry Sibiryakov s...@ibphoenix.com 
[firebird-support] wrote:
> But it defines __GNUC__, which causes inttypes.h to be included and 
> _INTPTR_T_DEFINED defined. 

Thank you for your help, it got me away from a red herring.

I had missed that _WIN64 is defined by _cygwin.h, so crtdefs.h does in 
fact correctly define intptr_t.
In fact the problem was caused by IBPP, which has a very old ibase.h.
















[firebird-support] Compiling clients with MinGW-W64 ibase.h, STATUS and __x86_64__

2016-03-30 Thread Mike Ro miker...@gmail.com [firebird-support]
I have been having a problem with unexpected status returns from calls 
to isc_dsql_prepare etc.

I think the problem is due to the definition of ISC_STATUS.

In ibase.h it is defined like this:

typedef intptr_t ISC_STATUS;

Where intptr_t is defined like this:

#ifndef _INTPTR_T_DEFINED
#if defined(_WIN64)
typedef __int64 intptr_t;
typedef unsigned __int64 uintptr_t;
#else
typedef long intptr_t;
typedef unsigned long uintptr_t;
#endif
#endif

But MinGW-W64 does not define _WIN64, so intptr_t is not typedef'd 
correctly.

Therefore I think the _WIN64 test should instead be:

#if defined(_WIN64) || defined (__x86_64__)

I couldn't find this mentioned anywhere, but sorry if my searching was 
not sufficient.


Re: [firebird-support] Firebird Server on Windows 10

2015-09-12 Thread Mike Ro miker...@gmail.com [firebird-support]
On 12/09/2015 12:22, Mark Rotteveel m...@lawinegevaar.nl 
[firebird-support] wrote:


I just tried with a Firebird 2.5.4 installed in Program Files on Windows
10 and it works (I tried with command prompts with and without admin
privileges) with simply gsec -user sysdba -password masterkey

I have UAC at full, so I have no idea why it wouldn't be working for you.


Thank you for trying, I also cannot understand why not.

Unfortunately I cannot think of a way of getting more debug information 
as to exactly why the security database cannot be accessed (lack of 
permissions, file not found, file invalid etc.).


The only thing I know is that the server is running properly because if 
I stop it the error changes to "Cannot attach to services manager".


I tried putting just the security database in C:\temp and pointing to it 
with the -database gsec switch, but this also failed so I am not 
convinced it is as simple as security2.fdb access permissions.


Re: [firebird-support] Firebird Server on Windows 10

2015-09-11 Thread Mike Ro miker...@gmail.com [firebird-support]

Dixon, many thanks for your help.

On 11/09/2015 20:21, 'E. D. Epperson Jr' dixonepper...@gmail.com 
[firebird-support] wrote:
At work and don't have Windows 10 or FirebirdSql here, so I can't test 
my theory. But I suspect it's permissions related.
There is a new feature on the properties dialog for files in Windows 
10. It's a checkbox to unblock a file. Right click on the 
security.fdb, select Properties and see if that option is there.

No "unblock" check box is shown for security2.fdb. I forgot to mention 
previously that I changed the security properties to Allow: Full 
Control, Modify and Write for All Application Packages, SYSTEM and 
Administrators.


But ...


On a more radical solution, try installing your firebirdsql to a 
folder NOT in Program Files.



Yes, this solves it!

So it is definitely a permissions thing, but I haven't been able to find 
out what yet.


Thanks again, at least I am up and running now.


[firebird-support] Firebird Server on Windows 10

2015-09-11 Thread Mike Ro miker...@gmail.com [firebird-support]
I am trying to install a Firebird server on a reasonably clean Windows 
10 machine, but I am running into the gsec "cannot attach to password 
database" issue.

Details are:

Edition: Windows 10 Pro (64-bit system, x64 based processor)
Firebird: SuperServer - both 32-bit and 64-bit versions attempted

Full error message (64-bit version) when running gsec from a command 
prompt with administrator privileges:

C:\Program Files\Firebird\Firebird_2_5\bin>gsec -user sysdba -password 
masterkey
use gsec -? to get help
cannot attach to password database
unable to open database

Also tried gsec with
-database "c:\Program Files\Firebird\Firebird_2_5\security2.fdb"
and
-database "localhost:c:\Program Files\Firebird\Firebird_2_5\security2.fdb"
and
-database localhost:"c:\Program Files\Firebird\Firebird_2_5\security2.fdb"

Server running as a service (tried both 32-bit and 64-bit - uninstalling 
and rebooting in between)

Service appears to be running correctly.

Control panel applet is not installed.

FIREBIRD environment variable not set, no Firebirds in either system or 
user PATH environment variables.

I have tried everything I could find on Google without joy, so thank you 
in advance for any help with this!




Re: [firebird-support] Advice requested on design pattern

2015-05-26 Thread Mike Ro miker...@gmail.com [firebird-support]

Once again thank you. I am very sorry it has taken me so long to reply.

On 20/05/15 21:19, Andrea Raimondi andrea.raimo...@gmail.com 
[firebird-support] wrote:


1) Re the separate database: nobody said you should do it on your 
own. I'd do it in the client, once the original data has been inserted.

If I understand correctly it will become basically a logging database. 
That could be fine as I have a data access layer anyway, but I would need 
to make sure that an audit log is only written if a transaction is 
successfully committed and not rolled back.

3) How many blobs *per user* are you going to have?

Maximum 1000, typically 50 - 100.
6) The only real remaining problem is encryption - if you want that 
for some of your data. Provided that I would strongly discourage that, 
especially on Firebird, because of the ease of peeking inside the 
DB (which equates roughly to no encryption), I would go for something 
slightly off the ordinary: what you can do is encrypt your data in 
the app and then apply it to your tables. There are several strategies you 
could employ, but I think you first need to determine whether 
that's what you really want.


Up until now I wasn't really worried about encrypting the data, however 
the more that I look into this the more it seems like an almost necessity.


If I understand correctly the downside of encrypting data in the app is 
obviously that Firebird can't index the contents. I have been looking at 
Firebird 3 in this regard and it seems to be possible, at least to 
include reasonably obfuscated data using a closed source (my own or 
third party) plug-in.


Thank you again for your advice!






Re: [firebird-support] Advice requested on design pattern

2015-05-26 Thread Mike Ro miker...@gmail.com [firebird-support]
On 20/05/15 21:15, Alexandre Benson Smith ibl...@thorsoftware.com.br 
[firebird-support] wrote:



In this way wouldn't the SP have the same problem that was mentioned 
by Louis, namely that only the index from the first table in the 
query would be used?




I did not read it in full detail...

But in the case of selects on views being able to use the index on the first 
table, I believe that's because of the use of outer joins; with inner 
joins that would not be the case...


I am very sorry for my slow response, but thank you for explaining this.

I don't know anything about how Firebird treats indexes in selectable 
SP, so I have some learning to do in this area.





Re: [firebird-support] Advice requested on design pattern

2015-05-20 Thread Mike Ro miker...@gmail.com [firebird-support]

Thank you for your reply!

On 20/05/15 11:12, Andrea Raimondi andrea.raimo...@gmail.com 
[firebird-support] wrote:
On the other hand, having all fields in all tables means you do not 
have to do a join, which means you can have a richer and
more complete overview of something directly on your main screen 
without affecting the performance too much, especially if you have

some form of caching enabled.
The downside of this is that you really need caching especially if we 
are talking about blobs whose size is, so to speak, "accomplished".


I am not anticipating there to be many blobs and they are only in a few 
of the tables. It's difficult to explain the exact use but imagine a 
document that is mostly text and a few illustrations like an academic 
paper or similar.


There is one exception though, and that will consist almost completely of 
blobs (actually PDF files).

Also, speaking about blobs: I do believe that - really - you should 
avoid putting them inside the tables as much as possible.

Interesting you should say that, but I have also heard of users with 
terabytes of blobs in tables? I am expecting 500 MB - 1 GB at most.

Another alternative (and one I would highly endorse in this day and 
age) would be to be smart and store the blobs in the 
users' Dropbox accounts and then use a URL reference to pick them up, 
having such reference stored in the DB.

Yes, this would be an option, but I would also like the option to be 
able to read the documents 'offline', using the embedded server on a 
laptop for example.


There will only be a handful of users (50 or so - it's an internal 
project) so the scale is relatively small. Sorry I should have explained 
that in my original email.


The 'created' and 'modified' audit fields should be updated
automatically and obviously I will use a trigger to do this. One
advantage I can see of using a single COMMON_FIELDS table is that
I can write one stored procedure and call it from the 'before'
trigger of each table. Alternatively in the case where the common
fields are repeated in each table I could pass the table name as a
parameter to the SP.


No, just no. Each table gets its own trigger. Do not do that. Trust me, 
that's a *REALLY* bad idea. If anything, use numeric constants that 
cannot be hijacked as easily.


Q2: Are there any pitfalls (re-entrance) in calling a common
stored procedure with a table name from a trigger and have the SP
take care of the audit field updates (not exactly sure how to do
this yet)?


Yes, you screw up the security. Doing that means you would need to use 
Dynamic SQL and that's 99% evil. Do not do that unless you are really 
obliged to for some reason.

Thank you, point taken!!


It would be good if the audit fields were not easy to fiddle with
(i.e. by someone using FlameRobin or isql). The obvious way would
be for them to be 'lightly' encrypted (i.e. obfuscated) during
write in the SP and decrypted in the application.


In my opinion, the only way to do that is to have them backed up on a 
different - inaccessible - database. This would also open up a 
business opportunity if your
product is commercial, i.e, "be able to retrieve the last audit 
details if you can't guarantee that the db has not been fiddled with" 
(such as with an intrusion of some

sort or suspicious internal activity).
This is an interesting idea. I thought about having the 'audit' table in 
a different database and inserting a table_id, record_id, action, date 
and time from the trigger. However reflecting on your earlier comments 
this probably isn't a good idea because I think it would require an 
EXECUTE STATEMENT :(.
Q3: Is it possible to hide the SP from a casual observer or would it 
be better to write a UDF for the obfuscation?


UDFs are not an obfuscation solution. Consider the following scenario: 
you use an open source UDF for encryption.
Now, a malicious user turns off the FB service and substitutes your 
legitimate version of the UDF with one which also 
contains a keyboard hook or, worse, a global hook registering all calls 
made on a Windows server. Now you are in serious trouble.


I am hoping my users are not that malicious, but it is a good point. I 
wouldn't want it to end up as a back door in case the database was ever 
deployed somewhere that I haven't really considered.

Does it help?


YES!! Thank you!






Re: [firebird-support] Advice requested on design pattern

2015-05-20 Thread Mike Ro miker...@gmail.com [firebird-support]

Thank you for your response.

On 20/05/15 11:25, Alan J Davies alan.dav...@aldis-systems.co.uk 
[firebird-support] wrote:


Another way of doing what you want would be to use SPs instead of Views.
Then if you "select * from SP" and hide it as below, no-one can see what
you are doing with the data.

In this way wouldn't the SP have the same problem that was mentioned by 
Louis, namely that only the index from the first table in the query 
would be used?


Sorry if I misunderstood or my knowledge of SP is lacking.


Re: [firebird-support] Advice requested on design pattern

2015-05-20 Thread Mike Ro miker...@gmail.com [firebird-support]

Thank you for this very useful response.

On 20/05/2015 08:17, 'Louis van Alphen' lo...@nucleo.co.za 
[firebird-support] wrote:


Yes FB is a RDBMS and not OO DB. In a previous project I used 
‘inheritance’ where I put common fields in a ‘base’ table and other 
fields in a ‘derived’ table and then a view on top. The 2 tables are 
then linked via a PK with same value. The problem I had was when I 
queried the view, it only used indexes of one table, depending on wich 
one was first in the select in the view. It ignored any indexes from 
the other table. So this fell flat. This was in FB 2.1. I would not 
advise doing this.



Ah, this is very valuable. I will take your advice.


You also simply don’t grant access to casual users to the tables. The 
app uses a user with all grants in place. Our convention is to use a 
table name such as CUSTOMER_. Then on top of that we put a view 
called CUSTOMER. The view also brings in some columns from ‘lookup’ 
table such as CURRENCY, etc. Normal users using reporting tools etc 
only get granted select access on the view and not the underlying table.


This makes sense, and I will probably do something similar. I am thinking 
about having a table that shadows Firebird user accounts. When the user 
logs in from the app they would actually access the database with a less 
restricted account but the business logic would enforce restrictions 
(i.e. no user access to the audit fields). If the user logs in from a 
reporting tool they will use their 'real' (Firebird) user account which 
will have Firebird access control limitations.
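The view-over-table convention Louis describes might look like this (table, column and user names are hypothetical; only the view is granted to reporting users):

```sql
-- Private base table (note the trailing-underscore convention).
CREATE TABLE CUSTOMER_ (
    ID INTEGER NOT NULL PRIMARY KEY,
    NAME VARCHAR(40),
    CURRENCY_ID INTEGER
);

-- Public view, pulling in the 'lookup' column from a CURRENCY table.
CREATE VIEW CUSTOMER AS
    SELECT c.ID, c.NAME, cur.NAME AS CURRENCY
    FROM CUSTOMER_ c
    JOIN CURRENCY cur ON cur.ID = c.CURRENCY_ID;

REVOKE ALL ON CUSTOMER_ FROM PUBLIC;
GRANT SELECT ON CUSTOMER TO REPORT_USER;  -- REPORT_USER: hypothetical account
```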


Q3: You can clear the RDB$PROCEDURE_SOURCE field in the RDB$PROCEDURE 
system table containing the SP and trigger definition. But take care 
to keep the scripts somewhere for backup.


Thank you for this tip, I have never tried this so will do some 
experimentation.


[firebird-support] Advice requested on design pattern

2015-05-19 Thread Mike Ro miker...@gmail.com [firebird-support]

I am creating a database that consists of 30 - 40 tables.

Each table stores a specific set of data which has a specific set of 
attributes. There are also attributes common to all tables for example:


 * UID (from generator)
 * Name
 * Description
 * DateTime Created (audit field)
 * DateTime Modified (audit field)
 * User Created (audit field)
 * User Modified (audit field)

Most tables will contain a few thousands of records, some of them may be 
largish blobs such as photos but mostly it will be plain text and HTML.


Normally insertions and updates would be infrequent but retrieval needs 
to be as fast as possible. The data is being displayed in a relatively 
simple client written in C++ and using IBPP.


Q1: I understand that Firebird does not support table inheritance. 
Therefore is it better to create a 'COMMON_FIELDS' table and then join 
the 'specialised' tables to it or include the common fields (i.e. the 
list above) in each and every table?




The 'created' and 'modified' audit fields should be updated 
automatically and obviously I will use a trigger to do this. One 
advantage I can see of using a single COMMON_FIELDS table is that I can 
write one stored procedure and call it from the 'before' trigger of each 
table. Alternatively in the case where the common fields are repeated in 
each table I could pass the table name as a parameter to the SP.
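For reference, a per-table 'before' trigger for the audit fields (the one-trigger-per-table approach recommended elsewhere in this thread, rather than a shared SP) could be sketched as follows. Table and column names are hypothetical; multi-event triggers and the INSERTING predicate need Firebird 1.5 or later.

```sql
SET TERM ^ ;

CREATE TRIGGER TRG_MYTABLE_AUDIT FOR MYTABLE
ACTIVE BEFORE INSERT OR UPDATE POSITION 0
AS
BEGIN
    IF (INSERTING) THEN
    BEGIN
        NEW.CREATED = CURRENT_TIMESTAMP;
        NEW.USER_CREATED = CURRENT_USER;
    END
    NEW.MODIFIED = CURRENT_TIMESTAMP;
    NEW.USER_MODIFIED = CURRENT_USER;
END^

SET TERM ; ^
```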


Q2: Are there any pitfalls (re-entrance) in calling a common stored 
procedure with a table name from a trigger and have the SP take care of 
the audit field updates (not exactly sure how to do this yet)?




It would be good if the audit fields were not easy to fiddle with (i.e. 
by someone using FlameRobin or isql). The obvious way would be for them 
to be 'lightly' encrypted (i.e. obfuscated) during write in the SP and 
decrypted in the application.


Q3: Is it possible to hide the SP from a casual observer or would it be 
better to write a UDF for the obfuscation?


I appreciate that the answer may be "Depends ... " but I would 
appreciate general guidance or opinions where it isn't possible to 
provide a definite answer.


Many thanks for the help!!




Re: [firebird-support] Function - what is it called

2015-01-23 Thread Mike Ro miker...@gmail.com [firebird-support]
On 23/01/2015 10:25, michael.vilhelm...@microcom.dk [firebird-support] 
wrote:


Hi


I am looking for a function which is able to add up values from a table 
as records are fetched.


Something like this:


Value    Calculated value
1        1
2        3
3        6
4        10
5        15

The calculated value is just the sum of all previous values of VALUE.

What's the name of such a function in English and Firebird?


This is a running total (cumulative sum). For the sequence 1, 2, 3, ... the 
totals happen to be the triangular numbers, see

http://en.wikipedia.org/wiki/Triangular_number
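If a newer engine is available: on Firebird 3 and later a running total can be written directly with a window function. Table and column names here are hypothetical (VAL is used because VALUE is a reserved word in Firebird).

```sql
-- Running total; assumes VAL provides a usable ordering.
select VAL,
       sum(VAL) over (order by VAL) as RUNNING_TOTAL
from NUMBERS
order by VAL;
```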



Re: [firebird-support] Blob write to Embedded on Linux performance

2014-06-09 Thread Mike Ro miker...@gmail.com [firebird-support]

On 09/06/14 17:42, Olivier Mascia o...@integral.be [firebird-support] wrote:



> gfix -write async test.fdb -user sysdba -password secret

It, of course, has a tremendous impact on such tests. Though, 
everybody should carefully consider whether the risks of data 
corruption are acceptable (or not) using sync writes off. I know most 
people here do know, but I wouldn't like you to get misled by this 
recent topic. There are a large number of past discussions in this 
list archive about this very topic. May I recommend reading them, even 
if not all apply as severely to recent Firebird versions as to older 
Interbase(R) editions.


At the very least, to make it short, Firebird uses an ordered-writes 
technique to help preserve its on-disk structure in case of unexpected 
interruption. As soon as the OS is allowed to delay writes as it sees 
fit, that integrity security is lost. And the OS can go quite far in 
not following the initial order of the writes: plain-old common 
scatter-gather techniques can optimize the writes in such an unwanted 
way. So it is not just a matter of how much data may be lost by 
losing computing resources right in the middle of some transactions, 
but how far the whole data is still meaningful after such an event, 
knowing that the ACID rules might have been broken by the OS 
re-ordering individual writes which did belong to various transactions.


Careful backup plans or more advanced storage architectures (async DB 
on synced volumes exposed from heavily caching SAN subsystems, for 
instance) are required to overcome the risks of integrity loss. The 
path to follow depends on the amount of unavailability of data (during 
restoration) or data loss which is acceptable, and of course the 
amount of money available for implementation.


On simple setups, using FB sync writes on SSD might be a decent tradeoff 
between integrity assurance and speed.





Thank you for pointing out the perils of disabling forced writes. I took 
your advice and spent some hours reading up on the subject, but found as 
many new questions as answers!


Another solution (for ext4 on Linux) is to set barrier=0 in 
/etc/fstab, but this exposes the whole partition to peril rather than 
just the database.


We are only using the database for internal purposes (i.e. it's not 
widely deployed), but the data is important. Unfortunately I don't have 
control over the hardware, which is generally workstations and some old 
laptops, so I am not sure SSD would be possible, but I agree it is very 
desirable.


As I mentioned earlier, I am curious why the UDF is so much quicker. This 
is still a possible solution for me, but I haven't worked out how to 
show progress whilst the file is 'uploading' yet.





Re: [firebird-support] Blob write to Embedded on Linux performance

2014-06-09 Thread Mike Ro miker...@gmail.com [firebird-support]
On 09/06/14 19:11, Frank Schlottmann-Gödde fr...@schlottmann-goedde.de 
[firebird-support] wrote:


> So the only mystery (for me!) remaining is why the UDF is so much faster
> when it is writing 4096 bytes at time with forced writes on?

Usually it is faster to let the server do the dirty work, but I'm not
sure that it is the only reason here.

Anyway, it seems that your problem is solved at least for now.


Well a UDF may well be a better option for embedded applications and 
thank you for your hints on getting this to work.


Re: [firebird-support] Blob write to Embedded on Linux performance

2014-06-09 Thread Mike Ro miker...@gmail.com [firebird-support]



On 09/06/14 09:25, Olivier Mascia o...@tipgroup.com [firebird-support] wrote:


I am seeing 45 seconds for a 13Mb file using C++ / IBPP and 48 
seconds using PHP.


Exactly the same code, database structure and hardware on Windows 
inserts a 13Mb BLOB in just 2.4 seconds!


That just means the caching/flushing policy is different on both OSes.

I don't know the current state of implementation of sync-writes in 
linux versions of Firebird. Nor its relationship with OS filesystem 
cache. But reading about firebird.conf parameters 
FileSystemCacheThreshold, FileSystemCacheSize, MaxUnflushedWrites, 
MaxUnflushedWriteTime, and synced / not-synced writes (database 
setting), will be a good starting point.


Olivier, thank you for your insightful reply here and on the 
IBPP-DISCUSS mailing list 
http://sourceforge.net/p/ibpp/mailman/message/32435680/.


This was indeed the reason for such a big difference; in fact Linux is 
now about twice the speed of Windows 8 (approximately 1.3 secs for a 13Mb 
file), although this may not be a like-for-like comparison of 
configuration parameters (the Windows system is still using the default 
firebird.conf).


Based on your comments and other information I did the following on Linux:

1. Uncomment MaxUnflushedWrites and MaxUnflushedWriteTime in 
firebird.conf (with values 100 and 5 respectively).


2. Make a backup of the database and restore it with a larger page size:

 gbak -replace test.fbk test.fdb -page_size 16384 -v -user sysdba 
-password secret


3.  Set the database mode to be asynchronous with:

 gfix -write async test.fdb -user sysdba -password secret

These tests were made using the embedded server and FlameRobin which I 
believe writes blobs in 32768 byte segments.
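For reference, the relevant firebird.conf lines after step 1 would look something like the following (note that the parameter name in the shipped firebird.conf is the singular MaxUnflushedWriteTime; these values are starting points, not recommendations):

```
# Flush unforced writes after at most 100 operations or 5 seconds
MaxUnflushedWrites = 100
MaxUnflushedWriteTime = 5
```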


So the only mystery (for me!) remaining is why the UDF is so much faster 
when it is writing 4096 bytes at a time with forced writes on?


Thanks again ... Mike.




Re: [firebird-support] Blob write to Embedded on Linux performance

2014-06-07 Thread Mike Ro miker...@gmail.com [firebird-support]
On 07/06/14 14:42, Frank Schlottmann-Gödde fr...@schlottmann-goedde.de 
[firebird-support] wrote:



This is probably the cause of the exception:

> declare external function f_insertblob cstring (20), blob returns int by
> value entry_point 'insertblob' module_name 'test_udf.so';

should be

DECLARE EXTERNAL FUNCTION f_insertblob CSTRING(255), BLOB RETURNS
PARAMETER 2
entry_point 'insertblob' module_name 'test_udf.so'

The function expects to get a blobcallback structure from the server,
"returns parameter 2" lets the engine provide it..




Frank, you are absolutely right, how did I miss that!

So now the results are very different:

--
SQL> insert into bloby select 5, f_insertblob ('00013.MTS') from 
rdb$database;

buffer okay
Read a total of 61765632 bytes
Current memory = 817864
Delta memory = 56428
Max memory = 943356
Elapsed time= 0.36 sec
Cpu = 0.00 sec
Buffers = 75
Reads = 8
Writes = 15346
Fetches = 44754
SQL>
--

61Mb in 0.36 secs!!

So for now my problem is solved - thank you very much.

I am still curious why the API is so much slower, but will move the 
discussion about that to the IBPP list.


Many thanks, Mike.


Re: [firebird-support] Blob write to Embedded on Linux performance

2014-06-06 Thread Mike Ro miker...@gmail.com [firebird-support]



On 06/06/14 11:04, Frank Schlottmann-Gödde fr...@schlottmann-goedde.de 
[firebird-support] wrote:


Ok, this is what I get for a 13MB file on an Intel NUC (Celeron),
database and home are on an USB Drive, so no real good hardware.

SQL> set stat;
SQL> set time;
SQL> select b_loadfromfile('/home/frank/w.mp3') from rdb$database;

B_LOADFROMFILE
=
0:1
==
B_LOADFROMFILE:
BLOB display set to subtype 1. This BLOB: subtype = 0
==

Current memory = 3784
Delta memory = 416616
Max memory = 37899608
Elapsed time= 9.423 sec
Cpu = 0.000 sec
Buffers = 2048
Reads = 0
Writes = 830
Fetches = 1767

Frank, thank you for taking the time to do this. It confirms that there 
is definitely something wrong with my setup (are you using Firebird 
version 2.5.2?).


I am seeing 45 seconds for a 13Mb file using C++ / IBPP and 48 seconds 
using PHP.


Exactly the same code, database structure and hardware on Windows 
inserts a 13Mb BLOB in just 2.4 seconds!



I could send you my udf if you want to test it.


That would be great thank you! Mine is basically the example (from 
examples/udf/udflib.c) but using BLOBCALLBACK from ibase.h instead of BLOB.


Could you also share your table definition, especially what indexes or 
constraints you have defined as I wonder if this is causing the 
exception? Also do you have anything special in your firebird.conf?


I am creating a brand new database for each test like this:

create database 'udftest.fdb' user 'sysdba' password 'masterkey';

declare external function f_insertblob cstring (20), blob returns int by 
value entry_point 'insertblob' module_name 'test_udf.so';


create table bloby (id int, data blob); commit;

insert into bloby (id, data) values (0, 'hello');  commit;

select f_insertblob ('BIGFILE.MP3', data) from bloby where id = 0;

select * from bloby;

Thanks once again, Mike.




Re: [firebird-support] Blob write to Embedded on Linux performance

2014-06-03 Thread Mike Ro miker...@gmail.com [firebird-support]

Yes, of course, but I never felt the need to time the operations, we
are using a udf s.th. like:

#include <stdio.h>
#include <stdlib.h>
#include <ibase.h>   /* BLOB callback structure */

/* intern_recode() converts the filename encoding; it is a helper from
   the poster's own codebase and is not shown here. */

void EXPORT fn_b_load(char *s, BLOB b)
{
    if (b->blob_handle)
    {
        const int buffsize = 4096;
        unsigned char *fname = intern_recode(s);
        FILE *file = fopen((char *)fname, "rb");
        if (file)
        {
            unsigned char *buffer = (unsigned char *)malloc(buffsize);
            if (buffer)
            {
                int readSize;
                for (;;)
                {
                    /* read up to one segment and pass it to the engine */
                    readSize = fread(buffer, 1, buffsize, file);
                    (*b->blob_put_segment)(b->blob_handle, buffer, readSize);
                    if (readSize != buffsize)
                        break;
                }
                free(buffer);
            }
            fclose(file);
        }
        free(fname);
    }
}

Unfortunately I haven't had any success in trying this because 
blob_put_segment() is causing Firebird embedded to throw an instance of 
"Firebird::status_exception" followed by a core dump that I really can't 
get to the bottom of. All the other blob functions work fine (such as 
blob_get_segment) and return correct results.
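The exception here turned out to stem from the UDF declaration rather than the C code (see Frank's correction in the later 2014-06-07 message, which appears further up this archive). For reference, a declaration for a UDF like fn_b_load needs the RETURNS PARAMETER 2 clause so the engine supplies the blob callback structure; a sketch, with assumed module name and CSTRING length:

```sql
-- Hypothetical declaration; module name and CSTRING length are assumptions.
DECLARE EXTERNAL FUNCTION b_load
    CSTRING(255), BLOB
    RETURNS PARAMETER 2
    ENTRY_POINT 'fn_b_load' MODULE_NAME 'b_udf.so';
```

Without RETURNS PARAMETER 2 the function does not receive the blob callback structure, which is consistent with the crash described above.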





Re: [firebird-support] Blob write to Embedded on Linux performance

2014-06-02 Thread Mike Ro miker...@gmail.com [firebird-support]
On 02/06/2014 12:06, Frank Schlottmann-Gödde fr...@schlottmann-goedde.de 
[firebird-support] wrote:


> I am sorry to bump this, but has anyone else got experience of using
> blobs on Linux?

Yes, of course, but I never felt the need to time the operations, we
are using a udf ... to load documents into the database.

I will try to find the time to check this with some big mp3's.

Frank, many thanks for your response, example UDF and testing. I will 
certainly try to implement the UDF later today.


Yesterday I tried the same IBPP code on Windows 8 and it is much faster 
for files ~10Mb in size, but I also tested 500Mb video and performance 
was similar to Linux at that size. As well as my IBPP code, I also 
tested FlameRobin, Database Workbench and FSQL on Windows all with 
broadly similar results.


Unfortunately apart from FlameRobin I couldn't find any simple way to 
load a blob on Linux, so your UDF will be a good comparison for me to make.


I found this interesting blog but unfortunately the code linked to is no 
longer available.


http://codicesoftware.blogspot.com/2008/09/firebird-is-faster-than-filesystem-for.html

Thanks again, Mike.






Re: [firebird-support] Blob write to Embedded on Linux performance

2014-05-28 Thread Mike Ro miker...@gmail.com [firebird-support]
I am sorry to bump this, but has anyone else got experience of using 
blobs on Linux?


I am using Firebird embedded (2.5.2) on Linux (Ubuntu 14.04) via IBPP 
with a completely default firebird.conf.


My hardware is a Dell Optiplex 755, Intel Core 2 Duo @ 2.33GHz with 
4Gb RAM and a 2Tb Western Digital WD20EARX-32P hard disk.


When I use IBPP to write a blob (actually a music MP3) to a simple 
test database (which contains about 12 tables with 6 records in each) 
it takes around 35 seconds to write a 10Mb file.


The bottleneck seems to be when IBPP is writing a segment:

(*gds.Call()->m_put_segment)(status.Self(), &mHandle, (unsigned 
short)size, (char*)buffer);


The blob type is zero and has a segment size of 4096, I have matched 
this so the segment is written 4096 bytes at a time. I have tried 
smaller and larger sizes with little change in performance.
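The 4096-byte segmenting loop described above can be sketched standalone; put_segment below is a hypothetical stand-in for the real segment writer (the m_put_segment call IBPP makes, or isc_put_segment in the C API) that only counts bytes, so the chunking logic can be checked without a server:

```c
#include <assert.h>

/* Counters so the chunking behaviour is observable without a database. */
static long total_bytes = 0;
static int segment_calls = 0;

/* Hypothetical stand-in for the real blob segment writer. */
static void put_segment(const unsigned char *buf, unsigned short len)
{
    (void)buf;               /* a real writer would send these bytes */
    total_bytes += len;
    segment_calls++;
}

/* Write `size` bytes of `data` in segments of at most `segsize` bytes,
   mirroring the loop used for the MP3 insert described above. */
static void write_in_segments(const unsigned char *data, long size, int segsize)
{
    long off = 0;
    while (off < size) {
        long n = (size - off < segsize) ? (size - off) : segsize;
        put_segment(data + off, (unsigned short)n);
        off += n;
    }
}
```

The loop hands the writer full segments until the remainder is smaller than the segment size, matching the readSize != buffsize break in the UDF above.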


I see similar performance inserting a blob using Flamerobin, but that 
is of course hardly surprising. Reading the blob is instantaneous in 
all cases.


My question is: Is this the sort of performance I should be expecting 
or is there something wrong?


TIA





[firebird-support] Blob write to Embedded on Linux performance

2014-05-25 Thread miker...@gmail.com [firebird-support]
I am using Firebird embedded (2.5.2) on Linux (Ubuntu 14.04) via IBPP with a 
completely default firebird.conf.
 

 My hardware is a Dell Optiplex 755, Intel Core 2 Duo @ 2.33GHz with 4Gb RAM 
and a 2Tb Western Digital WD20EARX-32P hard disk.
 

 When I use IBPP to write a blob (actually a music MP3) to a simple test 
database (which contains about 12 tables with 6 records in each) it takes 
around 35 seconds to write a 10Mb file.
 

 The bottleneck seems to be when IBPP is writing a segment:
 

 (*gds.Call()->m_put_segment)(status.Self(), &mHandle, (unsigned short)size, 
(char*)buffer);
 

 The blob type is zero and has a segment size of 4096, I have matched this so 
the segment is written 4096 bytes at a time. I have tried smaller and larger 
sizes with little change in performance.
 

 I see similar performance inserting a blob using Flamerobin, but that is of 
course hardly surprising. Reading the blob is instantaneous in all cases.
 

 My question is: Is this the sort of performance I should be expecting or is 
there something wrong?
 

 TIA


Re: [firebird-support] loading data from a .csv file?

2014-05-22 Thread miker...@gmail.com [firebird-support]
> Is there some way to load data into a table in Firebird, like "LOAD DATA 
> INFILE..." in Mysql? I have comma-delimited files from Mysql that I want 
> to load into Firebird tables.
 

 There are some very useful tips here.
 

 
http://stackoverflow.com/questions/8213/generate-insert-sql-statements-from-a-csv-file
I agree with Lester this would be nice to have in Flamerobin, or we can ask Ivan 
nicely for a Linux version of FSQL ;)